Sometimes we want to be able to combine several different criteria to select elements from arrays or tables. So far we have used *boolean* Series and arrays to select rows. This works fine when we have some simple criterion, such as whether the value in the column or array is greater than 10. For example, consider the [student ratings dataset](https://matthew-brett.github.io/cfd2019/data/rate_my_professors). Download the data file via [disciplines_SI.xlsx](https://matthew-brett.github.io/cfd2019/data/disciplines_SI.xlsx).

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Make plots look a little bit more fancy
plt.style.use('fivethirtyeight')

# Read the Excel format data file
ratings = pd.read_excel('disciplines_SI.xlsx')
ratings.head()
```

We can select the rows from this table where the Easiness rating was above the median, using a *boolean series*:

```
easiness = ratings['Easiness']
is_gt_median = easiness > np.median(easiness)
is_gt_median.head()
above_median = ratings[is_gt_median]
above_median.head()
```

What if we wanted to select the rows that were between the 25th and 75th percentile? Here's how to get the percentile values.

```
q25 = np.quantile(easiness, 0.25)
q75 = np.quantile(easiness, 0.75)
print(q25, q75)
```

We can do this more neatly with [unpacking](using_minimize):

```
q25, q75 = np.quantile(easiness, [0.25, 0.75])
print(q25, q75)
```

Now we want to select the rows where the Easiness score is between these values. We can do this the long way round, by selecting twice:

```
# Select values above the 25th percentile.
above_q25 = ratings[easiness > q25]
# There are now fewer Easiness values, so we have to get the values remaining.
q25_easiness = above_q25['Easiness']
# Select values below the 75th percentile.
between_25_75 = above_q25[q25_easiness < q75]
between_25_75.head()
```

Another, neater way of doing this is to make a single Boolean Series that has True only if the Easiness value is *both* above the 25th percentile *and* below the 75th percentile. This is called a *logical and*. To do this we can make a Boolean Series for each of these two criteria:

```
# True if Easiness is above 25th percentile.
is_gt_q25 = easiness > q25
# Show the first 10 values
is_gt_q25.head(10)
# True if Easiness is below 75th percentile.
is_lt_q75 = easiness < q75
# Show the first 10 values
is_lt_q75.head(10)
```

We can combine these two with Numpy functions. The function we need in this case is `np.logical_and`. `np.logical_and` can work on Pandas Series, or on Numpy arrays. We will use the term *sequence* for something that can be a Pandas Series or a Numpy array. `np.logical_and` combines the two input sequences into a new sequence that only has True in positions where *both* of the input sequences have a True in the corresponding position:

```
is_between_25_75 = np.logical_and(is_gt_q25, is_lt_q75)
is_between_25_75.head(10)
```

It might be easier to see what is going on if we make some small test arrays:

```
a = np.array([True, True, False, False])
b = np.array([True, False, True, False])
```

We can show these conveniently as a DataFrame:

```
ab = pd.DataFrame()
ab['first input'] = a
ab['second input'] = b
ab
```

Before you look, try to work out what you would get from `np.logical_and(a, b)`. Remember, the rule is, the result will have True where the corresponding element from *both* `a` and `b` are True, and False otherwise.
Here's something to keep you entertained while you are thinking:

```
from IPython.display import YouTubeVideo
YouTubeVideo("gdJWZxPW45c")
```

The result:

```
np.logical_and(a, b)
```

Here are the two input columns and the result, displayed as a data frame, to show them nicely:

```
ab['and result'] = np.logical_and(a, b)
ab
```

Check that you agree with Python's results for combining `is_gt_q25` and `is_lt_q75` in the same way. Here's a display showing `is_gt_q25`, `is_lt_q75` and the result of `logical_and`:

```
qbools = pd.DataFrame()
qbools['is_gt_q25'] = is_gt_q25
qbools['is_lt_q75'] = is_lt_q75
qbools['and_result'] = np.logical_and(is_gt_q25, is_lt_q75)
qbools.head(10)
```

We can use the combined Boolean series from `logical_and` to select the rows that we want:

```
betweeners = ratings[np.logical_and(is_gt_q25, is_lt_q75)]
betweeners.head()
```

Notice that we only have rows where there is a corresponding True value in the result of the `logical_and`, and therefore, that we only have rows that are above the 25th percentile, and below the 75th percentile. You may not be surprised to know there is an equivalent function to `logical_and` called `logical_or`. Like `logical_and` this returns a Boolean sequence of the same length as the input sequences. There is a True in the output sequence where *one or both* of the input sequences have True in the corresponding positions.

```
a
b
np.logical_or(a, b)
ab['or result'] = np.logical_or(a, b)
ab
```

We can use this function to find all the rows that have Easiness ratings above the 75th percentile or below the 25th percentile:

```
easy_or_hard = ratings[np.logical_or(easiness < q25, easiness > q75)]
easy_or_hard.head()
```
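As an aside not covered above, NumPy and Pandas also offer the `&` and `|` operators as element-wise shorthands for `np.logical_and` and `np.logical_or` on Boolean sequences; because `&` and `|` bind more tightly than comparisons, each comparison must be wrapped in parentheses. A minimal sketch:

```python
import numpy as np

a = np.array([True, True, False, False])
b = np.array([True, False, True, False])

# & and | work element-wise on Boolean arrays, like the logical_* functions.
and_result = a & b  # same as np.logical_and(a, b)
or_result = a | b   # same as np.logical_or(a, b)

# With comparisons, parenthesize each side, for example:
# between = ratings[(easiness > q25) & (easiness < q75)]
```

The parentheses matter: `easiness > q25 & easiness < q75` would apply `&` before the comparisons and raise an error or give the wrong answer.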
```
%load_ext autoreload
%autoreload 2

import sys, os
import h5py
import numpy as np
from ffn.inference import storage
from ffn.inference.segmentation import clear_dust, make_labels_contiguous, clean_up
# Needed for cleanup options 1 and 3 below.
from cc3d import connected_components
import neuroglancer
import cloudvolume

def hdf5_to_cloudvolume():
    pass

import matplotlib.pyplot as plt
```

# INPUT

Probably the only part of the script where you need to change anything.

```
# RAW data information
raw_data = 'training_data.h5'
raw_data_dset = 'image'  # usually don't change

## FFN segmentation information
seg_folder = './results/inference_01/'
seg_seed = (0, 0, 0)

# RESOLUTION information
xy_res = 1200
z_res = 1200

# PROCESSING information
cleanup = 0  # 0 for nothing, 1 for SS-RAF, 2 for michal, 3 for michal+raf
min_particle = 10
```

# PROCESSING PART

Please don't touch!!!

```
# don't touch these lines
h5file = h5py.File(raw_data, 'r+')
image = h5file[raw_data_dset]
print('Raw data info:')
print(image.shape, image.dtype)

# don't touch these lines
seg, _ = storage.load_segmentation(seg_folder, seg_seed, allow_cpoint=True)
print('Label data info:')
print(seg.shape, seg.dtype)

# don't change
seg_cleanup = np.zeros(seg.shape)
seg_cleanup = seg[:, :, :]
if cleanup == 0:
    pass
elif cleanup == 1:
    labels_in = np.zeros(seg.shape, dtype='uint8')
    labels_in[seg != 0] = 1
    labels_out = connected_components(labels_in)
    seg_cleanup = clear_dust(labels_out, min_particle)
elif cleanup == 2:
    clean_up(seg_cleanup, min_size=min_particle)
elif cleanup == 3:
    clean_up(seg_cleanup, min_size=min_particle)
    labels_in = np.zeros(seg_cleanup.shape, dtype='uint8')
    labels_in[seg_cleanup != 0] = 1
    seg_cleanup = connected_components(labels_in)
else:
    print("put a valid cleanup option!!")

ids = np.unique(seg, return_counts=True)
ids0 = np.unique(seg_cleanup, return_counts=True)
print('Original dataset info:')
print(seg.shape, seg.dtype)
print('Number of features:')
print(len(ids[0]))
print('Cleanup dataset info:')
print(seg_cleanup.shape, seg_cleanup.dtype)
print('Number of features:')
print(len(ids0[0]))
```

# Visualization

Always execute `viewer.close()` after you leave, otherwise you may leave hell-spawns in your memory.

```
voxel = (z_res, xy_res, xy_res)
viewer = neuroglancer.Viewer()
with viewer.txn() as s:
    s.layers['image'] = neuroglancer.ImageLayer(
        source=neuroglancer.LocalVolume(data=image, voxel_size=voxel,
                                        volume_type='image'))
    s.layers['segmentation_clean'] = neuroglancer.SegmentationLayer(
        source=neuroglancer.LocalVolume(data=seg_cleanup, voxel_size=voxel,
                                        volume_type='segmentation'), segments=ids0[0])
    s.layers['segmentation_original'] = neuroglancer.SegmentationLayer(
        source=neuroglancer.LocalVolume(data=seg, voxel_size=voxel,
                                        volume_type='segmentation'), segments=ids[0])
print(viewer.get_viewer_url())
del viewer
```

# Tiff stack save!!

```
# dxchange.write_tiff_stack(image, './stacks/results/cube_', dtype='uint8', overwrite=True)
# dxchange.write_tiff_stack(labels_out, './stacks/results/seg_', dtype='uint64', overwrite=True)
```

# WELCOME TO DISNEYLAND
## Where dreams come true

```
from knossos_utils import knossosdataset

# magic happens here.. Magic happens everywhere..
knossos_location = './kcube/'
knossos_conf = 'knossos.conf'
knossos_new_anno = 'inference01'

# don't touch below
kd = knossosdataset.KnossosDataset()
kd.initialize_from_knossos_path(knossos_location + knossos_conf)
box_offset = [0, 0, 0]  # don't touch, for future use
kd.from_matrix_to_cubes(offset=box_offset, data=seg, as_raw=False,
                        kzip_path=os.path.join(knossos_location, knossos_new_anno))
```
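The `clear_dust` step above removes connected components smaller than `min_particle` voxels. As a rough NumPy sketch of that idea (a hypothetical stand-in, not the FFN implementation), small labels can be zeroed out by counting voxels per label:

```python
import numpy as np

def remove_small_labels(seg, min_size):
    """Zero out every label whose voxel count is below min_size.

    Toy dust removal; assumes label 0 is background.
    """
    ids, counts = np.unique(seg, return_counts=True)
    small = ids[(counts < min_size) & (ids != 0)]
    out = seg.copy()
    out[np.isin(out, small)] = 0
    return out

seg = np.array([[1, 1, 0],
                [2, 0, 0],
                [1, 1, 3]])
cleaned = remove_small_labels(seg, min_size=2)
# Labels 2 and 3 cover only one voxel each, so they are removed.
```

Unlike `cc3d.connected_components`, this sketch operates on already-labelled data; it only illustrates the size filter, not the labelling step.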
# Install GPFlow

```
!pip3 install gpflow
```

# Import package

```
%reload_ext autoreload
%autoreload 2

from typing import Tuple, Optional
import tempfile
import pathlib
import datetime
import io
import matplotlib.pyplot as plt
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import numpy as np
import tensorflow as tf
import gpflow
from gpflow.config import default_float, default_jitter
from gpflow.ci_utils import ci_niter
from gpflow.utilities import to_default_float, triangular, positive
from gpflow.models import GPModel
from gpflow.models.training_mixins import InputData, OutputData, RegressionData
from gpflow.models.training_mixins import ExternalDataTrainingLossMixin
from gpflow.types import MeanAndVariance
import warnings
from scipy.cluster.vq import kmeans
import time

current_milli_time = lambda: int(round(time.time() * 1000))
warnings.filterwarnings("ignore")
```

# Define SWSGP

```
class SWSGP(GPModel, ExternalDataTrainingLossMixin):

    def __init__(self, kernel, likelihood, Z, num_data, q_mu=None, q_sqrt=None, **kwargs):
        super().__init__(kernel, likelihood, mean_function=None, num_latent_gps=1)
        self.num_data = num_data
        self.num_ip = len(Z)
        self.inducing_positions = gpflow.models.util.inducingpoint_wrapper(Z)
        self.init_variational_parameters(q_mu, q_sqrt)
        return

    def init_variational_parameters(self, q_mu, q_sqrt):
        q_mu = np.zeros((self.num_ip, 1)) if q_mu is None else q_mu
        q_sqrt = np.eye(self.num_ip) if q_sqrt is None else q_sqrt
        self.q_mu = gpflow.Parameter(q_mu, dtype=default_float())
        self.q_sqrt = gpflow.Parameter(q_sqrt, transform=triangular())
        return

    def get_active_id_ip(self, Xnew, H):
        # compute the euclidean distance between X and Z
        RX = tf.reshape(tf.reduce_sum(tf.square(Xnew), axis=1), [-1, 1])  # [N, 1]
        RZ = tf.reshape(tf.transpose(tf.reduce_sum(tf.square(self.inducing_positions.Z), axis=1)), [1, -1])  # [1, M]
        XZ = tf.matmul(Xnew, self.inducing_positions.Z, transpose_b=True)  # [N, M]
        minus_distance = -(RX - 2 * XZ + RZ)  # [N, M]
        _, active_id_ip = tf.nn.top_k(input=minus_distance, k=H, sorted=False)  # [N, H]
        return active_id_ip

    def get_active_params(self, active_id_ip):
        active_Z = tf.gather(self.inducing_positions.Z, active_id_ip)  # [H, D]
        active_q_mu = tf.gather(self.q_mu, active_id_ip)  # [H, 1]
        active_s_sqrt = tf.gather(self.q_sqrt, active_id_ip)  # [H, M]
        active_s = tf.matmul(active_s_sqrt, active_s_sqrt, transpose_b=True)
        active_s = active_s + default_jitter() * tf.eye(tf.shape(active_q_mu)[0], dtype=default_float())
        active_q_sqrt = tf.linalg.cholesky(active_s)  # [H, H]
        return active_Z, active_q_mu, active_q_sqrt

    def predict_f_each_sample(self, Xnew, active_Z, active_q_mu, active_q_sqrt, L=None):
        if L is None:
            KHH = self.kernel.K(active_Z) + default_jitter() * tf.eye(tf.shape(active_q_mu)[0], dtype=default_float())  # [H, H]
            L = tf.linalg.cholesky(KHH)  # [H, H]
        KHX = self.kernel.K(active_Z, Xnew)
        LiKHX = tf.linalg.triangular_solve(L, KHX, lower=True)  # L^{-1} * KHX
        AT = tf.linalg.triangular_solve(tf.transpose(L), LiKHX, lower=False)  # L^{-T} * L^{-1} * KHX
        # Compute fmean = KXH * KHH^{-1} * q_mu
        fmean = tf.matmul(AT, active_q_mu, transpose_a=True)  # [N, 1]
        # Compute fvar = diag(KXX) + diag(KXH * KHH^{-1} * active_S * KHH^{-1} * KHX) - diag(KXH * KHH^{-1} * KHX)
        fvar1 = tf.reshape(self.kernel.K_diag(Xnew), [-1, 1])  # [N, 1]
        fvar2 = tf.reshape(tf.reduce_sum(tf.square(tf.matmul(active_q_sqrt, AT, transpose_a=True)), axis=0), [-1, 1])  # [N, 1]
        fvar3 = tf.reshape(tf.reduce_sum(tf.square(LiKHX), axis=0), [-1, 1])  # [N, 1]
        fvar = fvar1 + fvar2 - fvar3  # [N, 1]
        return fmean, fvar

    def prior_kl(self, active_Z, active_q_mu, active_q_sqrt):
        KHH = self.kernel.K(active_Z) + default_jitter() * tf.eye(tf.shape(active_q_mu)[0], dtype=default_float())
        L = tf.linalg.cholesky(KHH)
        log_det_KHH = tf.reduce_sum(tf.math.log(tf.square(tf.linalg.diag_part(L))))
        log_det_S = tf.reduce_sum(tf.math.log(tf.square(tf.linalg.diag_part(active_q_sqrt))))
        trace = tf.reduce_sum(tf.square(tf.linalg.triangular_solve(L, active_q_sqrt, lower=True)))
        mahalanobis = tf.reduce_sum(tf.square(tf.linalg.triangular_solve(L, active_q_mu, lower=True)))
        H = tf.cast(tf.shape(active_q_mu)[0], dtype=default_float())
        kl = 0.5 * (log_det_KHH - log_det_S - H + trace + mahalanobis)
        return kl, L

    def build_elbo(self, data: RegressionData, H: int):
        X, Y = data
        list_ll, list_kl = [], []
        list_active_id_ip = self.get_active_id_ip(X, H)
        mini_batch_size = X.get_shape().as_list()[0]
        for id_data in range(mini_batch_size):
            x = tf.gather_nd(params=X, indices=[[id_data]])
            y = tf.gather_nd(params=Y, indices=[[id_data]])
            active_id_ip = tf.transpose(tf.gather_nd(params=list_active_id_ip, indices=[id_data]))  # [H]
            active_Z, active_q_mu, active_q_sqrt = self.get_active_params(active_id_ip=active_id_ip)
            kl, L = self.prior_kl(active_Z, active_q_mu, active_q_sqrt)
            fmean, fvar = self.predict_f_each_sample(x, active_Z, active_q_mu, active_q_sqrt, L=L)
            var_exp = self.likelihood.variational_expectations(fmean, fvar, y)  # [1, 1]
            ll = tf.reduce_sum(var_exp)  # return the sum, not scaled  # ()
            list_ll.append(ll)
            list_kl.append(kl)
        num_data = tf.cast(self.num_data, dtype=default_float())
        mini_batch_size = tf.cast(tf.shape(X)[0], dtype=default_float())
        scale = num_data / mini_batch_size
        ell_term = tf.add_n(list_ll) * scale
        # average the per-sample KL terms over the minibatch
        kl_term = tf.add_n(list_kl) / mini_batch_size
        return ell_term - kl_term

    def predict_f(self, Xnew: InputData, H: int) -> MeanAndVariance:
        list_fmean, list_fvar = [], []
        list_active_id_ip = self.get_active_id_ip(Xnew, H)
        mini_batch_size = Xnew.get_shape().as_list()[0]
        for id_data in range(mini_batch_size):
            x = tf.gather_nd(params=Xnew, indices=[[id_data]])
            active_id_ip = tf.transpose(tf.gather_nd(params=list_active_id_ip, indices=[id_data]))  # [H]
            active_Z, active_q_mu, active_q_sqrt = self.get_active_params(active_id_ip=active_id_ip)
            fmean, fvar = self.predict_f_each_sample(x, active_Z, active_q_mu, active_q_sqrt)
            list_fmean.append(fmean)
            list_fvar.append(fvar)
        fmeans = tf.reshape(tf.concat(list_fmean, axis=0), [-1, 1])  # [N, 1]
        fvars = tf.reshape(tf.concat(list_fvar, axis=0), [-1, 1])  # [N, 1]
        return fmeans, fvars

    def build_predict(self, Xnew: InputData, H: int) -> MeanAndVariance:
        f_mean, f_var = self.predict_f(Xnew=Xnew, H=H)
        return self.likelihood._predict_mean_and_var(f_mean, f_var)

    def maximum_log_likelihood_objective(self, data: RegressionData, H: int) -> tf.Tensor:
        return self.build_elbo(data, H)
```

# Generate data

```
train_size, test_size = 500, 200
low, high = -1.0, 1.0
train_x_np = np.reshape(np.random.uniform(low=low, high=high, size=train_size), (-1, 1))
train_y_np = np.sin(12 * train_x_np) + 0.66 * np.cos(25 * train_x_np)
test_x_np = np.reshape(np.linspace(start=low, stop=high, num=test_size), (-1, 1))
test_y_np = np.sin(12 * test_x_np) + 0.66 * np.cos(25 * test_x_np)

plt.plot(train_x_np, train_y_np, 'kx')
plt.plot(test_x_np, test_y_np, 'g')
```

# Training process

```
# reset graph
tf.compat.v1.reset_default_graph()
gpflow.config.set_default_float(np.float64)
np.random.seed(0)
tf.random.set_seed(0)

n_epochs = 10000
evaluate_model_interval = 100
num_ip = 32
H = 4

train_x = tf.constant(train_x_np, dtype=default_float())
train_y = tf.constant(train_y_np, dtype=default_float())
test_x = tf.constant(test_x_np, dtype=default_float())
test_y = tf.constant(test_y_np, dtype=default_float())

train_dataset = tf.data.Dataset.from_tensor_slices((train_x, train_y))
test_dataset = tf.data.Dataset.from_tensor_slices((test_x, test_y))

batch_size = 16
prefetch_size = tf.data.experimental.AUTOTUNE
shuffle_buffer_size = train_size // 2
num_batches_per_epoch = train_size // batch_size

original_train_dataset = train_dataset
train_dataset = (train_dataset.repeat()
                 .prefetch(prefetch_size)
                 .shuffle(buffer_size=shuffle_buffer_size)
                 .batch(batch_size))
print(f"prefetch_size={prefetch_size}")
print(f"shuffle_buffer_size={shuffle_buffer_size}")
print(f"num_batches_per_epoch={num_batches_per_epoch}")

low, high = np.min(train_x_np), np.max(train_x_np)
Z = np.linspace(low, high, num_ip).reshape(-1, 1)

kernel = gpflow.kernels.Matern52()
likelihood = gpflow.likelihoods.Gaussian()
model = SWSGP(kernel=kernel, likelihood=likelihood, Z=Z, num_data=len(train_x_np))

batched_dataset = tf.data.Dataset.from_tensor_slices((train_x, train_y)).batch(batch_size)
training_loss = model.training_loss_closure(iter(batched_dataset))
optimizer = tf.optimizers.Adam(0.001)

def optimization_step(model: SWSGP, batch: Tuple[tf.Tensor, tf.Tensor], H: int):
    with tf.GradientTape(watch_accessed_variables=False) as tape:
        tape.watch(model.trainable_variables)
        # loss = model.training_loss(batch, H)
        # loss = -model.maximum_log_likelihood_objective(batch, H)
        loss = model._training_loss(batch, H)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

def compute_rmse(predictions: tf.Tensor, labels: tf.Tensor) -> tf.Tensor:
    rmse = tf.sqrt(tf.reduce_mean(tf.square(predictions - labels)))
    return rmse

list_pred_means = []
list_rmse = []
tf_optimization_step = tf.function(optimization_step)
batches = iter(train_dataset)
train_time = 0
for epoch in range(n_epochs):
    for _ in range(ci_niter(num_batches_per_epoch)):
        start_time = current_milli_time()
        tf_optimization_step(model, next(batches), H)
        train_time = train_time + (current_milli_time() - start_time) / 60000
    epoch_id = epoch + 1
    if epoch_id == 1 or epoch_id % evaluate_model_interval == 0:
        pred_means, pred_vars = model.build_predict(test_x, H)
        rmse = compute_rmse(predictions=pred_means, labels=test_y)
        list_pred_means.append(pred_means)
        print("Epoch id: {}, train_time in minutes: {:.2f}, rmse: {:.4f}".format(epoch_id, train_time, rmse.numpy()))
```
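The `get_active_id_ip` method above selects, for each input, the `H` nearest inducing points using the expansion ‖x − z‖² = ‖x‖² − 2xᵀz + ‖z‖², which computes all pairwise distances with one matrix product. A plain NumPy sketch of the same trick (`nearest_inducing_ids` is a hypothetical helper for illustration):

```python
import numpy as np

def nearest_inducing_ids(X, Z, H):
    # Squared Euclidean distances via ||x||^2 - 2 x.z + ||z||^2, all pairs at once.
    d2 = (X ** 2).sum(axis=1)[:, None] - 2 * X @ Z.T + (Z ** 2).sum(axis=1)[None, :]
    # Indices of the H nearest inducing points per row (like tf.nn.top_k on -d2).
    return np.argsort(d2, axis=1)[:, :H]

X = np.array([[0.0], [1.0]])
Z = np.array([[0.0], [0.9], [5.0]])
ids = nearest_inducing_ids(X, Z, H=2)
# Row 0 is closest to Z[0] then Z[1]; row 1 is closest to Z[1] then Z[0].
```

Unlike `tf.nn.top_k` with `sorted=False`, `argsort` returns the neighbours in distance order, but the set of selected inducing points is the same.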
# Transfer Learning

In this notebook, you'll learn how to use pre-trained networks to solve challenging problems in computer vision. Specifically, you'll use networks trained on [ImageNet](http://www.image-net.org/) [available from torchvision](http://pytorch.org/docs/0.3.0/torchvision/models.html). ImageNet is a massive dataset with over 1 million labeled images in 1000 categories. It's used to train deep neural networks built from convolutional layers. I'm not going to get into the details of convolutional networks here, but if you want to learn more about them, please [watch this](https://www.youtube.com/watch?v=2-Ol7ZB0MmU). Once trained, these models work astonishingly well as feature detectors for images they weren't trained on. Using a pre-trained network on images not in the training set is called transfer learning. Here we'll use transfer learning to train a network that can classify our cat and dog photos with near perfect accuracy. With `torchvision.models` you can download these pre-trained networks and use them in your applications. We'll include `models` in our imports now.

```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import os
import time

import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms, models
```

Most of the pretrained models require the input to be 224x224 images. Also, we'll need to match the normalization used when the models were trained. Each color channel was normalized separately; the means are `[0.485, 0.456, 0.406]` and the standard deviations are `[0.229, 0.224, 0.225]`.
```
data_dir = 'CG_2'

# TODO: Define transforms for the training data and testing data
# Define a transform to normalize the data
train_transforms = transforms.Compose([transforms.Resize((224, 224)),
                                       transforms.ToTensor(),
                                       transforms.Normalize((0.485, 0.456, 0.406),
                                                            (0.229, 0.224, 0.225)),
                                       ])

# Define a transform to normalize the data
test_transforms = transforms.Compose([transforms.Resize((224, 224)),
                                      transforms.ToTensor(),
                                      transforms.Normalize((0.485, 0.456, 0.406),
                                                           (0.229, 0.224, 0.225)),
                                      ])

# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)

trainloader = torch.utils.data.DataLoader(train_data, batch_size=128, shuffle=True, num_workers=16)
testloader = torch.utils.data.DataLoader(test_data, batch_size=128, num_workers=16)

# Define Threads
# torch.set_num_threads(32)
```

We can load in a model such as [DenseNet](http://pytorch.org/docs/0.3.0/torchvision/models.html#id5). Let's print out the model architecture so we can see what's going on.

```
# Create Path to save the model
model_path = os.path.join('./models', 'pf_run_1_trainable')
if not os.path.exists(model_path):
    os.makedirs(model_path)

model = models.densenet161(pretrained=True)
# model = models.im
model
```

This model is built out of two main parts, the features and the classifier. The features part is a stack of convolutional layers and overall works as a feature detector that can be fed into a classifier. The classifier part is a single fully-connected layer, `(classifier): Linear(in_features=2208, out_features=1000)` for DenseNet-161. This layer was trained on the ImageNet dataset, so it won't work for our specific problem. That means we need to replace the classifier, but the features will work perfectly on their own.
In general, I think about pre-trained networks as amazingly good feature detectors that can be used as the input for simple feed-forward classifiers.

```
from collections import OrderedDict

classifier = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(2208, 1000)),
    ('relu1', nn.ReLU()),
    ('dropout1', nn.Dropout(p=0.3)),
    ('fc2', nn.Linear(1000, 500)),
    ('relu2', nn.ReLU()),
    ('fc3', nn.Linear(500, 10)),
    ('output', nn.LogSoftmax(dim=1))
]))

model.classifier = classifier

for name, params in model.named_children():
    print(name)

state_dict = torch.load('/datadrive2/amit_cvnd/deep-learning-pytorch/models/pf_run_1/cg_e3_lr_0.0005_loss_1.323_acc_0.869_config1_checkpoint.pth')
model.load_state_dict(state_dict)

model.features.denseblock4.denselayer24.parameters()

## Freezing the first few layers. Here I am freezing the first 7 layers
ct = 0
for name, child in model.named_children():
    for name2, params in child.named_parameters():
        ct += 1
        if ct < 474:
            params.requires_grad = False
        # print("Count : {} ; Name : {} ; Requires Grad : {} ".format(ct, name2, params.requires_grad))
```

```
# Freeze parameters so we don't backprop through them
for param in model.parameters():
    param.requires_grad = False

model.denselayer24.parameters()
```

With our model built, we need to train the classifier. However, now we're using a **really deep** neural network. If you try to train this on a CPU like normal, it will take a long, long time. Instead, we're going to use the GPU to do the calculations. The linear algebra computations are done in parallel on the GPU, leading to 100x increased training speeds. It's also possible to train on multiple GPUs, further decreasing training time. PyTorch, along with pretty much every other deep learning framework, uses [CUDA](https://developer.nvidia.com/cuda-zone) to efficiently compute the forward and backwards passes on the GPU. In PyTorch, you move your model parameters and other tensors to the GPU memory using `model.to('cuda')`.
You can move them back from the GPU with `model.to('cpu')`, which you'll commonly do when you need to operate on the network output outside of PyTorch. As a demonstration of the increased speed, I'll compare how long it takes to perform a forward and backward pass with and without a GPU.

```
import time

for device in ['cpu']:
    criterion = nn.NLLLoss()
    # Only train the classifier parameters, feature parameters are frozen
    optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)
    model.to(device)

    count = 0
    for ii, (inputs, labels) in enumerate(trainloader):
        # Move input and label tensors to the GPU
        inputs, labels = inputs.to(device), labels.to(device)
        start = time.time()

        outputs = model.forward(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        if ii == 3:
            break

    time_taken = (time.time() - start) / 3
    print("Device = {}; Time per batch: {} seconds".format(device, time_taken))
```

You can write device agnostic code which will automatically use CUDA if it's enabled like so:

```python
# at beginning of the script
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

...

# then whenever you get a new Tensor or Module
# this won't copy if they are already on the desired device
input = data.to(device)
model = MyModule(...).to(device)
```

From here, I'll let you finish training the model. The process is the same as before except now your model is much more powerful. You should get better than 95% accuracy easily.

>**Exercise:** Train a pretrained model to classify the cat and dog images. Continue with the DenseNet model, or try ResNet, it's also a good model to try out first. Make sure you are only training the classifier and the parameters for the features part are frozen.
```
lr = 0.0005
criterion = nn.NLLLoss()
optimizer = optim.Adam(list(model.features.denseblock4.denselayer24.parameters()) +
                       list(model.classifier.parameters()), lr=lr)

# Implement a function for the validation pass
def validation(model, testloader, criterion):
    test_loss = 0
    accuracy = 0
    for images, labels in testloader:
        output = model.forward(images)
        test_loss += criterion(output, labels).item()
        ps = torch.exp(output)
        equality = (labels.data == ps.max(dim=1)[1])
        accuracy += equality.type(torch.FloatTensor).mean()
    return test_loss, accuracy

# Save Config File
def save_config(model, time_taken, file):
    with open(file, 'w') as f:
        f.write(str(model.classifier))
        f.write('Time Taken : {}'.format(time_taken))

# Implement Visualisation & Saving it
def visualize(train_loss, test_loss, image_name='Config.png'):
    # visualize the loss as the network trained
    plt.plot(train_loss, label='training_loss')
    plt.plot(test_loss, label='validation_loss')
    plt.xlabel('1000\'s of batches')
    plt.legend()
    plt.ylabel('loss')
    plt.ylim(0, 7)  # consistent scale
    fig = plt.gcf()  # grab the figure before show() starts a new one
    plt.show()
    fig.savefig(image_name)

# TODO: Train a model with a pre-trained network
import time

epoch = 6
steps = 0
running_loss = 0
test_loss = 0
accuracy = 0
print_every = 200
trainloss_over_time = []  # to track the loss as the network trains
testloss_over_time = []  # to track the loss as the network trains

start = time.time()
for e in range(epoch):
    model.train()
    for images, labels in trainloader:
        steps += 1
        optimizer.zero_grad()  # clear gradients left over from the previous step
        outputs = model.forward(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()

        if steps % print_every == 0:
            # Make sure network is in eval mode for inference
            model.eval()
            # Turn off gradients for validation, saves memory and computations
            with torch.no_grad():
                test_loss, accuracy = validation(model, testloader, criterion)
            trainloss_over_time.append(running_loss / 1000)
            testloss_over_time.append(test_loss / 1000)
            print("Epoch: {}/{}.. ".format(e + 1, epoch),
                  "Training Loss: {:.3f}.. ".format(running_loss / print_every),
                  "Test Loss: {:.3f}.. ".format(test_loss / len(testloader)),
                  "Test Accuracy: {:.3f}".format(accuracy / len(testloader)))
            running_loss = 0
            # Make sure training is back on
            model.train()
    torch.save(model.state_dict(),
               os.path.join(model_path,
                            'cg_e{}_lr_{}_loss_{:.3f}_acc_{:.3f}_config1_checkpoint.pth'.format(
                                e + 1, lr, test_loss / len(testloader), accuracy / len(testloader))))

end = time.time()
time_taken = end - start
visualize(trainloss_over_time, testloss_over_time, os.path.join(model_path, 'config1.png'))
save_config(model, time_taken, os.path.join(model_path, 'config1.cfg'))

torch.get_num_threads()
```
# Obtaining vector representations of texts

In this notebook we show how to obtain vector representations (embeddings) of words and documents for text analysis. To start, we use the API to fetch the texts of the latest 100 proposals to the government that contain at least 50 characters (this condition ensures that preprocessing does not leave us with an empty token list).

```
from textsemantics.server_api import ServerAPI

api = ServerAPI()
metadata = api.get_metadata('predlogi-vladi', sample_size=100, sampling_strategy='latest')
metadata['text'] = api.get_texts(urls=metadata['text'])
metadata = metadata[metadata["text"].apply(lambda x: len(x) > 50)]
texts = metadata['text']
print(f'Število predlogov vladi: {len(texts)}')
```

We obtained 99 documents. We can now preprocess the documents by removing punctuation, converting them to lists of tokens, removing stop words, and lemmatizing the remaining tokens.

```
import string
import nltk
nltk.download('stopwords', quiet=True)
from nltk.tokenize import RegexpTokenizer
from nltk.corpus import stopwords
from lemmagen.lemmatizer import Lemmatizer
from lemmagen import DICTIONARY_SLOVENE
from IPython.display import display, Markdown

def preprocess(corpus):
    stop_words = set(stopwords.words('slovene'))
    tokenizer = RegexpTokenizer("\w+")
    lemmatizer = Lemmatizer(dictionary=DICTIONARY_SLOVENE)

    preprocessed = list()
    for text in corpus:
        text = text.translate(text.maketrans({punct: " " for punct in string.punctuation}))
        tokens = tokenizer.tokenize(text.lower())
        tokens = [lemmatizer.lemmatize(token) for token in tokens
                  if token not in stop_words and len(token) > 2 and not token.isnumeric()]
        preprocessed.append(tokens)
    return preprocessed

tokens_list = preprocess(texts)

md_string = '### Prvih 10 pojavnic v prvem dokumentu\n'
for tok in tokens_list[0][:10]:
    md_string += f"- {tok}\n"
display(Markdown(md_string))
```

Now we can represent each document as a vector.
We will obtain the vectors using a bag of words, where each attribute represents one word from the vocabulary and each row one document. The table records the number of occurrences of each word in each document. We can adjust the table to take word frequency into account: rarer but important words get a higher value than ubiquitous ones. In addition, we will represent the documents with the fastText model, which is based on neural networks and pre-trained on a large corpus of documents. fastText is fundamentally trained to represent words with low-dimensional vectors, but document vectors can be obtained by averaging the vectors of the words the document contains.

```
from flair.data import Sentence
from flair.embeddings import WordEmbeddings, DocumentPoolEmbeddings
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np

def vectorize(tokens_list, emb_type='fasttext'):
    joined_texts = [' '.join(tokens) for tokens in tokens_list]
    if emb_type == 'fasttext':
        embedder = DocumentPoolEmbeddings([WordEmbeddings('sl')], pooling='mean')
        X = list()
        for i, doc in enumerate(joined_texts):
            sent = Sentence(doc)
            embedder.embed(sent)
            X.append(sent.embedding.cpu().detach().numpy())
        return np.array(X)
    elif emb_type == 'tfidf':
        return TfidfVectorizer().fit_transform(joined_texts)
    return None

ft = vectorize(tokens_list, emb_type='fasttext')
tfidf = vectorize(tokens_list, emb_type='tfidf')

print(f'Matrika fastText: {ft.shape[0]} vrstic, {ft.shape[1]} stolpcev')
print(f'Matrika vreče besed (tf-idf): {tfidf.shape[0]} vrstic, {tfidf.shape[1]} stolpcev')
```

For the tf-idf embeddings we obtained a 99 x 2259 matrix, and for the fastText embeddings a 99 x 300 matrix. We save the resulting vectors to a file so that we can use them in later examples.
```
import os
from scipy.sparse import save_npz

def save_data(ft, tfidf):
    word_embs = list()
    embedder = WordEmbeddings('sl')
    for word in ['šola', 'počitnice', 'semafor', 'tehnologija']:
        sent = Sentence(word)
        embedder.embed(sent)
        vec = sent.tokens[0].embedding.cpu().detach().numpy()
        word_embs.append(vec)
    word_embs = np.array(word_embs)

    try:
        os.mkdir('data')
    except FileExistsError:
        pass
    np.save('data/ft.npy', ft)
    save_npz('data/tfidf.npz', tfidf)
    np.save('data/words.npy', word_embs)

save_data(ft, tfidf)
```
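As a side note, the tf-idf weighting itself is easy to reproduce at toy scale. The sketch below is a minimal pure-Python version with +1-smoothed inverse document frequency; it is similar in spirit to, but not identical with, scikit-learn's `TfidfVectorizer`, and the tiny corpus is made up for illustration.

```python
import math
from collections import Counter

def tfidf(docs):
    """Toy tf-idf: raw term count times smoothed inverse document frequency."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    # +1 smoothing so a term present in every document still gets a nonzero weight
    idf = {t: math.log((1 + n) / (1 + df[t])) + 1 for t in df}
    return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in docs]

docs = [["vlada", "predlog", "zakon"],
        ["vlada", "predlog", "davek"],
        ["vlada", "šola"]]
vecs = tfidf(docs)
# "vlada" appears in every document, so it is down-weighted relative to rare terms
assert vecs[0]["vlada"] < vecs[0]["zakon"]
```

The ubiquitous term gets the lowest weight, which is exactly why tf-idf vectors separate documents better than raw counts.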
<!--NOTEBOOK_HEADER--> *This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks); content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).*

<!--NAVIGATION-->
< [Frequently Asked Questions/Troubleshooting Tips](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/01.05-FAQ.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Pose Basics](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.01-Pose-Basics.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.00-Introduction-to-PyRosetta.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>

# Introduction to PyRosetta

Rosetta is a suite of algorithms for biomolecular structure prediction and design. Rosetta is written in C++ and is available from www.rosettacommons.org. PyRosetta is a toolkit in the Python programming language that encapsulates the Rosetta functionality by using the compiled C++ libraries. Python is an easy language to learn and includes modern programming approaches such as objects. It can be used via scripts and interactively as a command-line program, similar to MATLAB®.

The main Rosetta docs can be found here: https://www.rosettacommons.org/docs/latest/Home, and here is another link for getting started: https://www.rosettacommons.org/docs/latest/getting_started/Getting-Started. It should be noted that, while some Rosetta/PyRosetta functionality can be achieved on a local computer, a computational cluster is generally recommended for more in-depth structure prediction and design tasks.
The goals of this first workshop are (1) to have you learn to use PyRosetta both interactively and by writing programs and (2) to have you learn the PyRosetta functions to access and manipulate properties of protein structure.

**Chapter contributors:**

- Jason C. Klima (University of Washington; Lyell Immunopharma)
- Kathy Le (Johns Hopkins University); parts of this chapter were adapted from the [PyRosetta book](https://www.amazon.com/PyRosetta-Interactive-Platform-Structure-Prediction-ebook/dp/B01N21DRY8) (J. J. Gray, S. Chaudhury, S. Lyskov, J. Labonte).
- Jared Adolf-Bryfogle (Scripps; Institute for Protein Innovation)
# Working With Play by Play

Working with play-by-play data can be interesting in that there's a lot of unknown data types as well as string parsing involved. In addition, there's a ton of cool things that can be done with play-by-play, like sending the feed into a pub/sub model so other systems can interact with it, building your own UI, or a whole host of other ideas.

The goal of this notebook is to walk through the play-by-play feed, examining data such as

1. `EVENTMSGTYPE`, which provides the play type (e.g. FIELD_GOAL_MADE, FIELD_GOAL_MISSED, TIMEOUT, PERIOD_BEGIN, etc.)
2. `EVENTMSGACTIONTYPE`, which provides a subcategorization of `EVENTMSGTYPE` (e.g. REVERSE_LAYUP, 3PT_JUMP_SHOT, HOOK_SHOT, etc.)

This notebook builds on top of the following notebooks: [Finding Games](notebook2.ipynb), [Basics Notebook](Basics.ipynb), and of course dives into the `PlayByPlay` endpoint. Note that the `PlayByPlayV2` endpoint is an extension of `PlayByPlay`.

So with that...let's get started! The goals are

1. Get the last game the Pacers played (maybe we'll get lucky and get a current game)
2. Examine the feed and the fields that are returned
3. See how regex can be applied to the play-by-play
4. Dynamically build a unique list of NBA player action events using EVENTMSGACTIONTYPE
5. See what's hiding in the feed...need to get those BLOCKS from the shot blockers!
```
# Game IDs needed
list_game_ids = ['0041900165', '0041900105', '0041900401', '0041900214', '0041900163', '0041900313', '0041900174', '0041900134', '0041900112', '0041900237', '0041900205', '0041900145', '0041900305', '0041900144', '0041900406', '0041900175', '0041900162', '0041900151', '0041900164', '0041900405', '0041900171', '0041900312', '0041900404', '0041900153', '0041900104', '0041900177', '0041900217', '0041900176', '0041900131', '0041900306', '0041900402', '0041900123', '0041900202', '0041900216', '0041900154', '0041900235', '0041900231', '0041900311', '0041900161', '0041900304', '0041900143', '0041900232', '0041900111', '0041900167', '0041900152', '0041900301', '0041900102', '0041900303', '0041900203', '0041900103', '0041900155', '0041900172', '0041900114', '0041900173', '0041900213', '0041900121', '0041900314', '0041900315', '0041900223', '0041900204', '0041900224', '0041900133', '0041900156', '0041900221', '0041900132', '0041900142', '0041900222', '0041900212', '0041900211', '0041900236', '0041900302', '0041900166', '0041900124', '0041900141', '0041900225', '0041900101', '0041900403', '0041900215', '0041900122', '0041900233', '0041900234', '0041900113', '0041900201']
```

# Retrieving the play by play data

Now that we have our game IDs, let's pull some play-by-play data.

```
# Query for the play by play of each game
from nba_api.stats.endpoints import playbyplay

for x in list_game_ids:
    df = playbyplay.PlayByPlay(x).get_data_frames()[0]
    df.to_csv(
        f"../../data/interim/games/nba-stats-play-by-play-{x}-2019-20.csv", index=False
    )

df = playbyplay.PlayByPlay('0041900406').get_data_frames()[0]
df.head()  # just looking at the head of the data
```

Optional: DataFrames can become large. In pandas you can set some display options to make the output more readable if needed.

```
# Since the dataset is fairly large you'll see plenty of ellipses (...).
# If that's the case, you can set the following options to expand the data
# You can adjust these as you'd like
import pandas
pandas.set_option('display.max_colwidth', 250)
pandas.set_option('display.max_rows', 250)
```

Some of the most valuable fields of `PlayByPlay` are the following: `EVENTMSGTYPE`, `EVENTMSGACTIONTYPE`, `HOMEDESCRIPTION`, and `VISITORDESCRIPTION`.

`EVENTMSGTYPE` gives us the type of event that has occurred. The set of events can vary per game, which is why finding these and placing them into an Enum or other type structure is a good idea.

```
# List unique values in the df['EVENTMSGTYPE'] column
print(f'EVENTMSGTYPE: {sorted(df.EVENTMSGTYPE.unique())}')

# For quick reference, here's an Enum for EVENTMSGTYPE
# This list may be incomplete, as a thorough play-by-play scan is necessary
from enum import Enum

class EventMsgType(Enum):
    FIELD_GOAL_MADE = 1
    FIELD_GOAL_MISSED = 2
    FREE_THROW = 3
    REBOUND = 4
    TURNOVER = 5
    FOUL = 6
    VIOLATION = 7
    SUBSTITUTION = 8
    TIMEOUT = 9
    JUMP_BALL = 10
    EJECTION = 11
    PERIOD_BEGIN = 12
    PERIOD_END = 13
```

Using the `EVENTMSGTYPE` field we can begin to examine the event types to see what typical values will be in the `EVENTMSGACTIONTYPE`, `HOMEDESCRIPTION`, and `VISITORDESCRIPTION` fields.

```
# Pull the data for a specific EVENTMSGTYPE
df.loc[df['EVENTMSGTYPE'] == 1].head()
# hint: use the EVENTMSGTYPE values above to see different data
```

Now that we've seen what the output of `EVENTMSGTYPE` is, let's dig into `EVENTMSGACTIONTYPE`. For this next exercise, let's pull all unique `EVENTMSGACTIONTYPE` values for `EVENTMSGTYPE = 1`.

_Note: `EVENTMSGACTIONTYPE` ids have a very loose correlation to `EVENTMSGTYPE` ids. This means that `EVENTMSGTYPE` ids share some of the same `EVENTMSGACTIONTYPE` ids. This allows the NBA to have a 'Missed Field Goal' share the same '3PT Jump Shot' id with a 'Made Field Goal'. That being said, they are not always unique.
We'll see this towards the end._

```
# List unique values in the df['EVENTMSGACTIONTYPE'] column for EVENTMSGTYPE 1
emt_df = df.loc[df['EVENTMSGTYPE'] == 1]
print(f'EVENTMSGACTIONTYPE: {sorted(emt_df.EVENTMSGACTIONTYPE.unique())}')
```

# So how do we know what each `EVENTMSGACTIONTYPE` is?

Let the fun begin. Apply some `EVENTMSGTYPE`-specific regular expressions against `HOMEDESCRIPTION` and `VISITORDESCRIPTION` while keeping track of the `EVENTMSGACTIONTYPE`. To see the regular expressions in action, take the example listed in the comments, along with the regex, and head on over to https://regex101.com/ or your favorite interactive regex tool.

# `EVENTMSGTYPE == 1`

The regex `(\s{2}|' )([\w+ ]*)` will look for the type of basket within the `VISITORDESCRIPTION` or `HOMEDESCRIPTION` and tie that to the `EVENTMSGACTIONTYPE`.

Example: Given a `VISITORDESCRIPTION == 'Young  Cutting Layup Shot (2 PTS) (Collison 1 AST)'` (note the double space after the player name) and an `EVENTMSGACTIONTYPE = 98`, the code will produce an output of `CUTTING_LAYUP_SHOT = 98`.

Let's see it in action...
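Before pointing the pattern at the live feed, it can be exercised in isolation. The helper below is just the notebook's regex-plus-string-magic wrapped into a function, and the sample descriptions are hypothetical feed strings (the double space after the player name is an assumption about how made-shot descriptions are formatted):

```python
import re

# Match either a double space or an apostrophe-plus-space, then capture the
# run of word characters and spaces that names the shot type.
p = re.compile(r"(\s{2}|' )([\w+ ]*)")

def shot_type(description):
    # Normalize to the UPPER_SNAKE_CASE form used for the Enum members
    return re.sub(' ', '_', p.search(description).groups()[1].rstrip()).upper()

# A missed shot matches on the "' " after the shot distance
print(shot_type("MISS Collison 24' 3PT Jump Shot"))   # 3PT_JUMP_SHOT
# A made shot matches on the double space after the player name
print(shot_type("Young  Cutting Layup Shot (2 PTS) (Collison 1 AST)"))  # CUTTING_LAYUP_SHOT
```

Note that `shot_type` assumes the description matches; on a string with neither delimiter, `p.search` returns `None` and the call would fail.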
_Note: The regex may need to be adjusted over time to account for changes in the data_

```
# Mapping out all of the EventMsgActionTypes for EventMsgType 1
import re
import operator

# the following expression is specific to EventMsgType 1
p = re.compile(r"(\s{2}|' )([\w+ ]*)")

# get the PlayByPlay data for the game pulled above
game_id = '0041900406'
plays = playbyplay.PlayByPlay(game_id).get_normalized_dict()['PlayByPlay']

# declare a few variables
description = ''
event_msg_action_types = {}

# loop over the play by play data
for play in plays:
    if play['EVENTMSGTYPE'] == 1:
        description = play['HOMEDESCRIPTION'] if play['HOMEDESCRIPTION'] is not None else play['VISITORDESCRIPTION']
        if description is not None:
            # do a bit of searching (regex) and a little character magic: underscores and upper case
            event_msg_action = re.sub(' ', '_', p.search(description).groups()[1].rstrip()).upper()
            # add it to our dictionary
            event_msg_action_types[event_msg_action] = play['EVENTMSGACTIONTYPE']

# sort it all
event_msg_action_types = sorted(event_msg_action_types.items(), key=operator.itemgetter(0))

# output a class that we could plug into our code base
for action in event_msg_action_types:
    print(f'\t{action[0]} = {action[1]}')
```

# `EVENTMSGTYPE == 2`

We'll reuse the regex `(\s{2}|' )([\w+ ]*)` from `EVENTMSGTYPE == 1` for `EVENTMSGTYPE == 2`. EventMsgType 2 events are missed field goals. Again, the regex will look for the type of basket within the `VISITORDESCRIPTION` or `HOMEDESCRIPTION` and tie that to the `EVENTMSGACTIONTYPE`.

Example: Given a `HOMEDESCRIPTION == 'MISS Collison 24' 3PT Jump Shot'` and an `EVENTMSGACTIONTYPE = 1`, the code will produce an output of `3PT_JUMP_SHOT = 1`.

Let's see it in action...
```
# Mapping out all of the EventMsgActionTypes for EventMsgType 2
import re
import operator

# the same expression used for EventMsgType 1 also works here
p = re.compile(r"(\s{2}|' )([\w+ ]*)")

# get the PlayByPlay data for the game pulled above
game_id = '0041900406'
plays = playbyplay.PlayByPlay(game_id).get_normalized_dict()['PlayByPlay']

# declare a few variables
description = ''
event_msg_action_types = {}

# loop over the play by play data
# do a bit of findall (regex) and a little character magic: underscores and upper case
# we're using findall here as we have to deal with the extra word MISS at the
# beginning of the text; that extra text means we may have multiple matches for our regex.
for play in plays:
    if play['EVENTMSGTYPE'] == 2:
        match = list()
        if play['HOMEDESCRIPTION'] is not None:
            match = p.findall(play['HOMEDESCRIPTION'])
        if not match:
            match = p.findall(play['VISITORDESCRIPTION'])
        event_msg_action = re.sub(' ', '_', match[0][1]).upper()
        event_msg_action_types[event_msg_action] = play['EVENTMSGACTIONTYPE']

event_msg_action_types = sorted(event_msg_action_types.items(), key=operator.itemgetter(0))
for action in event_msg_action_types:
    print(f'\t{action[0]} = {action[1]}')
```

# What About Blocks?

If you've taken a close look at the data, especially where `EVENTMSGTYPE == 2`, you may have noticed that a few of the missed field goals were due to some incredible shot-blocking players. By adding a few lines of code, we can find these shot blockers. Dealing with this data is a bit beyond the scope of this notebook, but it's worth pointing out that the data is in there. One idea is to split it out into its own play-by-play event (just a thought).
```
# Blocks are not their own event type in the feed, but are part of EVENTMSGTYPE 2
import re
import operator

print('------------------')

# the same expression used for EventMsgType 1 also works here
p = re.compile(r"(\s{2}|' )([\w+ ]*)")

# get the PlayByPlay data for the game pulled above
game_id = '0041900406'
plays = playbyplay.PlayByPlay(game_id).get_normalized_dict()['PlayByPlay']

# declare a few variables
description = ''
event_msg_action_types = {}

# loop over the play by play data
# do a bit of findall (regex) and a little character magic: underscores and upper case
# we're using findall here as we have to deal with the extra word MISS at the
# beginning of the text; that extra text means we may have multiple matches for our regex.
for play in plays:
    if play['EVENTMSGTYPE'] == 2:
        match = list()
        if play['HOMEDESCRIPTION'] is not None:
            match = p.findall(play['HOMEDESCRIPTION'])
            # looking for blocks: the block credit lands in the other team's description
            if match and play['VISITORDESCRIPTION'] is not None:
                print(play['VISITORDESCRIPTION'])
        if not match:
            match = p.findall(play['VISITORDESCRIPTION'])
            # looking for blocks
            if match and play['HOMEDESCRIPTION'] is not None:
                print(play['HOMEDESCRIPTION'])
        event_msg_action = re.sub(' ', '_', match[0][1]).upper()
        event_msg_action_types[event_msg_action] = play['EVENTMSGACTIONTYPE']

event_msg_action_types = sorted(event_msg_action_types.items(), key=operator.itemgetter(0))
print('------------------')
```
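A hedged sketch of pulling the blocker's name back out of those descriptions. The `'<player> BLOCK (n BLK)'` shape is an assumption about how the feed phrases block credits, so adjust the pattern if the feed differs:

```python
import re

# Assumed format for a block credit, e.g. "Turner BLOCK (3 BLK)":
# player name, the literal word BLOCK, then the player's running block total.
block_re = re.compile(r"([\w.' -]+?) BLOCK \((\d+) BLK\)")

def parse_block(description):
    """Return (blocker, running total) if the description credits a block, else None."""
    m = block_re.search(description)
    return (m.group(1), int(m.group(2))) if m else None

print(parse_block("Turner BLOCK (3 BLK)"))            # ('Turner', 3)
print(parse_block("MISS Collison 24' 3PT Jump Shot"))  # None
```

Feeding this the descriptions printed between the dashes above would give a per-player running block count, which is one way to "play it into its own event".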
## Request Dask Cluster for parallel processing of the data

This notebook server does not have enough cores to efficiently work with the data, so let's get a Dask cluster set up first:

```
from dask_gateway import GatewayCluster
from distributed import Client

cluster = GatewayCluster()
cluster.scale(30)
client = Client(cluster)
client
```

# Demo for AMS22

This was developed and tested on the Google Pangeo deployment (more info [here](https://pangeo.io/cloud.html#)).

```
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = 15, 8
%config InlineBackend.figure_format = 'retina'
```

## Load and clean the data

For this example we use a catalog of CMIP6 zarr files, maintained by the Pangeo Project and hosted publicly on GCS. For more info on the Pangeo CMIP6 data click [here](https://pangeo-data.github.io/pangeo-cmip6-cloud/). This example uses the custom intake-esm catalog provided, but all functions shown here can be applied to an xarray dataset directly.

```
import intake
import xarray as xr

col = intake.open_esm_datastore("https://storage.googleapis.com/cmip6/pangeo-cmip6.json")
```

We are only using a few selected models (and members) for the sake of time, but there are a lot more. Feel free to change the cell below and experiment.
```
# This function is the 'all-in-one' cleaning component of cmip6_preprocessing
from cmip6_preprocessing.preprocessing import combined_preprocessing

selected_models = [
    "IPSL-CM6A-LR",
    "ACCESS-ESM1-5",
    "GFDL-ESM4",
    "CESM2",
    "MPI-ESM1-2-LR",
    "TaiESM1",
    "CanESM5",
    "MIROC-ES2L",
    "EC-Earth3",
    "CMCC-ESM2",
]

query = dict(
    experiment_id=["historical", "ssp585"],
    source_id=selected_models,
    grid_label='gn',
)

kwargs = dict(
    zarr_kwargs={"consolidated": True, "use_cftime": True},
    preprocess=combined_preprocessing,  # This is the only modification needed
    aggregate=False,
    storage_options={'anon': True},
)

# load two dataset dictionaries: one for the surface temperature and another
# for the horizontal grid area
dset_dict = col.search(
    variable_id="tos",
    member_id=["r4i1p1f1", "r3i1p1f1", "r5i1p1f1", "r2i1p1f1", "r1i1p1f1"],
    table_id="Omon",
    **query
).to_dataset_dict(**kwargs)

metric_dict = col.search(
    variable_id="areacello",
    **query
).to_dataset_dict(**kwargs)
```

### You don't need intake-esm

You can easily open any of the zarr stores with xarray (and then apply cmip6_preprocessing tools on it in the same way).

```
zarr_store = col.df['zstore'].tolist()[0]
print(zarr_store)
ds = xr.open_zarr(zarr_store)
ds
```

## Postprocessing - Combining datasets for final analysis

Now we will add the metrics (horizontal cell area) and concatenate the members of each model into a dataset with an additional `member_id` dimension.
```
from cmip6_preprocessing.postprocessing import match_metrics, concat_members

# cut runs that are running past 2100 (can lead to dask chunking issues)
dset_dict_cut = {k: ds.sel(time=slice(None, '2100')) for k, ds in dset_dict.items()}

# combine with matching metrics
# (see https://cmip6-preprocessing.readthedocs.io/en/latest/postprocessing.html#Handling-grid-metrics-in-CMIP6)
dset_dict_w_metrics = match_metrics(dset_dict_cut, metric_dict, ['areacello'])

# concatenate members for each source_id and experiment_id
# (see https://cmip6-preprocessing.readthedocs.io/en/latest/postprocessing.html#Postprocessing)
dset_dict_combined = concat_members(dset_dict_w_metrics)

dset_dict_combined['CESM2.gn.historical.Omon']

%%time
import matplotlib.pyplot as plt

color_dict = {k: f"C{ki}" for ki, k in enumerate(selected_models)}

plt.figure()
for ni, (name, ds) in enumerate(dset_dict_combined.items()):
    # Weighted average of surface ocean temperatures
    sst = ds.tos.weighted(ds.areacello.fillna(0)).mean(['x', 'y'])
    # annual averages
    sst = sst.coarsen(time=12).mean()

    ### Plotting ###
    color = color_dict[ds.source_id]
    # plot single members
    sst.plot(
        hue='member_id',
        color=color,
        add_legend=False,
        alpha=0.25
    )
    # plot member average
    sst.mean('member_id').plot(
        linewidth=2,
        color=color,
        add_legend=False,
        label=name
    )
plt.ylabel('Global Average Sea Surface Temperature')
```

## How much data did we process just now?

```
total_size = []
for ni, (name, ds) in enumerate(dset_dict_combined.items()):
    sst_size = ds.tos.nbytes
    area_size = ds.areacello.nbytes
    total_size.append(sst_size + area_size)

print(f'We just crunched through {sum(total_size)/1e9}GB of data')
```

This means that we were able to process around 40 GB of data in a few minutes. Contrast that with the 'download and analyze' model:

> Assuming a fast internet connection with 20 MB/s throughput, downloading this data alone would have taken 30+ minutes.
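The download-time claim checks out with simple arithmetic (using the round numbers assumed above, ~40 GB of data and a 20 MB/s connection):

```python
# Back-of-the-envelope check: bytes to transfer divided by throughput
data_bytes = 40e9            # ~40 GB of SST + cell-area data
rate_bytes_per_s = 20e6      # a fast 20 MB/s connection
seconds = data_bytes / rate_bytes_per_s
print(f"{seconds:.0f} s = {seconds / 60:.1f} minutes")  # 2000 s = 33.3 minutes
```

And that is download time alone, before any local storage or compute.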
# Animating 1-qubit gates

In this notebook we will use the Quil simulator to run a few example quantum circuits. To demonstrate what these gates do, we will plot the state evolution on the Bloch sphere.

```
import subprocess
from pyquil.api import QVMConnection

# Start the quantum simulator server in a subprocess
qvm_server = subprocess.Popen(["/src/qvm/qvm", "-S"])
# Connect to the simulator
qvm = QVMConnection()

from pyquil.quil import Program
from pyquil.gates import X, MEASURE, RX, H
import pylab as pl
import numpy as np
from qutip import Bloch, basis
```

This program runs an X pulse (or a "NOT" gate) and measures it. The result is stored in classical address 0.

```
program = Program()
ro = program.declare('ro', 'BIT')
program += X(0)
program += MEASURE(0, ro[0])
print(program)

wfn = qvm.wavefunction(program)

program = Program()
ro = program.declare('ro', 'BIT')
program += H(0)
program += MEASURE(0, ro[0])
qvm.run(program, trials=10)

from matplotlib import animation, rc
from IPython.display import HTML

def plot_bloch_sphere(state, fig, ax):
    b = Bloch(fig=fig, axes=ax)
    b.add_states(state)
    b.render(fig=fig, axes=ax)

def get_quantum_state(program):
    wfn = qvm.wavefunction(program)
    state = np.dot(wfn.amplitudes, [basis(2, 0), basis(2, 1)])
    return state

from pyquil.gates import RX, RY

program = Program()
# program += RY(np.pi / 2, 0)
program += RX(-np.pi/2, 0)

fig = pl.figure(figsize=(6, 6))
ax = fig.add_subplot(111, projection='3d')
state = get_quantum_state(program)
plot_bloch_sphere(state, fig, ax)

wfn = qvm.wavefunction(program)
print(wfn)

# Functions to animate an evolving quantum state using Matplotlib
def animate(fig, ax, evolve_quantum_state):
    def _animate(i):
        ax.clear()
        program = evolve_quantum_state(i)
        state = get_quantum_state(program)
        plot_bloch_sphere(state, fig=fig, ax=ax)
        return (ax.artists[0],)
    return _animate

def show_animation(fig, ax, evolve_quantum_state, num_frames=10):
    # call the animator.
    # blit=True means only re-draw the parts that have changed
    anim = animation.FuncAnimation(fig, animate(fig, ax, evolve_quantum_state),
                                   init_func=lambda: animate(fig, ax, evolve_quantum_state)(0),
                                   frames=num_frames+1, interval=100, blit=True)
    return HTML(anim.to_jshtml())

fig = pl.figure(figsize=(6, 6))
ax = fig.add_subplot(111, projection='3d')

from pyquil.gates import RX, RY, RZ

def x(i, num_frames=10, alpha=np.pi):
    # Rotate X by alpha (default π)
    theta = i * alpha / num_frames
    program = Program()
    program += RX(theta, 0)
    return program

def y(i, num_frames=10, alpha=np.pi):
    # Rotate Y by alpha (default π)
    theta = i * alpha / num_frames
    program = Program()
    program += RY(theta, 0)
    return program

def hadamard(i, num_frames=30):
    # Rotate Y by π/2, then X by π
    n = num_frames // 3  # Number of frames per rotation
    if i < n:
        return y(i, num_frames=n, alpha=np.pi/2)
    else:
        return y(n, num_frames=n, alpha=np.pi/2) + x(i-n, num_frames=2*n, alpha=np.pi)

def z(i, num_frames=10, alpha=np.pi):
    # Rotate Z by alpha (default π)
    theta = i * alpha / num_frames
    program = Program()
    program += RZ(theta, 0)
    return program

def yzx_pi_2(i, num_frames=30):
    # Rotate Y by π/2, then Z by π/2, then X by -π/2
    n = num_frames // 3  # Number of frames per rotation
    if i <= n:
        return y(i, num_frames=n, alpha=np.pi/2)
    elif i <= 2 * n:
        return yzx_pi_2(n, num_frames) + z(i - n, num_frames=n, alpha=np.pi/2)
    else:
        return yzx_pi_2(2 * n, num_frames) + x(i - 2 * n, num_frames=n, alpha=-np.pi/2)

print(hadamard(3, 3))
print(x(1, 1, np.pi/2))

show_animation(fig, ax, lambda i: x(i, num_frames=20), num_frames=10)
show_animation(fig, ax, lambda i: y(i, num_frames=30), num_frames=30)

print(yzx_pi_2(3, 3))
show_animation(fig, ax, lambda i: yzx_pi_2(i, num_frames=60), num_frames=60)
```

## Entanglement

Let's entangle two qubits!
```
from pyquil.gates import H, CNOT

program = Program()
ro = program.declare('ro', 'BIT', memory_size=2)
program += H(0)
program += CNOT(0, 1)
program += MEASURE(0, ro[0])
program += MEASURE(1, ro[1])

single_shot_data = qvm.run(program, classical_addresses=[0, 1], trials=10)
single_shot_data
```

### Entanglement using native CZ gate

Quil's native gate set uses a CZ instead of a CNOT. One can transform a CNOT into a CZ conjugated by Hadamards on the target qubit, and thereby run the entangling operation as follows:

```
from pyquil.gates import CZ

program = Program()
ro = program.declare('ro', 'BIT', memory_size=2)
program += H(0)
program += H(1)
program += CZ(0, 1)
program += H(1)
program += MEASURE(0, ro[0])
program += MEASURE(1, ro[1])

qvm.run(program, classical_addresses=[0, 1], trials=10)
```

## Plot the parity

Evaluate the following lines to plot the results of the single-shot data for the entangling operation.

```
%matplotlib inline
import matplotlib.pyplot as plt

parity = {
    (0, 0): 0,
    (0, 1): 1,
    (1, 0): 2,
    (1, 1): 3
}

def plot_parity(single_shot_data, title="Bell test results"):
    plt.hist([parity[(u, v)] for (u, v) in single_shot_data])
    plt.xticks(list(parity.values()), parity.keys())
    plt.title(title)

plot_parity(single_shot_data)

qvm_server.terminate()
```
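The parity bucketing can be sanity-checked without matplotlib. The shots below are a hypothetical stand-in for `single_shot_data` from an ideal Bell state, where only `(0, 0)` and `(1, 1)` should ever appear:

```python
from collections import Counter

# Same mapping used by plot_parity above
parity = {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 3}

# Hypothetical shots from a perfect Bell pair: correlated outcomes only
shots = [(0, 0), (1, 1), (0, 0), (1, 1), (1, 1)]
counts = Counter(parity[s] for s in shots)
print(counts)  # Counter({3: 3, 0: 2})
```

A real (noisy) run would also show a few counts in buckets 1 and 2, i.e. anti-correlated outcomes.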
```
import os
import sys
sys.path.insert(0, os.path.abspath('../'))

from matplotlib import pyplot as plt
import matplotlib.patches as patches
import numpy as np
from glob import glob
import pydicom as dicom
import dicom_numpy
import SimpleITK as sitk
from copy import deepcopy
from ct_charachterization import run_third_algorithm_expectation_at_the_beginning, \
    run_third_algorithm_gamma_instead_of_pi, run_third_algorithm_expectation_at_the_end, run_first_algorithm
from ct_charachterization.utility.utils import expand, central_gamma_log_pdf

%matplotlib inline
```

# Selecting two different tissues

```
img = np.load('../resources/sample/img.npy')[90:97, 75:275, 50:320]

fig1, ax1 = plt.subplots(1)
ax1.imshow(img[3, :, :], cmap='gray')
plt.title("the original image")
rect1 = patches.Rectangle((0, 50), 50, 100, linewidth=1, edgecolor='r', facecolor='none')
ax1.add_patch(rect1)
rect2 = patches.Rectangle((120, 50), 50, 100, linewidth=1, edgecolor='r', facecolor='none')
ax1.add_patch(rect2)
plt.show()

tissue1 = img[:, 50:150, 0:50]
plt.imshow(tissue1[3, :, :], cmap='gray')
plt.title("tissue1")
plt.show()

tissue2 = img[:, 50:150, 120:170]
plt.imshow(tissue2[3, :, :], cmap='gray')
plt.title("tissue2")
plt.show()

mu_9 = np.array([-987, -810, -540, -370, -160, 0, 100, 240, 340])
delta = -1001  # is for air
```

# Running the first algorithm (Global non-Central Gamma)

We treat each whole crop as a single neighborhood.

```
global_theta, global_gamma = run_first_algorithm(tissue1, mu_9, delta=delta, neighborhood_size=0,
                                                 max_iter=10, tol=-1, non_central=True)
global_alpha = global_theta[1, ...]
global_beta = global_theta[2, ...]

for j in range(len(mu_9)):
    wanted_alpha = global_alpha[j, ...]
    wanted_beta = global_beta[j, ...]
    xs = np.arange(delta + 1, 500, 1) - delta
    ys = 25000 * np.exp(central_gamma_log_pdf(xs, wanted_alpha, wanted_beta).ravel())
    plt.plot(xs + delta, ys, '-')

flat1 = tissue1.flatten()
plt.hist(flat1, bins=list(np.arange(-1030, 500, 1)), label='original')
plt.legend(loc='upper right')
plt.title("tissue1")
plt.show()
```

On each iteration, we can see the min, mean, and max values over the whole matrices of probabilities - the prior probability (PI) and the posterior probability (GAMMA). After resolving numerical issues, we do not see any zeros (also, the mean value should always be 0.111..., because with 9 components the mean prior is 1/9).

```
global_theta, global_gamma = run_first_algorithm(tissue2, mu_9, delta=delta, neighborhood_size=0,
                                                 max_iter=10, tol=-1, non_central=True)
global_alpha = global_theta[1, ...]
global_beta = global_theta[2, ...]

for j in range(len(mu_9)):
    wanted_alpha = global_alpha[j, ...]
    wanted_beta = global_beta[j, ...]
    xs = np.arange(delta + 1, 500, 1) - delta
    ys = 25000 * np.exp(central_gamma_log_pdf(xs, wanted_alpha, wanted_beta).ravel())
    plt.plot(xs + delta, ys, '-')

flat2 = tissue2.flatten()
plt.hist(flat2, bins=list(np.arange(-1000, 500, 1)), label='original')
plt.legend(loc='upper right')
plt.title("tissue2")
plt.show()
```

# This is cool

At least we know that the algorithm works fine and can model the distribution on different types of tissue. But! There is a problem with the second tissue: the blue and orange lines act kinda weird. Maybe it is because they are too close to the shifted zero (we have shifted x=0 to x=-1030 using delta, and the means of the first two components, -987 and -810, are very close to this value). Looking at the plots below gives an idea of what I am trying to say:

![](../resources/figs/gamma_exp1.png)
![](../resources/figs/gamma_exp2.png)
![](../resources/figs/gamma_exp3.png)

# So it is kinda confusing!

Most of tissue2's voxels are between -200 and 200.
So why does the pdf of the blue line act like that? Hmm, maybe it is because its integral from zero to +inf must be 1 and its mean is too close to delta. Maybe we can improve the situation by putting more distance between delta and the mean value of the first component. I am going to set delta = -10,000.

```
delta = -10000

global_theta, global_gamma = run_first_algorithm(tissue1, mu_9, delta=delta, neighborhood_size=0,
                                                 max_iter=10, tol=-1, non_central=True)
global_alpha = global_theta[1, ...]
global_beta = global_theta[2, ...]

for j in range(len(mu_9)):
    wanted_alpha = global_alpha[j, ...]
    wanted_beta = global_beta[j, ...]
    xs = np.arange(-1200, 500, 1) - delta
    ys = 25000 * np.exp(central_gamma_log_pdf(xs, wanted_alpha, wanted_beta).ravel())
    plt.plot(xs + delta, ys, '-')

flat1 = tissue1.flatten()
plt.hist(flat1, bins=list(np.arange(-1000, 500, 1)), label='original')
plt.legend(loc='upper right')
plt.title("tissue1")
plt.show()

global_theta, global_gamma = run_first_algorithm(tissue2, mu_9, delta=delta, neighborhood_size=0,
                                                 max_iter=10, tol=-1, non_central=True)
global_alpha = global_theta[1, ...]
global_beta = global_theta[2, ...]

for j in range(len(mu_9)):
    wanted_alpha = global_alpha[j, ...]
    wanted_beta = global_beta[j, ...]
    xs = np.arange(-1200, 500, 1) - delta
    ys = 25000 * np.exp(central_gamma_log_pdf(xs, wanted_alpha, wanted_beta).ravel())
    plt.plot(xs + delta, ys, '-')

flat2 = tissue2.flatten()
plt.hist(flat2, bins=list(np.arange(-1030, 500, 1)), label='original')
plt.legend(loc='upper right')
plt.title("tissue2")
plt.show()
```

# Better!

So the delta value really matters! The fit is better when we choose a large negative delta.

# Question 1 I Asked Brian

Do you think it is OK to do this? I mean, the result fits the distribution better, but I am just acting like a brute-force algorithm and trying all the possible ways.
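To make the question concrete, here is a standalone sketch of the gamma log-density (assuming the standard shape/scale parameterization; the library's `central_gamma_log_pdf` may differ in details). With the shape held fixed, the component whose shifted mean sits close to zero is forced into a much sharper peak than one whose mean is far from delta:

```python
import math

def gamma_log_pdf(x, alpha, beta):
    # Standard shape/scale gamma log-density:
    # log p(x) = (alpha - 1) log x - x / beta - alpha log beta - log Gamma(alpha)
    return (alpha - 1) * math.log(x) - x / beta - alpha * math.log(beta) - math.lgamma(alpha)

# Sanity check: alpha = beta = 1 is Exp(1), so log p(1) = -1
assert abs(gamma_log_pdf(1.0, 1.0, 1.0) + 1.0) < 1e-12

# Same shape, two different shifted means: 43 (mu = -987 with delta = -1030)
# versus 9013 (mu = -987 with delta = -10000)
peak_near = max(gamma_log_pdf(x, 4.0, 43 / 4.0) for x in range(1, 200))
peak_far = max(gamma_log_pdf(x, 4.0, 9013 / 4.0) for x in range(1, 20000))
assert peak_near > peak_far  # the near-delta fit is much more sharply peaked
```

For a fixed shape the peak density scales like 1/scale, which is why pushing delta far from the first component's mean smooths the fitted curve.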
## Brian's Answer:

![](../resources/figs/q1_b.png)

So, let's try. The minimum HU value is -1000:

```
b_img = np.load('../resources/sample/img.npy')

fig1, ax1 = plt.subplots(1)
ax1.imshow(b_img[93, :, :], cmap='gray')
plt.title("the big original image")

finding_air = b_img[93, 0:50, 0:50]
plt.imshow(finding_air[:, :], cmap='gray')
plt.title("air region")
plt.show()

min_val = np.min(finding_air)
max_val = np.max(finding_air)
print(min_val, max_val)

flat1 = finding_air.flatten()
plt.hist(flat1, bins=list(np.arange(-1030, 500, 1)), label='original')
plt.legend(loc='upper right')
plt.title("finding_air")
plt.show()
```

# Air = -1000

OK, now we know that the HU value for air is -1000.

```
# Shifting mu_9 values and fixing delta near the smallest mu
mu_9 = np.array([-987, -810, -540, -370, -160, 0, 100, 240, 340]) + (-1000 + 987)
delta = -1001

global_theta, global_gamma = run_first_algorithm(tissue1, mu_9, delta=delta, neighborhood_size=0,
                                                 max_iter=10, tol=-1, non_central=True)
global_alpha = global_theta[1, ...]
global_beta = global_theta[2, ...]

for j in range(len(mu_9)):
    wanted_alpha = global_alpha[j, ...]
    wanted_beta = global_beta[j, ...]
    xs = np.arange(delta + 1, 500, 1) - delta
    ys = 25000 * np.exp(central_gamma_log_pdf(xs, wanted_alpha, wanted_beta).ravel())
    plt.plot(xs + delta, ys, '-')

flat1 = tissue1.flatten()
plt.hist(flat1, bins=list(np.arange(delta, 500, 1)), label='original')
plt.legend(loc='upper right')
plt.title("tissue1")
plt.show()
```

It seems that the first mu value has a big rise at its first values. We can start plotting the diagram from a different place, like delta + 10.

```
for j in range(len(mu_9)):
    wanted_alpha = global_alpha[j, ...]
    wanted_beta = global_beta[j, ...]
    xs = np.arange(delta + 10, 500, 1) - delta
    ys = 25000 * np.exp(central_gamma_log_pdf(xs, wanted_alpha, wanted_beta).ravel())
    plt.plot(xs + delta, ys, '-')

flat1 = tissue1.flatten()
plt.hist(flat1, bins=list(np.arange(delta, 500, 1)), label='original')
plt.legend(loc='upper right')
plt.title("tissue1")
plt.show()
```

Better, but I still think the first mu can cause some problems. OK, let's go on.

```
global_theta, global_gamma = run_first_algorithm(tissue2, mu_9, delta=delta, neighborhood_size=0,
                                                 max_iter=10, tol=-1, non_central=True)
global_alpha = global_theta[1, ...]
global_beta = global_theta[2, ...]

for j in range(len(mu_9)):
    wanted_alpha = global_alpha[j, ...]
    wanted_beta = global_beta[j, ...]
    xs = np.arange(delta + 1, 500, 1) - delta
    ys = 25000 * np.exp(central_gamma_log_pdf(xs, wanted_alpha, wanted_beta).ravel())
    plt.plot(xs + delta, ys, '-')

flat2 = tissue2.flatten()
plt.hist(flat2, bins=list(np.arange(delta, 500, 1)), label='original')
plt.legend(loc='upper right')
plt.title("tissue2")
plt.show()

for j in range(len(mu_9)):
    wanted_alpha = global_alpha[j, ...]
    wanted_beta = global_beta[j, ...]
    xs = np.arange(delta + 10, 500, 1) - delta
    ys = 25000 * np.exp(central_gamma_log_pdf(xs, wanted_alpha, wanted_beta).ravel())
    plt.plot(xs + delta, ys, '-')

flat1 = tissue2.flatten()
plt.hist(flat1, bins=list(np.arange(delta, 500, 1)), label='original')
plt.legend(loc='upper right')
plt.title("tissue2")
plt.show()
```

## OK, but the first two components can still be problematic

### I am going to revert the mu_9 and delta values, then compare the results in both cases:

```
mu_9 = np.array([-987, -810, -540, -370, -160, 0, 100, 240, 340])
delta = -1030
```

# Stabilization

Here are some minor improvements:

1. I am using the 3D version
2.
the neighborhood shape is 7\*7\*7 Running the algorithm for tissue1, using the approach below to deal with the paper's incompleteness: ![](../resources/figs/beg.png) ``` resul = run_third_algorithm_expectation_at_the_beginning(y=tissue1, mu=mu_9, neighborhood_size=7, delta=delta, max_iter=10, tol=-1, constant_c=2, non_central=True) plt.imshow(resul[0, :, :], cmap='gray') plt.show() ``` P.S.1: The first 10 iterations are for the global non-central gamma algorithm and the second 10 iterations are for the local non-central gamma algorithm. P.S.2: As you can see, after iteration 7, some values become zero. But if you look at the previous iteration, the min value is `6.346167187880914e-162`, so I think it is reasonable for them to become zero. ``` origi = tissue1[3:-3, 3:-3] flat_resul = resul.flatten() + delta flat_origi = origi.flatten() ax = plt.subplot(1, 1, 1) bins = list(np.arange(-1100, 500, 1)) ax.hist(flat_resul, bins=bins, alpha=0.7, label='result') ax.hist(flat_origi, bins=bins, alpha=0.7, label='original') plt.legend(loc='upper right') plt.title("histogram") plt.show() ``` # Trying with new mu_9 and delta values ``` # Shifting mu_9 values and fixing delta near the smallest mu mu_9 = np.array([-987, -810, -540, -370, -160, 0, 100, 240, 340]) + (-1000 + 987) delta = -1001 resul = run_third_algorithm_expectation_at_the_beginning(y=tissue1, mu=mu_9, neighborhood_size=7, delta=delta, max_iter=10, tol=-1, constant_c=2, non_central=True) plt.imshow(resul[0, :, :], cmap='gray') plt.show() origi = tissue1[3:-3, 3:-3] flat_resul = resul.flatten() + delta flat_origi = origi.flatten() ax = plt.subplot(1, 1, 1) bins = list(np.arange(-1100, 500, 1)) ax.hist(flat_resul, bins=bins, alpha=0.7, label='result') ax.hist(flat_origi, bins=bins, alpha=0.7, label='original') plt.legend(loc='upper right') plt.title("histogram") plt.show() ``` # It seems there is no big improvement # Now, trying the second approach: ![](../resources/figs/end.png) ``` mu_9 = np.array([-987, -810, -540, -370, -160, 0, 100, 240,
340]) delta = -1030 resul = run_third_algorithm_expectation_at_the_end(y=tissue1, mu=mu_9, neighborhood_size=6, delta=delta, max_iter=10, tol=-1, constant_c=2, non_central=True) plt.imshow(resul[0, :, :], cmap='gray') plt.show() origi = tissue1[3:-3, 3:-3] flat_resul = resul.flatten() + delta flat_origi = origi.flatten() ax = plt.subplot(1, 1, 1) bins = list(np.arange(-1100, 500, 1)) ax.hist(flat_resul, bins=bins, alpha=0.7, label='result') ax.hist(flat_origi, bins=bins, alpha=0.7, label='original') plt.legend(loc='upper right') plt.title("histogram") plt.show() ``` # Trying with new mu_9 and delta values ``` # Shifting mu_9 values and fixing delta near the smallest mu mu_9 = np.array([-987, -810, -540, -370, -160, 0, 100, 240, 340]) + (-1000 + 987) delta = -1001 resul = run_third_algorithm_expectation_at_the_end(y=tissue1, mu=mu_9, neighborhood_size=6, delta=delta, max_iter=10, tol=-1, constant_c=2, non_central=True) plt.imshow(resul[0, :, :], cmap='gray') plt.show() origi = tissue1[3:-3, 3:-3] flat_resul = resul.flatten() + delta flat_origi = origi.flatten() ax = plt.subplot(1, 1, 1) bins = list(np.arange(-1100, 500, 1)) ax.hist(flat_resul, bins=bins, alpha=0.7, label='result') ax.hist(flat_origi, bins=bins, alpha=0.7, label='original') plt.legend(loc='upper right') plt.title("histogram") plt.show() ``` # Again...
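As an aside, the plotting cells above all rely on `central_gamma_log_pdf`. The project's own definition is not shown here, but a minimal sketch of such a helper — assuming a two-parameter gamma density with shape `alpha` and rate `beta`, which is my assumption about the parameterization — could look like:

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import gamma

def central_gamma_log_pdf(x, alpha, beta):
    # Log of the gamma density with shape alpha and *rate* beta:
    # alpha*log(beta) - log(Gamma(alpha)) + (alpha - 1)*log(x) - beta*x
    x = np.asarray(x, dtype=float)
    return alpha * np.log(beta) - gammaln(alpha) + (alpha - 1) * np.log(x) - beta * x

# Sanity check against scipy's gamma distribution (which uses scale = 1/rate):
xs = np.array([0.5, 1.0, 5.0, 50.0])
print(np.allclose(central_gamma_log_pdf(xs, 2.0, 0.1),
                  gamma.logpdf(xs, a=2.0, scale=10.0)))  # True
```

In the notebook the inputs are shifted by `delta` before being passed in, so the support of the density starts at zero.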
# Now, the approach that was working with no real explanation (I just found it by brute force): ![](../resources/figs/gam_pi.png) ``` mu_9 = np.array([-987, -810, -540, -370, -160, 0, 100, 240, 340]) delta = -1030 resul = run_third_algorithm_gamma_instead_of_pi(y=tissue1, mu=mu_9, neighborhood_size=6, delta=delta, max_iter=10, tol=-1, constant_c=2, non_central=True) plt.imshow(resul[0, :, :], cmap='gray') plt.show() origi = tissue1[3:-3, 3:-3] flat_resul = resul.flatten() + delta flat_origi = origi.flatten() ax = plt.subplot(1, 1, 1) bins = list(np.arange(-1100, 500, 1)) ax.hist(flat_resul, bins=bins, alpha=0.7, label='result') ax.hist(flat_origi, bins=bins, alpha=0.7, label='original') plt.legend(loc='upper right') plt.title("histogram") plt.show() ``` # Trying with new mu_9 and delta values ``` # Shifting mu_9 values and fixing delta near the smallest mu mu_9 = np.array([-987, -810, -540, -370, -160, 0, 100, 240, 340]) + (-1000 + 987) delta = -1001 resul = run_third_algorithm_gamma_instead_of_pi(y=tissue1, mu=mu_9, neighborhood_size=6, delta=delta, max_iter=10, tol=-1, constant_c=2, non_central=True) plt.imshow(resul[0, :, :], cmap='gray') plt.show() origi = tissue1[3:-3, 3:-3] flat_resul = resul.flatten() + delta flat_origi = origi.flatten() ax = plt.subplot(1, 1, 1) bins = list(np.arange(-1100, 500, 1)) ax.hist(flat_resul, bins=bins, alpha=0.7, label='result') ax.hist(flat_origi, bins=bins, alpha=0.7, label='original') plt.legend(loc='upper right') plt.title("histogram") plt.show() ``` # About using JAX JAX has some limitations: we cannot use SciPy with it directly, and JAX's own SciPy port (`jax.scipy`) is still a work in progress.
I tried to implement it, but found that I would have to get my hands dirty with the whole JAX project (I was going to learn it in a couple of days, but now I think it will take longer). ![](../resources/figs/jax_issue.png) # I feel stuck At this point I feel a little stuck, and I do not know what the next steps should be to make one of the first two stabilization algorithms work (the third one works, for no apparent reason). Do you have any ideas?
# Notebook 3: Linear Regression (Diabetes) ## Learning Goal The goal of this notebook is to get hands-on experience and intuition about linear regression and regularization. We once again emphasize the difference between fitting and predicting. We will see that it is much more difficult to get good out-of-sample performance on a test set (predicting) than it is to get good in-sample performance on the training set (fitting). ## Overview: In Notebook 1: __Section II: Machine Learning is difficult__, we explored linear regression in the context of a prediction problem. In this notebook, we'll formally introduce the notion of regression and see how learning and prediction can be improved by introducing regularization. We will focus mainly on simple applications of linear regression: minimizing the mean-square-error (MSE) on the training data (i.e. in-sample error) and seeing how well we perform on the test data (i.e. out-of-sample error). As we discussed in Sec. II of the review, there is a fundamental difference between minimizing the in-sample error and minimizing the out-of-sample error. The underlying reason for this is that the training data may not be representative of the full data distribution. From a Bayesian point of view, as [David MacKay](http://www.inference.org.uk/mackay/) likes to repeat: <i>We can't make predictions without making assumptions.</i> Thus, it is sensible to introduce priors that reflect the fact that we are likely to be undersampled (especially in high dimensions). We'll consider the ordinary least squares regression problem, in which the "error function" is defined as the squared deviation of our linear predictor from the true response. We will supplement this error function with a regularizer that prevents overfitting. From a Bayesian point of view, the regularization can be thought of as a prior on the parameters; see Sec. VI.
Minimizing the combined in-sample error + regularization terms is the same as the <b> Maximum a posteriori probability (MAP)</b> estimate in Bayesian regression (the parameters at which the posterior probability distribution is peaked). Note that in a true Bayesian approach, we should not use the mode of the posterior but the average over all possible choices of parameters weighted by their posterior probability. In practice, this is often not done (for computational and practical reasons). ## Least squares linear regression: Consider data of the form $(y_i,\mathbf{x}^{(i)})$ where the index $i=1\ldots n$ runs over the number of examples in the training data and $\mathbf{x}^{(i)}$ is a $p$-dimensional feature (row) vector. For notational convenience, it is useful to define the $n \times p$ <b>design matrix</b> $X$ whose rows, $\textbf{x}^{(1)},\cdots, \textbf{x}^{(n)}$, are the examples and columns, $\mathbf{X}_{:,1},\cdots, \mathbf{X}_{:,p}$, are the measured "features" (i.e. feature predictors). We also denote the $n$-dimensional column vector of responses as $\mathbf{y}$ and the $p$-dimensional column vector of regression parameters as $\mathbf{w}\in\mathbb{R}^p$. For ordinary least squares regression (no regularization), we minimize the squared loss cost function: $$ \underset{\textbf{w}\in\mathbb{R}^p}{\operatorname{min}} ||\textbf{Xw}-\textbf{y}||_2^2 = \underset{\textbf{w}\in\mathbb{R}^p}{\operatorname{min}} \,(\mathbf{Xw}-\mathbf{y})^T(\mathbf{Xw}-\mathbf{y}), $$ or equivalently, in component form, $$ \underset{\textbf{w}\in\mathbb{R}^p}{\operatorname{min}} \sum_{i=1}^n (y_i -\mathbf{w}\cdot\mathbf{x}^{(i)})^2.
$$ If rank$(\mathbf{X})=p$, namely, the feature predictors $\mathbf{X}_{:,1},\cdots \mathbf{X}_{:,p}$ are linearly independent, then there exists a unique solution to this problem: $$ \hat{\textbf{w}}= (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T \textbf{y} $$ ### Exercise 1: ### <ul> <li> This choice of parameters corresponds to the maximum likelihood estimate of which likelihood function? <li> Derive $\hat{\textbf{w}}$ explicitly by solving the least squares problem defined above. <li> Is $\hat{\textbf{w}}$ a biased or an unbiased estimator? In other words, does it give the correct answer as the number of data points goes to infinity ($n \rightarrow \infty$)? To answer this question, you may assume i.i.d. (independent, identically distributed) samples $(y_i,\textbf{x}^{(i)})$. <li> Is $\hat{\textbf{w}}$ still well-defined when rank$(\mathbf{X})<p$? This happens when, for example, $n<p$. <li> Now imagine the samples are generated in the following manner: $y_i=\textbf{w}_\text{true}\cdot \textbf{x}^{(i)}+\epsilon_i$ where $\epsilon_i\sim\mathcal{N}(0,\sigma^2)$ are i.i.d. Gaussian errors. In statistics, the in-sample risk is defined as $$ R(\hat{\textbf{w}}, \textbf{w}_\text{true})=\frac{1}{n}\mathbb{E}[(\mathbf{X}\hat{\textbf{w}}-\mathbf{X}{\textbf{w}_\text{true}})^2], $$ where $\mathbb{E}[\cdots]$ is taken over all i.i.d. pairs $(y_i,\textbf{x}^{(i)})$ and $\hat{\textbf{w}}$ is the least squares solution given above. Assuming that $\mathbf{X}$ and $\epsilon_i$ are independent, show that the risk is given by $$ R(\hat{\textbf{w}}, \textbf{w}_\text{true}) = \sigma^2\frac{p}{n} $$ What's the implication of this for fixed $p$ as $n \rightarrow \infty$? How about when $p,n$ scale together? </ul> From Exercise 1, it is clear that the uniqueness of the solution is only guaranteed when rank$(\mathbf{X})=p$. But even so, we still may not want to use least squares if $p$ is moderately close to $n$, because its "risk" could be quite poor.
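To make the last point of Exercise 1 concrete, here is a small simulation sketch (variable names are my own, not from the notebook) that estimates the in-sample risk of the least squares estimator and compares it with the $\sigma^2 p/n$ formula:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 500, 10, 1.0
w_true = rng.normal(size=p)

def empirical_insample_risk(n_trials=200):
    # Average (1/n)||X w_hat - X w_true||^2 over many simulated datasets.
    risks = []
    for _ in range(n_trials):
        X = rng.normal(size=(n, p))
        y = X @ w_true + sigma * rng.normal(size=n)
        w_hat = np.linalg.lstsq(X, y, rcond=None)[0]
        risks.append(np.mean((X @ (w_hat - w_true)) ** 2))
    return np.mean(risks)

print(empirical_insample_risk(), sigma**2 * p / n)  # both close to 0.02
```

Doubling $p$ (or halving $n$) should roughly double the empirical estimate, matching $R = \sigma^2 p/n$.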
One way to deal with this is to <i> regularize</i>. We will be concerned with two classes of regularizers: <b> L2-regularization</b> which is often called <b> Ridge-Regression</b> (or <b>Tikhonov regression</b>) and <b> L1-regularization</b> which goes under the name <b>LASSO</b> (and is closely related to <b>Compressed Sensing</b>). ## Ridge Regression In Ridge-Regression, the regularization penalty is taken to be the L2-norm of the parameters $$ E_{ridge}= \lambda ||\textbf{w}||_2^2 = \lambda \textbf{w}^T \textbf{w}=\lambda \sum_{\gamma=1}^p w_\gamma w_\gamma. $$ Thus, the model is fit by minimizing the sum of the in-sample error and the regularization term $$ \mathbf{w}_{ridge}(\lambda)= \underset{\textbf{w}\in\mathbb{R}^p}{\operatorname{argmin}} ||\mathbf{X}\textbf{w}-\textbf{y}||_2^2 + \lambda ||\textbf{w}||_2^2. $$ Notice that the parameter $\lambda$ controls how much we weigh the fit and regularization term. ### Exercise 2: ### <ul> <li>What choice of prior does this correspond to if we are performing a MAP estimate? <li>Show that the solution to Ridge regression is given by $\mathbf{w}_{ridge}= (\mathbf{X}^T\mathbf{X}+\lambda I)^{-1}\mathbf{X}^T \textbf{y}$. <li>Express your answer in terms of the Singular Value Decomposition of $\mathbf{X}$. </ul> ## LASSO ## We will also be interested in the case where the penalty is the L1-norm of the parameters (sum of absolute values of parameters). This is called LASSO. $$ E_{LASSO}= \lambda ||\mathbf{w}||_1 = \lambda \sum_{\gamma=1}^p |w_\gamma| . $$ In this case, $$ \textbf{w}_{LASSO}(\lambda)= \underset{\textbf{w}\in\mathbb{R}^p}{\operatorname{argmin}} {1 \over 2n} ||\mathbf{Xw}-\mathbf{y}||_2^2 + \lambda ||\mathbf{w}||_1. $$ Note the prefactor $1/(2n)$ in the loss function is not essential to this formulation. We have chosen this form to be consistent with the Scikit-Learn package in Python. As we discussed in the main text, LASSO tends to give sparse solution. 
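As a sketch of what Exercise 2 is after (again, names are mine), the Ridge solution can be computed directly from its closed form and checked against the SVD expression, in which each singular direction of $\mathbf{X}$ is shrunk by $s_i/(s_i^2+\lambda)$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 50, 5, 0.7
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)

def ridge_closed_form(X, y, lam):
    # w_ridge = (X^T X + lam * I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

w_ridge = ridge_closed_form(X, y, lam)

# Same solution via the SVD X = U S V^T: shrink each direction by s/(s^2 + lam).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
w_svd = Vt.T @ ((s / (s**2 + lam)) * (U.T @ y))
print(np.allclose(w_ridge, w_svd))  # True
```

At $\lambda = 0$ this reduces to the ordinary least squares solution, while large $\lambda$ shrinks all weights toward zero.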
In the following we're going to explore these ideas a little bit more. ### Exercise 3: ### <ul> <li>What choice of prior does this correspond to if we are performing a MAP estimate? <li>In this case, can you derive an analytic expression for $\mathbf{w}_{LASSO}$? Do you have any ideas about how we might be able to efficiently numerically calculate this? <li> Do you think LASSO and Ridge Regression will give qualitatively different answers? (Consider the limits $\lambda=0$ and $\lambda = \infty$) </ul> ## Numerical Experiments with Ridge Regression and LASSO## We will now perform some numerical experiments with the Diabetes Dataset trying to predict diabetes outcomes one year forward. More information about this data set can be found at <a href="https://archive.ics.uci.edu/ml/datasets/Diabetes">https://archive.ics.uci.edu/ml/datasets/Diabetes</a>. This dataset was described in the famous <a href="http://web.stanford.edu/~hastie/Papers/LARS/LeastAngle_2002.pdf">Least Angle Regression</a> paper by Efron, Hastie, Johnstone, Tibshirani as follows: <blockquote>Ten baseline variables, age, sex, body mass index, average blood pressure, and six blood serum measurements were obtained for each of $n = 442$ diabetes patients, as well as the response of interest, a quantitative measure of disease progression one year after baseline.</blockquote> We start by plotting the weights for each value of $\lambda$ for Ridge Regression and LASSO. This is called a regularization path. We also compare the in-sample and out-of-sample performance between two regressions by examining the $R^2$ coefficient of determination (for detailed definition see <a href="https://en.wikipedia.org/wiki/Coefficient_of_determination">here</a>). In terms of linear regression, $R^2$ tells us how well the regression function fits the data. The best attainable fit corresponds to $R^2=1$. 
``` from __future__ import print_function print(__doc__) %matplotlib inline # This code is modified from plot_cv_diabetes.py in the scikit-learn documentation # and plot_ridge_path.py import numpy as np import matplotlib.pyplot as plt #import seaborn from sklearn import datasets, linear_model # Load Training Data set with 200 examples number_examples=200 diabetes = datasets.load_diabetes() X = diabetes.data[:number_examples] y = diabetes.target[:number_examples] # Set up Lasso and Ridge Regression models ridge=linear_model.Ridge() lasso = linear_model.Lasso() # Choose regularization paths alphas = np.logspace(-2, 2, 10) # To see how well we learn, we partition the dataset into a training set with 100 # examples as well as a test set with 100 examples. We record their errors respectively. n_samples_train = 100 X_train, X_test = X[:n_samples_train], X[n_samples_train:] y_train, y_test = y[:n_samples_train], y[n_samples_train:] train_errors_ridge = list() test_errors_ridge = list() train_errors_lasso = list() test_errors_lasso = list() # Initialize coefficients for ridge regression and Lasso coefs_ridge = [] coefs_lasso=[] for a in alphas: ridge.set_params(alpha=a) ridge.fit(X_train, y_train) coefs_ridge.append(ridge.coef_) # Use the coefficient of determination R^2 as the performance of prediction.
train_errors_ridge.append(ridge.score(X_train, y_train)) test_errors_ridge.append(ridge.score(X_test, y_test)) lasso.set_params(alpha=a) lasso.fit(X_train, y_train) coefs_lasso.append(lasso.coef_) train_errors_lasso.append(lasso.score(X_train, y_train)) test_errors_lasso.append(lasso.score(X_test, y_test)) ############################################################################### # Display results # First see how the 10 features we learned scale as we change the regularization parameter plt.subplot(1,2,1) plt.semilogx(alphas, np.abs(coefs_ridge)) axes = plt.gca() #ax.set_xscale('log') #ax.set_xlim(ax.get_xlim()[::-1]) # reverse axis plt.xlabel(r'$\lambda$',fontsize=18) plt.ylabel('$|w_i|$',fontsize=18) plt.title('Ridge') #plt.savefig("Ridge_sparsity_scale.pdf.pdf") plt.subplot(1,2,2) plt.semilogx(alphas, np.abs(coefs_lasso)) axes = plt.gca() #ax.set_xscale('log') #ax.set_xlim(ax.get_xlim()[::-1]) # reverse axis plt.xlabel(r'$\lambda$',fontsize=18) #plt.ylabel('$|\mathbf{w}|$',fontsize=18) plt.title('LASSO') #plt.savefig("LASSO_sparsity_scale.pdf") plt.show() # Plot our performance on both the training and test data plt.semilogx(alphas, train_errors_ridge, 'b',label='Train (Ridge)') plt.semilogx(alphas, test_errors_ridge, '--b',label='Test (Ridge)') plt.semilogx(alphas, train_errors_lasso, 'g',label='Train (LASSO)') plt.semilogx(alphas, test_errors_lasso, '--g',label='Test (LASSO)') #plt.vlines(alpha_optim, plt.ylim()[0], np.max(test_errors), color='k', # linewidth=3, label='Optimum on test') plt.legend(loc='upper right') plt.ylim([-0.01, 1.0]) plt.xlabel(r'$\lambda$',fontsize=18) plt.ylabel('Performance') #plt.savefig("Ridge_LASSO_sparsity_performance.pdf") plt.show() ``` ### Exercise 4: ### <ul> <li>What do the points $\lambda=0$ and $\lambda=10^5$ correspond to? Is it strange that the weights are not monotonic in $\alpha$? Why do you think this might be? <li>Make a similar regularization plot for LASSO? 
<li> What is the qualitative difference between the LASSO path and the Ridge path? Does this agree with your earlier predictions? Can you make some qualitative argument to rationalize this difference? <li>How do your answers change when you vary the number of examples and training set size? </ul> ## A brief note about convexity## In Sec. VI of the review, we briefly discussed convexity. Here's a quick refresher: Recall that a set $C\subseteq\mathbb{R}^n$ is called <i> convex </i> if for any $x,y\in C$ and $t\in [0,1]$, $$ tx+(1-t)y \in C. $$ In other words, every line segment joining $x$ and $y$ lies entirely in $C$. A function $f:\mathbb{R}^n\rightarrow \mathbb{R}$ is called <i> convex </i> if its domain dom$(f)$ is a convex set and for any $x,y\in$dom$(f)$ and $t\in [0,1]$, $$ f(tx+(1-t)y)\le tf(x)+(1-t)f(y). $$ In other words, the function lies below the line segment joining $f(x)$ and $f(y)$. The function $f$ is called <b> strictly convex </b> if this inequality holds strictly for $x\neq y$ and $t\in(0,1)$. Why is convexity important? <b> For convex functions, any local minimizer is a global minimizer</b>. Algorithmically, this means that in the minimization (optimization) procedure, as long as we're "going down the hill" and agree to stop when we can't go any further, then we've hit the global minimum. In addition to this, there's a menagerie of beautiful theory regarding convex duality and optimality, which gives us a way of understanding the solutions even before solving the problem itself. We refer interested readers to the <a href="http://web.stanford.edu/~boyd/cvxbook/">Boyd and Vandenberghe book on Convex Optimization</a>. Coming back to our regularization examples, a simple inspection reveals that both LASSO and Ridge regression are convex in $w$. What's more, Ridge is actually a <i> strictly convex </i> problem (assuming $\lambda>0$) due to the presence of the L2 penalty.
In fact, this is always true regardless of $X$, and so the ridge regression solution you worked out (presumably) in Exercise 2 is always well-defined. In contrast, LASSO is not always strictly convex, and hence, by convexity theory, it need not have a unique solution. The LASSO solution is unique under general conditions, for example, when $X$ has columns in <i> general position </i> (see <a href="https://arxiv.org/abs/1206.0313"> Tibshirani 2013</a>). To mitigate this, one can define a modified problem called the <a href="https://web.stanford.edu/~hastie/Papers/B67.2%20(2005)%20301-320%20Zou%20&%20Hastie.pdf">elastic net</a> such that the function we want to minimize is always strictly convex: $$ \underset{\mathbf{w}\in\mathbb{R}^p}{\operatorname{min}} ||\mathbf{Xw}-\mathbf{y}||_2^2 + \lambda ||\mathbf{w}||_1 + \delta||\mathbf{w}||_2^2, $$ where $\lambda,\delta\ge 0$ are regularization parameters. Now, aside from uniqueness of the solution, the elastic net combines some of the desirable properties (e.g. prediction) of ridge regression with the sparsity properties of the LASSO. In the following exercise, you're going to explore the elastic net a little. ### Exercise 5: ### <ul> <li> Play with the parameters $\lambda$ and $\delta$; when would you expect sparse solutions? <li> Plot the regularization path of the elastic net. How does it depend on $\lambda$ and $\delta$? <li> Derive the analytic solution of this elastic net problem. Check your answer by looking at two limiting cases ($\lambda\rightarrow 0$ and $\delta\rightarrow 0$). Does this agree with what you found previously? </ul> ### End-of-notebook questions ### <ul> <li> Can you explain the difference between in-sample and out-of-sample performance? Is out-of-sample error usually larger than in-sample error? Does this depend on regularization?
Recall in Exercise 1, we defined the in-sample risk as $$ R_{in}(\hat{\textbf{w}}, \textbf{w}_\text{true})=\frac{1}{n}\mathbb{E}[(\mathbf{X}\hat{\textbf{w}}-\mathbf{X}{\textbf{w}_\text{true}})^2], $$ where $\hat{\textbf{w}}= (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T \textbf{y}$ is the least squares solution and $\textbf{w}_\text{true}$ is the true parameter vector used to generate all samples. Following the same notation and assumptions, now imagine we're given a new data point $\textbf{x}_0$ independently drawn from the predictor distribution $\mathcal{P}$. We can define the out-of-sample risk as $$ R_{out} =\mathbb{E}_{\textbf{x}_0\sim\mathcal{P}}\mathbb{E}_{(y_i,\textbf{x}_i)}[(\hat{\textbf{w}}\cdot \textbf{x}_0 -\textbf{w}_\text{true}\cdot \textbf{x}_0)^2], $$ with the expectation value taken not only over the training samples $(y_i,\textbf{x}_i)$ but also over the predictor distribution $\mathcal{P}$ that generates the unseen sample $\textbf{x}_0$. One can actually show that $R_{out}\ge R_{in}$ under mild assumptions. This makes sense intuitively, since it's usually harder to make predictions on unseen samples than to fit the samples given. You can numerically verify this by assuming a predictor distribution, say, $\mathcal{N}(0,\Sigma)$. </ul>
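Following that suggestion, here is a minimal simulation sketch (my own setup: standard Gaussian predictors and i.i.d. Gaussian noise) that estimates both risks and shows the out-of-sample one is larger:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, sigma, n_trials = 40, 10, 1.0, 500
w_true = rng.normal(size=p)

r_in, r_out = [], []
for _ in range(n_trials):
    X = rng.normal(size=(n, p))                  # training predictors ~ N(0, I)
    y = X @ w_true + sigma * rng.normal(size=n)
    w_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    r_in.append(np.mean((X @ (w_hat - w_true)) ** 2))
    x_new = rng.normal(size=(1000, p))           # unseen points from the same distribution
    r_out.append(np.mean((x_new @ (w_hat - w_true)) ** 2))

print(np.mean(r_in), np.mean(r_out))  # the out-of-sample estimate is the larger one
```

The in-sample estimate should hover around $\sigma^2 p/n = 0.25$ here, while the out-of-sample estimate sits noticeably above it.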
&emsp;&emsp;&emsp;&emsp;&emsp; [Home Page](Start_Here.ipynb) &emsp;&emsp;&emsp;&emsp;&emsp; [1] [2](Getting_started_with_Deepstream_Pipeline.ipynb) [3](Introduction_to_Multi-DNN_pipeline.ipynb) [4](Multi-stream_pipeline.ipynb) [5](Multi-stream_Multi_DNN.ipynb) &emsp;&emsp;&emsp;&emsp;&emsp; [Next Notebook](Getting_started_with_Deepstream_Pipeline.ipynb) # Introduction to DeepStream In this notebook, you will be introduced to DeepStream, its workflow, and the underlying principles on which it works. **Contents of this Notebook:** - [DeepStream](#DeepStream) - [Overview of the DeepStream SDK](#Overview-of-the-DeepStream-SDK) - [GStreamer Foundations](#GStreamer-Foundations-:) - [Elements](#Elements) - [Pipeline](#Pipeline) - [Pads](#Pads) - [Caps](#Caps) - [Buffers](#Buffers) - [Plugin based Architecture](#Plugin-based-Architecture) ## DeepStream ![Workflow](images/ds-workflow.jpg) DeepStream simplifies building IVA applications by separating the application into components managed and built by the user and components managed by DeepStream.
As developers, we build components to manage important business tasks like: - Selecting the kind and number of video streams we want to analyze - Choosing the type of analysis we want to do on the video - Handling and interacting with the results of our analysis We don't need to build components to manage difficult tasks like: - Efficiently leveraging the GPU for accelerated processing and inference - Efficiently processing data from multiple video streams at once - Keeping track of metadata associated with each frame of video from multiple sources - Optimizing our pipeline for maximum data throughput - Optimizing our neural networks for high-speed inference These are mundane tasks that the **DeepStream SDK** manages for us. That lets us focus on the more important tasks related to the project's goal and impact. DeepStream lets us focus on the intelligence portion of the application. Here is an illustration of the DeepStream workflow that shows which tasks are handled by DeepStream and which by the developer: ![Workflow_split](images/ds-workflow-split.jpeg) ### Performance and Scalability ### Performance To give an overview of the performance improvement when using the DeepStream SDK, we will examine the DeepStream-app reference application included within the release package. The illustration below shows that performance doubles with DeepStream 3.0 running on a T4 GPU compared to the previous-generation P4 GPU, while consuming the same amount of power and handling an equal number of streams. The reference application includes a primary detector, three classifiers and a tracker. ![Performance](images/ds-perf.png) ### Scalability DeepStream provides scalability at different levels of the system hierarchy. For example: - The DeepStream SDK 3.0 supports processing a higher number of concurrent streams, in addition to utilizing multiple GPUs upon availability. - The DeepStream SDK 4.0 delivers a unified code base for all NVIDIA GPUs and quick integration with IoT services.
Furthermore, running DeepStream in containers provides flexibility in the deployment phase, as shown below: ![Scalability](images/ds-scalability.png) # Overview of the DeepStream SDK The DeepStream SDK consists of a set of building blocks which bridge the gap between low-level APIs (such as TensorRT and the Video Codec SDK) and the user application. By utilizing the DeepStream SDK, you can accelerate development of IVA applications by focusing on building core deep learning models instead of designing end-to-end applications from scratch. Below, you can see a schematic presentation of the DeepStream SDK in a series of potential applications. ![SDK](images/ds-sdk.png) In addition, the DeepStream SDK extends these capabilities by providing several other hardware-accelerated building blocks. This includes support for TensorRT 7 and CUDA 11. DeepStream applications can also be deployed as part of a larger multi-GPU cluster or as a microservice in containers. This allows highly flexible system architectures and opens new application capabilities. Below, you can see a shortened list of new capabilities provided by DeepStream: - Allowing addition and removal of video streams dynamically during pipeline execution, in addition to frame rate and resolution adjustments - Extending the video processing capabilities by supporting custom layers and user-defined parsing of detector outputs - Providing support for 360-degree cameras using GPU-accelerated dewarping libraries - Augmenting the metadata with application-specific, user-defined insights - Providing pruned and efficient inference models - Getting detailed performance analysis with the NVIDIA Nsight system profiler tool The DeepStream SDK is based on the **GStreamer multimedia framework** and provides a pipeline of GPU-accelerated plugins, as shown below.
The SDK simplifies application implementation by providing plugins for video inputs, video decoding, image preprocessing, TensorRT-based inference, object tracking and display. You can utilize these capabilities to assemble flexible, multi-stream video analytics applications. ![Sample_pipeline](images/ds-sample-pipeline.png) # GStreamer Foundations : The DeepStream SDK is based on the open source [GStreamer multimedia framework](https://gstreamer.freedesktop.org/). There are a few key concepts in GStreamer that we need to touch on before getting started. These include Pipelines, Elements, Pads, Buffers, and Caps. We will be describing them at a high level, but encourage those who are interested in the details to read the [GStreamer Basics](https://gstreamer.freedesktop.org/documentation/?gi-language=c) documentation to learn more. ### Elements Elements are the core building blocks with which we make pipelines. Every processing step between the source (i.e. the input of the pipeline, e.g. a camera or video file) and the sink elements (e.g. the screen display) is performed by elements. Video decoding and encoding, neural network inference, and displaying text on top of video streams are all examples of elements. DeepStream allows us to instantiate elements and weave them into pipelines. ### Pipeline All elements in GStreamer must typically be contained inside a pipeline before they can be used, because the pipeline takes care of clocking and messaging functions. A pipeline is a particular type of bin, which is the element used to contain other elements. Therefore all methods which apply to bins also apply to pipelines. Elements are added to the pipeline and then linked; this linking must follow the data flow (that is, from source elements to sink elements). ![pipeline](images/pipeline.png) ### Pads Pads are the interfaces between elements.
When data flows from one element to another in a pipeline, it flows from the source pad of the upstream element to the sink pad of the downstream element. Note that each element might have zero, one or many source/sink pads. ![pads](images/pads.png) ### Caps Caps (or Capabilities) are the data types that a pad is permitted to accept or emit. Because pads can allow multiple data types, sometimes the data flow is ambiguous. Pads are "negotiated" in order to explicitly define the type of data that can flow through the pad. Caps streamline this process and allow elements of our pipeline with ambiguous pads to negotiate the correct data flow process. Later in this course, we will use caps to pass certain video data types (NV12, RGB) to the downstream elements in the pipeline. ### Buffers Buffers carry the data that will be passed on through the pipeline. Buffers are timestamped and contain metadata such as how many elements are using them, flags, and pointers to objects in memory. When we write application code, we rely on accessing data attached to the buffer. ### Plugin based Architecture DeepStream applications can be thought of as pipelines consisting of individual components (plugins). Each plugin represents a functional block like inference using TensorRT or multi-stream decode. Where applicable, plugins are accelerated using the underlying hardware to deliver maximum performance. DeepStream’s key value is in making deep learning for video easily accessible, to allow you to concentrate on quickly building and customizing efficient and scalable video analytics applications. The plugin architecture provides functionality such as video encode/decode, scaling, inferencing, and more. By connecting plugins into a pipeline, we can build complex applications. Because DeepStream is built on top of GStreamer, we can inspect plugins using `gst-inspect-1.0`.
```
# To make sure the right paths to the NVIDIA libraries are picked up, run this cell
!rm ~/.cache/gstreamer-1.0/registry.x86_64.bin
!export LD_LIBRARY_PATH=/opt/tensorrtserver/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/.singularity.d/libs:$LD_LIBRARY_PATH

# Inspect the nvinfer plugin
!gst-inspect-1.0 nvinfer
```

## Licensing

This material is released by NVIDIA Corporation under the Creative Commons Attribution 4.0 International (CC BY 4.0).

[1] [2](Getting_started_with_Deepstream_Pipeline.ipynb) [3](Introduction_to_Multi-DNN_pipeline.ipynb) [4](Multi-stream_pipeline.ipynb) [5](Multi-stream_Multi_DNN.ipynb)

[Next Notebook](Getting_started_with_Deepstream_Pipeline.ipynb) &emsp; [Home Page](Start_Here.ipynb)
# hicstuff command line interface demo

## Getting started

The easiest way to generate matrices is to use `hicstuff pipeline`. By default the command only requires two fastq files (forward and reverse reads) and a genome in fasta format. The pipeline command can be used to generate the Hi-C contact map from the input reads.

```bash
hicstuff pipeline --genome genome.fa \
                  --outdir results \
                  forward.fq \
                  reverse.fq
```

For instance, this will create a directory named "results", containing three output text files with tab-separated columns.

* `abs_fragments_contacts_weighted.txt`: Sparse matrix file with three columns: the row, column and value of each nonzero pixel. The first row contains the shape and total number of nonzero pixels in the matrix.
* `fragments_list.txt`: Contains genomic coordinates of the matrix bins (rows/columns).
* `info_contigs.txt`: Contains chromosome names, their length and number of bins.

### Using an indexed genome

When given a genome in fasta format, hicstuff regenerates the index every time, unless it finds an index whose name starts with the genome filename. For example using bowtie2, you can index the genome like this:

```bash
bowtie2-build genome.fa genome.fa
```

When running hicstuff with `--genome=genome.fa`, it will then automatically find the index files instead of regenerating them.

### Setting the binning

By default, matrices generated are binned at 5kb. You can change this using the `--enzyme` option, which lets you specify either a bin size or an enzyme. For example if you set `--enzyme='DpnII'`, the matrix will be at restriction fragment resolution.

### Temporary files

By default, temporary files are removed when the pipeline finishes. They can be kept by adding the `--no-cleanup` flag.
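As a rough illustration of the sparse matrix layout described above, here is a pure-Python sketch that writes and parses a tiny matrix as tab-separated (row, column, value) triplets with a header row. The helper names are invented, and the exact header layout of hicstuff's file may differ — this only illustrates the idea of a three-column sparse representation:

```python
# Sketch of a 3-column sparse matrix text format: a header with the matrix
# shape and number of nonzero pixels, then one (row, col, value) triplet per line.
import io

def write_sparse(pixels, shape):
    buf = io.StringIO()
    buf.write(f"{shape[0]}\t{shape[1]}\t{len(pixels)}\n")
    for row, col, val in pixels:
        buf.write(f"{row}\t{col}\t{val}\n")
    return buf.getvalue()

def read_sparse(text):
    lines = text.strip().split("\n")
    n_rows, n_cols, nnz = (int(x) for x in lines[0].split("\t"))
    pixels = [tuple(int(x) for x in line.split("\t")) for line in lines[1:]]
    assert len(pixels) == nnz  # header must agree with the pixel count
    return (n_rows, n_cols), pixels

text = write_sparse([(0, 0, 12), (0, 1, 3), (2, 2, 7)], shape=(3, 3))
print(read_sparse(text))  # ((3, 3), [(0, 0, 12), (0, 1, 3), (2, 2, 7)])
```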
For example, if you run:

```bash
hicstuff pipeline -g genome.fa --no-cleanup --prefix demo
```

The `--prefix` option used here gives a common prefix to all output files, overriding the default output file names as follows:

* `abs_fragments_contacts_weighted.txt` -> `demo.mat.tsv`
* `fragments_list.txt` -> `demo.frags.tsv`
* `info_contigs.txt` -> `demo.chr.tsv`

And the output folder should now contain a `tmp/` subfolder and look like this:

```
output
├── demo.chr.tsv
├── demo.frags.tsv
├── demo.hicstuff_20190423185220.log
├── demo.mat.tsv
├── demo.distance_law.txt
├── plots
│   ├── event_distance.pdf
│   ├── event_distribution.pdf
│   └── frags_hist.pdf
└── tmp
    ├── demo.for.bam
    ├── demo.genome.fasta
    ├── demo.rev.bam
    ├── demo.valid_idx_filtered.pairs
    ├── demo.valid_idx.pairs
    └── demo.valid.pairs
```

### Additional options

The `hicstuff pipeline` command has additional options that let you tweak various parameters and change the output files or their format. You can always consult `hicstuff pipeline --help` for a comprehensive description of those options, but here are a few of them:

* `--filter`: Filters out spurious 3C events, such as self religations or undigested fragments. This is only really useful at very fine resolutions (1-2kb) and not needed most of the time. This option is only meaningful when `--enzyme` is given a restriction enzyme and not a bin size.
* `--duplicates`: Removes PCR duplicates, defined as sets of pairs having identical mapping positions for both reads.
* `--matfmt`: Specifies which file format should be used for the matrix. Available formats are bg2 (bedgraph2d), graal (the text format described above) and cool, [a binary format](https://cooler.readthedocs.io) that is probably the most appropriate for large genomes.
* `--distance-law`: Computes the distance-law table (i.e. the probability of contact as a function of genomic distance).
* `--plot`: Enables plotting.
When used in conjunction with `--filter` or `--distance-law`, this will generate figures showing properties of your Hi-C data.

## Advanced usage

### Starting from intermediate files

If your hicstuff run was interrupted, or you wish to align the reads separately, the `--start-stage` option allows the `hicstuff pipeline` command to take intermediate files as input. For example, to skip the alignment step and start from aligned reads:

```bash
hicstuff pipeline --start-stage bam --genome genome.fa forward.bam reverse.bam
```

Or if you already have a pairs file:

```bash
hicstuff pipeline --start-stage pairs --genome genome.fa valid.pairs
```

## Generating the distance law

The distance law is the probability of contact of two fragments as a function of the distance between them. There are two ways to compute it with hicstuff. The first is to run the full pipeline with the `--distance-law` option, as done above; you can add the `--centromeres` option if you want to compute the distance law on separate arms. The output of this command is a raw table of distances without any treatment of the data, which can then be processed with the `distancelaw` command. The second way is to use the `distancelaw` command directly with the pairs file as input:

```bash
hicstuff distancelaw --average \
                     --big-arm-only \
                     --centromeres centromeres.txt \
                     --frags output/demo.frags.tsv \
                     --inf 3000 \
                     --outputfile-img output/demo_distance_law.svg \
                     --labels labels.txt \
                     --sup 500000 \
                     --pairs output/tmp/demo.valid_idx_filtered.pairs
```

This will create an image with the distance law generated from the input pairs file. The distance law will be the average over all the distance laws of the arms bigger than 500kb. The logspace used to plot it has base 1.1 by default, and the limits of the x axis will be 3kb and 500kb.
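The idea behind the distance law can be sketched in a few lines of Python: bin contacts by the genomic distance separating the two fragments, then normalise into frequencies. The toy pairs and bin size below are invented for illustration; this is not hicstuff's implementation:

```python
# Sketch of a distance law: count intra-chromosomal contacts per distance bin,
# then normalise into a probability. Positions are in bp and made up.
from collections import Counter

pairs = [(1_000, 4_000), (2_000, 12_000), (5_000, 6_000),
         (10_000, 11_500), (3_000, 52_000)]
bin_size = 10_000

counts = Counter(abs(b - a) // bin_size for a, b in pairs)
total = sum(counts.values())
distance_law = {d * bin_size: n / total for d, n in sorted(counts.items())}
print(distance_law)  # {0: 0.6, 10000: 0.2, 40000: 0.2}
```

Real data would use log-spaced bins (hicstuff's plot uses a logspace of base 1.1 by default), but the counting principle is the same.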
Note that if `hicstuff pipeline` was given the `--distance-law` option, the output folder should contain a file named `distance_law.txt` containing the precomputed interaction frequencies. This file can be provided to the `hicstuff distancelaw` command using `--dist-tbl distance_law.txt` instead of the pairs file.

## Operations on Hi-C matrices

All commands described below can take the output of hicstuff as input. They will accept either a bg2, cool or graal matrix. Note that when using a graal matrix, you will usually need to specify the fragments file using `--frags`, since genomic coordinates are not encoded in this matrix format.

### Visualizing the matrix

Below are example commands that can be used to visualise Hi-C matrices with hicstuff.

```bash
# Viewing a normalized (=balanced) matrix in graal format at 5kb resolution
hicstuff view --binning 5kb --normalize --frags output/demo.frags.tsv output/demo.mat.tsv

# Viewing a log ratio of 2 matrices in cool format
hicstuff view sample1.cool sample2.cool

# Viewing a raw matrix in bedgraph2 format at 500bp resolution with log transformed contacts
hicstuff view --binning 500bp --transform log high_res.bg2
```

This will show an interactive heatmap using matplotlib. To save the matrix to a file instead, add `--output output/demo.png`. Note that there are other options for processing the matrix, which are documented in the help message.

### Converting between formats

Output files from `hicstuff pipeline` can be converted between different formats using the `hicstuff convert` command.
For example, to generate the file `output/demo.cool` from files in the default hicstuff format:

```bash
hicstuff convert --frags output/demo.frags.tsv \
                 --chroms output/demo.chr.tsv \
                 --to cool \
                 output/demo.mat.tsv output/demo
```

Notice that the command takes 2 positional arguments: the first is the matrix file and the second is the prefix to give to output files, to which an extension will be added depending on the chosen output format. The input format is inferred automatically from the input matrix.

### Rebinning existing matrices

Files previously produced by `hicstuff pipeline` can be rebinned at lower resolutions using the `hicstuff rebin` command. This will generate a new matrix, a new fragments_list.txt and a new info_contigs.txt, all with updated numbers of bins:

```bash
hicstuff rebin -f output/demo.frags.tsv \
               -c output/demo.chr.tsv \
               --out rebin_1kb \
               --binning 1kb output/demo.mat.tsv
```

When working with cool or bedgraph2 files, the command is simpler, as we don't need fragments and contig files:

```bash
# Rebins demo.cool at 10kb and saves the results to rebinned.cool
hicstuff rebin --binning 10kb demo.cool rebinned
```

### Subsampling contacts

For many applications, differences in sequencing coverage will impact results. To avoid this, one can subsample contacts from Hi-C matrices to ensure the different samples to be compared have comparable signal. This functionality is implemented in the `hicstuff subsample` command, which can keep a fixed number of contacts from a matrix or extract a fraction of contacts:

```bash
# Keep 30% of contacts in matrix.cool and save the result to subsample_30.cool
hicstuff subsample --prop 0.3 matrix.cool subsample_30

# Keep 1 million contacts in matrix.bg2 and save the result to subsample_1M.bg2
hicstuff subsample --prop 1000000 matrix.bg2 subsample_1M
```
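The principle behind contact subsampling can be sketched in plain Python. This is an illustration of the idea (draw individual contacts without replacement, then re-aggregate into pixels), not hicstuff's implementation:

```python
# Sketch of contact subsampling: expand sparse (row, col, count) pixels into
# individual contacts, sample a fixed number without replacement, re-aggregate.
import random

def subsample(pixels, n_contacts, seed=0):
    rng = random.Random(seed)
    contacts = [(r, c) for r, c, v in pixels for _ in range(v)]
    kept = rng.sample(contacts, n_contacts)
    out = {}
    for rc in kept:
        out[rc] = out.get(rc, 0) + 1
    return sorted((r, c, v) for (r, c), v in out.items())

pixels = [(0, 0, 5), (0, 1, 3), (1, 1, 2)]   # 10 contacts in total
sub = subsample(pixels, 4)                   # keep exactly 4 of them
print(sub)
```

Expanding every contact is memory-hungry for real matrices; a production tool would sample counts directly (e.g. with a hypergeometric or binomial draw per pixel), but the statistical intent is the same.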
# Gillette Tweet Modelling

```
# Import libraries to start the analysis
import pandas as pd
import numpy as np
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
import string
from nltk.stem import WordNetLemmatizer

# Import the Gillette tweets for our analysis; use this encoding to avoid
# encoding errors. Save them to a dataframe.
df1 = pd.read_csv("gilettesent4.csv", encoding='ISO-8859-1')

# Call head to see the first five records
df1.head()

# Make two lists: one with the tweets and one with the sentiment labels
Tweet = []
Labels = []
for row in df1["Tweets"]:
    # Tokenize, i.e. break each tweet down into words
    words = word_tokenize(row)
    # Remove punctuation and lowercase
    clean_words = [word.lower() for word in words if word not in set(string.punctuation)]
    # Remove stop words and other characters that may interfere
    english_stops = set(stopwords.words('english'))
    characters_to_remove = ["''", '``', "rt", "https", "’", "“", "”", "\u200b",
                            "--", "n't", "'s", "...", "//t.c", "'re", "'m"]
    clean_words = [word for word in clean_words if word not in english_stops]
    clean_words = [word for word in clean_words if word not in set(characters_to_remove)]
    # Lemmatize, which reduces each word to a base form that keeps its semantic meaning
    wordnet_lemmatizer = WordNetLemmatizer()
    lemma_list = [wordnet_lemmatizer.lemmatize(word) for word in clean_words]
    Tweet.append(lemma_list)

for row in df1["Label"]:
    Labels.append(row)  # Get a separate list for the labels

combined = zip(Tweet, Labels)  # Zip both lists together after the cleaning

def bag_of_words(words):
    # Create a dictionary with each word as key and True as value.
    # The bag-of-words model is a simplifying representation used in natural
    # language processing and information retrieval (IR): a text (such as a
    # sentence or a document) is represented as the bag (multiset) of its words,
    # disregarding grammar and even word order but keeping multiplicity.
    return dict([(word, True) for word in words])

Final_Data = []
for r, v in combined:
    Final_Data.append((bag_of_words(r), v))

# Randomize the data that will be used for the model
import random
random.shuffle(Final_Data)
print(len(Final_Data))  # This is how many records we have in our dataset
```

### INFO on Naive Bayes PRE-MODEL

The Naive Bayes algorithm is an intuitive method that uses the probabilities of each attribute belonging to each class to make a prediction. It is the supervised learning approach you would come up with if you wanted to model a predictive modeling problem probabilistically.

Naive Bayes simplifies the calculation of probabilities by assuming that the probability of each attribute belonging to a given class value is independent of all other attributes. This is a strong assumption but results in a fast and effective method.

The probability of a class value given a value of an attribute is called the conditional probability. By multiplying the conditional probabilities together for each attribute for a given class value, we have a probability of a data instance belonging to that class. To make a prediction we can calculate probabilities of the instance belonging to each class and select the class value with the highest probability.

Naive Bayes is often described using categorical data because it is easy to describe and calculate using ratios. A more useful version of the algorithm for our purposes supports numeric attributes and assumes the values of each numerical attribute are normally distributed (fall somewhere on a bell curve). Again, this is a strong assumption, but it still gives robust results.
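The multiply-the-conditional-probabilities rule described above can be illustrated with a toy example. The priors and per-word probabilities below are made up for illustration, not estimated from the Gillette data:

```python
# Toy Naive Bayes: score(class) = prior(class) * product of P(word | class),
# then predict the class with the highest score. All numbers are invented.
priors = {"Positive": 0.5, "Negative": 0.5}
word_probs = {
    "Positive": {"great": 0.30, "razor": 0.10, "boycott": 0.02},
    "Negative": {"great": 0.05, "razor": 0.10, "boycott": 0.25},
}

def classify(words):
    scores = {}
    for label in priors:
        p = priors[label]
        for w in words:
            p *= word_probs[label].get(w, 1e-3)  # tiny probability for unseen words
        scores[label] = p
    return max(scores, key=scores.get)

print(classify(["great", "razor"]))    # Positive
print(classify(["boycott", "razor"]))  # Negative
```

NLTK's `NaiveBayesClassifier`, used below, estimates these probabilities from the training set and works in log space, but the decision rule is the same.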
```
# Split the 265 records roughly 70/30 into train and test sets. The train set is
# the portion of data used to train the model; after training, we test its
# predictions on the held-out test set to see how accurate the model is at
# predicting tweet sentiment.
train_set, test_set = Final_Data[0:188], Final_Data[188:]

import nltk
import collections
from nltk.metrics.scores import (accuracy, precision, recall, f_measure)

refsets = collections.defaultdict(set)
testsets = collections.defaultdict(set)

classifier = nltk.NaiveBayesClassifier.train(train_set)  # Our Naive Bayes classifier

for i, (feats, label) in enumerate(test_set):
    refsets[label].add(i)
    observed = classifier.classify(feats)
    testsets[observed].add(i)

print("Naive Bayes Performance with Unigrams")
print("Accuracy:", nltk.classify.accuracy(classifier, test_set))
```

### INFO on Confusion Matrix (which is after this cell)

We compute a confusion matrix to evaluate the accuracy of a classification. By definition, a confusion matrix C is such that C[i, j] is equal to the number of observations known to be in group i but predicted to be in group j. Thus in binary classification we count true negatives (brand-damaging tweets the model correctly predicted as negative), false negatives (positive tweets incorrectly predicted as negative, which lowers negative-class recall), true positives (positive tweets correctly predicted as positive), and false positives (negative tweets incorrectly predicted as positive, which lowers positive-class precision). The F-measure (F1 score) is a measure of a test's accuracy and is defined as the weighted harmonic mean of the precision and recall of the test.
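The metrics discussed above reduce to simple ratios of confusion-matrix counts. A quick worked example with made-up counts:

```python
# Precision, recall, F1 and accuracy from raw confusion-matrix counts.
# The counts are invented for illustration, not taken from the tweet models.
tp, fp, fn, tn = 40, 10, 5, 45  # true/false positives and negatives

precision = tp / (tp + fp)                           # of predicted positives, how many were right
recall = tp / (tp + fn)                              # of actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean of the two
accuracy = (tp + tn) / (tp + fp + fn + tn)
print(round(precision, 3), round(recall, 3), round(f1, 3), accuracy)
# 0.8 0.889 0.842 0.85
```

Note that NLTK's `precision`, `recall` and `f_measure` functions take the *reference* set first and the *test* (predicted) set second.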
```
print("UnigramNB Results")
print('Brand Positive Precision:', precision(refsets['Positive'], testsets['Positive']))
print('Brand Positive Recall:', recall(refsets['Positive'], testsets['Positive']))
print('Brand Positive F-measure:', f_measure(refsets['Positive'], testsets['Positive']))
print('Brand Damaging Precision:', precision(refsets['Negative'], testsets['Negative']))
# Like precision, recall takes (reference, test) in that order
print('Brand Damaging Recall:', recall(refsets['Negative'], testsets['Negative']))
print('Brand Damaging F-measure:', f_measure(refsets['Negative'], testsets['Negative']))
print("")

# Top features from the model based on the tweets
classifier.show_most_informative_features(n=50)
```

### INFO on Decision Tree Algorithm PRE-MODEL

A decision tree is a flowchart-like tree structure where an internal node represents a feature (or attribute), a branch represents a decision rule, and each leaf node represents an outcome. The topmost node in a decision tree is known as the root node. The tree learns to partition the data on the basis of attribute values, splitting recursively in a process called recursive partitioning. This flowchart-like structure helps with decision making, and its visualization easily mimics human-level thinking; that is why decision trees are easy to understand and interpret.

The basic idea behind any decision tree algorithm is as follows: select the best attribute using an Attribute Selection Measure (ASM) to split the records; make that attribute a decision node and break the dataset into smaller subsets; then build the tree by repeating this process recursively for each child until one of these conditions is met: all the tuples belong to the same attribute value, there are no more remaining attributes, or there are no more instances.
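The `entropy_cutoff` pruning parameter passed to NLTK's `DecisionTreeClassifier` in the next cell relates to the Shannon entropy of the label distribution at a node, which can be computed directly: a pure node has entropy 0, and a 50/50 split of two labels has entropy 1 bit, so splitting stops once a node's labels are "pure enough".

```python
# Shannon entropy of a list of class labels, in bits.
import math

def entropy(labels):
    n = len(labels)
    probs = [labels.count(label) / n for label in set(labels)]
    return -sum(p * math.log2(p) for p in probs)

print(entropy(["Positive"] * 8))                     # 0.0  (pure node)
print(entropy(["Positive"] * 4 + ["Negative"] * 4))  # 1.0  (maximally mixed)
```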
```
from nltk.classify import DecisionTreeClassifier

# Make the necessary cutoffs to prune the tree
dt_classifier = DecisionTreeClassifier.train(train_set,
                                             binary=True,
                                             entropy_cutoff=0.8,
                                             depth_cutoff=5,
                                             support_cutoff=30)

# Use fresh reference/prediction sets so we don't mix in the Naive Bayes results
refsets = collections.defaultdict(set)
testsets = collections.defaultdict(set)

for i, (feats, label) in enumerate(test_set):
    refsets[label].add(i)
    observed = dt_classifier.classify(feats)  # Classify with the decision tree
    testsets[observed].add(i)

print("UnigramDT Results")
print("Accuracy:", nltk.classify.accuracy(dt_classifier, test_set))
print('Brand Positive Precision:', precision(refsets['Positive'], testsets['Positive']))
print('Brand Positive Recall:', recall(refsets['Positive'], testsets['Positive']))
print('Brand Positive F-measure:', f_measure(refsets['Positive'], testsets['Positive']))
print('Brand Damaging Precision:', precision(refsets['Negative'], testsets['Negative']))
print('Brand Damaging Recall:', recall(refsets['Negative'], testsets['Negative']))
print('Brand Damaging F-measure:', f_measure(refsets['Negative'], testsets['Negative']))
print("")
```

### INFO on Logistic Regression PRE-MODEL

Logistic Regression is a machine learning classification algorithm used to predict the probability of a categorical dependent variable. In logistic regression, the dependent variable is a binary variable that contains data coded as 1 (yes, success, etc.) or 0 (no, failure, etc.). In other words, the logistic regression model predicts P(Y=1) as a function of X.

Logistic regression assumptions: binary logistic regression requires the dependent variable to be binary, and factor level 1 of the dependent variable should represent the desired outcome. Only meaningful variables should be included. The independent variables should be independent of each other; that is, the model should have little or no multicollinearity. The independent variables are linearly related to the log odds.
Logistic regression also requires quite large sample sizes. Keeping the above assumptions in mind, let's look at our dataset.

```
from nltk.classify import MaxentClassifier

logit_classifier = MaxentClassifier.train(train_set,
                                          algorithm='gis',
                                          trace=0,
                                          max_iter=10,
                                          min_lldelta=0.5)

refsets = collections.defaultdict(set)
testsets = collections.defaultdict(set)

for i, (feats, label) in enumerate(test_set):
    refsets[label].add(i)
    observed = logit_classifier.classify(feats)
    testsets[observed].add(i)

print("UnigramsLogit Results")
print("Accuracy:", nltk.classify.accuracy(logit_classifier, test_set))
print('Brand Positive Precision:', precision(refsets['Positive'], testsets['Positive']))
print('Brand Positive Recall:', recall(refsets['Positive'], testsets['Positive']))
print('Brand Positive F-measure:', f_measure(refsets['Positive'], testsets['Positive']))
print('Brand Damaging Precision:', precision(refsets['Negative'], testsets['Negative']))
print('Brand Damaging Recall:', recall(refsets['Negative'], testsets['Negative']))
print('Brand Damaging F-measure:', f_measure(refsets['Negative'], testsets['Negative']))
print("")
```

### INFO on Support Vector Machine

Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outlier detection.

The advantages of support vector machines are: they are effective in high dimensional spaces, and still effective when the number of dimensions is greater than the number of samples; they are memory efficient, since the decision function uses only a subset of training points (the support vectors); and they are versatile, since different kernel functions can be specified for the decision function — common kernels are provided, but it is also possible to specify custom kernels.

The disadvantages of support vector machines include: if the number of features is much greater than the number of samples, choosing the kernel function and regularization term carefully is crucial to avoid over-fitting.
SVMs also do not directly provide probability estimates; these are calculated using an expensive five-fold cross-validation (see Scores and probabilities, below).

```
from nltk.classify import SklearnClassifier
from sklearn.svm import SVC

SVM_classifier = SklearnClassifier(SVC(), sparse=False).train(train_set)

refsets = collections.defaultdict(set)
testsets = collections.defaultdict(set)

for i, (feats, label) in enumerate(test_set):
    refsets[label].add(i)
    observed = SVM_classifier.classify(feats)
    testsets[observed].add(i)

print("UnigramSVM Results")
print("Accuracy:", nltk.classify.accuracy(SVM_classifier, test_set))
print('Brand Positive Precision:', precision(refsets['Positive'], testsets['Positive']))
print('Brand Positive Recall:', recall(refsets['Positive'], testsets['Positive']))
print('Brand Positive F-measure:', f_measure(refsets['Positive'], testsets['Positive']))
print('Brand Damaging Precision:', precision(refsets['Negative'], testsets['Negative']))
print('Brand Damaging Recall:', recall(refsets['Negative'], testsets['Negative']))
print('Brand Damaging F-measure:', f_measure(refsets['Negative'], testsets['Negative']))
```

## Bigrams!!

A bigram (or digram) is a sequence of two adjacent elements from a string of tokens, which are typically letters, syllables, or words. A bigram is an n-gram for n=2.

Here is our sentence: "I read a book about the history of America." The machine wants to get the meaning of the sentence by separating it into small pieces. How should it do that?

1. It can regard words one by one. This is a unigram; each word is a gram. "I", "read", "a", "book", "about", "the", "history", "of", "America"
2. It can regard words two at a time. This is a bigram (digram); each two adjacent words create a bigram. "I read", "read a", "a book", "book about", "about the", "the history", "history of", "of America"
3. It can regard words three at a time. This is a trigram; each three adjacent words create a trigram.
"I read a", "read a book", "a book about", "book about the", "about the history", "the history of", "history of America"

```
from nltk.collocations import BigramCollocationFinder
from nltk.metrics import BigramAssocMeasures

combined = zip(Tweet, Labels)

def bag_of_bigrams_words(words, score_fn=BigramAssocMeasures.chi_sq, n=200):
    bigram_finder = BigramCollocationFinder.from_words(words)
    bigrams = bigram_finder.nbest(score_fn, n)
    return bag_of_words(bigrams)

# Create the bigram features
Final_Data2 = []
for z, e in combined:
    Final_Data2.append((bag_of_bigrams_words(z), e))

import random
random.shuffle(Final_Data2)
print(len(Final_Data2))

train_set, test_set = Final_Data2[0:218], Final_Data2[218:]

import nltk
import collections
from nltk.metrics.scores import (accuracy, precision, recall, f_measure)

refsets = collections.defaultdict(set)
testsets = collections.defaultdict(set)

classifier = nltk.NaiveBayesClassifier.train(train_set)

for i, (feats, label) in enumerate(test_set):
    refsets[label].add(i)
    observed = classifier.classify(feats)
    testsets[observed].add(i)

print("Naive Bayes Performance with Bigrams")
print("Accuracy:", nltk.classify.accuracy(classifier, test_set))
classifier.show_most_informative_features(n=20)

print("BigramNB Results")
print('Brand Positive Precision:', precision(refsets['Positive'], testsets['Positive']))
print('Brand Positive Recall:', recall(refsets['Positive'], testsets['Positive']))
print('Brand Positive F-measure:', f_measure(refsets['Positive'], testsets['Positive']))
print('Brand Damaging Precision:', precision(refsets['Negative'], testsets['Negative']))
print('Brand Damaging Recall:', recall(refsets['Negative'], testsets['Negative']))
print('Brand Damaging F-measure:', f_measure(refsets['Negative'], testsets['Negative']))
print("")

from nltk.classify import DecisionTreeClassifier
dt_classifier = DecisionTreeClassifier.train(train_set,
                                             binary=True,
                                             entropy_cutoff=0.8,
                                             depth_cutoff=5,
                                             support_cutoff=30)

refsets = collections.defaultdict(set)
testsets = collections.defaultdict(set)

for i, (feats, label) in enumerate(test_set):
    refsets[label].add(i)
    observed = dt_classifier.classify(feats)
    testsets[observed].add(i)

print("BigramDT Results")
print("Accuracy:", nltk.classify.accuracy(dt_classifier, test_set))
print('Brand Positive Precision:', precision(refsets['Positive'], testsets['Positive']))
print('Brand Positive Recall:', recall(refsets['Positive'], testsets['Positive']))
print('Brand Positive F-measure:', f_measure(refsets['Positive'], testsets['Positive']))
print('Brand Damaging Precision:', precision(refsets['Negative'], testsets['Negative']))
print('Brand Damaging Recall:', recall(refsets['Negative'], testsets['Negative']))
print('Brand Damaging F-measure:', f_measure(refsets['Negative'], testsets['Negative']))
print("")

from nltk.classify import MaxentClassifier
logit_classifier = MaxentClassifier.train(train_set,
                                          algorithm='gis',
                                          trace=0,
                                          max_iter=10,
                                          min_lldelta=0.5)

refsets = collections.defaultdict(set)
testsets = collections.defaultdict(set)

for i, (feats, label) in enumerate(test_set):
    refsets[label].add(i)
    observed = logit_classifier.classify(feats)
    testsets[observed].add(i)

print("BigramsLogit Results")
print("Accuracy:", nltk.classify.accuracy(logit_classifier, test_set))
print('Brand Positive Precision:', precision(refsets['Positive'], testsets['Positive']))
print('Brand Positive Recall:', recall(refsets['Positive'], testsets['Positive']))
print('Brand Positive F-measure:', f_measure(refsets['Positive'], testsets['Positive']))
print('Brand Damaging Precision:', precision(refsets['Negative'], testsets['Negative']))
print('Brand Damaging Recall:', recall(refsets['Negative'], testsets['Negative']))
print('Brand Damaging F-measure:', f_measure(refsets['Negative'], testsets['Negative']))
print("")

from nltk.classify import SklearnClassifier
from sklearn.svm import SVC
SVM_classifier = SklearnClassifier(SVC(), sparse=False).train(train_set)

refsets = collections.defaultdict(set)
testsets = collections.defaultdict(set)

for i, (feats, label) in enumerate(test_set):
    refsets[label].add(i)
    observed = SVM_classifier.classify(feats)
    testsets[observed].add(i)

print("BigramSVM Results")
print("Accuracy:", nltk.classify.accuracy(SVM_classifier, test_set))
print('Brand Positive Precision:', precision(refsets['Positive'], testsets['Positive']))
print('Brand Positive Recall:', recall(refsets['Positive'], testsets['Positive']))
print('Brand Positive F-measure:', f_measure(refsets['Positive'], testsets['Positive']))
print('Brand Damaging Precision:', precision(refsets['Negative'], testsets['Negative']))
print('Brand Damaging Recall:', recall(refsets['Negative'], testsets['Negative']))
print('Brand Damaging F-measure:', f_measure(refsets['Negative'], testsets['Negative']))
```

### Trigrams!!

Trigrams are a special case of the n-gram, where n is 3. They are often used in natural language processing for performing statistical analysis of texts and in cryptography for control and use of ciphers and codes.

Here is our sentence: "I read a book about the history of America." The machine wants to get the meaning of the sentence by separating it into small pieces. How should it do that?

1. It can regard words one by one. This is a unigram; each word is a gram. "I", "read", "a", "book", "about", "the", "history", "of", "America"
2. It can regard words two at a time. This is a bigram (digram); each two adjacent words create a bigram. "I read", "read a", "a book", "book about", "about the", "the history", "history of", "of America"
3. It can regard words three at a time. This is a trigram; each three adjacent words create a trigram.
"I read a", "read a book", "a book about", "book about the", "about the history", "the history of", "history of America" ``` combined = zip(Tweet,Labels) from nltk import bigrams, trigrams from nltk.collocations import TrigramCollocationFinder from nltk.metrics import TrigramAssocMeasures def bag_of_trigrams_words(words, score_fn=TrigramAssocMeasures.chi_sq, n=200): trigram_finder = TrigramCollocationFinder.from_words(words) trigrams = trigram_finder.nbest(score_fn, n) return bag_of_words(trigrams) Final_Data3 =[] for z, e in combined: bag_of_trigrams_words(z) Final_Data3.append((bag_of_trigrams_words(z),e)) import random random.shuffle(Final_Data3) print(len(Final_Data3)) train_set, test_set = Final_Data3[0:218], Final_Data3[218:] import nltk import collections from nltk.metrics.scores import (accuracy, precision, recall, f_measure) from nltk import metrics refsets = collections. defaultdict(set) testsets = collections.defaultdict(set) classifier = nltk.NaiveBayesClassifier.train(train_set) for i, (feats, label) in enumerate(test_set): refsets[label].add(i) observed = classifier.classify(feats) testsets[observed].add(i) print("Naive Bayes Performance with Trigrams ") print("Accuracy:",nltk.classify.accuracy(classifier, test_set)) print('Brand Positive Precision:', precision(refsets['Positive'], testsets['Positive'])) print('Brand Positive Recall:', recall(refsets['Positive'], testsets['Positive'])) print('Brand Positive F-measure:', f_measure(refsets['Positive'], testsets['Positive'])) print('Brand Damaging Precision:', precision(refsets['Negative'], testsets['Negative'])) print('Brand Damaging Recall:', recall(testset['Negative'], refset['Negative'])) print('Brand Damaging F-measure:', f_measure(refsets['Negative'], testsets['Negative'])) classifier.show_most_informative_features(n=10) from nltk.classify import DecisionTreeClassifier dt_classifier = DecisionTreeClassifier.train(train_set, binary=True, entropy_cutoff=0.8, depth_cutoff=5, support_cutoff=30) refsets 
= collections.defaultdict(set) testsets = collections.defaultdict(set) for i, (feats, label) in enumerate(test_set): refsets[label].add(i) observed = dt_classifier.classify(feats) testsets[observed].add(i) print("TrigramDT Results") print("Accuracy:",nltk.classify.accuracy(dt_classifier, test_set)) print('Brand Positive Precision:', precision(refsets['Positive'], testsets['Positive'])) print('Brand Positive Recall:', recall(refsets['Positive'], testsets['Positive'])) print('Brand Positive F-measure:', f_measure(refsets['Positive'], testsets['Positive'])) print('Brand Damaging Precision:', precision(refsets['Negative'], testsets['Negative'])) print('Brand Damaging Recall:', recall(testsets['Negative'], refsets['Negative'])) print('Brand Damaging F-measure:', f_measure(refsets['Negative'], testsets['Negative'])) print("") from nltk.classify import MaxentClassifier logit_classifier = MaxentClassifier.train(train_set, algorithm='gis', trace=0, max_iter=10, min_lldelta=0.5) for i, (feats, label) in enumerate(test_set): refsets[label].add(i) observed = logit_classifier.classify(feats) testsets[observed].add(i) print("TrigramsLogit Results") print("Accuracy:",nltk.classify.accuracy(logit_classifier, test_set)) print('Brand Positive Precision:', precision(refsets['Positive'], testsets['Positive'])) print('Brand Positive Recall:', recall(refsets['Positive'], testsets['Positive'])) print('Brand Positive F-measure:', f_measure(refsets['Positive'], testsets['Positive'])) print('Brand Damaging Precision:', precision(refsets['Negative'], testsets['Negative'])) print('Brand Damaging Recall:', recall(testsets['Negative'], refsets['Negative'])) print('Brand Damaging F-measure:', f_measure(refsets['Negative'], testsets['Negative'])) print("") from nltk.classify import SklearnClassifier from sklearn.svm import SVC SVM_classifier = SklearnClassifier(SVC(), sparse=False).train(train_set) for i, (feats, label) in enumerate(test_set): refset[label].add(i) observed = 
SVM_classifier.classify(feats) testset[observed].add(i) print("Trigrams Results") print("Accuracy:",nltk.classify.accuracy(SVM_classifier, test_set)) print('Brand Positive Precision:', precision(refsets['Positive'], testsets['Positive'])) print('Brand Positive Recall:', recall(refsets['Positive'], testsets['Positive'])) print('Brand Positive F-measure:', f_measure(refsets['Positive'], testsets['Positive'])) print('Brand Damaging Precision:', precision(refsets['Negative'], testsets['Negative'])) print('Brand Damaging Recall:', recall(testsets['Negative'], refsets['Negative'])) print('Brand Damaging F-measure:', f_measure(refsets['Negative'], testsets['Negative'])) ``` ### N-grams!!! (combining all the grams!) N-grams are contiguous sequences of n-items in a sentence. N can be 1, 2 or any other positive integers, although usually we do not consider very large N because those n-grams rarely appears in many different places. When performing machine learning tasks related to natural language processing, we usually need to generate n-grams from input sentences. For example, in text classification tasks, in addition to using each individual token found in the corpus, we may want to add bi-grams or tri-grams as features to represent our documents. This post describes several different ways to generate n-grams quickly from input sentences in Python. 
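Before turning to NLTK's collocation finders, here is a minimal sketch of generating n-grams yourself with a sliding window over a token list (the function name and the example sentence are illustrative, not part of the dataset above):

```python
def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) from a list of tokens."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "I read a book about the history of America".split()
print(ngrams(tokens, 2)[:3])  # first three bigrams
print(ngrams(tokens, 3)[:2])  # first two trigrams
```

The same function covers unigrams (`n=1`), bigrams, and trigrams, which is why combining "all the grams" below just means concatenating the outputs for several values of `n`.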
```
combined = zip(Tweet, Labels)

# Import Bigram collocation tools - we will use these to identify the top 200 bigrams
from nltk.collocations import BigramCollocationFinder
from nltk.metrics import BigramAssocMeasures

def bigrams_words(words, score_fn=BigramAssocMeasures.chi_sq, n=200):
    bigram_finder = BigramCollocationFinder.from_words(words)
    bigrams = bigram_finder.nbest(score_fn, n)
    return bigrams

# Import Trigram metrics - we will use these to identify the top 200 trigrams
from nltk.collocations import TrigramCollocationFinder
from nltk.metrics import TrigramAssocMeasures

def trigrams_words(words, score_fn=TrigramAssocMeasures.chi_sq, n=200):
    trigram_finder = TrigramCollocationFinder.from_words(words)
    trigrams = trigram_finder.nbest(score_fn, n)
    return trigrams

def bag_of_Ngrams_words(words):
    bigramBag = bigrams_words(words)
    # The following two for loops convert each tuple into a string
    for b in range(0, len(bigramBag)):
        bigramBag[b] = ' '.join(bigramBag[b])
    trigramBag = trigrams_words(words)
    for t in range(0, len(trigramBag)):
        trigramBag[t] = ' '.join(trigramBag[t])
    return bag_of_words(trigramBag + bigramBag + words)

Final_Data4 = []
for z, e in combined:
    Final_Data4.append((bag_of_Ngrams_words(z), e))

import random
random.shuffle(Final_Data4)
print(len(Final_Data4))

train_set, test_set = Final_Data4[0:218], Final_Data4[218:]

import nltk
import collections
from nltk.metrics.scores import (accuracy, precision, recall, f_measure)
from nltk import metrics

refset = collections.defaultdict(set)
testset = collections.defaultdict(set)

classifier = nltk.NaiveBayesClassifier.train(train_set)
for i, (feats, label) in enumerate(test_set):
    refset[label].add(i)
    observed = classifier.classify(feats)
    testset[observed].add(i)

print("Naive Bayes Performance with Ngrams")
print("Accuracy:", nltk.classify.accuracy(classifier, test_set))
classifier.show_most_informative_features(n=20)
print('Brand Positive Precision:', precision(refset['Positive'], testset['Positive']))
print('Brand Positive Recall:', recall(refset['Positive'], testset['Positive']))
print('Brand Positive F-measure:', f_measure(refset['Positive'], testset['Positive']))
print('Brand Damaging Precision:', precision(refset['Negative'], testset['Negative']))
print('Brand Damaging Recall:', recall(refset['Negative'], testset['Negative']))
print('Brand Damaging F-measure:', f_measure(refset['Negative'], testset['Negative']))

from nltk.classify import DecisionTreeClassifier
dt_classifier = DecisionTreeClassifier.train(train_set, binary=True, entropy_cutoff=0.8,
                                             depth_cutoff=5, support_cutoff=30)

refset = collections.defaultdict(set)
testset = collections.defaultdict(set)
for i, (feats, label) in enumerate(test_set):
    refset[label].add(i)
    observed = dt_classifier.classify(feats)
    testset[observed].add(i)

print("NgramDT Results")
print("Accuracy:", nltk.classify.accuracy(dt_classifier, test_set))
print('Brand Positive Precision:', precision(refset['Positive'], testset['Positive']))
print('Brand Positive Recall:', recall(refset['Positive'], testset['Positive']))
print('Brand Positive F-measure:', f_measure(refset['Positive'], testset['Positive']))
print('Brand Damaging Precision:', precision(refset['Negative'], testset['Negative']))
print('Brand Damaging Recall:', recall(refset['Negative'], testset['Negative']))
print('Brand Damaging F-measure:', f_measure(refset['Negative'], testset['Negative']))
print("")

from nltk.classify import MaxentClassifier
logit_classifier = MaxentClassifier.train(train_set, algorithm='gis', trace=0,
                                          max_iter=10, min_lldelta=0.5)

# Reset the sets so the previous classifier's predictions do not leak in.
refset = collections.defaultdict(set)
testset = collections.defaultdict(set)
for i, (feats, label) in enumerate(test_set):
    refset[label].add(i)
    observed = logit_classifier.classify(feats)
    testset[observed].add(i)

print("NgramsLogit Results")
print("Accuracy:", nltk.classify.accuracy(logit_classifier, test_set))
print('Brand Positive Precision:', precision(refset['Positive'], testset['Positive']))
print('Brand Positive Recall:', recall(refset['Positive'], testset['Positive']))
print('Brand Positive F-measure:', f_measure(refset['Positive'], testset['Positive']))
print('Brand Damaging Precision:', precision(refset['Negative'], testset['Negative']))
print('Brand Damaging Recall:', recall(refset['Negative'], testset['Negative']))
print('Brand Damaging F-measure:', f_measure(refset['Negative'], testset['Negative']))
print("")

from nltk.classify import SklearnClassifier
from sklearn.svm import SVC
SVM_classifier = SklearnClassifier(SVC(), sparse=False).train(train_set)

refset = collections.defaultdict(set)
testset = collections.defaultdict(set)
for i, (feats, label) in enumerate(test_set):
    refset[label].add(i)
    observed = SVM_classifier.classify(feats)
    testset[observed].add(i)

print("NgramsSVM Results")
print("Accuracy:", nltk.classify.accuracy(SVM_classifier, test_set))
print('Brand Positive Precision:', precision(refset['Positive'], testset['Positive']))
print('Brand Positive Recall:', recall(refset['Positive'], testset['Positive']))
print('Brand Positive F-measure:', f_measure(refset['Positive'], testset['Positive']))
print('Brand Damaging Precision:', precision(refset['Negative'], testset['Negative']))
print('Brand Damaging Recall:', recall(refset['Negative'], testset['Negative']))
print('Brand Damaging F-measure:', f_measure(refset['Negative'], testset['Negative']))
print("")
```
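The per-classifier evaluation boilerplate above is repeated for every model. A hedged sketch of a reusable helper follows; the names are illustrative, it assumes any classifier object with an NLTK-style `classify` method, and it computes precision, recall and F1 directly from set arithmetic rather than calling `nltk.metrics`:

```python
import collections

def evaluate(classifier, test_set, labels=('Positive', 'Negative')):
    """Return {label: (precision, recall, f1)} for a classifier with .classify()."""
    refsets = collections.defaultdict(set)   # ground-truth indices per label
    testsets = collections.defaultdict(set)  # predicted indices per label
    for i, (feats, label) in enumerate(test_set):
        refsets[label].add(i)
        testsets[classifier.classify(feats)].add(i)
    results = {}
    for label in labels:
        tp = len(refsets[label] & testsets[label])  # true positives for this label
        prec = tp / len(testsets[label]) if testsets[label] else 0.0
        rec = tp / len(refsets[label]) if refsets[label] else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        results[label] = (prec, rec, f1)
    return results
```

With a helper like this, each classifier section above reduces to a single `evaluate(classifier, test_set)` call, and the reference/prediction sets are rebuilt fresh each time, so results from earlier classifiers can never leak into later ones.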
### Results on data subset with trajectory length 6

Quantitative results: prediction accuracies on the train and test sets. Qualitative results: t-SNE visualization and cluster analysis.

```
import sys
print(sys.executable)

%load_ext autoreload
%autoreload 2
%reload_ext autoreload

from collections import defaultdict
from experiments import *
from utils import *
import numpy as np
import tensorflow as tf  # used below via tf.Graph(); imported explicitly here
import json  # used below via json.load()
from sklearn.model_selection import train_test_split
from sklearn.manifold import TSNE
from tflearn.data_utils import to_categorical, pad_sequences
from sklearn.metrics import accuracy_score
from sklearn import cluster
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import style
style.use('seaborn-darkgrid')
%matplotlib inline
# import pandas as pd
import seaborn as sns
from mpl_toolkits.mplot3d import Axes3D
from sklearn.decomposition import PCA

print(tf.__version__)

traj_len = 6
hoc_num = 18
x, y, student_ids = load_data_will_student_solve_next_problem_traj_len(hoc_num, only_traj_len=traj_len, y_is_seq=False)
x_train, x_test, y_train, y_test, student_ids_train, student_ids_test = train_test_split(
    x, y, student_ids, test_size=0.1, random_state=42)

model_id = "predict_next_prob_binary_two_layer_traj_len_{}".format(traj_len)
graph_to_use = tf.Graph()
with graph_to_use.as_default():
    saved_model = load_model(model_id, load_checkpoint=True, is_training=False, timesteps=traj_len)

pred_train = np.argmax(saved_model.predict(x_train), axis=1)
pred_test_probs = saved_model.predict(x_test)
pred_test = np.argmax(pred_test_probs, axis=1)
train_acc = accuracy_score(pred_train, np.argmax(y_train, axis=1))
test_acc = accuracy_score(pred_test, np.argmax(y_test, axis=1))
print("Train acc: {}\t Test acc: {}".format(train_acc, test_acc))

y_test = np.argmax(y_test, axis=1)
sample_with_highest_prob = np.argmax(np.array(pred_test_probs)[:, 0])
sample_with_lowest_prob = np.argmin(np.array(pred_test_probs)[:, 0])
# The samples are indices into the test set, so map them to test-set student ids.
print("sample with highest probability: {}, student id: {}".format(
    sample_with_highest_prob, student_ids_test[sample_with_highest_prob]))
print("sample with lowest probability: {}, student id: {}".format(
    sample_with_lowest_prob, student_ids_test[sample_with_lowest_prob]))

y_test_colors = ['c' if y_test[i] == 1 else 'm' for i in range(len(y_test))]
pred_test_colors = ['c' if pred_test[i] == 1 else 'm' for i in range(len(pred_test))]

traj_len = 6
x, y, student_ids = load_data_will_student_solve_next_problem_traj_len(hoc_num, only_traj_len=traj_len, y_is_seq=False)
x_train, x_test, y_train, y_test, student_ids_train, student_ids_test = train_test_split(
    x, y, student_ids, test_size=0.1, random_state=42)

model_id = "predict_next_prob_binary_two_layer_traj_len_{}".format(traj_len)
hidden_reps = None
graph_to_use = tf.Graph()
with graph_to_use.as_default():
    hidden_rep_model = load_model(model_id, load_checkpoint=True, is_training=False, get_hidden_rep=True)
    hidden_reps = np.array(hidden_rep_model.predict(x_test))

pca = PCA(n_components=8)
pca_results = pca.fit_transform(hidden_reps)
tsne_model = TSNE(n_components=2, perplexity=30, random_state=0)
tsne_results = tsne_model.fit_transform(pca_results)

dim_0 = np.reshape(tsne_results[:, 0], tsne_results.shape[0])
dim_1 = np.reshape(tsne_results[:, 1], tsne_results.shape[0])
plt.scatter(dim_0, dim_1, c=y_test_colors)

print(pred_test.shape)
print(y_test.shape)

dim_0 = np.reshape(tsne_results[:, 0], tsne_results.shape[0])
dim_1 = np.reshape(tsne_results[:, 1], tsne_results.shape[0])
plt.scatter(dim_0, dim_1, c=pred_test_colors)

check_if_path_exists_or_create('../saved_matrices/')
np.save(open('../saved_matrices/tsne_results_traj_len_6.npy', 'wb+'), tsne_results)
np.save(open('../saved_matrices/y_test_traj_len_6.npy', 'wb+'), y_test)
np.save(open('../saved_matrices/pred_test_binary_traj_len_6.npy', 'wb+'), pred_test)
np.save(open('../saved_matrices/pred_test_probs_traj_len_6.npy', 'wb+'), pred_test_probs[:, 1])

# Finding the clusters analytically with Kmeans
n_clusters = 5
kmeans_model = cluster.MiniBatchKMeans(n_clusters=n_clusters)
kmeans_model.fit(hidden_reps)
kmeans_cluster_idx = kmeans_model.predict(hidden_reps)
colors = np.array(sns.color_palette("Set2", n_clusters))
dim_0 = np.reshape(tsne_results[:, 0], tsne_results.shape[0])
dim_1 = np.reshape(tsne_results[:, 1], tsne_results.shape[0])
plt.scatter(dim_0, dim_1, c=colors[kmeans_cluster_idx])

clusters = defaultdict(list)
for cl in range(n_clusters):
    for i in range(len(hidden_reps)):
        if kmeans_cluster_idx[i] == cl:
            clusters[cl].append(i)
for cl in clusters:
    print(len(clusters[cl]))

spectral = cluster.SpectralClustering(n_clusters=n_clusters, eigen_solver='arpack',
                                      affinity="nearest_neighbors")
spectral.fit(hidden_reps)
spectral_cluster_idx = spectral.fit_predict(hidden_reps)
dim_0 = np.reshape(tsne_results[:, 0], tsne_results.shape[0])
dim_1 = np.reshape(tsne_results[:, 1], tsne_results.shape[0])
plt.scatter(dim_0, dim_1, c=colors[spectral_cluster_idx])
```

## Five clusters

A: the big red one. B-E: small clusters going clockwise.

```
# Cluster A:
cluster_indices = np.where(np.logical_and(tsne_results[:, 0] > 10, tsne_results[:, 1] < -20))[0]
tsne_results_cluster = tsne_results[cluster_indices]
print(tsne_results_cluster.shape)
dim_0 = np.reshape(tsne_results_cluster[:, 0], tsne_results_cluster.shape[0])
dim_1 = np.reshape(tsne_results_cluster[:, 1], tsne_results_cluster.shape[0])
print(pred_test[cluster_indices].shape)
plt.scatter(dim_0, dim_1, c=y_test[cluster_indices])
print(y_test[cluster_indices])
print(pred_test[cluster_indices])

for i in np.random.choice(len(cluster_indices), 5, replace=False):
    c = cluster_indices[i]
    print("sample at index: {}".format(c))
    prediction = pred_test[c]
    true_y = y_test[c]
    print("predicted: {}, true: {}".format(prediction, true_y))
    student = student_ids_test[c]
    traj_id = student_to_traj_map[student]
    print_all_asts_in_traj(hoc_num, traj_id, filename='../chosen_trajectories/cluster_a_{}.json'.format(i))
    print("#####################################")

# Cluster B:
cluster_indices = np.where(np.logical_and(tsne_results[:, 0] < -10, tsne_results[:, 1] < -15))[0]
tsne_results_cluster = tsne_results[cluster_indices]
print(tsne_results_cluster.shape)
dim_0 = np.reshape(tsne_results_cluster[:, 0], tsne_results_cluster.shape[0])
dim_1 = np.reshape(tsne_results_cluster[:, 1], tsne_results_cluster.shape[0])
print(y_test[cluster_indices].shape)
plt.scatter(dim_0, dim_1, c=y_test[cluster_indices])

for i in np.random.choice(len(cluster_indices), 5, replace=False):
    c = cluster_indices[i]
    print("sample at index: {}".format(c))
    prediction = pred_test[c]
    true_y = y_test[c]
    print("predicted: {}, true: {}".format(prediction, true_y))
    student = student_ids_test[c]
    traj_id = student_to_traj_map[student]
    print_all_asts_in_traj(hoc_num, traj_id, filename='../chosen_trajectories/cluster_b_{}.json'.format(i))
    print("#####################################")

# Cluster C
cluster_indices = np.where(np.logical_and(tsne_results[:, 0] < -10, tsne_results[:, 1] > 10))[0]
tsne_results_cluster = tsne_results[cluster_indices]
print(tsne_results_cluster.shape)
dim_0 = np.reshape(tsne_results_cluster[:, 0], tsne_results_cluster.shape[0])
dim_1 = np.reshape(tsne_results_cluster[:, 1], tsne_results_cluster.shape[0])
print(y_test[cluster_indices].shape)
plt.scatter(dim_0, dim_1, c=y_test[cluster_indices])

for i in np.random.choice(len(cluster_indices), 5, replace=False):
    c = cluster_indices[i]
    print("sample at index: {}".format(c))
    prediction = pred_test[c]
    true_y = y_test[c]
    print("predicted: {}, true: {}".format(prediction, true_y))
    student = student_ids_test[c]
    traj_id = student_to_traj_map[student]
    print_all_asts_in_traj(hoc_num, traj_id, filename='../chosen_trajectories/cluster_c_{}.json'.format(i))
    print("#####################################")

# Cluster D (mostly unsuccessful):
cluster_indices = np.where(np.logical_and(tsne_results[:, 0] < 5, tsne_results[:, 1] > 20))[0]
tsne_results_cluster = tsne_results[cluster_indices]
print(tsne_results_cluster.shape)
dim_0 = np.reshape(tsne_results_cluster[:, 0], tsne_results_cluster.shape[0])
dim_1 = np.reshape(tsne_results_cluster[:, 1], tsne_results_cluster.shape[0])
print(y_test[cluster_indices].shape)
plt.scatter(dim_0, dim_1, c=y_test[cluster_indices])

for i in np.random.choice(len(cluster_indices), 5, replace=False):
    c = cluster_indices[i]
    print("sample at index: {}".format(c))
    prediction = pred_test[c]
    true_y = y_test[c]
    print("predicted: {}, true: {}".format(prediction, true_y))
    student = student_ids_test[c]
    traj_id = student_to_traj_map[student]
    print_all_asts_in_traj(hoc_num, traj_id, filename='../chosen_trajectories/cluster_d_{}.json'.format(i))
    print("#####################################")

# Cluster E: mixed successful / unsuccessful, but for some reason its own little cluster
cluster_indices = np.where(np.logical_and(tsne_results[:, 0] > 15, tsne_results[:, 1] > 10))[0]
tsne_results_cluster = tsne_results[cluster_indices]
print(tsne_results_cluster.shape)
dim_0 = np.reshape(tsne_results_cluster[:, 0], tsne_results_cluster.shape[0])
dim_1 = np.reshape(tsne_results_cluster[:, 1], tsne_results_cluster.shape[0])
print(y_test[cluster_indices].shape)
plt.scatter(dim_0, dim_1, c=y_test[cluster_indices])

cluster_indices = np.where(np.logical_and(tsne_results[:, 0] > 15, tsne_results[:, 1] > 10))[0]
tsne_results_cluster = tsne_results[cluster_indices]
print(tsne_results_cluster.shape)
dim_0 = np.reshape(tsne_results_cluster[:, 0], tsne_results_cluster.shape[0])
dim_1 = np.reshape(tsne_results_cluster[:, 1], tsne_results_cluster.shape[0])
print(pred_test[cluster_indices].shape)
plt.scatter(dim_0, dim_1, c=pred_test[cluster_indices])

for i in np.random.choice(len(cluster_indices), 5, replace=False):
    c = cluster_indices[i]
    print("sample at index: {}".format(c))
    prediction = pred_test[c]
    true_y = y_test[c]
    print("predicted: {}, true: {}".format(prediction, true_y))
    student = student_ids_test[c]
    traj_id = student_to_traj_map[student]
    print_all_asts_in_traj(hoc_num, traj_id, filename='../chosen_trajectories/cluster_e_{}.json'.format(i))
    print("#####################################")

student_highest_prob = student_ids_test[sample_with_highest_prob]
student_lowest_prob = student_ids_test[sample_with_lowest_prob]
traj_highest_prob = student_to_traj_map[student_highest_prob]
traj_lowest_prob = student_to_traj_map[student_lowest_prob]
```

```
print_all_asts_in_traj(hoc_num, traj_highest_prob, filename='../chosen_trajectories/most_likely_unsuccessful.json')
print_all_asts_in_traj(hoc_num, traj_lowest_prob, filename='../chosen_trajectories/most_likely_successful.json')
print(pred_test_probs[sample_with_lowest_prob])
print(y_test[sample_with_lowest_prob])

import pprint
with open('../chosen_trajectories/most_likely_successful.json', 'rb+') as f:
    traj_json = json.load(f)
pprint.pprint(traj_json)

traj_to_score_map = get_traj_to_score_map(hoc_num)
print(traj_to_score_map[62046])
print(traj_to_score_map[79125])

traj_id = 79125
student_to_traj_map = get_student_to_traj_map(hoc_num)
students_who_solved_next_problem = set(get_students_who_solved_next_problem(hoc_num))
student_ids = sorted(student_to_traj_map.keys())
result_students = []
for s in student_ids:
    if student_to_traj_map[s] == traj_id:
        result_students.append(s)

count = 0
for s in result_students:
    if s in students_who_solved_next_problem:
        count += 1
success_rate = count / float(len(result_students))
print(success_rate)
print(len(result_students))
print(result_students)
```
# Grouping data using Python

In this tutorial we're going to learn how to use basic Python functionality to group datasets.

```
import csv
from pprint import pprint

# recall, opening the file and reading the data
with open('../data/csv/patients.csv') as f:
    reader = csv.DictReader(f)
    for i, row in enumerate(reader):
        pprint(row)
        if i >= 1:
            break

# then we're going to parse out a group, e.g. Gender
# recall, opening the file and reading the data
with open('../data/csv/patients.csv') as f:
    reader = csv.DictReader(f)
    for i, row in enumerate(reader):
        patient_gender = row['GENDER']
        print(patient_gender)
        if i >= 2:
            break

# how do we know what the unique genders are?
# let's iterate over them and create a set
patient_genders = set()
with open('../data/csv/patients.csv') as f:
    reader = csv.DictReader(f)
    for i, row in enumerate(reader):
        patient_gender = row['GENDER']
        patient_genders.add(patient_gender)
print(patient_genders)

# okay, we have 2 genders, let's create 2 lists
male_patients = []
female_patients = []
with open('../data/csv/patients.csv') as f:
    reader = csv.DictReader(f)
    for i, row in enumerate(reader):
        patient_gender = row['GENDER']
        if patient_gender == 'M':
            male_patients.append(row)
        elif patient_gender == 'F':
            female_patients.append(row)
        else:
            raise Exception('Unknown Gender')
pprint(male_patients[0])
pprint(female_patients[0])
```

## Can we do better?

What's wrong with the code above?

1. Multiple iterations over the file
2. Brittle... not resilient to new genders

What can we do?

1. Use a dictionary to store the groupings
2. Make the code case insensitive

```
# patients by gender
patients_by_gender = {}
patients_by_gender['F'] = ['patient1', 'patient3']
patients_by_gender['M'] = ['patient2', 'patient4']
pprint(patients_by_gender)
print(patients_by_gender.keys())
print(patients_by_gender.values())
print(patients_by_gender.items())

# check if a gender is in the dictionary
print('F' in patients_by_gender)
print('f' in patients_by_gender)

# let's group using a dictionary
patients_by_gender = {}
with open('../data/csv/patients.csv') as f:
    reader = csv.DictReader(f)
    for row in reader:
        patient_gender = row['GENDER'].upper()  # let's store the keys as uppercase
        # check to see if the key exists, if not
        if patient_gender not in patients_by_gender:
            # add the key
            patients_by_gender[patient_gender] = []  # create an empty list
        # append the patient as a new row to the correct grouping
        patients_by_gender[patient_gender].append(row)
print(patients_by_gender.keys())
pprint(patients_by_gender['F'][0:2])
print(patients_by_gender['F'][0]['LAST'])
```

## Let's make the code reusable

1. Create a function
2. Externalize parameters that will change from the function

What's common?

1. The csv processing
2. The grouping logic

What's different?

1. The file name
2. The groupby parameters

```
# define the function
def group_patient(file_name, groupby):
    pass

# call the function
patients_by_gender = group_patient('../data/csv/patients.csv', 'GENDER')
print(patients_by_gender)

# now let's implement the function
def group_patient(file_name, groupby):
    patients_by_group = {}
    with open(file_name) as f:
        reader = csv.DictReader(f)
        for row in reader:
            # note: we renamed the variable to _attribute
            patient_attribute = row[groupby].upper()
            # check to see if the key exists, if not
            if patient_attribute not in patients_by_group:
                # add the key
                patients_by_group[patient_attribute] = []  # create an empty list
            # append the patient as a new row to the correct grouping
            patients_by_group[patient_attribute].append(row)
    return patients_by_group

patients_by_gender = group_patient('../data/csv/patients.csv', 'GENDER')
pprint(patients_by_gender['F'][0:2])

# but wait... is there anything patient specific???
# let's refactor the code to make it more extensible
# and easier to read
def group(file_name, groupby):
    grouped_data = {}
    with open(file_name) as f:
        reader = csv.DictReader(f)
        for row in reader:
            attribute = row[groupby].upper()
            if attribute not in grouped_data:
                grouped_data[attribute] = []
            grouped_data[attribute].append(row)
    return grouped_data

patients_by_gender = group('../data/csv/patients.csv', 'GENDER')
pprint(patients_by_gender['F'][0])
```

## What if we want multiple grouping levels?

```
# single level grouping by gender
{
    "F": ['patient1', 'patient2'],
    "M": ['patient3', 'patient4']
}

# multi level grouping by gender and race
{
    "F": {
        "WHITE": ['patient1', 'patient2'],
        "HISPANIC": ['patient3']
    },
    "M": {
        "WHITE": [],
        "HISPANIC": ['patient4', 'patient5']
    }
}

def group_file(file_name, groupby):
    with open(file_name) as f:
        reader = csv.DictReader(f)
        return group(reader, groupby)

def group(iterable, groupby):
    grouped_data = {}
    for item in iterable:
        attribute = item[groupby].upper()
        if attribute not in grouped_data:
            grouped_data[attribute] = []
        grouped_data[attribute].append(item)
    return grouped_data

patients_by_gender = group_file('../data/csv/patients.csv', 'GENDER')
pprint(patients_by_gender['F'][0])

# patients by gender and race
# here we'll use a dictionary to represent the 2nd level
patients_by_gender_and_race = group_file('../data/csv/patients.csv', 'GENDER')
for gender in patients_by_gender_and_race.keys():
    patients_by_gender_and_race[gender] = {}
pprint(patients_by_gender_and_race)

# now let's perform the groupings
patients_by_gender_and_race = group_file('../data/csv/patients.csv', 'GENDER')
# print(patients_by_gender_and_race)
for gender in patients_by_gender_and_race.keys():
    patients_by_gender_and_race[gender] = group(patients_by_gender_and_race[gender], 'RACE')
# print(patients_by_gender)

# finally let's print the unique genders and races
for gender in patients_by_gender_and_race.keys():
    for race in patients_by_gender_and_race[gender].keys():
        print(gender, race)

# as a sanity check let's look over the dataset again
gender = set()
race = set()
with open('../data/csv/patients.csv') as f:
    reader = csv.DictReader(f)
    for row in reader:
        gender.add(row['GENDER'])
        race.add(row['RACE'])
print(gender, race)

# what about 3 levels of grouping?
grouped_patients = group_file('../data/csv/patients.csv', 'GENDER')
for gender in grouped_patients.keys():
    grouped_patients[gender] = group(grouped_patients[gender], 'RACE')
    for race in grouped_patients[gender].keys():
        grouped_patients[gender][race] = group(grouped_patients[gender][race], 'ETHNICITY')
# print(grouped_patients)

# let's print the unique genders and races
for gender in grouped_patients.keys():
    for race in grouped_patients[gender].keys():
        for ethnicity in grouped_patients[gender][race].keys():
            print(gender, race, ethnicity)
```

## What about 4 levels of grouping?

I think we should refactor again. This time using [recursion](https://realpython.com/python-thinking-recursively/). This time we'll support any number of groupings.

```
def group_file_by_list(file_name, groupings):
    with open(file_name) as f:
        reader = csv.DictReader(f)
        grouped_data = group(reader, groupings[0])
    if len(groupings) > 1:
        group_by_list(grouped_data.keys(), grouped_data, groupings[1:])
    return grouped_data

def group_by_list(iterable, grouped_data, groupings):
    for item in iterable:
        grouped_data[item] = group(grouped_data[item], groupings[0])
        if len(groupings) > 1:
            group_by_list(grouped_data[item].keys(), grouped_data[item], groupings[1:])
    return grouped_data

groupings = ['GENDER', 'RACE', 'ETHNICITY', 'BIRTHPLACE']
grouped_patients = group_file_by_list('../data/csv/patients.csv', groupings)
for gender in grouped_patients.keys():
    for race in grouped_patients[gender].keys():
        for ethnicity in grouped_patients[gender][race].keys():
            print(gender, race, ethnicity)

# let's create a recursive print
def print_keys(d):
    for k, v in d.items():
        print_keys2(v, [k])

def print_keys2(d, l):
    if isinstance(d, dict):
        for k, v in d.items():
            print_keys2(v, l + [k])
    else:
        print(' '.join(l))

print_keys(grouped_patients)

# let's try it out on a few examples
grouped_patients = group_file_by_list('../data/csv/patients.csv', ['GENDER'])
print_keys(grouped_patients)
grouped_patients = group_file_by_list('../data/csv/patients.csv', ['RACE'])
print_keys(grouped_patients)
grouped_patients = group_file_by_list('../data/csv/patients.csv', ['GENDER', 'RACE'])
print_keys(grouped_patients)
grouped_patients = group_file_by_list('../data/csv/patients.csv', ['RACE', 'GENDER'])
print_keys(grouped_patients)
grouped_patients = group_file_by_list('../data/csv/patients.csv', ['ETHNICITY'])
print_keys(grouped_patients)
grouped_patients = group_file_by_list('../data/csv/patients.csv', ['RACE', 'ETHNICITY'])
print_keys(grouped_patients)
grouped_patients = group_file_by_list('../data/csv/patients.csv', ['RACE', 'ETHNICITY', 'BIRTHPLACE', 'GENDER'])
print_keys(grouped_patients)
```
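The "check for the key, then append" pattern inside `group` can also be written with `collections.defaultdict`, which creates the empty list automatically on first access. A minimal sketch follows, with the same `(iterable, groupby)` signature as the final `group` function above; the example rows are made up for illustration:

```python
from collections import defaultdict

def group(iterable, groupby):
    """Group dict-like rows by the uppercased value of one column."""
    grouped_data = defaultdict(list)  # missing keys start as empty lists
    for item in iterable:
        grouped_data[item[groupby].upper()].append(item)
    return dict(grouped_data)  # hand back a plain dict

rows = [{'GENDER': 'F', 'LAST': 'Smith'},
        {'GENDER': 'm', 'LAST': 'Jones'},
        {'GENDER': 'F', 'LAST': 'Lee'}]
print(sorted(group(rows, 'GENDER')))  # ['F', 'M']
```

Converting back to a plain `dict` at the end keeps the rest of the tutorial unchanged: callers still get ordinary dictionary behavior, including `KeyError` on a missing group.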
<div class="alert alert-block alert-info" style="margin-top: 20px"> <a href="https://cocl.us/topNotebooksPython101Coursera"> <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/TopAd.png" width="750" align="center"> </a> </div> <a href="https://cognitiveclass.ai/"> <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/CCLog.png" width="200" align="center"> </a> <h1>2D <code>Numpy</code> in Python</h1> <p><strong>Welcome!</strong> This notebook will teach you about using <code>Numpy</code> in the Python Programming Language. By the end of this lab, you'll know what <code>Numpy</code> is and the <code>Numpy</code> operations.</p> <h2>Table of Contents</h2> <div class="alert alert-block alert-info" style="margin-top: 20px"> <ul> <li><a href="create">Create a 2D Numpy Array</a></li> <li><a href="access">Accessing different elements of a Numpy Array</a></li> <li><a href="op">Basic Operations</a></li> </ul> <p> Estimated time needed: <strong>20 min</strong> </p> </div> <hr> <h2 id="create">Create a 2D Numpy Array</h2> ``` # Import the libraries import numpy as np import matplotlib.pyplot as plt ``` Consider the list <code>a</code>, the list contains three nested lists **each of equal size**. ``` # Create a list a = [[11, 12, 13], [21, 22, 23], [31, 32, 33]] a ``` We can cast the list to a Numpy Array as follow ``` # Convert list to Numpy Array # Every element is the same type A = np.array(a) A ``` We can use the attribute <code>ndim</code> to obtain the number of axes or dimensions referred to as the rank. ``` # Show the numpy array dimensions A.ndim ``` Attribute <code>shape</code> returns a tuple corresponding to the size or number of each dimension. ``` # Show the numpy array shape A.shape ``` The total number of elements in the array is given by the attribute <code>size</code>. 
``` # Show the numpy array size A.size ``` <hr> <h2 id="access">Accessing different elements of a Numpy Array</h2> We can use rectangular brackets to access the different elements of the array. The correspondence between the rectangular brackets and the list and the rectangular representation is shown in the following figure for a 3x3 array: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumTwoEg.png" width="500" /> We can access the 2nd-row 3rd column as shown in the following figure: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumTwoFT.png" width="400" /> We simply use the square brackets and the indices corresponding to the element we would like: ``` # Access the element on the second row and third column A[1, 2] ``` We can also use the following notation to obtain the elements: ``` # Access the element on the second row and third column A[1][2] ``` Consider the elements shown in the following figure <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumTwoFF.png" width="400" /> We can access the element as follows ``` # Access the element on the first row and first column A[0][0] ``` We can also use slicing in numpy arrays. Consider the following figure. 
We would like to obtain the first two columns in the first row <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumTwoFSF.png" width="400" /> This can be done with the following syntax ``` # Access the element on the first row and first and second columns A[0][0:2] ``` Similarly, we can obtain the first two rows of the 3rd column as follows: ``` # Access the element on the first and second rows and third column A[0:2, 2] ``` Corresponding to the following figure: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumTwoTST.png" width="400" /> <hr> <h2 id="op">Basic Operations</h2> We can also add arrays. The process is identical to matrix addition. Matrix addition of <code>X</code> and <code>Y</code> is shown in the following figure: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumTwoAdd.png" width="500" /> The numpy array is given by <code>X</code> and <code>Y</code> ``` # Create a numpy array X X = np.array([[1, 0], [0, 1]]) X # Create a numpy array Y Y = np.array([[2, 1], [1, 2]]) Y ``` We can add the numpy arrays as follows. ``` # Add X and Y Z = X + Y Z ``` Multiplying a numpy array by a scaler is identical to multiplying a matrix by a scaler. If we multiply the matrix <code>Y</code> by the scaler 2, we simply multiply every element in the matrix by 2 as shown in the figure. <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumTwoDb.png" width="500" /> We can perform the same operation in numpy as follows ``` # Create a numpy array Y Y = np.array([[2, 1], [1, 2]]) Y # Multiply Y with 2 Z = 2 * Y Z ``` Multiplication of two arrays corresponds to an element-wise product or Hadamard product. Consider matrix <code>X</code> and <code>Y</code>. 
The Hadamard product corresponds to multiplying each of the elements in the same position, i.e. multiplying elements contained in the same color boxes together. The result is a new matrix that is the same size as matrix <code>Y</code> or <code>X</code>, as shown in the following figure. <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Images/NumTwoMul.png" width="500" /> We can perform element-wise product of the array <code>X</code> and <code>Y</code> as follows: ``` # Create a numpy array Y Y = np.array([[2, 1], [1, 2]]) Y # Create a numpy array X X = np.array([[1, 0], [0, 1]]) X # Multiply X with Y Z = X * Y Z ``` We can also perform matrix multiplication with the numpy arrays <code>A</code> and <code>B</code> as follows: First, we define matrix <code>A</code> and <code>B</code>: ``` # Create a matrix A A = np.array([[0, 1, 1], [1, 0, 1]]) A # Create a matrix B B = np.array([[1, 1], [1, 1], [-1, 1]]) B ``` We use the numpy function <code>dot</code> to multiply the arrays together. ``` # Calculate the dot product Z = np.dot(A,B) Z # Calculate the sine of Z np.sin(Z) ``` We use the numpy attribute <code>T</code> to calculate the transposed matrix ``` # Create a matrix C C = np.array([[1,1],[2,2],[3,3]]) C # Get the transposed of C C.T ``` <hr> <h2>The last exercise!</h2> <p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. 
So, please read and follow <a href="https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/" target="_blank">this article</a> to learn how to share your work. <hr> <h3>About the Authors:</h3> <p><a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p> Other contributors: <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a> <hr> <p>Copyright &copy; 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href="https://cognitiveclass.ai/mit-license/">MIT License</a>.</p>
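To recap the operations covered in this notebook (slicing, addition, scalar multiplication, the Hadamard product, the dot product, and the transpose) in one runnable cell, using the same arrays defined above:

```python
import numpy as np

A = np.array([[0, 1, 1], [1, 0, 1]])
print(A[0, 0:2])     # first row, first two columns -> [0 1]

X = np.array([[1, 0], [0, 1]])
Y = np.array([[2, 1], [1, 2]])

print(X + Y)         # matrix addition, element by element
print(2 * Y)         # scalar multiplication
print(X * Y)         # Hadamard (element-wise) product

B = np.array([[1, 1], [1, 1], [-1, 1]])
print(np.dot(A, B))  # matrix (dot) product
print(A.T)           # transpose
```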
``` # HIDDEN # The standard set of libraries we need import numpy as np import matplotlib.pyplot as plt %matplotlib inline # Make plots look a little bit more fancy plt.style.use('fivethirtyeight') # The standard library for data in tables import pandas as pd # A tiny function to read a file directly from a URL from urllib.request import urlopen def read_url(url): return urlopen(url).read().decode() ``` This page is largely derived from `Another_Kind_Of_Character` of the UC Berkeley course - see the license file on the main website. ``` # HIDDEN # Read the text of Pride and Prejudice, split into chapters. book_url = 'http://www.gutenberg.org/cache/epub/42671/pg42671.txt' book_text = read_url(book_url) # Break the text into Chapters book_chapters = book_text.split('CHAPTER ') # Drop the first "Chapter" - it's the Project Gutenberg header book_chapters = book_chapters[1:] ``` In some situations, the relationships between quantities allow us to make predictions. This text will explore how to make accurate predictions based on incomplete information and develop methods for combining multiple sources of uncertain information to make decisions. As an example of visualizing information derived from multiple sources, let us first use the computer to get some information that would be tedious to acquire by hand. In the context of novels, the word "character" has a second meaning: a printed symbol such as a letter or number or punctuation symbol. Here, we ask the computer to count the number of characters and the number of periods in each chapter of *Pride and Prejudice*. ``` # In each chapter, count the number of all characters; # Also count the number of periods. chars_periods = pd.DataFrame.from_dict({ 'Number of chars in chapter': [len(s) for s in book_chapters], 'Number of periods': np.char.count(book_chapters, '.') }) ``` Here are the data.
Each row of the table corresponds to one chapter of the novel and displays the number of characters as well as the number of periods in the chapter. Not surprisingly, chapters with fewer characters also tend to have fewer periods, in general – the shorter the chapter, the fewer sentences there tend to be, and vice versa. The relation is not entirely predictable, however, as sentences are of varying lengths and can involve other punctuation such as question marks. ``` chars_periods ``` In the plot below, there is a dot for each chapter in the book. The horizontal axis represents the number of periods and the vertical axis represents the number of characters. ``` plt.figure(figsize=(6, 6)) plt.scatter(chars_periods['Number of periods'], chars_periods['Number of chars in chapter'], color='darkblue') plt.xlabel('Number of periods') plt.ylabel('Number of characters') ``` Notice how the blue points are roughly clustered around a straight line. Now look at all the chapters that contain about 100 periods. The plot shows that those chapters contain roughly 10,000 to 15,000 characters. That's about 100 to 150 characters per period. Indeed, it appears from looking at the plot that the chapters tend to have somewhere between 100 and 150 characters between periods, as a very rough estimate. Perhaps Jane Austen was announcing something familiar to us now: the original 140-character limit of Twitter.
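The clustering around a line suggests estimating the characters-per-period slope with a least-squares fit. Here is a sketch using `np.polyfit` on simulated chapter counts (the numbers below are made up for illustration, not taken from the novel):

```python
import numpy as np

# Simulated chapter counts: roughly 120 characters per period plus noise
# (hypothetical values, not measured from Pride and Prejudice).
rng = np.random.default_rng(0)
periods = rng.integers(50, 200, size=60)
chars = 120 * periods + rng.normal(0, 2000, size=60)

# Degree-1 least-squares fit: the slope estimates characters per period.
slope, intercept = np.polyfit(periods, chars, 1)
print(round(slope))  # close to the simulated 120
```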
# Exponentially weighted averages --- ***Author: Piotr Skalski*** ## Imports ``` import numpy as np import seaborn as sns import matplotlib.pyplot as plt import pandas as pd import os from pandas_datareader import data ``` ## Settings ``` # We will play around with the value of Apple shares company_name = 'AAPL' # We are interested in data from the following period start_date = '2017-10-31' end_date = '2018-10-31' ``` ## Plotting functions ``` def make_graph(data_y, x, labels, colors, plot_name, file_name=None): plt.figure(figsize=(16,12)) plt.style.use('dark_background') for series, color in zip(data_y, colors): plt.plot(x[:len(series)], series, color, lw=2) plt.title(plot_name, fontsize=30) plt.ylabel('stock close value', fontsize=15) plt.xlabel('date', fontsize=15) plt.legend(labels, loc='lower right', prop={'size': 15}, framealpha=0.0) plt.tick_params(top=False, bottom=False, left=False, right=False, labelleft=False, labelbottom=False) plt.box(False) if file_name: plt.savefig(file_name) plt.close() ``` ## Data loading ``` panel_data = data.DataReader(company_name, 'yahoo', start_date, end_date) panel_data.head() # Extraction of key values close_data = panel_data["Close"].tolist() index = panel_data.index.tolist() ``` ## Visualization of the dataset ``` make_graph([close_data], index, ["Value"], ["#FFFFFF"], "") ``` ## Exponentially weighted averages ``` def exp_weighted_average(data, beta): weighted_average_values = [] for value in data: if len(weighted_average_values) == 0: weighted_average_values.append(value) else: next_value = beta * weighted_average_values[-1] + (1 - beta) * value weighted_average_values.append(next_value) return weighted_average_values ewa_98 = exp_weighted_average(close_data, 0.98) ewa_95 = exp_weighted_average(close_data, 0.95) ewa_90 = exp_weighted_average(close_data, 0.90) ewa_80 = exp_weighted_average(close_data, 0.80) data = [ close_data, ewa_80, ewa_90, ewa_95, ewa_98 ] labels = [ "Close value", "EWA 0.80", "EWA 0.90", "EWA 0.95", "EWA 
0.98" ] colors = [ "#FFFFFF", "#9E7CC1", "#9897CE", "#93B2DC", "#8ECDEA" ] make_graph(data, index, labels, colors, "") ``` ## Animation ### Settings ``` OUTPUT_DIR = "ewa_visualisations" ANIMATION_SHIFT = 20 ``` ### Support functions ``` def cut_list(array, iteration, shift): if iteration < shift: return [] elif iteration - shift < len(array): return array[:(iteration - shift)] else: return array labels = [ "Close value", "EWA 0.80", "EWA 0.90", "EWA 0.95", "EWA 0.98" ] colors = [ "#FFFFFF", "#9E7CC1", "#9897CE", "#93B2DC", "#8ECDEA" ] for i in range(len(close_data) + 3 * ANIMATION_SHIFT): file_name = "EWA_{:05}.png".format(i) output_file = os.path.join(OUTPUT_DIR, file_name) data = [ close_data, cut_list(ewa_80, i, 0 * ANIMATION_SHIFT), cut_list(ewa_90, i, 1 * ANIMATION_SHIFT), cut_list(ewa_95, i, 2 * ANIMATION_SHIFT), cut_list(ewa_98, i, 3 * ANIMATION_SHIFT) ] make_graph(data, index, labels, colors, "", output_file) ```
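The recursion implemented above, $v_t = \beta v_{t-1} + (1 - \beta) x_t$ seeded with the first observation, is the same exponentially weighted moving average that pandas provides via `Series.ewm`: with `adjust=False` and `alpha = 1 - beta` the built-in reproduces the hand-rolled values exactly. A quick check on a toy price list:

```python
import numpy as np
import pandas as pd

def exp_weighted_average(data, beta):
    # Same recursion as above: v_t = beta * v_{t-1} + (1 - beta) * x_t,
    # seeded with the first observation.
    values = []
    for x in data:
        if not values:
            values.append(x)
        else:
            values.append(beta * values[-1] + (1 - beta) * x)
    return values

prices = [100.0, 102.0, 101.0, 105.0, 107.0]  # toy close prices
beta = 0.9

manual = exp_weighted_average(prices, beta)
# pandas equivalent: alpha = 1 - beta, adjust=False uses the same recursion
built_in = pd.Series(prices).ewm(alpha=1 - beta, adjust=False).mean()

print(np.allclose(manual, built_in))  # → True
```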
# Quick start skforecast This code is a quick example of how to create, validate and optimize a recursive multi-step forecaster, `ForecasterAutoreg`, using **skforecast**. For more detailed documentation, visit [User Guides](https://joaquinamatrodrigo.github.io/skforecast/latest/user_guides/input-data.html). ## Libraries ``` # Libraries # ============================================================================== import pandas as pd import matplotlib.pyplot as plt from sklearn.ensemble import RandomForestRegressor from sklearn.metrics import mean_squared_error from skforecast.ForecasterAutoreg import ForecasterAutoreg from skforecast.model_selection import backtesting_forecaster from skforecast.model_selection import grid_search_forecaster ``` ## Data ``` # Download data # ============================================================================== url = ('https://raw.githubusercontent.com/JoaquinAmatRodrigo/skforecast/master/data/h2o.csv') data = pd.read_csv(url, sep=',', header=0, names=['y', 'datetime']) # Data preprocessing # ============================================================================== data['datetime'] = pd.to_datetime(data['datetime'], format='%Y/%m/%d') data = data.set_index('datetime') data = data.asfreq('MS') data = data['y'] data = data.sort_index() # Train-test dates # ============================================================================== end_train = '2005-06-01 23:59:00' print(f"Train dates : {data.index.min()} --- {data.loc[:end_train].index.max()} (n={len(data.loc[:end_train])})") print(f"Test dates : {data.loc[end_train:].index.min()} --- {data.index.max()} (n={len(data.loc[end_train:])})") # Plot # ============================================================================== fig, ax=plt.subplots(figsize=(9, 4)) data.loc[:end_train].plot(ax=ax, label='train') data.loc[end_train:].plot(ax=ax, label='test') ax.legend(); ``` ## Train forecaster For more detailed documentation, visit: [User guide 
ForecasterAutoreg](https://joaquinamatrodrigo.github.io/skforecast/latest/user_guides/autoregresive-forecaster.html). ``` # Create and fit Recursive multi-step forecaster (ForecasterAutoreg) # ============================================================================== forecaster = ForecasterAutoreg( regressor = RandomForestRegressor(random_state=123), lags = 15 ) forecaster.fit(y=data.loc[:end_train]) forecaster ``` ## Prediction This method predicts $n$ steps in the future. ``` # Predict # ============================================================================== predictions = forecaster.predict(steps=len(data.loc[end_train:])) predictions.head(3) # Plot predictions # ============================================================================== fig, ax=plt.subplots(figsize=(9, 4)) data.loc[:end_train].plot(ax=ax, label='train') data.loc[end_train:].plot(ax=ax, label='test') predictions.plot(ax=ax, label='predictions') ax.legend(); # Prediction error # ============================================================================== error_mse = mean_squared_error( y_true = data.loc[end_train:], y_pred = predictions ) print(f"Test error (mse): {error_mse}") ``` ## Backtesting: forecaster validation Backtesting is a term used in modeling to refer to testing a predictive model on historical data. Backtesting involves moving backward in time, step-by-step, in as many stages as is necessary. Therefore, it is a special type of cross-validation applied to previous period(s). For more detailed documentation, visit: [User guide Backtesting forecaster](https://joaquinamatrodrigo.github.io/skforecast/latest/user_guides/backtesting.html). 
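To make the fold structure concrete, here is a plain-Python sketch (not skforecast's actual implementation) that generates the expanding-window train/test index ranges a refit-and-advance backtest walks through:

```python
def backtest_splits(n_obs, initial_train_size, steps):
    """Yield (train_end, test_indices) pairs for an expanding-window backtest.

    Each fold trains on observations [0, train_end) and predicts the next
    `steps` observations, then the forecast origin advances by `steps`.
    """
    train_end = initial_train_size
    while train_end < n_obs:
        test = list(range(train_end, min(train_end + steps, n_obs)))
        yield train_end, test
        train_end += steps

# Toy series of 25 observations, training on the first 10, 5 steps per fold.
for train_end, test in backtest_splits(25, 10, 5):
    print(train_end, test[0], test[-1])
# → 10 10 14
#   15 15 19
#   20 20 24
```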
``` # Backtesting # ============================================================================== metric, predictions_backtest = backtesting_forecaster( forecaster = forecaster, y = data, initial_train_size = len(data.loc[:end_train]), fixed_train_size = False, steps = 10, metric = 'mean_squared_error', refit = True, verbose = True ) print(f"Backtest error: {metric}") ``` ## Grid search: forecaster optimization The skforecast library combines a grid search strategy with backtesting to identify the combination of lags and hyperparameters that achieves the best prediction performance. For more detailed documentation, visit: [Grid search forecaster](https://joaquinamatrodrigo.github.io/skforecast/latest/user_guides/grid-search-forecaster.html). ``` # Grid search hyperparameter and lags # ============================================================================== # Regressor hyperparameters param_grid = {'n_estimators': [50, 100], 'max_depth': [5, 10, 15]} # Lags used as predictors lags_grid = [3, 10, [1, 2, 3, 20]] results_grid = grid_search_forecaster( forecaster = forecaster, y = data, param_grid = param_grid, lags_grid = lags_grid, steps = 10, refit = True, metric = 'mean_squared_error', initial_train_size = len(data.loc[:end_train]), fixed_train_size = False, return_best = True, verbose = False ) # Grid results # ============================================================================== results_grid %%html <style> .jupyter-wrapper .jp-CodeCell .jp-Cell-inputWrapper .jp-InputPrompt {display: none;} </style> ```
``` # Import the usual libraries import numpy as np import matplotlib import matplotlib.pyplot as plt import matplotlib.patches as mpatches # Enable inline plotting %matplotlib inline ``` # Linearity Correction Example notebook to generate a non-linearity correction function. The general idea is to use sample-up-the-ramp data to determine a flux-dependent correction factor that linearizes the ramps. The final product is a set of polynomial coefficients that represent the correction function. In principle, this can be accomplished independently for every pixel in the array, but for simplicity, this notebook determines a single correction function that is used for every light-sensitive pixel (i.e., excluding reference pixels). ``` # Reference pixel correction modules and functions import ref_pixels from ref_pixels import robust from ref_pixels import reffix_hxrg, get_fits_data from ref_pixels import jl_poly_fit, jl_poly from ref_pixels.utils import find_sat, cube_fit, hist_indices # Astropy FITS from astropy.io import fits # Progress bar from tqdm.auto import trange, tqdm def gen_average_ramp(allfiles, det, bias=None, deg=2, **kwargs): """ Create an averaged data cube from all FITS cubes """ nfiles = len(allfiles) # Time array tarr = det.times_group_avg data_mean = np.zeros([nz, ny, nx]) for fname in tqdm(allfiles): # Read in data data = get_fits_data(fname, bias=bias, reffix=True, **kwargs) # Perform fit to 50% saturation cf = cube_fit(tarr, data, sat_frac=0.5, deg=deg, ref_info=det.ref_info) # Subtract bias offset image data -= cf[0] data_mean += data # Take average data_mean /= nfiles return data_mean def gen_lincorr(data_cube, det, return_binvals=False): """ Create a linearity correction function. This produces a set of polynomial coefficients that are used to generate a correction factor based on measured flux values (after bias subtraction and reference pixel correction). This assumes all pixels have the same non-linearity function. 
While this is not strictly true, creating independent corrections for each pixel can be rather challenging. The method showcased here produces corrected data that is >99% linear. """ # Time array tarr = det.times_group_avg # Active and reference pixel masks mask_act = det.mask_act # Get saturation values for each pixel sat_vals = find_sat(data_cube, ref_info=det.ref_info) # Fit linear regime of all pixels to get coefficients # cf_all = cube_fit(tarr, data_cube, sat_vals=sat_vals, sat_frac=0.25, fit_zero=True, deg=1) cf_all = cube_fit(tarr[1:], data_cube[1:], sat_vals=sat_vals, sat_frac=0.5, fit_zero=True, deg=2) # Generate a ramp of ideal linear data from fit coefficients data_fit = jl_poly(tarr, cf_all[0:2]) # Calculate correction for every data sample ratio = data_fit / data_cube # Create a ramp mask of pixels below saturation levels mask_good = data_cube < 0.99*sat_vals # Combine with active pixel mask mask = mask_good & mask_act # All data flattened into a single array xv = data_cube[mask].flatten() yv = ratio[mask].flatten() # Bin data bsize = 1000 bins = np.arange(xv.min(), xv.max()+bsize, bsize) ig, vg, cv = hist_indices(xv, bins=bins, return_more=True) # Grab indices that have non-negative data well_max_fit = np.median(sat_vals) nvals = np.array([len(i) for i in ig]) imask = (nvals>0) & (cv>=0) & (cv<well_max_fit) ig_nozero = np.array(ig)[imask] # Take the median of data in each valid bin xmed = np.array([np.median(xv[i]) for i in ig_nozero]) ymed = np.array([np.median(yv[i]) for i in ig_nozero]) # Add data point for ratio of 1 at x=0 ifit = (ymed>=1) xfit = np.concatenate(([0],xmed[ifit])) yfit = np.concatenate(([1],ymed[ifit])) cf = jl_poly_fit(xfit, yfit, deg=7, robust_fit=True) if return_binvals: return cf, xmed, ymed else: return cf ``` ## Linearity FITS Cubes Paths to linearity and superbias data. 
``` import os flat_dir = '/Users/jarron/SHARK-NIR/20200215_Lin/Lin_250kHz/' flat_files = np.array([flat_dir + f for f in os.listdir(flat_dir) if f.endswith('.fits')]) flat_files.sort() # Read in superbias image bias_path = '/Users/jarron/SHARK-NIR/20200220_Dark/SHARK-NIR_250Hz_superbias_example.fits' superbias = get_fits_data(bias_path) ``` ## Detector timing Define a detector timing object that houses all the necessary information concerning the pixel and frame clocking, detector size and output channels, etc. ``` # Get shape information for input file hdul = fits.open(flat_files[0]) nz, ny, nx = hdul[0].data.shape hdul.close() # Detector timing info if nx<2048 and ny<2048: wind_mode = 'WINDOW' elif ny<2048: wind_mode = 'STRIPE' else: wind_mode = 'FULL' det = ref_pixels.detops.det_timing(mode='SHARK_250', wind_mode=wind_mode, xpix=nx, ypix=ny, ngroup=nz) # Double check basic frame size and setup information print(det.to_dict()) # Check timing information makes sense det.times_to_dict() # Time array tarr = det.times_group_avg print(tarr) # Active and reference pixel masks mask_ref = det.mask_ref mask_act = ~mask_ref ``` ## Calculate average ramp Call the function that will perform bias subtraction and reference pixel correction in order to calculate the final linearity calibration function. 
``` # Keyword arguments to pass to reference pixel correction before slope fitting kw_refpix = { 'nchans': det.nout, 'altcol': True, 'in_place': True, 'fixcol': True, 'avg_type': 'pixel', 'savgol': True, 'perint': False } data_mean = gen_average_ramp(flat_files, det, bias=superbias, **kw_refpix) # Generate the linearity correction polynomial coefficients cf_lincorr, vals_dn, ratio = gen_lincorr(data_mean, det, return_binvals=True) ``` Coefficients `cf_lincorr` can then be used to generate the correction factor directly from flux values (after bias subtraction and reference pixel correction): ``` python # Assume vals are an array of uncorrected values corr_fact = jl_poly(vals, cf_lincorr) vals_corr = vals * corr_fact ``` ``` # Create correction for average of pixel data # Don't correct reference pixels vals = np.median(data_mean[:,mask_act], axis=1) corr_fact = jl_poly(vals, cf_lincorr) vals_corr = vals * corr_fact # Check the correction compared to linear fit ifit = vals_corr < 40000 cf = jl_poly_fit(tarr[ifit],vals_corr[ifit]) frac_diff = (vals_corr - jl_poly(tarr, cf)) / vals_corr ``` In the above cell, we created a median ramp of all active pixels, then performed the correction on those values. 
However, we could instead flip the order around such that we first perform the linearity correction on the entire cube, then take the median of the corrected ramp: ``` python mask_ref = det.mask_ref mask_act = det.mask_act corr_fact_all = jl_poly(data_mean[:, mask_act].flatten(), cf_lincorr) # Create corrected cube data_mean_corr = np.zeros_like(data_mean) data_mean_corr[:, mask_act] = data_mean[:, mask_act] * corr_fact_all.reshape([data_mean.shape[0],-1]) data_mean_corr[:, mask_ref] = data_mean[:, mask_ref] # Take median of all active pixels data_med = np.median(data_mean_corr[:,mask_act], axis=1) # Delete correction factor array del corr_fact_all ``` ``` # Plot everything layout = """ AACCC BBCCC """ fig = plt.figure(constrained_layout=True, figsize=(12,5.5)) ax_dict = fig.subplot_mosaic(layout) # Linearity correction factor ax = ax_dict['A'] ax.plot(vals_dn, ratio, ls='none', marker='.') xvals = np.linspace(0, 60000, 1000) ax.plot(xvals, jl_poly(xvals,cf_lincorr), lw=4, alpha=0.5) ax.set_title('Linearity Correction') ax.set_xlabel('Pixel Values (DN)') ax.set_ylabel('Correction Factor') # ax = ax_dict['B'] iplot = vals<55000 ax.plot(vals[iplot], 100*frac_diff[iplot]) ax.set_ylim(np.array([-1,1])*np.max(np.abs(ax.get_ylim()))) ax.set_xlim(ax_dict['A'].get_xlim()) ax.set_title('Median Linear Deviation') ax.set_xlabel('Pixel Values (DN)') ax.set_ylabel('% Difference') ax = ax_dict['C'] ax.plot(tarr, vals, label='Raw Data') ax.plot(tarr, vals_corr, label='Corrected Data') ifit = vals_corr < 60000 cf = jl_poly_fit(tarr[ifit],vals_corr[ifit]) tvals = np.linspace(0,tarr.max(),100) ax.plot(tvals, jl_poly(tvals, cf), label='Linear Function', ls='--') ax.set_title('Example Ramp Correction') ax.set_xlabel('Time (sec)') ax.set_ylabel('Values (DN)') ax.legend(); # Save linearity coefficients file outdir = '/Users/jarron/SHARK-NIR/20200215_Lin/' file_out = 'SHARK-NIR_250Hz_lincorr.npy' np.save(outdir + file_out, cf_lincorr) corr_fact = np.zeros_like(data_mean) for i, cf in 
enumerate(cf_lincorr): corr_fact += cf * data_mean**i # Make sure reference pixels are excluded corr_fact[:, mask_ref] = 1 data_corr = data_mean * corr_fact # Get saturation values sat_vals = find_sat(data_corr, ref_info=det.ref_info) del corr_fact, data_corr # Save saturation values to file outdir = '/Users/jarron/SHARK-NIR/20200215_Lin/' file_out = 'SHARK-NIR_250Hz_lincorr_satvals.fits' hdu = fits.PrimaryHDU(sat_vals) hdu.writeto(outdir + file_out, overwrite=True) ```
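The essence of the procedure — fit the ratio of ideal-to-measured signal as a polynomial in the measured value, then multiply measurements by that polynomial — can be demonstrated on a synthetic ramp with a known non-linearity. This is a toy sketch with made-up numbers, independent of the SHARK-NIR data:

```python
import numpy as np

# Toy detector: true linear signal compressed by a known non-linearity,
# measured = true * (1 - k * true), a simple quadratic roll-off.
k = 2e-6
true_signal = np.linspace(0, 40000, 50)
measured = true_signal * (1 - k * true_signal)

# Fit the correction factor (true / measured) as a polynomial in the
# measured value, mirroring the jl_poly_fit(xfit, yfit, ...) step above.
ratio = np.ones_like(measured)
nz = measured > 0
ratio[nz] = true_signal[nz] / measured[nz]
cf = np.polyfit(measured, ratio, 3)

# Apply: corrected = measured * polynomial(measured)
corrected = measured * np.polyval(cf, measured)
print(np.max(np.abs(corrected - true_signal)))  # small residual
```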
# Working with code cells In this notebook you'll get some experience working with code cells. First, run the cell below. As I mentioned before, you can run the cell by selecting it, then clicking the "run cell" button above. However, it's easier to run it by pressing **Shift + Enter** so you don't have to take your hands away from the keyboard. ``` # Select the cell, then press Shift + Enter 3**2 ``` Shift + Enter runs the cell then selects the next cell or creates a new one if necessary. You can run a cell without changing the selected cell by pressing **Control + Enter**. The output shows up below the cell. It's printing out the result just like in a normal Python shell. Only the very last result in a cell will be printed though. Otherwise, you'll need to use `print()` to print out any variables. > **Exercise:** Run the next two cells to test this out. Think about what you expect to happen, then try it. ``` 3**2 4**2 print(3**2) 4**2 ``` Now try assigning a value to a variable. ``` mindset = 'growth' ``` There is no output; `'growth'` has been assigned to the variable `mindset`. All variables, functions, and classes created in a cell are available in every other cell in the notebook. What do you think the output will be when you run the next cell? Feel free to play around with this a bit to get used to how it works. ``` mindset[:4] ``` ## Code completion When you're writing code, you'll often be using a variable or function repeatedly and can save time by using code completion. That is, you only need to type part of the name, then press **tab**. > **Exercise:** Place the cursor at the end of `mind` in the next cell and press **tab** ``` mind ``` Here, completing `mind` writes out the full variable name `mindset`. If there are multiple names that start the same, you'll get a menu; see below. 
``` # Run this cell mindful = True # Complete the name here again, choose one from the menu mind ``` Remember that variables assigned in one cell are available in all cells. This includes cells that you've previously run and cells that are above where the variable was assigned. Try doing the code completion on the cell third up from here. Code completion also comes in handy if you're using a module but don't quite remember which function you're looking for or what the available functions are. I'll show you how this works with the [random](https://docs.python.org/3/library/random.html) module. This module provides functions for generating random numbers, often useful for making fake data or picking random items from lists. ``` # Run this import random ``` > **Exercise:** In the cell below, place the cursor after `random.` then press **tab** to bring up the code completion menu for the module. Choose `random.randint` from the list; you can move through the menu with the up and down arrow keys. ``` random. ``` Above you should have seen all the functions available from the random module. Maybe you're looking to draw random numbers from a [Gaussian distribution](https://en.wikipedia.org/wiki/Normal_distribution), also known as the normal distribution or the "bell curve". ## Tooltips You see there is the function `random.gauss` but how do you use it? You could check out the [documentation](https://docs.python.org/3/library/random.html), or just look up the documentation in the notebook itself. > **Exercise:** In the cell below, place the cursor after `random.gauss` then press **shift + tab** to bring up the tooltip. ``` random.gauss ``` You should have seen some simple documentation like this: Signature: random.gauss(mu, sigma) Docstring: Gaussian distribution. The function takes two arguments, `mu` and `sigma`. These are the standard symbols for the mean and the standard deviation, respectively, of the Gaussian distribution. 
Maybe you're not familiar with this though, and you need to know what the parameters actually mean. This will happen often: you'll find some function, but you need more information. You can show more information by pressing **shift + tab** twice. > **Exercise:** In the cell below, show the full help documentation by pressing **shift + tab** twice. ``` random.gauss ``` You should see more help text like this: mu is the mean, and sigma is the standard deviation. This is slightly faster than the normalvariate() function.
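With the signature in hand, `random.gauss` can be used directly. A small sketch (the sample count and parameters here are chosen arbitrarily):

```python
import random

random.seed(42)  # fix the seed so reruns give the same draws

# Draw 10,000 samples from a Gaussian with mean 5 and standard deviation 2
samples = [random.gauss(5, 2) for _ in range(10_000)]

# The sample mean should land very close to mu
sample_mean = sum(samples) / len(samples)
print(round(sample_mean, 1))  # close to mu = 5
```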
# Utilizing existing FAQs for Question Answering EXECUTABLE VERSION: [colab](https://colab.research.google.com/github/deepset-ai/haystack/blob/master/tutorials/Tutorial4_Tutorial4_FAQ_style_QA.ipynb) While *extractive Question Answering* works on pure texts and is therefore more generalizable, there's also a common alternative that utilizes existing FAQ data. **Pros**: - Very fast at inference time - Utilize existing FAQ data - Quite good control over answers **Cons**: - Generalizability: We can only answer questions that are similar to existing ones in the FAQ In some use cases, a combination of extractive QA and FAQ-style can also be an interesting option. ``` # Install the latest release of Haystack in your own environment #! pip install farm-haystack # Install the latest master of Haystack !pip install git+https://github.com/deepset-ai/haystack.git from haystack import Finder from haystack.document_store.elasticsearch import ElasticsearchDocumentStore from haystack.retriever.dense import EmbeddingRetriever from haystack.utils import print_answers import pandas as pd import requests ``` ### Start an Elasticsearch server You can start Elasticsearch on your local machine using Docker. If Docker is not readily available in your environment (e.g., in Colab notebooks), then you can manually download and execute Elasticsearch from source. ``` # Recommended: Start Elasticsearch using Docker # ! docker run -d -p 9200:9200 -e "discovery.type=single-node" elasticsearch:7.6.2 # In Colab / No Docker environments: Start Elasticsearch from source ! wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.2-linux-x86_64.tar.gz -q ! tar -xzf elasticsearch-7.6.2-linux-x86_64.tar.gz ! chown -R daemon:daemon elasticsearch-7.6.2 import os from subprocess import Popen, PIPE, STDOUT es_server = Popen(['elasticsearch-7.6.2/bin/elasticsearch'], stdout=PIPE, stderr=STDOUT, preexec_fn=lambda: os.setuid(1) # as daemon ) # wait until ES has started ! 
sleep 30 ``` ### Init the DocumentStore In contrast to Tutorial 1 (extractive QA), we: * specify the name of our `text_field` in Elasticsearch that we want to return as an answer * specify the name of our `embedding_field` in Elasticsearch where we'll store the embedding of our question and that is used later for calculating our similarity to the incoming user question * set `excluded_meta_data=["question_emb"]` so that we don't return the huge embedding vectors in our search results ``` from haystack.document_store.elasticsearch import ElasticsearchDocumentStore document_store = ElasticsearchDocumentStore(host="localhost", username="", password="", index="document", embedding_field="question_emb", embedding_dim=768, excluded_meta_data=["question_emb"]) ``` ### Create a Retriever using embeddings Instead of retrieving via Elasticsearch's plain BM25, we want to use vector similarity of the questions (user question vs. FAQ ones). We can use the `EmbeddingRetriever` for this purpose and specify a model that we use for the embeddings. ``` retriever = EmbeddingRetriever(document_store=document_store, embedding_model="deepset/sentence_bert", use_gpu=True) ``` ### Prepare & Index FAQ data We create a pandas dataframe containing some FAQ data (i.e., curated pairs of question + answer) and index them in Elasticsearch. 
Here, we download some question-answer pairs related to COVID-19. ``` # Download temp = requests.get("https://raw.githubusercontent.com/deepset-ai/COVID-QA/master/data/faqs/faq_covidbert.csv") open('small_faq_covid.csv', 'wb').write(temp.content) # Get dataframe with columns "question", "answer" and some custom metadata df = pd.read_csv("small_faq_covid.csv") # Minimal cleaning df.fillna(value="", inplace=True) df["question"] = df["question"].apply(lambda x: x.strip()) print(df.head()) # Get embeddings for our questions from the FAQs questions = list(df["question"].values) df["question_emb"] = retriever.embed_queries(texts=questions) df = df.rename(columns={"answer": "text"}) # Convert Dataframe to list of dicts and index them in our DocumentStore docs_to_index = df.to_dict(orient="records") document_store.write_documents(docs_to_index) ``` ### Ask questions Initialize a Finder (this time without a reader) and ask questions: ``` finder = Finder(reader=None, retriever=retriever) prediction = finder.get_answers_via_similar_questions(question="How is the virus spreading?", top_k_retriever=10) print_answers(prediction, details="all") ```
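Conceptually, `get_answers_via_similar_questions` ranks the stored FAQ questions by embedding similarity to the incoming query. A minimal sketch of that ranking step with toy 3-dimensional vectors (real Sentence-BERT embeddings are 768-dimensional model outputs, not hand-written like these):

```python
import numpy as np

# Toy question "embeddings" (illustrative values only)
faq_embeddings = np.array([
    [1.0, 0.0, 0.0],   # "How is the virus spreading?"
    [0.0, 1.0, 0.0],   # "What are the symptoms?"
    [0.7, 0.7, 0.0],   # "How does transmission occur?"
])
query = np.array([0.9, 0.1, 0.0])

# Cosine similarity: dot product of L2-normalised vectors
def normalise(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

scores = normalise(faq_embeddings) @ normalise(query)
ranked = np.argsort(scores)[::-1]
print(ranked[0])  # → 0, the closest stored question
```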
Notebook to create a NEMO bathymetry file for the ERDDAP server Based on: Nancy/NEMO depths vs bathymetry file.ipynb ``` from matplotlib.colors import BoundaryNorm import matplotlib.pyplot as plt from matplotlib.ticker import MaxNLocator import netCDF4 as nc import numpy as np from salishsea_tools import ( bathy_tools, nc_tools, ) %matplotlib inline mesh = nc.Dataset('../../NEMO-forcing/grid/mesh_mask_downbyone2.nc') mbathy = mesh.variables['mbathy'][0,:,:] # used to calculate number of vertical ocean grid cells at each (i,j) (1=land point) gdepw = mesh.variables['gdepw_0'][0,:,:,:] surface_tmask = mesh.variables['tmask'][0,0,:,:] surface_tmask = np.abs(surface_tmask-1) NEMO_bathy = np.zeros(mbathy.shape) for i in range(NEMO_bathy.shape[1]): for j in range(NEMO_bathy.shape[0]): level = mbathy[j,i] NEMO_bathy[j,i] = gdepw[level,j,i] NEMO_bathy = np.ma.masked_array(NEMO_bathy, mask = surface_tmask) lats = mesh.variables['nav_lat'][:] lons = mesh.variables['nav_lon'][:] # build nc file new_bathy = nc.Dataset('../../NEMO-forcing/grid/downbyone2_NEMO_bathy.nc', 'w') nc_tools.init_dataset_attrs( new_bathy, title='Bathymetry after NEMO Processes, SalishSea downbyonegrid2', notebook_name='NEMOBathymetryfromMeshMask', nc_filepath='NEMO-forcing/grid/downbyone2_NEMO_bathy.nc', comment='Bathymetry, Latitudes and Longitudes') new_bathy.createDimension('y', 898) new_bathy.createDimension('x', 398) nc_tools.show_dimensions(new_bathy) # variables latitude = new_bathy.createVariable('latitude', 'float32', ('y','x'), zlib=True) latitude.long_name = 'Latitude' latitude.units = 'degrees_north' latitude[:] = lats latitude.valid_range = np.array((np.min(latitude), np.max(latitude))) longitude = new_bathy.createVariable('longitude', 'float32', ('y','x'), zlib=True) longitude.long_name = 'Longitude' longitude.units = 'degrees_east' longitude[:] = lons longitude.valid_range = np.array((np.min(longitude), np.max(longitude))) bathymetry = new_bathy.createVariable( 'bathymetry', 'float32', 
('y','x'), zlib=True, least_significant_digit=1, fill_value=0) bathymetry.units = 'm' bathymetry.long_name = 'Depth of Bottom' bathymetry.coordinates = 'longitude latitude' bathymetry.grid = 'Salish Sea downbyonegrid2' bathymetry[:] = NEMO_bathy bathymetry.valid_range = np.array((np.min(bathymetry), np.max(bathymetry))) new_bathy.history = """[2016-11-15 13:50:42] Created dataset. [2016-11-15 13:50:42] Changed all variables to zlib=True. [2016-11-15 13:50:42] Added least_significant_digit=1 and fill_value=0 to bathymetry variable. [2016-11-15 13:50:42] Added valid_range attribute to all variables.""" new_bathy.references = 'https://bitbucket.org/salishsea/nemo-forcing/src/tip/grid/mesh_mask_downbyone2.nc' nc_tools.show_dataset_attrs(new_bathy) print(bathy_tools.min_mid_max(latitude)) print(bathy_tools.min_mid_max(longitude)) print(np.min(bathymetry), np.max(bathymetry)) print(latitude.valid_range) print(longitude.valid_range) print(bathymetry.valid_range) fig = plt.figure(figsize=(9, 9)) ax = plt.axes() ax.set_aspect(1 / np.cos(np.median(latitude) * np.pi / 180)) plt.title(new_bathy.title) levels = MaxNLocator(nbins=15).tick_values(0, np.max(bathymetry)) cmap = plt.get_cmap('winter_r') norm = BoundaryNorm(levels, ncolors=cmap.N) cmap.set_bad('burlywood') plt.pcolormesh(longitude[:], latitude[:], bathymetry[:], cmap=cmap, norm=norm) cbar = plt.colorbar() cbar.set_label('Depth [m]') new_bathy.close() ```
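The per-cell loop used above to pull the bottom depth out of `gdepw` is correct but slow for the full 898×398 grid; the same lookup can be vectorized with `np.take_along_axis`. A sketch with small synthetic arrays standing in for `gdepw` and `mbathy` (shapes and values here are illustrative assumptions, not the real mesh mask):

```python
import numpy as np

# Synthetic stand-ins for the mesh-mask variables (illustrative shapes)
nz, ny, nx = 4, 3, 5
rng = np.random.default_rng(0)
gdepw = np.cumsum(rng.uniform(1.0, 10.0, size=(nz, ny, nx)), axis=0)  # depths increase with level
mbathy = rng.integers(0, nz, size=(ny, nx))  # number of ocean levels at each (j, i)

# Loop version, as in the notebook
loop_bathy = np.zeros((ny, nx))
for i in range(nx):
    for j in range(ny):
        loop_bathy[j, i] = gdepw[mbathy[j, i], j, i]

# Vectorized version: index the level axis with mbathy directly
vec_bathy = np.take_along_axis(gdepw, mbathy[np.newaxis, :, :], axis=0)[0]

assert np.allclose(loop_bathy, vec_bathy)
```

The masking with `surface_tmask` would then be applied to `vec_bathy` exactly as in the notebook.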
github_jupyter
# Hardware simulators - gem5 target support The gem5 simulator is a modular platform for computer-system architecture research, encompassing system-level architecture as well as processor microarchitecture. Before creating the gem5 target, the inputs needed by gem5 should have been created (e.g. gem5 binary, kernel suitable for gem5, disk image, device tree blob, etc.). For more information, see [GEM5 - Main Page](http://gem5.org/Main_Page). # Environment setup ``` from conf import LisaLogging LisaLogging.setup() # One initial cell for imports import json import logging import os from env import TestEnv # Support for FTrace events parsing and visualization import trappy from trappy.ftrace import FTrace from trace import Trace # Support for plotting # Generate plots inline %matplotlib inline import numpy import pandas as pd import matplotlib.pyplot as plt ``` # Target configuration The definitions below need to be changed to the paths pointing to the gem5 binaries on your development machine. 
M5_PATH needs to be set in your environment - **platform** - the currently supported platforms are: - linux - accessed via SSH connection - **board** - the currently supported boards are: - gem5 - target is a gem5 simulator - **host** - target IP or MAC address of the platform hosting the simulator - **gem5** - the settings for the simulation are: - **system** - **platform** - description - python description of the platform to simulate - args - arguments to be given to the python script (./gem5.fast model.py --help) - kernel - kernel image to run on the simulated platform - dtb - dtb of the platform to simulate - disk - disk image to run on the platform - **simulator** - bin - path to the gem5 simulator binary - args - arguments to be given to the gem5 binary (./gem5.fast --help) - **modules** - devlib modules to be enabled - **exclude_modules** - devlib modules to be disabled - **tools** - binary tools (available under ./tools/$ARCH/) to install by default - **ping_time** - wait time before trying to access the target after reboot - **reboot_time** - maximum time to wait after rebooting the target - **__features__** - list of test environment features to enable - no-kernel - do not deploy kernel/dtb images - no-reboot - do not force reboot the target at each configuration change - debug - enable debugging messages - **ftrace** - ftrace configuration - events - functions - buffsize - **results_dir** - location of results of the experiments ``` # Root path of the gem5 workspace base = "/home/vagrant/gem5/" conf = { # Only 'linux' is supported by gem5 for now # 'android' is a WIP "platform" : 'linux', # Preload settings for a specific target "board" : 'gem5', # Host that will run the gem5 instance "host" : "workstation-lin", "gem5" : { # System to simulate "system" : { # Platform description "platform" : { # Gem5 platform description # LISA will also look for an optional gem5<platform> board file # located in the same directory as the description file. 
"description" : os.path.join(base, "juno.py"), "args" : [ "--atomic", # Resume simulation from a previous checkpoint # Checkpoint must be taken before Virtio folders are mounted # "--checkpoint-indir " + os.path.join(base, "Juno/atomic/", # "checkpoints"), # "--checkpoint-resume 1", ] }, # Kernel compiled for gem5 with Virtio flags "kernel" : os.path.join(base, "platform_juno/", "vmlinux"), # DTB of the system to simulate "dtb" : os.path.join(base, "platform_juno/", "armv8_juno_r2.dtb"), # Disk of the distrib to run "disk" : os.path.join(base, "binaries/", "aarch64-ubuntu-trusty-headless.img") }, # gem5 settings "simulator" : { # Path to gem5 binary "bin" : os.path.join(base, "gem5/build/ARM/gem5.fast"), # Args to be given to the binary "args" : [ # Zilch ], } }, # FTrace events to collect for all the tests configuration which have # the "ftrace" flag enabled "ftrace" : { "events" : [ "sched_switch", "sched_wakeup", "sched_overutilized", "sched_load_avg_cpu", "sched_load_avg_task", "sched_load_waking_task", "cpu_capacity", "cpu_frequency", "cpu_idle", "sched_energy_diff" ], "buffsize" : 100 * 1024, }, "modules" : ["cpufreq", "bl", "gem5stats"], # Tools required by the experiments "tools" : ['trace-cmd', 'sysbench'], # Output directory on host "results_dir" : "gem5_res" } # Create the hardware target. Patience is required : # ~40 minutes to resume from a checkpoint (detailed) # ~5 minutes to resume from a checkpoint (atomic) # ~3 hours to start from scratch (detailed) # ~15 minutes to start from scratch (atomic) te = TestEnv(conf) target = te.target ``` # Run workloads on gem5 This is an example of running a workload and extracting stats from the simulation using m5 commands. 
For more information about m5 commands, see http://gem5.org/M5ops ``` # This function is an example use of gem5's ROI functionality def record_time(command): roi = 'time' target.gem5stats.book_roi(roi) target.gem5stats.roi_start(roi) target.execute(command) target.gem5stats.roi_end(roi) res = target.gem5stats.match(['host_seconds', 'sim_seconds'], [roi]) target.gem5stats.free_roi(roi) return res # Initialise command: [binary/script, arguments] workload = 'sysbench' args = '--test=cpu --max-time=1 run' # Install binary if needed path = target.install_if_needed("/home/vagrant/lisa/tools/arm64/" + workload) command = path + " " + args # FTrace the execution of this workload te.ftrace.start() res = record_time(command) te.ftrace.stop() print("{} -> {}s wall-clock execution time, {}s simulation-clock execution time".format(command, sum(map(float, res['host_seconds']['time'])), sum(map(float, res['sim_seconds']['time'])))) ``` # Trace analysis For more information on this please check **examples/trace_analysis/TraceAnalysis_TasksLatencies.ipynb.** ``` # Load traces in memory (can take several minutes) platform_file = os.path.join(te.res_dir, 'platform.json') te.platform_dump(te.res_dir, platform_file) with open(platform_file, 'r') as fh: platform = json.load(fh) trace_file = os.path.join(te.res_dir, 'trace.dat') te.ftrace.get_trace(trace_file) trace = Trace(trace_file, conf['ftrace']['events'], platform, normalize_time=False) # Plot some stuff trace.analysis.cpus.plotCPU() # Simulations done target.disconnect() ```
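The book/start/end/free ROI sequence in `record_time` is a natural fit for a Python context manager, which guarantees the ROI is closed and released even if the workload command fails. A sketch (the method names follow the `gem5stats` devlib module used above, but the wrapper itself is an illustration, exercised here only against a stand-in object rather than a live target):

```python
from contextlib import contextmanager

@contextmanager
def gem5_roi(stats, name):
    """Bracket a block of work with a gem5 region of interest."""
    stats.book_roi(name)
    stats.roi_start(name)
    try:
        yield
    finally:
        # Always close and release the ROI, even if the workload raises
        stats.roi_end(name)
        stats.free_roi(name)

# Stand-in for target.gem5stats, recording the order of calls
class FakeStats:
    def __init__(self):
        self.calls = []
    def __getattr__(self, method):
        return lambda name: self.calls.append((method, name))

stats = FakeStats()
with gem5_roi(stats, 'time'):
    pass  # run the workload here, e.g. target.execute(command)

print(stats.calls)
# → [('book_roi', 'time'), ('roi_start', 'time'), ('roi_end', 'time'), ('free_roi', 'time')]
```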
github_jupyter
# Autoregressive Moving Average (ARMA): Sunspots data ``` %matplotlib inline import numpy as np from scipy import stats import pandas as pd import matplotlib.pyplot as plt import statsmodels.api as sm from statsmodels.tsa.arima.model import ARIMA from statsmodels.graphics.api import qqplot ``` ## Sunspots Data ``` print(sm.datasets.sunspots.NOTE) dta = sm.datasets.sunspots.load_pandas().data dta.index = pd.Index(sm.tsa.datetools.dates_from_range('1700', '2008')) del dta["YEAR"] dta.plot(figsize=(12,8)); fig = plt.figure(figsize=(12,8)) ax1 = fig.add_subplot(211) fig = sm.graphics.tsa.plot_acf(dta.values.squeeze(), lags=40, ax=ax1) ax2 = fig.add_subplot(212) fig = sm.graphics.tsa.plot_pacf(dta, lags=40, ax=ax2) arma_mod20 = ARIMA(dta, order=(2, 0, 0)).fit() print(arma_mod20.params) arma_mod30 = ARIMA(dta, order=(3, 0, 0)).fit() print(arma_mod20.aic, arma_mod20.bic, arma_mod20.hqic) print(arma_mod30.params) print(arma_mod30.aic, arma_mod30.bic, arma_mod30.hqic) ``` * Does our model obey the theory? ``` sm.stats.durbin_watson(arma_mod30.resid.values) fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) ax = arma_mod30.resid.plot(ax=ax); resid = arma_mod30.resid stats.normaltest(resid) fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) fig = qqplot(resid, line='q', ax=ax, fit=True) fig = plt.figure(figsize=(12,8)) ax1 = fig.add_subplot(211) fig = sm.graphics.tsa.plot_acf(resid.values.squeeze(), lags=40, ax=ax1) ax2 = fig.add_subplot(212) fig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2) r,q,p = sm.tsa.acf(resid.values.squeeze(), fft=True, qstat=True) data = np.c_[range(1,41), r[1:], q, p] table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"]) print(table.set_index('lag')) ``` * This indicates a lack of fit. * In-sample dynamic prediction. How good does our model do? 
``` predict_sunspots = arma_mod30.predict('1990', '2012', dynamic=True) print(predict_sunspots) def mean_forecast_err(y, yhat): return y.sub(yhat).mean() mean_forecast_err(dta.SUNACTIVITY, predict_sunspots) ``` ### Exercise: Can you obtain a better fit for the Sunspots model? (Hint: sm.tsa.AR has a method select_order) ### Simulated ARMA(4,1): Model Identification is Difficult ``` from statsmodels.tsa.arima_process import ArmaProcess np.random.seed(1234) # include zero-th lag arparams = np.array([1, .75, -.65, -.55, .9]) maparams = np.array([1, .65]) ``` Let's make sure this model is estimable. ``` arma_t = ArmaProcess(arparams, maparams) arma_t.isinvertible arma_t.isstationary ``` * What does this mean? ``` fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) ax.plot(arma_t.generate_sample(nsample=50)); arparams = np.array([1, .35, -.15, .55, .1]) maparams = np.array([1, .65]) arma_t = ArmaProcess(arparams, maparams) arma_t.isstationary arma_rvs = arma_t.generate_sample(nsample=500, burnin=250, scale=2.5) fig = plt.figure(figsize=(12,8)) ax1 = fig.add_subplot(211) fig = sm.graphics.tsa.plot_acf(arma_rvs, lags=40, ax=ax1) ax2 = fig.add_subplot(212) fig = sm.graphics.tsa.plot_pacf(arma_rvs, lags=40, ax=ax2) ``` * For mixed ARMA processes the Autocorrelation function is a mixture of exponentials and damped sine waves after (q-p) lags. * The partial autocorrelation function is a mixture of exponentials and dampened sine waves after (p-q) lags. 
``` arma11 = ARIMA(arma_rvs, order=(1, 0, 1)).fit() resid = arma11.resid r,q,p = sm.tsa.acf(resid, fft=True, qstat=True) data = np.c_[range(1,41), r[1:], q, p] table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"]) print(table.set_index('lag')) arma41 = ARIMA(arma_rvs, order=(4, 0, 1)).fit() resid = arma41.resid r,q,p = sm.tsa.acf(resid, fft=True, qstat=True) data = np.c_[range(1,41), r[1:], q, p] table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"]) print(table.set_index('lag')) ``` ### Exercise: How good an in-sample prediction can you obtain for another series, say, CPI? ``` macrodta = sm.datasets.macrodata.load_pandas().data macrodta.index = pd.Index(sm.tsa.datetools.dates_from_range('1959Q1', '2009Q3')) cpi = macrodta["cpi"] ``` #### Hint: ``` fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) ax = cpi.plot(ax=ax); ax.legend(); ``` The p-value of the unit-root test resoundingly rejects the null of a unit root. ``` print(sm.tsa.adfuller(cpi)[1]) ```
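Several of the residual diagnostics used in this notebook come from statsmodels helpers; the Durbin–Watson statistic applied to `arma_mod30.resid` earlier is simple enough to compute directly. A sketch (not the statsmodels implementation):

```python
import numpy as np

def durbin_watson(resid):
    """DW = sum of squared successive differences over the residual sum of squares.

    Values near 2 indicate no first-order autocorrelation; values near 0
    indicate positive autocorrelation, and values near 4 negative.
    """
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

# A perfectly alternating residual series is strongly negatively autocorrelated
print(durbin_watson([1, -1, 1, -1]))  # → 3.0
```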
github_jupyter
**This notebook is an exercise in the [Data Visualization](https://www.kaggle.com/learn/data-visualization) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/scatter-plots).** --- In this exercise, you will use your new knowledge to propose a solution to a real-world scenario. To succeed, you will need to import data into Python, answer questions using the data, and generate **scatter plots** to understand patterns in the data. ## Scenario You work for a major candy producer, and your goal is to write a report that your company can use to guide the design of its next product. Soon after starting your research, you stumble across this [very interesting dataset](https://fivethirtyeight.com/features/the-ultimate-halloween-candy-power-ranking/) containing results from a fun survey to crowdsource favorite candies. ## Setup Run the next cell to import and configure the Python libraries that you need to complete the exercise. ``` import pandas as pd pd.plotting.register_matplotlib_converters() import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns print("Setup Complete") ``` The questions below will give you feedback on your work. Run the following cell to set up our feedback system. ``` # Set up code checking import os if not os.path.exists("../input/candy.csv"): os.symlink("../input/data-for-datavis/candy.csv", "../input/candy.csv") from learntools.core import binder binder.bind(globals()) from learntools.data_viz_to_coder.ex4 import * print("Setup Complete") ``` ## Step 1: Load the Data Read the candy data file into `candy_data`. Use the `"id"` column to label the rows. 
``` # Path of the file to read candy_filepath = "../input/candy.csv" # Fill in the line below to read the file into a variable candy_data candy_data = pd.read_csv(candy_filepath,index_col="id") # Run the line below with no changes to check that you've loaded the data correctly step_1.check() # Lines below will give you a hint or solution code #step_1.hint() #step_1.solution() ``` ## Step 2: Review the data Use a Python command to print the first five rows of the data. ``` # Print the first five rows of the data candy_data.head() # Your code here ``` The dataset contains 83 rows, where each corresponds to a different candy bar. There are 13 columns: - `'competitorname'` contains the name of the candy bar. - the next **9** columns (from `'chocolate'` to `'pluribus'`) describe the candy. For instance, rows with chocolate candies have `"Yes"` in the `'chocolate'` column (and candies without chocolate have `"No"` in the same column). - `'sugarpercent'` provides some indication of the amount of sugar, where higher values signify higher sugar content. - `'pricepercent'` shows the price per unit, relative to the other candies in the dataset. - `'winpercent'` is calculated from the survey results; higher values indicate that the candy was more popular with survey respondents. Use the first five rows of the data to answer the questions below. ``` # Fill in the line below: Which candy was more popular with survey respondents: # '3 Musketeers' or 'Almond Joy'? (Please enclose your answer in single quotes.) more_popular = '3 Musketeers' # Fill in the line below: Which candy has higher sugar content: 'Air Heads' # or 'Baby Ruth'? (Please enclose your answer in single quotes.) more_sugar = 'Air Heads' # Check your answers step_2.check() # Lines below will give you a hint or solution code #step_2.hint() #step_2.solution() ``` ## Step 3: The role of sugar Do people tend to prefer candies with higher sugar content? 
#### Part A Create a scatter plot that shows the relationship between `'sugarpercent'` (on the horizontal x-axis) and `'winpercent'` (on the vertical y-axis). _Don't add a regression line just yet -- you'll do that in the next step!_ ``` # Scatter plot showing the relationship between 'sugarpercent' and 'winpercent' sns.scatterplot(x=candy_data['sugarpercent'], y=candy_data['winpercent']) # Check your answer step_3.a.check() # Lines below will give you a hint or solution code #step_3.a.hint() #step_3.a.solution_plot() ``` #### Part B Does the scatter plot show a **strong** correlation between the two variables? If so, are candies with more sugar relatively more or less popular with the survey respondents? ``` #step_3.b.hint() # Check your answer (Run this code cell to receive credit!) step_3.b.solution() ``` ## Step 4: Take a closer look #### Part A Create the same scatter plot you created in **Step 3**, but now with a regression line! ``` # Scatter plot w/ regression line showing the relationship between 'sugarpercent' and 'winpercent' sns.regplot(x=candy_data['sugarpercent'], y=candy_data['winpercent']) # Check your answer step_4.a.check() # Lines below will give you a hint or solution code #step_4.a.hint() #step_4.a.solution_plot() ``` #### Part B According to the plot above, is there a **slight** correlation between `'winpercent'` and `'sugarpercent'`? What does this tell you about the candy that people tend to prefer? ``` #step_4.b.hint() # Check your answer (Run this code cell to receive credit!) step_4.b.solution() ``` ## Step 5: Chocolate! In the code cell below, create a scatter plot to show the relationship between `'pricepercent'` (on the horizontal x-axis) and `'winpercent'` (on the vertical y-axis). Use the `'chocolate'` column to color-code the points. 
_Don't add any regression lines just yet -- you'll do that in the next step!_ ``` # Scatter plot showing the relationship between 'pricepercent', 'winpercent', and 'chocolate' sns.scatterplot(x=candy_data['pricepercent'], y=candy_data['winpercent'], hue=candy_data['chocolate']) # Check your answer step_5.check() # Lines below will give you a hint or solution code #step_5.hint() #step_5.solution_plot() ``` Can you see any interesting patterns in the scatter plot? We'll investigate this plot further by adding regression lines in the next step! ## Step 6: Investigate chocolate #### Part A Create the same scatter plot you created in **Step 5**, but now with two regression lines, corresponding to (1) chocolate candies and (2) candies without chocolate. ``` # Color-coded scatter plot w/ regression lines sns.lmplot(x='pricepercent', y='winpercent', hue='chocolate', data=candy_data) # Check your answer step_6.a.check() # Lines below will give you a hint or solution code #step_6.a.hint() #step_6.a.solution_plot() ``` #### Part B Using the regression lines, what conclusions can you draw about the effects of chocolate and price on candy popularity? ``` #step_6.b.hint() # Check your answer (Run this code cell to receive credit!) step_6.b.solution() ``` ## Step 7: Everybody loves chocolate. #### Part A Create a categorical scatter plot to highlight the relationship between `'chocolate'` and `'winpercent'`. Put `'chocolate'` on the (horizontal) x-axis, and `'winpercent'` on the (vertical) y-axis. ``` # Scatter plot showing the relationship between 'chocolate' and 'winpercent' sns.swarmplot(x=candy_data['chocolate'], y=candy_data['winpercent']) # Check your answer step_7.a.check() # Lines below will give you a hint or solution code #step_7.a.hint() #step_7.a.solution_plot() ``` #### Part B You decide to dedicate a section of your report to the fact that chocolate candies tend to be more popular than candies without chocolate. 
Which plot is more appropriate to tell this story: the plot from **Step 6**, or the plot from **Step 7**? ``` #step_7.b.hint() # Check your answer (Run this code cell to receive credit!) step_7.b.solution() ``` ## Keep going Explore **[histograms and density plots](https://www.kaggle.com/alexisbcook/distributions)**. --- *Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161291) to chat with other Learners.*
github_jupyter
# Luminosity Calculator Interactive This interactive figure lets you investigate how the temperature and radius of a star affect the amount of energy it puts out every second, which is the star's **luminosity**. Luminosity has metric units of Watts (e.g. a 100 Watt light bulb is converting electrical energy into light and heat at a rate of 100 Watts). However, since stars are very luminous (at least compared to most things on a "human" scale), in astronomy we typically use units of solar luminosity, $L_\odot$, where the Sun's luminosity is $1 L_\odot = 3.83\times10^{26}$ Watts! The figure below shows a model star. Use the sliders to change the radius and the temperature of the star. Using our Sun as a baseline, the sliders take the radius in Solar Radii and the temperature in Solar Temperatures. The luminosity is reported in the lower right in both Watts and Solar Luminosities and the temperature is translated to units of Kelvin (for a baseline, room temperature is approximately 295 K). Use this interactive to explore the following questions: 1. What changes the luminosity more, doubling the radius or doubling the temperature? 2. What color are the hottest stars? The coolest stars? 3. If a star is blue in color and large in radius, what can you say about its luminosity compared to a smaller star of the same color? 4. If a star is blue in color and large in radius, what can you say about its luminosity compared to a red star of the same size? 5. If two stars of the same size but different colors orbit each other, which star will be more luminous? 
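The relation behind the interactive is the Stefan–Boltzmann law; dividing through by the solar values gives the ratio form that the `L_Ratio` function in the code evaluates:

```latex
L = 4\pi R^{2}\,\sigma T^{4}
\qquad\Longrightarrow\qquad
\frac{L}{L_\odot} = \left(\frac{R}{R_\odot}\right)^{2}\left(\frac{T}{T_\odot}\right)^{4}
```

Because luminosity scales with the fourth power of temperature but only the square of the radius, doubling the temperature multiplies the output by 16, while doubling the radius multiplies it by only 4.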
``` # Author: Andrew Louwagie Gordon # Date Created: 22May2018 # Last Modified: 22Jun2018 (tweaked by Juan Cabanela) # Import Block # Import the necessary packages from IPython.display import display import numpy as np import ipywidgets as widgets import bqplot as bq import pythreejs as p3j import tempNcolor as tc import number_formatting as nf import starlib as star # Function Definitions Block def Star_Temp(T): ''' This function calculates the temperature of the star in Kelvin. ''' global T_Sun temp = T * T_Sun temp = round(temp, -2) # Round the temperature to the nearest 100 K return int(temp) def L_Ratio(t, r): ''' This function calculates the ratio of luminosities for the star based on temperature and radius. ''' lum = (r ** 2.0) * (t ** 4.0) # Luminosity calculation in L/L_sun return nf.SigFig(lum, 2) def UpdateWidgets(change=None): ''' This function continuously updates the widgets that display information. ''' # Get the luminosity ratio for this star and display it get_l_ratio = L_Ratio(Temp.value, Rad.value) L_Ratio_report.value = str(get_l_ratio) # Compute the luminosity in Watts and display it. Luminosity = float(get_l_ratio) * L_Sun latex = nf.exp2LaTeX(Luminosity,3) Luminosity_report.value = '{}'.format(latex[2]) # Set the temperature of this star and display it t_star = Star_Temp(Temp.value) t_star_report.value = str(t_star) def UpdateStar(change=None): ''' This function continuously updates the color and radius (really scale) of the star. 
''' global init_r, star_sphere # Get temperature in K and assign associated hexcolor t_star = Star_Temp(Temp.value) hex_color = tc.rgb2hex(tc.temp2rgb(t_star)) # Set the color of the star image star.StarMeshColor(star_sphere, hex_color[0]) # Set the scale of the star image scale_dim = Rad.value/init_r star_sphere.scale = (scale_dim, scale_dim, scale_dim) # Define constants L_Sun = star.L_Sun # Solar luminosity in Watts T_Sun = star.Te_Sun # Solar temperature in Kelvin t_star = T_Sun # Define variable to be updated later Luminosity = 1 # Define variable to be updated later get_l_ratio = 1 # Define variable to be updated later # Make a list from the number2LaTeX converter being used latex = nf.exp2LaTeX(Luminosity) # Define initial conditions to be Sun-like init_temp = 1 init_rad = 1 # Widgets Definitions Block # Radius slider in units of R/R_Sun Rad = widgets.FloatSlider( min=0.2, value=1.0, max=15, step=0.1, disabled=False, continuous_update=True, orientation='horizontal', readout=True, readout_format='.1f', layout=widgets.Layout(border='none', width='200px') ) # Temperature slider in units of T/T_Sun Temp = widgets.FloatSlider( min=0.5, value=1.0, max=7.0, step=0.1, disabled=False, continuous_update=True, orientation='horizontal', readout=True, readout_format='.1f', layout=widgets.Layout(border='none', width='200px') ) # Widget to report updated temperature in Kelvin t_star_report = widgets.Text( value = str(int(t_star)), readout_format='.0f', placeholder = 'Type something', disabled = True ) # Widget to report updated luminosity in L/L_sun L_Ratio_report = widgets.Text( value = str(get_l_ratio), placeholder = 'Type something', disabled = True ) # Widget to report updated luminosity in Watts Luminosity_report = widgets.HTML( value = '{}'.format(latex[2]), placeholder = 'Type something', disabled = True ) # Reset to initial values Temp.value = init_temp Rad.value = init_rad # Set viewer size view_width = 300 view_height = 300 # Get the initial temperature for the 
star t_star = Star_Temp(Temp.value) t_star_report.value = str(t_star) # Compute the luminosity in Watts and display it. latex = nf.exp2LaTeX(float(get_l_ratio) * L_Sun,3) Luminosity_report.value = '{}'.format(latex[2]) # Set scale factor for radius (approximately 10 pixels per solar radius) scale_factor = 1 # Set initial parameters based on stellar parameters r1 = scale_factor*Rad.value # Save initial radius to scale all other radii to this init_r = r1 # set the scale scale1 = (r1/init_r, r1/init_r, r1/init_r) # Create a stellar image sphere (including a copy that represents the Sun, since it will use initial values) star_sphere = star.StarMesh(t_star, r1, scale1, [0, 0, 0]) sun_sphere = star.StarMesh(t_star, r1, scale1, [0, 18, 0]) # Makes the scene environment, not sure how the background works yet scene2 = p3j.Scene(children=[star_sphere, sun_sphere], background='black') # Creates the camera so you can see stuff. Place the cemera just above the x-axis and orient camera so up # is along y-axis. 
starcam = p3j.PerspectiveCamera(position=[45, 0, 0], up=[0, 0, 1]) # Makes a controller to use for the controller = p3j.OrbitControls(controlling=starcam, enableRotate=False, enableZoom=False) # creates the object that gets displayed to the screen renderer2 = p3j.Renderer(camera=starcam, scene=scene2, controls=[controller], width=view_width, height=view_height) # Use the UpdateStar function to continuously update the star in the plot Temp.observe(UpdateStar, names=['value']) Rad.observe(UpdateStar, names=['value']) # Use the UpdateWidgets function to continuously update the calculated values in the display widgets on the bottom Temp.observe(UpdateWidgets, names=['value']) Rad.observe(UpdateWidgets, names=['value']) # Define the layout for the final widget to make it presentable box_layout = widgets.Layout(align_items='center', justify_content = 'flex-end', border='none', width='800px') # Arrange and display all the widgets in a presentable manner top_box = widgets.VBox([widgets.HTML ("<h2>Model Star</h2>"), renderer2, widgets.HTML ("<p>Model Star in center, Sun shown to right for comparison.</p>")], layout = box_layout) Rad_label = widgets.HTML('Radius (R<sub>&#x2609;</sub>):', layout=widgets.Layout(align = 'right', width='150px')) rad_slide = widgets.HBox([Rad_label, Rad], layout=widgets.Layout(border='none', width='350px')) Temp_Label = widgets.HTML('Temperature (T<sub>&#x2609;</sub>):', layout=widgets.Layout(align = 'right', width='150px')) temp_slide = widgets.HBox([Temp_Label, Temp], layout=widgets.Layout(border='none', width='350px')) temp_disp = widgets.HBox([widgets.Label('Temperature (K):'), t_star_report], layout=widgets.Layout(border='none')) temp_disp.children[0].layout.width = '150px' temp_disp.children[1].layout.width = '100px' lratio_disp = widgets.HBox([widgets.HTML('Luminosity (L<sub>&#x2609;</sub>):'), L_Ratio_report], layout=widgets.Layout(border='none')) lratio_disp.children[0].layout.width = '150px' lratio_disp.children[1].layout.width = 
'100px' lum_disp = widgets.HBox([widgets.Label('Luminosity (W):'), Luminosity_report], layout=widgets.Layout(border='none')) lum_disp.children[0].layout.width = '150px' lum_disp.children[1].layout.width = '100px' bottom_left = widgets.VBox([temp_slide, rad_slide]) bottom_right = widgets.VBox([temp_disp, lratio_disp, lum_disp]) bottom = widgets.HBox([bottom_left, bottom_right]) the_box = widgets.VBox([top_box, bottom], layout = box_layout) display(the_box) ```
github_jupyter
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/mrk-W2D1/tutorials/W2D1_BayesianStatistics/W2D1_Tutorial4.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # NMA 2020 W2D1 -- (Bonus) Tutorial 4: Bayesian Decision Theory & Cost functions __Content creators:__ Vincent Valton, Konrad Kording, with help from Matthew Krause __Content reviewers:__ Matthew Krause, Jesse Livezey, Karolina Stosio, Saeed Salehi # Tutorial Objectives *This tutorial is optional! Please do not feel pressured to finish it!* In the previous tutorials, we investigated the posterior, which describes beliefs based on a combination of current evidence and prior experience. This tutorial focuses on Bayesian Decision Theory, which combines the posterior with **cost functions** that allow us to quantify the potential impact of making a decision or choosing an action based on that posterior. Cost functions are therefore critical for turning probabilities into actions! In Tutorial 3, we used the mean of the posterior $p(x | \tilde x)$ as a proxy for the response $\hat x$ for the participants. What prompted us to use the mean of the posterior as a **decision rule**? In this tutorial we will see how different common decision rules, such as choosing the mean, median, or mode of the posterior distribution, correspond to minimizing different cost functions. In this tutorial, you will 1. Implement three commonly-used cost functions: mean-squared error, absolute error, and zero-one loss 2. Discover the concept of expected loss, and 3. Choose optimal locations on the posterior that minimize these cost functions. You will verify that these locations can be found analytically as well as empirically. 
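The three cost functions in objective 1 can each be written in a line or two. A sketch (the function names and the `f(x, x_hat)` signature are choices made here for illustration, not the tutorial's reference solution):

```python
import numpy as np

def mse(x, x_hat):
    """Mean-squared error: penalizes large misses quadratically."""
    return (x - np.asarray(x_hat)) ** 2

def abs_error(x, x_hat):
    """Absolute error: penalizes misses linearly."""
    return np.abs(x - np.asarray(x_hat))

def zero_one_loss(x, x_hat):
    """Zero-one loss: any miss costs 1, an exact hit costs 0."""
    return (np.asarray(x_hat) != x).astype(float)

# Cost of three candidate estimates when the true value is 0
x_hats = np.array([-2.0, 0.0, 1.0])
print(mse(0.0, x_hats))            # → [4. 0. 1.]
print(abs_error(0.0, x_hats))      # → [2. 0. 1.]
print(zero_one_loss(0.0, x_hats))  # → [1. 0. 1.]
```

Minimizing the expected value of these losses under the posterior picks out the posterior mean, median, and mode, respectively, which is the connection the rest of the tutorial develops.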
``` #@title Video 1: Introduction from IPython.display import YouTubeVideo video = YouTubeVideo(id='z2DF4H_sa-k', width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ``` --- Please execute the cell below to initialize the notebook environment --- ### Setup ``` # Imports import numpy as np import matplotlib.pyplot as plt #@title Figure Settings import ipywidgets as widgets plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") %matplotlib inline %config InlineBackend.figure_format = 'retina' # @title Helper Functions def my_gaussian(x_points, mu, sigma): """Returns un-normalized Gaussian estimated at points `x_points` DO NOT EDIT THIS FUNCTION !!! Args : x_points (numpy array of floats) - points at which the gaussian is evaluated mu (scalar) - mean of the Gaussian sigma (scalar) - std of the gaussian Returns: (numpy array of floats): un-normalized Gaussian (i.e. without constant) evaluated at `x` """ return np.exp(-(x_points-mu)**2/(2*sigma**2)) def visualize_loss_functions(mse=None, abse=None, zero_one=None): """Visualize loss functions Args: - mse (func) that returns mean-squared error - abse: (func) that returns absolute_error - zero_one: (func) that returns zero-one loss All functions should be of the form f(x, x_hats). See Exercise #1. Returns: None """ x = np.arange(-3, 3.25, 0.25) fig, ax = plt.subplots(1) if mse is not None: ax.plot(x, mse(0, x), linewidth=2, label="Mean Squared Error") if abse is not None: ax.plot(x, abse(0, x), linewidth=2, label="Absolute Error") if zero_one is not None: ax.plot(x, zero_one(0, x), linewidth=2, label="Zero-One Loss") ax.set_ylabel('Cost') ax.set_xlabel('Predicted Value ($\hat{x}$)') ax.set_title("Loss when the true value $x$=0") ax.legend() plt.show() def moments_myfunc(x_points, function): """Returns the mean, median and mode of an arbitrary function DO NOT EDIT THIS FUNCTION !!! 
Args : x_points (numpy array of floats) - x-axis values function (numpy array of floats) - y-axis values of the function evaluated at `x_points` Returns: (tuple of 3 scalars): mean, median, mode """ # Calc mode of an arbitrary function mode = x_points[np.argmax(function)] # Calc mean of an arbitrary function mean = np.sum(x_points * function) # Calc median of an arbitrary function cdf_function = np.zeros_like(x_points) accumulator = 0 for i in np.arange(x_points.shape[0]): accumulator = accumulator + function[i] cdf_function[i] = accumulator idx = np.argmin(np.abs(cdf_function - 0.5)) median = x_points[idx] return mean, median, mode def loss_plot(x, loss, min_loss, loss_label, show=False, ax=None): if not ax: fig, ax = plt.subplots() ax.plot(x, loss, '-r', linewidth=2, label=loss_label) ax.axvline(min_loss, ls='dashed', color='red', label='Minimum') ax.set_ylabel('Expected Loss') ax.set_xlabel('Orientation (Degrees)') ax.legend() if show: plt.show() def loss_plot_subfigures(x, MSEloss, min_MSEloss, loss_MSElabel, ABSEloss, min_ABSEloss, loss_ABSElabel, ZeroOneloss, min_01loss, loss_01label): fig_w, fig_h = plt.rcParams.get('figure.figsize') fig, ax = plt.subplots(nrows=2, ncols=3, figsize=(fig_w*2, fig_h*2), sharex=True) ax[0, 0].plot(x, MSEloss, '-r', linewidth=2, label=loss_MSElabel) ax[0, 0].axvline(min_MSEloss, ls='dashed', color='red', label='Minimum') ax[0, 0].set_ylabel('Expected Loss') ax[0, 0].set_xlabel('Orientation (Degrees)') ax[0, 0].set_title("Mean Squared Error") ax[0, 0].legend() pmoments_plot(x, posterior, ax=ax[1,0]) ax[0, 1].plot(x, ABSEloss, '-b', linewidth=2, label=loss_ABSElabel) ax[0, 1].axvline(min_ABSEloss, ls='dashdot', color='blue', label='Minimum') ax[0, 1].set_ylabel('Expected Loss') ax[0, 1].set_xlabel('Orientation (Degrees)') ax[0, 1].set_title("Absolute Error") ax[0, 1].legend() pmoments_plot(x, posterior, ax=ax[1,1]) ax[0, 2].plot(x, ZeroOneloss, '-g', linewidth=2, label=loss_01label) ax[0, 2].axvline(min_01loss, ls='dotted', 
color='green', label='Minimum') ax[0, 2].set_ylabel('Expected Loss') ax[0, 2].set_xlabel('Orientation (Degrees)') ax[0, 2].set_title("0-1 Loss") ax[0, 2].legend() pmoments_plot(x, posterior, ax=ax[1,2]) plt.show() def pmoments_plot(x, posterior, prior=None, likelihood=None, show=False, ax=None): if not ax: fig, ax = plt.subplots() if prior: ax.plot(x, prior, '-r', linewidth=2, label='Prior') if likelihood: ax.plot(x, likelihood, '-b', linewidth=2, label='Likelihood') ax.plot(x, posterior, '-g', linewidth=4, label='Posterior') mean, median, mode = moments_myfunc(x, posterior) ax.axvline(mean, ls='dashed', color='red', label='Mean') ax.axvline(median, ls='dashdot', color='blue', label='Median') ax.axvline(mode, ls='dotted', color='green', label='Mode') ax.set_ylabel('Probability') ax.set_xlabel('Orientation (Degrees)') ax.legend() if show: plt.show() def generate_example_pdfs(): """Generate example probability distributions as in T2""" x=np.arange(-5, 5, 0.01) prior_mean = 0 prior_sigma1 = .5 prior_sigma2 = 3 prior1 = my_gaussian(x, prior_mean, prior_sigma1) prior2 = my_gaussian(x, prior_mean, prior_sigma2) alpha = 0.05 prior_combined = (1-alpha) * prior1 + (alpha * prior2) prior_combined = prior_combined / np.sum(prior_combined) likelihood_mean = -2.7 likelihood_sigma = 1 likelihood = my_gaussian(x, likelihood_mean, likelihood_sigma) likelihood = likelihood / np.sum(likelihood) posterior = prior_combined * likelihood posterior = posterior / np.sum(posterior) return x, prior_combined, likelihood, posterior def plot_posterior_components(x, prior, likelihood, posterior): with plt.xkcd(): fig = plt.figure() plt.plot(x, prior, '-r', linewidth=2, label='Prior') plt.plot(x, likelihood, '-b', linewidth=2, label='Likelihood') plt.plot(x, posterior, '-g', linewidth=4, label='Posterior') plt.legend() plt.title('Sample Output') plt.show() ``` ### The Posterior Distribution This notebook will use a model similar to the puppet & puppeteer sound experiment developed in Tutorial 2, 
but with different probabilities for $p_{common}$, $p_{independent}$, $\sigma_{common}$ and $\sigma_{independent}$. Specifically, our model will consist of these components, combined according to Bayes' rule: $$ \begin{eqnarray} \textrm{Prior} &=& \begin{cases} \mathcal{N_{common}}(0, 0.5) & 95\% \textrm{ weight}\\ \mathcal{N_{independent}}(0, 3.0) & 5\% \textrm{ weight} \\ \end{cases}\\\\ \textrm{Likelihood} &=& \mathcal{N}(-2.7, 1.0) \end{eqnarray} $$ We will use this posterior as an example throughout this notebook. Please run the cell below to import and plot the model. You do not need to edit anything. These parameter values were deliberately chosen for illustration purposes: there is nothing intrinsically special about them, but they make several of the exercises easier. ``` x, prior, likelihood, posterior = generate_example_pdfs() plot_posterior_components(x, prior, likelihood, posterior) ``` # Section 1: The Cost Functions Next, we will implement the cost functions. A cost function determines the "cost" (or penalty) of estimating $\hat{x}$ when the true or correct quantity is really $x$ (this is essentially the cost of the error between the true stimulus value $x$ and our estimate $\hat x$ -- note that the error can be defined in different ways): $$\begin{eqnarray} \textrm{Mean Squared Error} &=& (x - \hat{x})^2 \\ \textrm{Absolute Error} &=& \big|x - \hat{x}\big| \\ \textrm{Zero-One Loss} &=& \begin{cases} 0,& \text{if } x = \hat{x} \\ 1, & \text{otherwise} \end{cases} \end{eqnarray} $$ In the cell below, fill in the body of these cost functions. Each function should take one single value for $x$ (the true stimulus value $x$) and one or more possible value estimates $\hat{x}$. Return an array containing the costs associated with predicting $\hat{x}$ when the true value is $x$. Once you have written all three functions, uncomment the final line to visualize your results. _Hint:_ These functions are easy to write (1 line each!) 
but be sure *all* three functions return arrays of floats rather than another data type. ``` def mse(x, x_hats): """Mean-squared error cost function Args: x (scalar): One true value of $x$ x_hats (scalar or ndarray): Estimate of x Returns: (same shape/type as x_hats): MSE costs associated with predicting x_hats instead of x """ ############################################################################## # Complete the MSE cost function # ### Comment out the line below to test your function raise NotImplementedError("You need to complete the MSE cost function!") ############################################################################## my_mse = ... return my_mse def abs_err(x, x_hats): """Absolute error cost function Args: x (scalar): One true value of $x$ x_hats (scalar or ndarray): Estimate of x Returns: (same shape/type as x_hats): absolute error costs associated with predicting x_hats instead of x """ ############################################################################## # Complete the absolute error cost function # ### Comment out the line below to test your function raise NotImplementedError("You need to complete the absolute error function!") ############################################################################## my_abs_err = ... return my_abs_err def zero_one_loss(x, x_hats): """Zero-One loss cost function Args: x (scalar): One true value of $x$ x_hats (scalar or ndarray): Estimate of x Returns: (same shape/type as x_hats) of the 0-1 Loss costs associated with predicting x_hat instead of x """ ############################################################################## # Complete the zero-one loss cost function # ### Comment out the line below to test your function raise NotImplementedError("You need to complete the 0-1 loss cost function!") ############################################################################## my_zero_one_loss = ... 
return my_zero_one_loss ## When you are done with the functions above, uncomment the line below to ## visualize them # visualize_loss_functions(mse, abs_err, zero_one_loss) # to_remove solution def mse(x, x_hats): """Mean-squared error cost function Args: x (scalar): One true value of $x$ x_hats (scalar or ndarray): Estimate of x Returns: same shape/type as x_hats): MSE costs associated with predicting x_hats instead of x$ """ ############################################################################## # Complete the MSE cost function # ### Comment out the line below to test your function #raise NotImplementedError("You need to complete the MSE cost function!") ############################################################################## my_mse = (x - x_hats)**2 return my_mse def abs_err(x, x_hats): """Absolute error cost function Args: x (scalar): One true value of $x$ x_hats (scalar or ndarray): Estimate of x Returns: (same shape/type as x_hats): absolute error costs associated with predicting x_hats instead of x$ """ ############################################################################## # Complete the absolute error cost function # ### Comment out the line below to test your function #raise NotImplementedError("You need to complete the absolute error function!") ############################################################################## my_abs_err = np.abs(x - x_hats) return my_abs_err def zero_one_loss(x, x_hats): """Zero-One loss cost function Args: x (scalar): One true value of $x$ x_hats (scalar or ndarray): Estimate of x Returns: (same shape/type as x_hats) of the 0-1 Loss costs associated with predicting x_hat instead of x """ ############################################################################## # Complete the zero-one loss cost function # ### Comment out the line below to test your function #raise NotImplementedError("You need to complete the 0-1 loss cost function!") 
############################################################################## my_zero_one_loss = (x != x_hats).astype(float) return my_zero_one_loss ## When you are done with the functions above, uncomment the line below to ## visualize them with plt.xkcd(): visualize_loss_functions(mse, abs_err, zero_one_loss) ``` # Section 2: Expected Loss ``` #@title Video 2: Expected Loss from IPython.display import YouTubeVideo video = YouTubeVideo(id='FTBpCfylV_Y', width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ``` A posterior distribution tells us about the confidence or credibility we assign to different choices. A cost function describes the penalty we incur when choosing an incorrect option. These concepts can be combined into an *expected loss* function. Expected loss is defined as: $$ \begin{eqnarray} \mathbb{E}[\text{Loss} | \hat{x}] = \int L[\hat{x},x] \odot p(x|\tilde{x}) dx \end{eqnarray} $$ where $L[ \hat{x}, x]$ is the loss function, $p(x|\tilde{x})$ is the posterior, $\odot$ represents the [Hadamard Product](https://en.wikipedia.org/wiki/Hadamard_product_(matrices)) (i.e., elementwise multiplication), and $\mathbb{E}[\text{Loss} | \hat{x}]$ is the expected loss. In this exercise, we will calculate the expected loss for the mean-squared error, the absolute error, and the zero-one loss over our bimodal posterior $p(x | \tilde x)$. **Suggestions:** * We already pre-completed the code (commented-out) to calculate the mean-squared error, absolute error, and zero-one loss between $x$ and an estimate $\hat x$ using the functions you created in exercise 1 * Calculate the expected loss ($\mathbb{E}[MSE Loss]$) using your posterior (imported above as `posterior`) & each of the loss functions described above (MSELoss, ABSELoss, and Zero-oneLoss). 
* Find the x position that minimizes the expected loss for each cost function and plot them using the `loss_plot` function provided (commented-out) ## Exercise 2: Finding the expected loss empirically via integration ``` def expected_loss_calculation(x, posterior): ExpectedLoss_MSE = np.zeros_like(x) ExpectedLoss_ABSE = np.zeros_like(x) ExpectedLoss_01 = np.zeros_like(x) for idx in np.arange(x.shape[0]): estimate = x[idx] ################################################################### ## Insert code below to find the expected loss under each loss function ## ## remove the raise when the function is complete raise NotImplementedError("Calculate the expected loss over all x values!") ################################################################### MSELoss = mse(estimate, x) ExpectedLoss_MSE[idx] = ... ABSELoss = abs_err(estimate, x) ExpectedLoss_ABSE[idx] = ... ZeroOneLoss = zero_one_loss(estimate, x) ExpectedLoss_01[idx] = ... ################################################################### ## Now, find the `x` location that minimizes expected loss ## ## remove the raise when the function is complete raise NotImplementedError("Finish the Expected Loss calculation") ################################################################### min_MSE = ... min_ABSE = ... min_01 = ... 
return (ExpectedLoss_MSE, ExpectedLoss_ABSE, ExpectedLoss_01, min_MSE, min_ABSE, min_01) ## Uncomment the lines below to plot the expected loss as a function of the estimates #ExpectedLoss_MSE, ExpectedLoss_ABSE, ExpectedLoss_01, min_MSE, min_ABSE, min_01 = expected_loss_calculation(x, posterior) #loss_plot(x, ExpectedLoss_MSE, min_MSE, f"Mean Squared Error = {min_MSE:.2f}") #loss_plot(x, ExpectedLoss_ABSE, min_ABSE, f"Absolute Error = {min_ABSE:.2f}") #loss_plot(x, ExpectedLoss_01, min_01, f"Zero-One Error = {min_01:.2f}") # to_remove solution def expected_loss_calculation(x, posterior): ExpectedLoss_MSE = np.zeros_like(x) ExpectedLoss_ABSE = np.zeros_like(x) ExpectedLoss_01 = np.zeros_like(x) for idx in np.arange(x.shape[0]): estimate = x[idx] ################################################################### ## Insert code below to find the expected loss under each loss function ## ## remove the raise when the function is complete #raise NotImplementedError("Calculate the expected loss over all x values!") ################################################################### MSELoss = mse(estimate, x) ExpectedLoss_MSE[idx] = np.sum(MSELoss * posterior) ABSELoss = abs_err(estimate, x) ExpectedLoss_ABSE[idx] = np.sum(ABSELoss * posterior) ZeroOneLoss = zero_one_loss(estimate, x) ExpectedLoss_01[idx] = np.sum(ZeroOneLoss * posterior) ################################################################### ## Now, find the `x` location that minimizes expected loss ## ## remove the raise when the function is complete # raise NotImplementedError("Finish the Expected Loss calculation") ################################################################### min_MSE = x[np.argmin(ExpectedLoss_MSE)] min_ABSE = x[np.argmin(ExpectedLoss_ABSE)] min_01 = x[np.argmin(ExpectedLoss_01)] return (ExpectedLoss_MSE, ExpectedLoss_ABSE, ExpectedLoss_01, min_MSE, min_ABSE, min_01) ## Uncomment the lines below to plot the expected loss as a function of the estimates ExpectedLoss_MSE, 
ExpectedLoss_ABSE, ExpectedLoss_01, min_MSE, min_ABSE, min_01 = expected_loss_calculation(x, posterior) with plt.xkcd(): loss_plot(x, ExpectedLoss_MSE, min_MSE, f"Mean Squared Error = {min_MSE:.2f}") loss_plot(x, ExpectedLoss_ABSE, min_ABSE, f"Absolute Error = {min_ABSE:.2f}") loss_plot(x, ExpectedLoss_01, min_01, f"Zero-One Error = {min_01:.2f}") ``` # Section 3: Analytical Solutions ``` #@title Video 3: Analytical Solutions from IPython.display import YouTubeVideo video = YouTubeVideo(id='wmDD51N9rs0', width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ``` In the previous exercise, we found the minimum expected loss via brute-force: we searched over all possible values of $x$ and found the one that minimized each of our loss functions. This is feasible for our small toy example, but can quickly become intractable. Fortunately, the three loss functions examined in this tutorial are minimized at specific points on the posterior, corresponding to its mean, median, and mode. To verify this property, we have replotted the loss functions from Exercise 2 below, with the posterior on the same scale beneath. The mean, median, and mode are marked on the posterior. Which loss function corresponds to each summary statistic? ``` loss_plot_subfigures(x, ExpectedLoss_MSE, min_MSE, f"Mean Squared Error = {min_MSE:.2f}", ExpectedLoss_ABSE, min_ABSE, f"Absolute Error = {min_ABSE:.2f}", ExpectedLoss_01, min_01, f"Zero-One Error = {min_01:.2f}") #to_remove explanation """ As you might recall from W1D3, the mean minimizes the mean-squared error. Absolute error is minimized by the median, while zero-one loss is minimized at the posterior's mode. 
""" ``` # Section 4: Conclusion ``` #@title Video 4: Outro from IPython.display import YouTubeVideo video = YouTubeVideo(id='3nTvamDVx2s', width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ``` In this tutorial, we learned about three kinds of cost functions: mean-squared error, absolute error, and zero-one loss. We used expected loss to quantify the results of making a decision, and showed that optimizing under different cost functions led us to choose different locations on the posterior. Finally, we found that these optimal locations can be identified analytically, sparing us from a brute-force search. Here are some additional questions to ponder: * Suppose your professor offered to grade your work with a zero-one loss or mean square error. * When might you choose each? * Which would be easier to learn from? * All of the loss functions we considered are symmetrical. Are there situations where an asymmetrical loss function might make sense? How about a negative one?
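The claim that the mean, median, and mode minimize these three losses can also be double-checked in a few lines of plain NumPy, independent of this notebook's helper functions. The grid and the bimodal distribution below are made up purely for illustration:

```python
import numpy as np

# An arbitrary bimodal distribution on a grid, normalized to sum to 1.
x = np.arange(-5, 5, 0.01)
posterior = np.exp(-(x + 1)**2 / 0.5) + 0.4 * np.exp(-(x - 2)**2 / 2)
posterior /= posterior.sum()

# Expected loss of every candidate estimate x_hat under each cost function.
mse_loss = np.array([np.sum((x - xh)**2 * posterior) for xh in x])
abs_loss = np.array([np.sum(np.abs(x - xh) * posterior) for xh in x])
zo_loss = np.array([np.sum((x != xh) * posterior) for xh in x])

mean = np.sum(x * posterior)                            # should minimize MSE
median = x[np.searchsorted(np.cumsum(posterior), 0.5)]  # should minimize absolute error
mode = x[np.argmax(posterior)]                          # should minimize 0-1 loss

print(x[np.argmin(mse_loss)], mean)
print(x[np.argmin(abs_loss)], median)
print(x[np.argmin(zo_loss)], mode)
```

Note that the zero-one comparison `x != xh` relies on exact float equality, which is safe here only because each candidate $\hat{x}$ is itself a grid point.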
<small><small><i> All the IPython Notebooks in this lecture series by Dr. Milan Parmar are available @ **[GitHub](https://github.com/milaan9/05_Python_Files)** </i></small></small> # Python Errors and Built-in Exceptions In this class, you will learn about different types of errors and exceptions that are built-in to Python. They are raised whenever the Python interpreter encounters errors. We can make certain mistakes while writing a program that lead to errors when we try to run it. A Python program terminates as soon as it encounters an unhandled error. These errors can be broadly classified into two classes: 1. Syntax errors 2. Logical errors (Exceptions) ## 1. Python Syntax Errors An error caused by not following the proper structure (syntax) of the language is called a **syntax error** or **parsing error**. For example: ``` if a < 3 ``` As shown in the example, an arrow indicates where the parser ran into the syntax error. We can notice here that a colon **`:`** is missing in the **`if`** statement. ## 2. Python Logical Errors (Exceptions) Errors that occur at runtime (after passing the syntax test) are called **exceptions** or **logical errors**. For instance, they occur when we try to open a file (for reading) that does not exist (**`FileNotFoundError`**), try to divide a number by zero (**`ZeroDivisionError`**), or try to import a module that does not exist (**`ImportError`**). Whenever these types of runtime errors occur, Python creates an exception object. If not handled properly, it prints a traceback for that error along with some details about why that error occurred. Let's look at how Python treats these errors: ``` 1 / 0 open("imaginary.txt") ``` ## Python Built-in Exceptions Illegal operations can raise exceptions. There are plenty of built-in exceptions in Python that are raised when corresponding errors occur. 
We can view all the built-in exceptions using the built-in **`locals()`** function as follows: ```python print(dir(locals()['__builtins__'])) ``` **`locals()['__builtins__']`** will return a module of built-in exceptions, functions, and attributes. **`dir`** allows us to list these attributes as strings. Some of the common built-in exceptions in Python programming along with the errors that cause them are listed below: | Exception | Cause of Error | |:----| :--- | | **`AssertionError`** | Raised when an **`assert`** statement fails. | | **`AttributeError`** | Raised when attribute assignment or reference fails. | | **`EOFError`** | Raised when the **`input()`** function hits end-of-file condition. | | **`FloatingPointError`** | Raised when a floating point operation fails. | | **`GeneratorExit`** | Raised when a generator's **`close()`** method is called. | | **`ImportError`** | Raised when the imported module is not found. | | **`IndexError`** | Raised when the index of a sequence is out of range. | | **`KeyError`** | Raised when a key is not found in a dictionary. | | **`KeyboardInterrupt`** | Raised when the user hits the interrupt key (**`Ctrl+C`** or **`Delete`**). | | **`MemoryError`** | Raised when an operation runs out of memory. | | **`NameError`** | Raised when a variable is not found in local or global scope. | | **`NotImplementedError`** | Raised by abstract methods. | | **`OSError`** | Raised when system operation causes system related error. | | **`OverflowError`** | Raised when the result of an arithmetic operation is too large to be represented. | | **`ReferenceError`** | Raised when a weak reference proxy is used to access a garbage collected referent. | | **`RuntimeError`** | Raised when an error does not fall under any other category. | | **`StopIteration`** | Raised by **`next()`** function to indicate that there is no further item to be returned by iterator. | | **`SyntaxError`** | Raised by parser when syntax error is encountered. 
| | **`IndentationError`** | Raised when there is incorrect indentation. | | **`TabError`** | Raised when indentation consists of inconsistent tabs and spaces. | | **`SystemError`** | Raised when interpreter detects internal error. | | **`SystemExit`** | Raised by **`sys.exit()`** function. | | **`TypeError`** | Raised when a function or operation is applied to an object of incorrect type. | | **`UnboundLocalError`** | Raised when a reference is made to a local variable in a function or method, but no value has been bound to that variable. | | **`UnicodeError`** | Raised when a Unicode-related encoding or decoding error occurs. | | **`UnicodeEncodeError`** | Raised when a Unicode-related error occurs during encoding. | | **`UnicodeDecodeError`** | Raised when a Unicode-related error occurs during decoding. | | **`UnicodeTranslateError`** | Raised when a Unicode-related error occurs during translating. | | **`ValueError`** | Raised when a function gets an argument of correct type but improper value. | | **`ZeroDivisionError`** | Raised when the second operand of division or modulo operation is zero. | If required, we can also define our own exceptions in Python. To learn more about them, visit Python **[User-defined Exceptions](https://github.com/milaan9/05_Python_Files/blob/main/005_Python_User_defined_Exceptions.ipynb)**. We can handle these built-in and user-defined exceptions in Python using **`try`**, **`except`** and **`finally`** statements. To learn more about them, visit **[Python try, except and finally statements](https://github.com/milaan9/05_Python_Files/blob/main/004_Python_Exceptions_Handling.ipynb)**.
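As a small taste of what the linked exception-handling tutorial covers, here is a sketch of catching two of the exceptions from the table above with **`try`**, **`except`** and **`finally`** (the file name is hypothetical, chosen so that opening it should fail):

```python
def safe_divide(a, b):
    """Return a / b, mapping ZeroDivisionError to infinity."""
    try:
        return a / b
    except ZeroDivisionError:
        return float('inf')
    finally:
        # This clause runs whether or not an exception was raised.
        pass

print(safe_divide(10, 2))  # 5.0
print(safe_divide(10, 0))  # inf

try:
    open("no_such_file_hopefully.txt")
except FileNotFoundError as err:
    print("Caught:", type(err).__name__)
```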
``` from mlpy.linalg import Vector # quiz questions 1 print('addition, subtraction and scalar multiplication') a = Vector([8.218, -9.341]) b = Vector([-1.129, 2.111]) print("Q1:",a.plus(b).coordinates) a = Vector([7.119, 8.215]) b = Vector([-8.223, 0.878]) print("Q2:",a.minus(b).coordinates) a = Vector([1.671,-1.012,-0.318]) b = 7.41 print("Q3:",a.scalarmult(b).coordinates) print('') # quiz questions 2 print('magnitude and direction') a = Vector([-0.221,7.437]) print("mag of",a.coordinates,":",a.magnitude()) a = Vector([8.813,-1.331,-6.247]) print("mag of",a.coordinates,":",a.magnitude()) a = Vector([5.581,-2.136]) print("dir of",a.coordinates,":",a.direction()) a = Vector([1.996,3.108,-4.554]) print("dir of",a.coordinates,":",a.direction()) print('') #quiz questions 3 print('dot product and angle') a = Vector([7.887,4.138]) b = Vector([-8.802, 6.776]) print("Q1 dot:",a.dot(b)) a = Vector([-5.955, -4.904, -1.874]) b = Vector([-4.496, -8.755, 7.103]) print("Q2 dot:",a.dot(b)) a = Vector([3.183,-7.627]) b = Vector([-2.668,5.319]) print("Q3 rads:",a.radiansto(b)) a = Vector([7.35,0.221,5.188]) b = Vector([2.751,8.259,3.985]) print("Q4 degs:", a.degreesto(b)) print('') # quiz questions 4 print('parallelism & orthogonality') a = Vector([-7.579,-7.88]) b = Vector([22.737,23.64]) print("Para:",a.isparallel(b),"Orth:",a.isright(b)) a = Vector([-2.029,9.97,4.172]) b = Vector([-9.231,-6.639,-7.245]) print("Para:",a.isparallel(b),"Orth:",a.isright(b)) a = Vector([-2.328,-7.284,-1.214]) b = Vector([-1.821,1.072,-2.94]) print("Para:",a.isparallel(b),"Orth:",a.isright(b)) a = Vector([2.118,4.827]) b = Vector([0.0,0.0]) print("Para:",a.isparallel(b),"Orth:",a.isright(b)) print('') # quiz questions 5 print('vector projections & orthogonals') v = Vector([3.039, 1.879]) b = Vector([0.825, 2.036]) print("A proj_b(v):",v.projection(b).coordinates) v = Vector([-9.88, -3.264, -8.159]) b = Vector([-2.155, -9.353, -9.473]) print("B orth_b(v):",v.orthogonal(b).coordinates) v = 
Vector([3.009, -6.172, 3.692, -2.51]) b = Vector([6.404, -9.144, 2.759, 8.718]) print("C proj_b(v):",v.projection(b).coordinates) print("C orth_b(v):",v.orthogonal(b).coordinates) print('') # quiz questions 6 print('cross products') a = Vector([8.462, 7.893, -8.187]) b = Vector([6.984, -5.975, 4.778]) c = a.cross(b) print('cross product vector:', c.coordinates) print('check dot prods == 0:', 'a',a.dot(c), 'b',b.dot(c)) a = Vector([-8.987, -9.838, 5.031]) b = Vector([-4.268, -1.861, -8.866]) print('area of parallelogram:', a.trianglearea(b) * 2) print('area of parallelogram:', a.cross(b).magnitude()) a = Vector([1.5, 9.547, 3.691]) b = Vector([-6.007, 0.124, 5.772]) print('area of triangle:', a.trianglearea(b)) print('area of triangle:', a.cross(b).magnitude()*0.5) a = Vector([8.462, 7.893]) b = Vector([6.984, -5.975]) c = a.cross(b) print('2D vector test!!:', c.coordinates) ```
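**`mlpy.linalg`** is a custom module, so the quiz cells above will not run without it. As a rough sketch of what a few of its simpler methods might look like — the method names and return conventions are inferred from the calls above, not taken from the module itself:

```python
import math

class Vector:
    """Minimal stand-in for mlpy.linalg.Vector (basic operations only)."""

    def __init__(self, coordinates):
        self.coordinates = tuple(coordinates)

    def plus(self, other):
        return Vector(a + b for a, b in zip(self.coordinates, other.coordinates))

    def minus(self, other):
        return Vector(a - b for a, b in zip(self.coordinates, other.coordinates))

    def scalarmult(self, c):
        return Vector(c * a for a in self.coordinates)

    def magnitude(self):
        return math.sqrt(sum(a * a for a in self.coordinates))

    def dot(self, other):
        return sum(a * b for a, b in zip(self.coordinates, other.coordinates))

# Reproduce the first quiz question with the sketch.
a = Vector([8.218, -9.341])
b = Vector([-1.129, 2.111])
print(a.plus(b).coordinates)
print(Vector([3.0, 4.0]).magnitude())  # 5.0
```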
# 02 - ML Experimentation with Custom Model The purpose of this notebook is to use [custom training](https://cloud.google.com/ai-platform-unified/docs/training/custom-training) to train a keras classifier to predict whether a given trip will result in a tip > 20%. The notebook covers the following tasks: 1. Preprocess the data locally using Apache Beam. 2. Train and test custom model locally using a Keras implementation. 3. Submit a Dataflow job to preprocess the data at scale. 4. Submit a custom training job to Vertex AI using a [pre-built container](https://cloud.google.com/ai-platform-unified/docs/training/pre-built-containers). 5. Upload the trained model to Vertex AI. 6. Track experiment parameters from [Vertex AI Metadata](https://cloud.google.com/vertex-ai/docs/ml-metadata/introduction). 7. Submit a [hyperparameter tuning job](https://cloud.google.com/vertex-ai/docs/training/hyperparameter-tuning-overview) to Vertex AI. We use [Vertex TensorBoard](https://cloud.google.com/vertex-ai/docs/experiments/tensorboard-overview) and [Vertex ML Metadata](https://cloud.google.com/vertex-ai/docs/ml-metadata/introduction) to track, visualize, and compare ML experiments. ## Setup ### Import libraries ``` import os import logging from datetime import datetime import numpy as np import tensorflow as tf import tensorflow_transform as tft import tensorflow.keras as keras from google.cloud import aiplatform as vertex_ai from google.cloud.aiplatform import hyperparameter_tuning as hp_tuning from src.common import features, datasource_utils from src.model_training import data, model, defaults, trainer, exporter from src.preprocessing import etl logging.getLogger().setLevel(logging.INFO) tf.get_logger().setLevel('INFO') print(f"TensorFlow: {tf.__version__}") print(f"TensorFlow Transform: {tft.__version__}") ``` ### Setup Google Cloud project ``` PROJECT = '[your-project-id]' # Change to your project id. REGION = 'us-central1' # Change to your region. 
BUCKET = '[your-bucket-name]' # Change to your bucket name. SERVICE_ACCOUNT = "[your-service-account]" if PROJECT == "" or PROJECT is None or PROJECT == "[your-project-id]": # Get your GCP project id from gcloud shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT = shell_output[0] if SERVICE_ACCOUNT == "" or SERVICE_ACCOUNT is None or SERVICE_ACCOUNT == "[your-service-account]": # Get your service account from gcloud shell_output = !gcloud config list --format 'value(core.account)' 2>/dev/null SERVICE_ACCOUNT = shell_output[0] if BUCKET == "" or BUCKET is None or BUCKET == "[your-bucket-name]": # Default the bucket name to your GCP project id BUCKET = PROJECT # Try to create the bucket if it doesn't exist ! gsutil mb -l $REGION gs://$BUCKET print("") PARENT = f"projects/{PROJECT}/locations/{REGION}" print("Project ID:", PROJECT) print("Region:", REGION) print("Bucket name:", BUCKET) print("Service Account:", SERVICE_ACCOUNT) print("Vertex API Parent URI:", PARENT) ``` ### Set configurations ``` VERSION = 'v01' DATASET_DISPLAY_NAME = 'chicago-taxi-tips' MODEL_DISPLAY_NAME = f'{DATASET_DISPLAY_NAME}-classifier-{VERSION}' WORKSPACE = f'gs://{BUCKET}/{DATASET_DISPLAY_NAME}' EXPERIMENT_ARTIFACTS_DIR = os.path.join(WORKSPACE, 'experiments') RAW_SCHEMA_LOCATION = 'src/raw_schema/schema.pbtxt' TENSORBOARD_DISPLAY_NAME = f'tb-{DATASET_DISPLAY_NAME}' EXPERIMENT_NAME = f'{MODEL_DISPLAY_NAME}' ``` ## Create Vertex TensorBoard instance ``` tensorboard_resource = vertex_ai.Tensorboard.create(display_name=TENSORBOARD_DISPLAY_NAME) tensorboard_resource_name = tensorboard_resource.gca_resource.name print("TensorBoard resource name:", tensorboard_resource_name) ``` ## Initialize workspace ``` REMOVE_EXPERIMENT_ARTIFACTS = False if tf.io.gfile.exists(EXPERIMENT_ARTIFACTS_DIR) and REMOVE_EXPERIMENT_ARTIFACTS: print("Removing previous experiment artifacts...") tf.io.gfile.rmtree(EXPERIMENT_ARTIFACTS_DIR) if not tf.io.gfile.exists(EXPERIMENT_ARTIFACTS_DIR): 
print("Creating new experiment artifacts directory...") tf.io.gfile.mkdir(EXPERIMENT_ARTIFACTS_DIR) print("Workspace is ready.") print("Experiment directory:", EXPERIMENT_ARTIFACTS_DIR) ``` ## Initialize Vertex AI experiment ``` vertex_ai.init( project=PROJECT, location=REGION, staging_bucket=BUCKET, experiment=EXPERIMENT_NAME ) run_id = f"run-local-{datetime.now().strftime('%Y%m%d%H%M%S')}" vertex_ai.start_run(run_id) EXPERIMENT_RUN_DIR = os.path.join(EXPERIMENT_ARTIFACTS_DIR, EXPERIMENT_NAME, run_id) print("Experiment run directory:", EXPERIMENT_RUN_DIR) ``` ## 1. Preprocess the data using Apache Beam The Apache Beam pipeline of data preprocessing is implemented in the [preprocessing](src/preprocessing) directory. ``` EXPORTED_DATA_PREFIX = os.path.join(EXPERIMENT_RUN_DIR, 'exported_data') TRANSFORMED_DATA_PREFIX = os.path.join(EXPERIMENT_RUN_DIR, 'transformed_data') TRANSFORM_ARTIFACTS_DIR = os.path.join(EXPERIMENT_RUN_DIR, 'transform_artifacts') ``` ### Get Source Query from Managed Dataset ``` ML_USE = 'UNASSIGNED' LIMIT = 5120 raw_data_query = datasource_utils.get_training_source_query( project=PROJECT, region=REGION, dataset_display_name=DATASET_DISPLAY_NAME, ml_use=ML_USE, limit=LIMIT ) print(raw_data_query) ``` ### Test Data Preprocessing Locally ``` args = { 'runner': 'DirectRunner', 'raw_data_query': raw_data_query, 'write_raw_data': True, 'exported_data_prefix': EXPORTED_DATA_PREFIX, 'transformed_data_prefix': TRANSFORMED_DATA_PREFIX, 'transform_artifact_dir': TRANSFORM_ARTIFACTS_DIR, 'temporary_dir': os.path.join(WORKSPACE, 'tmp'), 'gcs_location': f'gs://{BUCKET}/bq_tmp', 'project': PROJECT } vertex_ai.log_params(args) print("Data preprocessing started...") etl.run_transform_pipeline(args) print("Data preprocessing completed.") !gsutil ls {EXPERIMENT_RUN_DIR} ``` ## 2. Train a custom model locally using Keras The `Keras` implementation of the custom model is in the [model_training](src/model_training) directory. 
``` LOG_DIR = os.path.join(EXPERIMENT_RUN_DIR, 'logs') EXPORT_DIR = os.path.join(EXPERIMENT_RUN_DIR, 'model') ``` ### Read transformed data ``` tft_output = tft.TFTransformOutput(TRANSFORM_ARTIFACTS_DIR) transform_feature_spec = tft_output.transformed_feature_spec() transform_feature_spec train_data_file_pattern = os.path.join(TRANSFORMED_DATA_PREFIX,'train/data-*.gz') eval_data_file_pattern = os.path.join(TRANSFORMED_DATA_PREFIX,'eval/data-*.gz') for input_features, target in data.get_dataset( train_data_file_pattern, transform_feature_spec, batch_size=3).take(1): for key in input_features: print(f"{key} {input_features[key].dtype}: {input_features[key].numpy().tolist()}") print(f"target: {target.numpy().tolist()}") ``` ### Create hyperparameters ``` hyperparams = { "hidden_units": [64, 32] } hyperparams = defaults.update_hyperparams(hyperparams) hyperparams ``` ### Create and test model inputs and outputs ``` classifier = model.create_binary_classifier(tft_output, hyperparams) classifier.summary() keras.utils.plot_model( classifier, show_shapes=True, show_dtype=True ) classifier(input_features) ``` ### Train the model locally. 
``` logging.getLogger().setLevel(logging.INFO) hyperparams["learning_rate"] = 0.001 hyperparams["num_epochs"] = 5 hyperparams["batch_size"] = 512 vertex_ai.log_params(hyperparams) classifier = trainer.train( train_data_dir=train_data_file_pattern, eval_data_dir=eval_data_file_pattern, tft_output_dir=TRANSFORM_ARTIFACTS_DIR, hyperparams=hyperparams, log_dir=LOG_DIR, ) val_loss, val_accuracy = trainer.evaluate( model=classifier, data_dir=eval_data_file_pattern, raw_schema_location=RAW_SCHEMA_LOCATION, tft_output_dir=TRANSFORM_ARTIFACTS_DIR, hyperparams=hyperparams, ) vertex_ai.log_metrics( {"val_loss": val_loss, "val_accuracy": val_accuracy}) !tb-gcp-uploader --tensorboard_resource_name={tensorboard_resource_name} \ --logdir={LOG_DIR} \ --experiment_name={EXPERIMENT_NAME} --one_shot=True ``` ### Export the trained model ``` saved_model_dir = os.path.join(EXPORT_DIR) exporter.export_serving_model( classifier=classifier, serving_model_dir=saved_model_dir, raw_schema_location=RAW_SCHEMA_LOCATION, tft_output_dir=TRANSFORM_ARTIFACTS_DIR, ) ``` ### Inspect model serving signatures ``` !saved_model_cli show --dir={saved_model_dir} --tag_set=serve --signature_def=serving_tf_example !saved_model_cli show --dir={saved_model_dir} --tag_set=serve --signature_def=serving_default ``` ### Test the exported SavedModel ``` serving_model = tf.saved_model.load(saved_model_dir) print("Saved model is loaded.") # Test the serving_tf_example with TF Examples file_names = tf.data.TFRecordDataset.list_files(EXPORTED_DATA_PREFIX + '/data-*.tfrecord') for batch in tf.data.TFRecordDataset(file_names).batch(3).take(1): predictions = serving_model.signatures['serving_tf_example'](batch) for key in predictions: print(f"{key}: {predictions[key]}") # Test the serving_default with feature dictionary import tensorflow_data_validation as tfdv from tensorflow_transform.tf_metadata import schema_utils raw_schema = tfdv.load_schema_text(RAW_SCHEMA_LOCATION) raw_feature_spec = 
schema_utils.schema_as_feature_spec(raw_schema).feature_spec instance = { "dropoff_grid": "POINT(-87.6 41.9)", "euclidean": 2064.2696, "loc_cross": "", "payment_type": "Credit Card", "pickup_grid": "POINT(-87.6 41.9)", "trip_miles": 1.37, "trip_day": 12, "trip_hour": 6, "trip_month": 2, "trip_day_of_week": 4, "trip_seconds": 555, } for feature_name in instance: dtype = raw_feature_spec[feature_name].dtype instance[feature_name] = tf.constant([[instance[feature_name]]], dtype) predictions = serving_model.signatures['serving_default'](**instance) for key in predictions: print(f"{key}: {predictions[key].numpy()}") ``` ## Start a new Vertex AI experiment run ``` vertex_ai.init( project=PROJECT, staging_bucket=BUCKET, experiment=EXPERIMENT_NAME) run_id = f"run-gcp-{datetime.now().strftime('%Y%m%d%H%M%S')}" vertex_ai.start_run(run_id) EXPERIMENT_RUN_DIR = os.path.join(EXPERIMENT_ARTIFACTS_DIR, EXPERIMENT_NAME, run_id) print("Experiment run directory:", EXPERIMENT_RUN_DIR) ``` ## 3. Submit a Data Processing Job to Dataflow ``` EXPORTED_DATA_PREFIX = os.path.join(EXPERIMENT_RUN_DIR, 'exported_data') TRANSFORMED_DATA_PREFIX = os.path.join(EXPERIMENT_RUN_DIR, 'transformed_data') TRANSFORM_ARTIFACTS_DIR = os.path.join(EXPERIMENT_RUN_DIR, 'transform_artifacts') ML_USE = 'UNASSIGNED' LIMIT = 1000000 raw_data_query = datasource_utils.get_training_source_query( project=PROJECT, region=REGION, dataset_display_name=DATASET_DISPLAY_NAME, ml_use=ML_USE, limit=LIMIT ) etl_job_name = f"etl-{MODEL_DISPLAY_NAME}-{run_id}" args = { 'job_name': etl_job_name, 'runner': 'DataflowRunner', 'raw_data_query': raw_data_query, 'exported_data_prefix': EXPORTED_DATA_PREFIX, 'transformed_data_prefix': TRANSFORMED_DATA_PREFIX, 'transform_artifact_dir': TRANSFORM_ARTIFACTS_DIR, 'write_raw_data': False, 'temporary_dir': os.path.join(WORKSPACE, 'tmp'), 'gcs_location': os.path.join(WORKSPACE, 'bq_tmp'), 'project': PROJECT, 'region': REGION, 'setup_file': './setup.py' } vertex_ai.log_params(args) 
logging.getLogger().setLevel(logging.ERROR) print("Data preprocessing started...") etl.run_transform_pipeline(args) print("Data preprocessing completed.") !gsutil ls {EXPERIMENT_RUN_DIR} ``` ## 4. Submit a Custom Training Job to Vertex AI ``` LOG_DIR = os.path.join(EXPERIMENT_RUN_DIR, 'logs') EXPORT_DIR = os.path.join(EXPERIMENT_RUN_DIR, 'model') ``` ### Test the training task locally ``` !python -m src.model_training.task \ --model-dir={EXPORT_DIR} \ --log-dir={LOG_DIR} \ --train-data-dir={TRANSFORMED_DATA_PREFIX}/train/* \ --eval-data-dir={TRANSFORMED_DATA_PREFIX}/eval/* \ --tft-output-dir={TRANSFORM_ARTIFACTS_DIR} \ --num-epochs=3 \ --hidden-units=32,32 \ --experiment-name={EXPERIMENT_NAME} \ --run-name={run_id} \ --project={PROJECT} \ --region={REGION} \ --staging-bucket={BUCKET} ``` ### Prepare training package ``` TRAINER_PACKAGE_DIR = os.path.join(WORKSPACE, 'trainer_packages') TRAINER_PACKAGE_NAME = f'{MODEL_DISPLAY_NAME}_trainer' print("Trainer package upload location:", TRAINER_PACKAGE_DIR) !rm -r src/__pycache__/ !rm -r src/.ipynb_checkpoints/ !rm -r src/raw_schema/.ipynb_checkpoints/ !rm -f {TRAINER_PACKAGE_NAME}.tar {TRAINER_PACKAGE_NAME}.tar.gz !mkdir {TRAINER_PACKAGE_NAME} !cp setup.py {TRAINER_PACKAGE_NAME}/ !cp -r src {TRAINER_PACKAGE_NAME}/ !tar cvf {TRAINER_PACKAGE_NAME}.tar {TRAINER_PACKAGE_NAME} !gzip {TRAINER_PACKAGE_NAME}.tar !gsutil cp {TRAINER_PACKAGE_NAME}.tar.gz {TRAINER_PACKAGE_DIR}/ !rm -r {TRAINER_PACKAGE_NAME} !rm -r {TRAINER_PACKAGE_NAME}.tar.gz ``` ### Prepare the training job ``` TRAIN_RUNTIME = 'tf-cpu.2-5' TRAIN_IMAGE = f"us-docker.pkg.dev/vertex-ai/training/{TRAIN_RUNTIME}:latest" print("Training image:", TRAIN_IMAGE) num_epochs = 10 learning_rate = 0.001 hidden_units = "64,64" trainer_args = [ f'--train-data-dir={TRANSFORMED_DATA_PREFIX + "/train/*"}', f'--eval-data-dir={TRANSFORMED_DATA_PREFIX + "/eval/*"}', f'--tft-output-dir={TRANSFORM_ARTIFACTS_DIR}', f'--num-epochs={num_epochs}', f'--learning-rate={learning_rate}', 
f'--project={PROJECT}', f'--region={REGION}', f'--staging-bucket={BUCKET}', f'--experiment-name={EXPERIMENT_NAME}' ] package_uri = os.path.join(TRAINER_PACKAGE_DIR, f'{TRAINER_PACKAGE_NAME}.tar.gz') worker_pool_specs = [ { "replica_count": 1, "machine_spec": { "machine_type": 'n1-standard-4', "accelerator_count": 0 }, "python_package_spec": { "executor_image_uri": TRAIN_IMAGE, "package_uris": [package_uri], "python_module": "src.model_training.task", "args": trainer_args, } } ] ``` ### Submit the training job ``` print("Submitting a custom training job...") training_job_display_name = f"{TRAINER_PACKAGE_NAME}_{run_id}" training_job = vertex_ai.CustomJob( display_name=training_job_display_name, worker_pool_specs=worker_pool_specs, base_output_dir=EXPERIMENT_RUN_DIR, ) training_job.run( service_account=SERVICE_ACCOUNT, tensorboard=tensorboard_resource_name, sync=True ) ``` ## 5. Upload exported model to Vertex AI Models ``` !gsutil ls {EXPORT_DIR} ``` ### Generate the Explanation metadata ``` explanation_config = features.generate_explanation_config() explanation_config ``` ### Upload model ``` SERVING_RUNTIME='tf2-cpu.2-5' SERVING_IMAGE = f"us-docker.pkg.dev/vertex-ai/prediction/{SERVING_RUNTIME}:latest" print("Serving image:", SERVING_IMAGE) explanation_metadata = vertex_ai.explain.ExplanationMetadata( inputs=explanation_config["inputs"], outputs=explanation_config["outputs"], ) explanation_parameters = vertex_ai.explain.ExplanationParameters( explanation_config["params"] ) vertex_model = vertex_ai.Model.upload( display_name=MODEL_DISPLAY_NAME, artifact_uri=EXPORT_DIR, serving_container_image_uri=SERVING_IMAGE, parameters_schema_uri=None, instance_schema_uri=None, explanation_metadata=explanation_metadata, explanation_parameters=explanation_parameters, labels={ 'dataset_name': DATASET_DISPLAY_NAME, 'experiment': run_id } ) vertex_model.gca_resource ``` ## 6. 
Extract experiment run parameters ``` experiment_df = vertex_ai.get_experiment_df() experiment_df = experiment_df[experiment_df.experiment_name == EXPERIMENT_NAME] experiment_df.T print("Vertex AI Experiments:") print( f"https://console.cloud.google.com/vertex-ai/locations/{REGION}/experiments/{EXPERIMENT_NAME}/metrics?project={PROJECT}" ) ``` ## 7. Submit a Hyperparameter Tuning Job to Vertex AI For more information about configuring a hyperparameter study, refer to [Vertex AI Hyperparameter job configuration](https://cloud.google.com/vertex-ai/docs/training/using-hyperparameter-tuning). ### Configure a hyperparameter job ``` metric_spec = { 'ACCURACY': 'maximize' } parameter_spec = { 'learning-rate': hp_tuning.DoubleParameterSpec(min=0.0001, max=0.01, scale='log'), 'hidden-units': hp_tuning.CategoricalParameterSpec(values=["32,32", "64,64", "128,128"]) } tuning_job_display_name = f"hpt_{TRAINER_PACKAGE_NAME}_{run_id}" hp_tuning_job = vertex_ai.HyperparameterTuningJob( display_name=tuning_job_display_name, custom_job=training_job, metric_spec=metric_spec, parameter_spec=parameter_spec, max_trial_count=4, parallel_trial_count=2, search_algorithm=None # Bayesian optimization. ) ``` ### Submit the hyperparameter tuning job ``` print("Submitting a hyperparameter tuning job...") hp_tuning_job.run( service_account=SERVICE_ACCOUNT, tensorboard=tensorboard_resource_name, restart_job_on_worker_restart=False, sync=True, ) ``` ### Retrieve trial results ``` hp_tuning_job.trials best_trial = sorted( hp_tuning_job.trials, key=lambda trial: trial.final_measurement.metrics[0].value, reverse=True )[0] print("Best trial ID:", best_trial.id) print("Validation Accuracy:", best_trial.final_measurement.metrics[0].value) print("Hyperparameter Values:") for parameter in best_trial.parameters: print(f" - {parameter.parameter_id}:{parameter.value}") ```
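The best-trial lookup above is just a max-by-metric selection over the trial list. Here is a minimal sketch of the same selection on mock records (the `Trial` and `Measurement` classes below are simplified stand-ins, not the real Vertex AI objects):

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    value: float  # e.g. the trial's final validation accuracy

@dataclass
class Trial:
    id: str
    final_measurement: Measurement

def best_trial(trials):
    # Same selection as above: the trial with the highest metric value wins.
    return max(trials, key=lambda t: t.final_measurement.value)

trials = [
    Trial("1", Measurement(0.81)),
    Trial("2", Measurement(0.87)),
    Trial("3", Measurement(0.84)),
]
print(best_trial(trials).id)  # -> 2
```

Using `max` with a key avoids sorting the whole list when only the top trial is needed.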
github_jupyter
``` import os PROJECT_ID = 'qwiklabs-gcp-da02053fb2a13c97' # CHANGE THIS BUCKET = 'qwiklabs-gcp-da02053fb2a13c97' # CHANGE THIS MODEL_BASE = 'taxi_trained/export/exporter' MODEL_PATH = os.path.join(MODEL_BASE,os.listdir(MODEL_BASE)[-1]) MODEL_NAME = 'taxifare' VERSION_NAME = 'v1' ``` # Deploy for Online Prediction To get our predictions, in addition to the features provided by the client, we also need to fetch the latest traffic information from BigQuery. We then combine these and invoke our TensorFlow model. This is visualized by the 'on-demand' portion (red arrows) in the diagram below: <img src="../../taxicab_traffic/assets/architecture.png" > To do this we'll take advantage of [AI Platform's Custom Prediction Routines](https://cloud.google.com/ml-engine/docs/tensorflow/custom-prediction-routines), which allow us to execute custom Python code in response to every online prediction request. There are 5 steps to creating a custom prediction routine: 1. Upload Model Artifacts to GCS 2. Implement Predictor interface 3. Package the prediction code and dependencies 4. Deploy 5. Invoke API ## 1. Upload Model Artifacts to GCS Here we upload our model weights so that AI Platform can access them. ``` !gsutil cp -r $MODEL_PATH/* gs://$BUCKET/taxifare/model/ ``` ## 2. Implement Predictor Interface Interface Spec: https://cloud.google.com/ml-engine/docs/tensorflow/custom-prediction-routines#predictor-class This tells AI Platform how to load the model artifacts, and is where we specify our custom prediction code. **Exercise 1:** Complete the SQL `query_string` to return the latest (proxy) traffic information. To check your answer, reference the solution. Note: the correct PROJECT_ID will automatically be inserted using the bash `sed` command in the subsequent cell.
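The 'on-demand' combination described above boils down to broadcasting one server-side value across the client's batch of features. A hedged sketch in plain Python (the `fetch_trips_last_5min` helper is hypothetical; the real predictor obtains this value with a BigQuery query):

```python
def fetch_trips_last_5min():
    # Hypothetical stand-in: the real predictor queries BigQuery here.
    return 42

def augment(instances):
    # `instances` is columnar: each key maps to a list with one entry per example.
    batch_size = len(next(iter(instances.values())))
    trips = fetch_trips_last_5min()
    # Broadcast the single server-side value across the whole batch.
    instances['trips_last_5min'] = [trips] * batch_size
    return instances

batch = {'dayofweek': [6, 5], 'hourofday': [12, 11]}
print(augment(batch)['trips_last_5min'])  # -> [42, 42]
```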
``` %%writefile predictor.py import tensorflow as tf from google.cloud import bigquery PROJECT_ID = 'will_be_replaced' class TaxifarePredictor(object): def __init__(self, predict_fn): self.predict_fn = predict_fn def predict(self, instances, **kwargs): bq = bigquery.Client(PROJECT_ID) query_string = """ ###TODO### """ trips = bq.query(query_string).to_dataframe()['trips_last_5min'][0] instances['trips_last_5min'] = [trips for _ in range(len(list(instances.items())[0][1]))] predictions = self.predict_fn(instances) return predictions['predictions'].tolist() # convert to list so it is JSON serializable (requirement) @classmethod def from_path(cls, model_dir): predict_fn = tf.contrib.predictor.from_saved_model(model_dir,'predict') return cls(predict_fn) !sed -i -e 's/will_be_replaced/{PROJECT_ID}/g' predictor.py ``` ### Test Predictor Class Works Locally ``` import predictor instances = {'dayofweek' : [6,5], 'hourofday' : [12,11], 'pickuplon' : [-73.99,-73.99], 'pickuplat' : [40.758,40.758], 'dropofflat' : [40.742,40.758], 'dropofflon' : [-73.97,-73.97]} predictor = predictor.TaxifarePredictor.from_path(MODEL_PATH) predictor.predict(instances) ``` ## 3. Package Predictor Class and Dependencies We must package the predictor as a tar.gz source distribution package. Instructions for this are specified [here](http://cloud.google.com/ml-engine/docs/custom-prediction-routines#predictor-tarball). The AI Platform runtime comes preinstalled with several packages [listed here](https://cloud.google.com/ml-engine/docs/tensorflow/runtime-version-list). However, it does not come with `google-cloud-bigquery`, so we list that as a dependency below. ``` %%writefile setup.py from setuptools import setup setup( name='taxifare_custom_predict_code', version='0.1', scripts=['predictor.py'], install_requires=[ 'google-cloud-bigquery==1.16.0', ]) !python setup.py sdist --formats=gztar !gsutil cp dist/taxifare_custom_predict_code-0.1.tar.gz gs://$BUCKET/taxifare/predict_code/ ``` ## 4.
Deploy This is similar to how we deploy standard models to AI Platform, with a few extra command line arguments. Note the use of the `--service-account` parameter below. The default service account does not have permissions to read from BigQuery, so we [specify a different service account](https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models#service-account) that does have permission. Specifically, we use the [Compute Engine default service account](https://cloud.google.com/compute/docs/access/service-accounts#compute_engine_default_service_account), which has the IAM project editor role. ``` !gcloud beta ai-platform models create $MODEL_NAME --regions us-central1 --enable-logging --enable-console-logging #!gcloud ai-platform versions delete $VERSION_NAME --model taxifare --quiet !gcloud beta ai-platform versions create $VERSION_NAME \ --model $MODEL_NAME \ --origin gs://$BUCKET/taxifare/model \ --service-account $(gcloud projects list --filter="$PROJECT_ID" --format="value(PROJECT_NUMBER)")-compute@developer.gserviceaccount.com \ --runtime-version 1.14 \ --python-version 3.5 \ --package-uris gs://$BUCKET/taxifare/predict_code/taxifare_custom_predict_code-0.1.tar.gz \ --prediction-class predictor.TaxifarePredictor ``` ## 5. Invoke API **Warning:** You will see `ImportError: file_cache is unavailable when using oauth2client >= 4.0.0 or google-auth` when you run this. While it looks like an error, this is actually just a warning and is safe to ignore; the subsequent cell will still work.
``` import googleapiclient.discovery instances = {'dayofweek' : [6], 'hourofday' : [12], 'pickuplon' : [-73.99], 'pickuplat' : [40.758], 'dropofflat' : [40.742], 'dropofflon' : [-73.97]} service = googleapiclient.discovery.build('ml', 'v1') name = 'projects/{}/models/{}/versions/{}'.format(PROJECT_ID, MODEL_NAME, VERSION_NAME) response = service.projects().predict( name=name, body={'instances': instances} ).execute() if 'error' in response: raise RuntimeError(response['error']) else: print(response['predictions']) ``` Try re-running the query after 15 seconds (the windowing period for Dataflow) and note how the prediction changes in response to the new traffic data!
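Note that the `instances` payload above is columnar (one list per feature) rather than the more common list of per-example dicts, because that is the layout the custom predictor expects. A small sketch of converting row-oriented records into that columnar layout:

```python
def rows_to_columns(records):
    # Convert a list of per-example dicts into the columnar
    # dict-of-lists layout used for `instances` above.
    if not records:
        return {}
    return {key: [rec[key] for rec in records] for key in records[0]}

rows = [
    {'dayofweek': 6, 'hourofday': 12},
    {'dayofweek': 5, 'hourofday': 11},
]
print(rows_to_columns(rows))  # -> {'dayofweek': [6, 5], 'hourofday': [12, 11]}
```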
# Chapter 2 - N-armed Bandits ### Deep Reinforcement Learning _in Action_ ##### Listing 2.1 ``` def get_best_action(actions): best_action = 0 max_action_value = 0 for i in range(len(actions)): #A cur_action_value = get_action_value(actions[i]) #B if cur_action_value > max_action_value: best_action = i max_action_value = cur_action_value return best_action ``` ##### Listing 2.2 ``` import numpy as np from scipy import stats import random import matplotlib.pyplot as plt n = 10 probs = np.random.rand(n) #A eps = 0.1 ``` ##### Listing 2.3 ``` def get_reward(prob, n=10): reward = 0; for i in range(n): if random.random() < prob: reward += 1 return reward reward_test = [get_reward(0.7) for _ in range(2000)] np.mean(reward_test) sum = 0 x = [4,5,6,7] for j in range(len(x)): sum = sum + x[j] sum plt.figure(figsize=(9,5)) plt.xlabel("Reward",fontsize=22) plt.ylabel("# Observations",fontsize=22) plt.hist(reward_test,bins=9) ``` ##### Listing 2.4 ``` # 10 actions x 2 columns # Columns: Count #, Avg Reward record = np.zeros((n,2)) def get_best_arm(record): arm_index = np.argmax(record[:,1],axis=0) return arm_index def update_record(record,action,r): new_r = (record[action,0] * record[action,1] + r) / (record[action,0] + 1) record[action,0] += 1 record[action,1] = new_r return record ``` ##### Listing 2.5 ``` fig,ax = plt.subplots(1,1) ax.set_xlabel("Plays") ax.set_ylabel("Avg Reward") fig.set_size_inches(9,5) rewards = [0] for i in range(500): if random.random() > 0.2: choice = get_best_arm(record) else: choice = np.random.randint(10) r = get_reward(probs[choice]) record = update_record(record,choice,r) mean_reward = ((i+1) * rewards[-1] + r)/(i+2) rewards.append(mean_reward) ax.scatter(np.arange(len(rewards)),rewards) ``` ##### Listing 2.6 ``` def softmax(av, tau=1.12): softm = ( np.exp(av / tau) / np.sum( np.exp(av / tau) ) ) return softm probs = np.random.rand(n) record = np.zeros((n,2)) fig,ax = plt.subplots(1,1) ax.set_xlabel("Plays") ax.set_ylabel("Avg Reward")
fig.set_size_inches(9,5) rewards = [0] for i in range(500): p = softmax(record[:,1],tau=0.7) choice = np.random.choice(np.arange(n),p=p) r = get_reward(probs[choice]) record = update_record(record,choice,r) mean_reward = ((i+1) * rewards[-1] + r)/(i+2) rewards.append(mean_reward) ax.scatter(np.arange(len(rewards)),rewards) ``` ##### Listing 2.9 ``` class ContextBandit: def __init__(self, arms=10): self.arms = arms self.init_distribution(arms) self.update_state() def init_distribution(self, arms): # Num states = Num Arms to keep things simple self.bandit_matrix = np.random.rand(arms,arms) #each row represents a state, each column an arm def reward(self, prob): reward = 0 for i in range(self.arms): if random.random() < prob: reward += 1 return reward def get_state(self): return self.state def update_state(self): self.state = np.random.randint(0,self.arms) def get_reward(self,arm): return self.reward(self.bandit_matrix[self.get_state()][arm]) def choose_arm(self, arm): reward = self.get_reward(arm) self.update_state() return reward import numpy as np import torch arms = 10 N, D_in, H, D_out = 1, arms, 100, arms env = ContextBandit(arms=10) state = env.get_state() reward = env.choose_arm(1) print(state) model = torch.nn.Sequential( torch.nn.Linear(D_in, H), torch.nn.ReLU(), torch.nn.Linear(H, D_out), torch.nn.ReLU(), ) loss_fn = torch.nn.MSELoss() env = ContextBandit(arms) def one_hot(N, pos, val=1): one_hot_vec = np.zeros(N) one_hot_vec[pos] = val return one_hot_vec def running_mean(x,N=50): c = x.shape[0] - N y = np.zeros(c) conv = np.ones(N) for i in range(c): y[i] = (x[i:i+N] @ conv)/N return y def train(env, epochs=5000, learning_rate=1e-2): cur_state = torch.Tensor(one_hot(arms,env.get_state())) #A optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) rewards = [] for i in range(epochs): y_pred = model(cur_state) #B av_softmax = softmax(y_pred.data.numpy(), tau=2.0) #C av_softmax /= av_softmax.sum() #D choice = np.random.choice(arms, p=av_softmax) #E 
cur_reward = env.choose_arm(choice) #F one_hot_reward = y_pred.data.numpy().copy() #G one_hot_reward[choice] = cur_reward #H reward = torch.Tensor(one_hot_reward) rewards.append(cur_reward) loss = loss_fn(y_pred, reward) optimizer.zero_grad() loss.backward() optimizer.step() cur_state = torch.Tensor(one_hot(arms,env.get_state())) #I return np.array(rewards) rewards = train(env) plt.plot(running_mean(rewards,N=500)) ```
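The reward curves in the listings above all rely on the same incremental running-average update used by `update_record`: `new_avg = (n * old_avg + r) / (n + 1)`. A quick sketch confirming that this update reproduces the ordinary mean:

```python
def incremental_mean(rewards):
    # Same update as update_record / the plotting loops:
    # new_avg = (n * old_avg + r) / (n + 1)
    avg, n = 0.0, 0
    for r in rewards:
        avg = (n * avg + r) / (n + 1)
        n += 1
    return avg

rewards = [4, 5, 6, 7]
assert abs(incremental_mean(rewards) - sum(rewards) / len(rewards)) < 1e-9
print(incremental_mean(rewards))  # -> 5.5
```

The advantage of the incremental form is that only the current average and a count need to be stored, not the full reward history.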
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns dating_data = pd.read_csv("Speed Dating Data.csv", encoding="ISO-8859-1") ``` Our questions do not vary by waves, so we are using the most data that we can (i.e. using all waves and only dropping missing value rows) ``` # get last and first dates for each iid dating_data_order_min = dating_data[['iid', 'order']].groupby('iid').min() dating_data_order_max = dating_data[['iid', 'order']].groupby('iid').max() # number of unique iids len(dating_data['iid'].unique()) dating_data = dating_data.merge(dating_data_order_min, how='left', on='iid', suffixes=('', '_min')) dating_data = dating_data.merge(dating_data_order_max, how='left', on='iid', suffixes=('', '_max')) first_dec = dating_data.loc[dating_data['order'] == dating_data['order_min']] last_dec = dating_data.loc[dating_data['order'] == dating_data['order_max']] first_dec = first_dec.loc[~first_dec['dec'].isna()] last_dec = last_dec.loc[~last_dec['dec'].isna()] a = len(first_dec[(first_dec['dec'] == 0)].index) b = len(first_dec[(first_dec['dec'] == 1)].index) c = len(last_dec[(last_dec['dec'] == 0)].index) d = len(last_dec[(last_dec['dec'] == 1)].index) print(a,b) print(c,d) contingency_table = np.array([[a,b],[c,d]]) from scipy import stats stats.chi2_contingency(contingency_table) data_gender0 = dating_data.loc[dating_data['gender'] == 0] data_gender1 = dating_data.loc[dating_data['gender'] == 1] a = len(data_gender0[(data_gender0['dec'] == 0)].index) b = len(data_gender0[(data_gender0['dec'] == 1)].index) c = len(data_gender1[(data_gender1['dec'] == 0)].index) d = len(data_gender1[(data_gender1['dec'] == 1)].index) print(a,b) print(c,d) contingency_table = np.array([[a,b],[c,d]]) stats.chi2_contingency(contingency_table) plt.figure(figsize=(10, 5)) sns.histplot(data=dating_data, x="age", kde=True, stat='density', hue="gender", common_norm=False, multiple="dodge") plt.title("Age density distribution, by gender") 
plt.show() plt.figure(figsize=(10, 5)) sns.histplot(data=data_gender0, x="age", kde=True, stat='density', hue="dec_o", common_norm=False, multiple="dodge") plt.title("Age density distribution, by the decision of the other person, females") plt.show() x1 = data_gender0.loc[data_gender0['dec_o']==0, 'age'].to_numpy() x2 = data_gender0.loc[data_gender0['dec_o']==1, 'age'].to_numpy() stats.kstest(x1,x2) x1 = data_gender1.loc[data_gender1['dec_o']==0, 'age'].to_numpy() x2 = data_gender1.loc[data_gender1['dec_o']==1, 'age'].to_numpy() stats.kstest(x1,x2) plt.figure(figsize=(10, 5)) sns.histplot(data=data_gender1, x="age", kde=True, stat='density', hue="dec_o", common_norm=False, multiple="dodge") plt.title("Age density distribution, by the decision of the other person, males") plt.show() ```
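For reference, `stats.chi2_contingency` builds its expected counts from the row and column totals of the observed table. A from-scratch sketch of that computation for small tables (without the Yates continuity correction that SciPy applies to 2×2 tables by default):

```python
def chi2_statistic(table):
    # Expected count for cell (i, j) = row_total[i] * col_total[j] / grand_total.
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Perfectly proportional rows -> no association, statistic is 0.
print(chi2_statistic([[10, 20], [30, 60]]))  # -> 0.0
```

A larger statistic (compared against the chi-square distribution with `(rows-1)*(cols-1)` degrees of freedom) gives a smaller p-value, which is what the notebook reads off the SciPy results.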
<img src="http://akhavanpour.ir/notebook/images/srttu.gif" alt="SRTTU" style="width: 150px;"/> [![Azure Notebooks](https://notebooks.azure.com/launch.png)](https://notebooks.azure.com/import/gh/Alireza-Akhavan/class.vision) <div style="text-align: left;direction:ltr;font-family:tahoma"> # The MNIST dataset <div style="text-align: left;direction:ltr;font-family:tahoma"> According to Yann LeCun's description, this dataset of handwritten digits consists of 60,000 training examples and 10,000 test examples. <br> The images have already been preprocessed. <br> </div> <p> "database of handwritten digits that has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image" (<a href="http://yann.lecun.com/exdb/mnist/">yann.lecun.com/exdb/mnist/</a>). </p> ``` import tensorflow as tf from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data/', one_hot=True) ``` <div style="text-align: left;direction:ltr;font-family:tahoma"> The input argument <span style="background-color:#dcdcdc "> One-hot = True</span> means that each label is an all-zero array whose only 1 sits at the index representing that digit.
For example: <pre> Number representation: 0 One-hot encoding: [0] [1] [2] [3] [4] [5] Array/vector: 1 0 0 0 0 0 Number representation: 5 One-hot encoding: [0] [1] [2] [3] [4] [5] Array/vector: 0 0 0 0 0 1 </pre> <div style="text-align: left;direction:ltr;font-family:tahoma"> # Getting to know the imported data <div style="text-align: left;direction:ltr;font-family:tahoma"> The imported data is divided into the following three parts:<p></p> <ul> <li> Training data <span style="background-color:#dcdcdc">(mnist.train)</span> <ul> <li>contains 55,000 samples</li> <li>mnist.train.images for the inputs</li> <li>mnist.train.labels for the outputs</li> </ul> </li> <li> Validation data <span style="background-color:#dcdcdc">(mnist.validation)</span> <ul> <li>contains 5,000 samples</li> <li>mnist.validation.images for the inputs</li> <li>mnist.validation.labels for the outputs</li> </ul> </li> <li> Test data <span style="background-color:#dcdcdc">(mnist.test)</span> <ul> <li>contains 10,000 samples</li> <li>mnist.test.images for the inputs</li> <li>mnist.test.labels for the outputs</li> </ul> </li> </ul> </div> <div class="alert alert-block alert-info"> <div style="direction:ltr;text-align:left;font-family:B Lotus, B Nazanin, Tahoma"> Shahid Rajaee Teacher Training University<br>Special Topics 2 - Advanced Deep Learning<br>Alireza Akhavanpour<br>97-98<br> </div> <a href="https://www.srttu.edu/">SRTTU.edu</a> - <a href="http://class.vision">Class.Vision</a> - <a href="http://AkhavanPour.ir">AkhavanPour.ir</a> </div>
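The one-hot convention described above is easy to sketch in plain Python (a minimal illustration, separate from the `input_data` loader itself):

```python
def one_hot(digit, num_classes=10):
    # All zeros except a single 1 at the index of the digit.
    vec = [0] * num_classes
    vec[digit] = 1
    return vec

print(one_hot(5))  # -> [0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
```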
# This program is employed to compute distance between hard data and representative patterns. A vote mechanism is realized. List version, Sep. 2, 2021. ``` # import necessary packages import numpy as np import matplotlib.pyplot as plt from scipy.spatial.distance import euclidean import random %matplotlib inline %config InlineBackend.figure_format='retina' ``` # Functions ## Called functions ``` def Standardization_MinMaxScaler(Image, Height, Width): '''Perform standardization operation on hard data''' LowerBound = -9998 InvestigatedImage = np.copy(Image).reshape(-1) AvailableLocation = np.argwhere(InvestigatedImage > LowerBound).reshape(-1) RegionOfInterest = InvestigatedImage[InvestigatedImage>LowerBound] elevation_min = np.min(RegionOfInterest) elevation_max = np.max(RegionOfInterest) InvestigatedImage[AvailableLocation] = np.copy( (RegionOfInterest - elevation_min)/(elevation_max - elevation_min) ) InvestigatedImage = InvestigatedImage.reshape((Height, Width)) return np.copy(InvestigatedImage), np.ndarray.tolist(InvestigatedImage), elevation_min, elevation_max def Extract_DSPattern_From_SimulationDomain(SimulationDomain, SimulationDomain_List, SG_height, SG_width, Center_y, Center_x, NeighborsAmount, SearchingRadius, UpperBound, LowerBound): '''Extract patterns from simulation domain''' # store the DS pattern values and relative coordinate conditioning_value_list = [] conditioning_y_list = [] conditioning_x_list = [] circle = 1 while(True): # the top line coordinate_x = -circle for coordinate_y in range(-circle,circle+1): location_y = Center_y+coordinate_y location_x = Center_x+coordinate_x if(location_y >=0 and location_y<SG_height and location_x >= 0 and location_x<SG_width): value = SimulationDomain_List[location_y][location_x] if(value >= LowerBound and value <= UpperBound): conditioning_value_list.append(value) conditioning_y_list.append(coordinate_y) conditioning_x_list.append(coordinate_x) # the right line coordinate_y = circle for coordinate_x in
range(-circle+1,circle+1):
            location_y = Center_y+coordinate_y
            location_x = Center_x+coordinate_x
            if(location_y >= 0 and location_y < SG_height and location_x >= 0 and location_x < SG_width):
                value = SimulationDomain_List[location_y][location_x]
                if(value >= LowerBound and value <= UpperBound):
                    conditioning_value_list.append(value)
                    conditioning_y_list.append(coordinate_y)
                    conditioning_x_list.append(coordinate_x)
        # the bottom line
        coordinate_x = +circle
        for coordinate_y in range(-circle,circle):
            location_y = Center_y+coordinate_y
            location_x = Center_x+coordinate_x
            if(location_y >= 0 and location_y < SG_height and location_x >= 0 and location_x < SG_width):
                value = SimulationDomain_List[location_y][location_x]
                if(value >= LowerBound and value <= UpperBound):
                    conditioning_value_list.append(value)
                    conditioning_y_list.append(coordinate_y)
                    conditioning_x_list.append(coordinate_x)
        # the left line
        coordinate_y = -circle
        for coordinate_x in range(-circle+1,circle):
            location_y = Center_y+coordinate_y
            location_x = Center_x+coordinate_x
            if(location_y >= 0 and location_y < SG_height and location_x >= 0 and location_x < SG_width):
                value = SimulationDomain_List[location_y][location_x]
                if(value >= LowerBound and value <= UpperBound):
                    conditioning_value_list.append(value)
                    conditioning_y_list.append(coordinate_y)
                    conditioning_x_list.append(coordinate_x)
        if(len(conditioning_value_list) > NeighborsAmount):
            break
        elif(circle > SearchingRadius):
            break
        else:
            circle += 1
    # plt.imshow(SimulationDomain[max(Center_y-circle,0):min(Center_y+circle+1,SG_height),max(Center_x-circle,0):min(Center_x+circle+1,SG_width)],
    #            vmin=0.0,vmax=1.0,cmap='jet')
    # plt.colorbar()
    # plt.title(f'Hard Data Patterns')
    # plt.show()
    return conditioning_value_list, conditioning_y_list, conditioning_x_list

def Extract_DSPattern_From_TrainingImage(TrainingImage, TrainingImage_List, TI_height, TI_width,
                                         Center_y, Center_x, Conditioning_y, Conditioning_x,
                                         UpperBound, LowerBound):
    training_values_list = []
    for relative_y, relative_x in zip(Conditioning_y, Conditioning_x):
        location_y = Center_y + relative_y
        location_x = Center_x + relative_x
        if(location_y >= 0 and location_y < TI_height and location_x >= 0 and location_x < TI_width):
            value = TrainingImage_List[location_y][location_x]
            if(value >= LowerBound and value <= UpperBound):
                training_values_list.append(value)
            else:
                training_values_list.append((UpperBound+LowerBound)/2)
        else:
            training_values_list.append((UpperBound+LowerBound)/2)
    return training_values_list
```

* Modified

```
def Calculate_Consistency_Representative_Conditioning_PatternVSPattern(Conditioning_values, Training_values,
                                                                       UpperBound, LowerBound):
    '''calculate difference between a representative and a conditioning pattern'''
    neighborsAmount = len(Conditioning_values)
    # euclidean distance
    if Conditioning_values:
        difference = euclidean(Training_values, Conditioning_values)
        # normalized euclidean distance
        difference = difference / (np.sqrt(neighborsAmount) * (UpperBound - LowerBound))
        # print(f'normalized difference {difference}')
    else:
        difference = np.nan
    return difference

def Read_Representative_Locations(TI_Amount):
    '''read information about representative patterns'''
    Clusters_Amount = np.loadtxt(fname='data/Result_RepresentativePatterns_ClusterAmount.txt',dtype=int,delimiter=',')
    Clusters_Amount = Clusters_Amount.reshape(TI_Amount)
    Representativs_TI_Index = np.zeros(np.sum(Clusters_Amount))
    start = Clusters_Amount[0]
    for TI_index in range(1,TI_Amount):
        end = start + Clusters_Amount[TI_index]
        Representativs_TI_Index[start:end] = TI_index
        start = end
    locations = np.loadtxt(fname='data/Result_RepresentativePatterns_Locations.txt',dtype=int,delimiter=',')
    Representatives_y = locations[::2]
    locations = np.delete(arr=locations,obj=0)
    Representatives_x = locations[::2]
    # print(f'Representatives amount {Representativs_TI_Index.shape}')
    # print(f'Representatives amount {Representatives_y.shape}')
    # print(f'Representatives amount {Representatives_x.shape}')
    return Clusters_Amount.astype(int), Representativs_TI_Index.astype(int), Representatives_y.astype(int), Representatives_x.astype(int)

def Compute_Distance_HardData_TI(Distance_HD_Pattern_List, Representatives_TI_Index, Num_conditioning,
                                 Num_Representatives, TI_Amount):
    '''the minimum distance between hard data and representatives becomes the indicator between HD and TI'''
    Distance_HD_TI_List = np.ndarray.tolist( np.full((Num_conditioning, TI_Amount),fill_value=10.0) )
    for index_conditioning in range(Num_conditioning):
        for index_representatives, index_TI in enumerate(Representatives_TI_Index):
            distance_pattern = Distance_HD_Pattern_List[index_conditioning][index_representatives]
            if(Distance_HD_TI_List[index_conditioning][index_TI] > distance_pattern):
                Distance_HD_TI_List[index_conditioning][index_TI] = distance_pattern
    return Distance_HD_TI_List

def TrainingImage_Election_RandomSearching(Distance_HD_TI_List, Num_Conditioning, TI_Amount):
    '''elect a training image from candidates'''
    Distance_HD_TI = np.array(Distance_HD_TI_List)
    iteration_max = 10000
    Fitness_mean = np.full((TI_Amount),fill_value=9999999.9)
    Fitness_variance = np.full((TI_Amount),fill_value=9999999.9)
    SelectedTI_matrix = np.zeros((TI_Amount,TI_Amount),dtype=int)
    for TI_Amount_Selected in tqdm(range(1,TI_Amount)):
        # during this iteration, we select #TI_Amount_Selected training images
        for iteration_counter in range(iteration_max):
            TrainingImage_indices = np.arange(start=0,stop=TI_Amount,step=1,dtype=int)
            np.random.shuffle(TrainingImage_indices)
            Test_TI = TrainingImage_indices[:TI_Amount_Selected]
            DistanceMatrix = Distance_HD_TI[:, Test_TI]
            DistanceMatrix_min = np.amin(DistanceMatrix, axis=1)
            fitness_mean = np.mean(DistanceMatrix_min)
            fitness_variance = np.var(DistanceMatrix_min)
            if(fitness_mean < Fitness_mean[TI_Amount_Selected] and fitness_variance < Fitness_variance[TI_Amount_Selected]):
                solution = np.zeros(TI_Amount,dtype=int)
                solution[Test_TI] = 1
                SelectedTI_matrix[TI_Amount_Selected] = solution
                Fitness_mean[TI_Amount_Selected] = fitness_mean
                Fitness_variance[TI_Amount_Selected] = fitness_variance
    plt.plot(np.arange(1,TI_Amount,1),Fitness_mean[1:])
    plt.title(f'The mean distance between hard data pattern and representatives')
    plt.xlabel(f'The number of selected TI')
    plt.ylabel(f'The mean distance')
    plt.show()
    return SelectedTI_matrix

def TrainingImage_Election_ExhaustiveSearching(Distance_HD_TI_List, Num_Conditioning, TI_Amount, TI_Selected_Num):
    '''elect a training image from candidates'''
    print(f'Launching an exhaustive searching program to find the best TI set')
    Distance_HD_TI = np.array(Distance_HD_TI_List)
    fitness_mean_currentBest = 9999999.9
    fitness_variance_currentBest = 9999999.9
    # conduct an exhaustive searching
    Test_TI = np.arange(start=0,stop=TI_Selected_Num,step=1,dtype=int)
    while(True):
        # print(f'The testing TI set is {Test_TI}')
        DistanceMatrix = Distance_HD_TI[:, Test_TI]
        DistanceMatrix_min = np.amin(DistanceMatrix, axis=1)
        fitness_mean = np.mean(DistanceMatrix_min)
        if(fitness_mean < fitness_mean_currentBest):
            fitness_mean_currentBest = fitness_mean
            fitness_variance_currentBest = np.var(DistanceMatrix_min)
            CurrentBest_TI = np.copy(Test_TI)
        elif( fitness_mean == fitness_mean_currentBest ):
            fitness_variance = np.var(DistanceMatrix_min)
            if(fitness_variance < fitness_variance_currentBest):
                fitness_mean_currentBest = fitness_mean
                fitness_variance_currentBest = fitness_variance
                CurrentBest_TI = np.copy(Test_TI)
        if(np.min(Test_TI) == TI_Amount-1):
            break
        # update the TI candidate set
        improve_bit = TI_Selected_Num - 1
        while(True):
            value = Test_TI[improve_bit]
            value += 1
            if(value == TI_Amount):
                Test_TI[improve_bit] = 0
                improve_bit -= 1
            else:
                Test_TI[improve_bit] = value
                break
    print(f'The best TI set is {CurrentBest_TI}')
    print(f'The minimum mean distance is {fitness_mean_currentBest}')
    return CurrentBest_TI

def TrainingImage_Election_ParticleSwarmOptimization(Distance_HD_TI_List, Num_Conditioning, TI_Amount, TI_Selected_Num):
    '''elect a training image from candidates'''
    print(f'Launching the particle swarm optimization algorithm')
    Distance_HD_TI = np.array(Distance_HD_TI_List)
    iteration_max = 10000  # max number of iterations
    particle_size = 15     # the number of particles
    w = 0.8   # inertia constant
    c1 = 2    # cognitive constant
    c2 = 2    # social constant
    global_best_fitness = 9999999.9
    global_best_position = np.zeros(TI_Selected_Num)
    population_position = np.random.uniform(low=0,high=TI_Amount,size=(particle_size, TI_Selected_Num))
    population_position = population_position.astype(int)
    # population_position = np.sort(population_position, axis=1)
    population_individual_best_fitness = np.full(shape=particle_size,fill_value=9999999.9)
    population_individual_best_position = np.copy(population_position)
    population_velocity = np.random.uniform(low=-1, high=1, size=(particle_size,TI_Selected_Num))
    for iteration_counter in range(iteration_max):
        # evaluate each individual particle
        for index_individual in range(particle_size):
            Test_TI = np.copy(population_position[index_individual])
            DistanceMatrix = Distance_HD_TI[:, Test_TI]
            DistanceMatrix_min = np.amin(DistanceMatrix, axis=1)
            fitness = np.mean(DistanceMatrix_min)
            # update the individual best
            if(fitness < population_individual_best_fitness[index_individual]):
                population_individual_best_fitness[index_individual] = fitness
                population_individual_best_position[index_individual] = np.copy(Test_TI)
            # update the global best
            if(fitness < global_best_fitness):
                global_best_fitness = fitness
                global_best_position = np.copy(Test_TI)
        for index_individual in range(particle_size):
            # update the velocity
            r1 = random.random()
            r2 = random.random()
            cognitive_velocity = c1 * r1 * (population_individual_best_position[index_individual] - population_position[index_individual])
            social_velocity = c2 * r2 * (global_best_position - population_position[index_individual])
            velocity = w * population_velocity[index_individual] + cognitive_velocity + social_velocity
            # update the position
            position = population_position[index_individual] + velocity
            # position = np.sort(position)
            position = np.clip(position, a_min=0, a_max=TI_Amount-1)
            population_position[index_individual] = position.astype(int)
    global_best_position = np.sort(global_best_position)
    print(f'Global best position {global_best_position}')
    print(f'Global best fitness {global_best_fitness}')
    return global_best_position
```

## Main Function

* Modified

```
def Main_ConsistencyCalculation_RepresentativeSet_ConditioningSet(TrainingImageSet, TrainingImageSet_List,
                                                                  TI_Amount, TI_Height, TI_Width,
                                                                  SimulationGrid, SG_Height, SG_Width):
    '''calculate difference between each representative and each conditioning pattern'''
    DS_NeighborsAmount = 30
    DS_SearchingRadius = 15
    ConditioningStride = 10
    UpperBound = 1.0
    LowerBound = 0.0
    ## Normalize scale to 0-1, where -9999999 is for undefined areas
    SimulationGrid_standardized, SimulationGrid_standardized_List, elevation_max, elevation_min \
        = Standardization_MinMaxScaler(Image = SimulationGrid)
    plt.imshow(SimulationGrid_standardized,vmin=0.0,vmax=1.0,cmap='jet')
    plt.colorbar()
    plt.title(f'Flight lines After Standardization')
    plt.show()
    ## initialize consistency matrix
    DistanceMatrix_HardData_Representatives = []
    ## get information about representatives
    Clusters_Amount, Representatives_TI_Index, Representatives_y, Representatives_x \
        = Read_Representative_Locations(TI_Amount)
    Representatives_TI_Index_List = Representatives_TI_Index.tolist()
    Representatives_y_List = Representatives_y.tolist()
    Representatives_x_List = Representatives_x.tolist()
    ## iterate all conditioning patterns
    DistanceMatrix_HardData_Representatives_List = []
    Locations_y_List = np.ndarray.tolist( np.arange(start=0,stop=SG_Height,step=ConditioningStride) )
    Locations_x_List = np.ndarray.tolist( np.arange(start=0,stop=SG_Width, step=ConditioningStride) )
    Conditioning_Amount = 0
    for location_y in Locations_y_List:
        for location_x in Locations_x_List:
            # print(f'Conditioning pattern #{Conditioning_Amount}')
            # extract a conditioning pattern
            conditioning_values_List, conditioning_y_List, conditioning_x_List \
                = Extract_DSPattern_From_SimulationDomain(SimulationDomain = SimulationGrid_standardized,
                                                          SimulationDomain_List = SimulationGrid_standardized_List,
                                                          SG_height = SG_Height, SG_width = SG_Width,
                                                          Center_y = location_y, Center_x = location_x,
                                                          NeighborsAmount = DS_NeighborsAmount,
                                                          SearchingRadius = DS_SearchingRadius,
                                                          UpperBound = UpperBound, LowerBound = LowerBound)
            Conditioning_Amount += 1
            # scan all representatives
            for TI_index, training_y, training_x in zip(Representatives_TI_Index_List,
                                                        Representatives_y_List, Representatives_x_List):
                # print(f'    Training image #{TI_index}')
                # extract a training pattern according to relative coordinates
                training_values_List \
                    = Extract_DSPattern_From_TrainingImage(TrainingImage = TrainingImageSet[TI_index],
                                                           TrainingImage_List = TrainingImageSet_List[TI_index],
                                                           TI_height = TI_Height, TI_width = TI_Width,
                                                           Center_y = training_y, Center_x = training_x,
                                                           Conditioning_y = conditioning_y_List,
                                                           Conditioning_x = conditioning_x_List,
                                                           UpperBound = UpperBound, LowerBound = LowerBound)
                # calculate consistency between a conditioning pattern and a training pattern
                difference \
                    = Calculate_Consistency_Representative_Conditioning_PatternVSPattern(
                        Conditioning_values = conditioning_values_List,
                        Training_values = training_values_List,
                        UpperBound = UpperBound, LowerBound = LowerBound)
                # store difference in computer memory
                DistanceMatrix_HardData_Representatives_List.append(difference)
    # reshape consistency matrix (n_conditioning, n_representatives)
    Representatives_Amount = np.sum(Clusters_Amount)
    DistanceMatrix_HardData_Representatives = np.array(DistanceMatrix_HardData_Representatives_List)
    DistanceMatrix_HardData_Representatives \
        = DistanceMatrix_HardData_Representatives.reshape((Conditioning_Amount, Representatives_Amount))
    DistanceMatrix_HardData_Representatives_List = np.ndarray.tolist(DistanceMatrix_HardData_Representatives)
    # calculate distance between conditioning pattern and training images
    DistanceMatrix_HardData_TrainingImages_List \
        = Compute_Distance_HardData_TI(Distance_HD_Pattern_List = DistanceMatrix_HardData_Representatives_List,
                                       Representatives_TI_Index = Representatives_TI_Index,
                                       Num_conditioning = Conditioning_Amount,
                                       Num_Representatives = Representatives_Amount,
                                       TI_Amount = TI_Amount)
    # conduct a campaign to elect training images
    Ranks_TI, Votes_TI \
        = TrainingImage_Election(Distance_HD_TI_List = DistanceMatrix_HardData_TrainingImages_List,
                                 Num_Conditioning = Conditioning_Amount, TI_Amount = TI_Amount)
    fig, ax = plt.subplots()
    ax.plot(np.arange(0,TI_Amount,1), Votes_TI)
    ax.set(xlabel='TI index', ylabel='Vote Count', title='Training Image Election')
    ax.invert_yaxis()
    ax.grid()
    plt.show()
    return Ranks_TI, Votes_TI

def Main_ConsistencyCalculation_RepresentativeSet_ConditioningSet(TrainingImageSet, TrainingImageSet_List,
                                                                  TI_Amount, TI_Height, TI_Width,
                                                                  SimulationGrid, SG_Height, SG_Width,
                                                                  TI_Selected_Num):
    '''calculate difference between each representative and each conditioning pattern'''
    DS_NeighborsAmount = 30
    DS_SearchingRadius = 15
    ConditioningStride = 10
    UpperBound = 1.0
    LowerBound = 0.0
    SimulationGrid_standardized, SimulationGrid_standardized_List, elevation_max, elevation_min = \
        Standardization_MinMaxScaler(Image = SimulationGrid, Height = SG_Height, Width = SG_Width)
    plt.imshow(SimulationGrid_standardized,vmin=0.0,vmax=1.0,cmap='jet')
    plt.colorbar()
    plt.title(f'Flight lines After Standardization')
    plt.show()
    # initialize consistency matrix
    DistanceMatrix_HardData_Representatives = []
    # get information about representatives
    Clusters_Amount, Representatives_TI_Index, Representatives_y, Representatives_x = Read_Representative_Locations(TI_Amount)
    Representatives_TI_Index_List = Representatives_TI_Index.tolist()
    Representatives_y_List = Representatives_y.tolist()
    Representatives_x_List = Representatives_x.tolist()
    # iterate all conditioning patterns
    DistanceMatrix_HardData_Representatives_List = []
    Conditioning_Amount = 0
    Locations_y_List = np.ndarray.tolist( np.arange(start=0,stop=SG_Height,step=ConditioningStride) )
    Locations_x_List = np.ndarray.tolist( np.arange(start=0,stop=SG_Width, step=ConditioningStride) )
    for location_y in Locations_y_List:
        for location_x in Locations_x_List:
            # print(f'Conditioning pattern #{Conditioning_Amount}')
            # extract a conditioning pattern
            conditioning_values_List, conditioning_y_List, conditioning_x_List = Extract_DSPattern_From_SimulationDomain(
                SimulationDomain = SimulationGrid_standardized,
                SimulationDomain_List = SimulationGrid_standardized_List,
                SG_height = SG_Height, SG_width = SG_Width,
                Center_y = location_y, Center_x = location_x,
                NeighborsAmount = DS_NeighborsAmount, SearchingRadius = DS_SearchingRadius,
                UpperBound = UpperBound, LowerBound = LowerBound )
            Conditioning_Amount += 1
            # scan all representatives
            for TI_index, training_y, training_x in zip(Representatives_TI_Index_List,
                                                        Representatives_y_List, Representatives_x_List):
                # print(f'    Training image #{TI_index}')
                # extract a training pattern according to relative coordinates
                training_values_List = Extract_DSPattern_From_TrainingImage(
                    TrainingImage = TrainingImageSet[TI_index],
                    TrainingImage_List = TrainingImageSet_List[TI_index],
                    TI_height = TI_Height, TI_width = TI_Width,
                    Center_y = training_y, Center_x = training_x,
                    Conditioning_y = conditioning_y_List, Conditioning_x = conditioning_x_List,
                    UpperBound = UpperBound, LowerBound = LowerBound)
                # calculate consistency between a conditioning pattern and a training pattern
                difference = Calculate_Consistency_Representative_Conditioning_PatternVSPattern(
                    Conditioning_values = conditioning_values_List,
                    Training_values = training_values_List,
                    UpperBound = UpperBound, LowerBound = LowerBound)
                # store difference in computer memory
                DistanceMatrix_HardData_Representatives_List.append(difference)
    # reshape consistency matrix (n_conditioning, n_representatives)
    Representatives_Amount = np.sum(Clusters_Amount)
    DistanceMatrix_HardData_Representatives = np.array(DistanceMatrix_HardData_Representatives_List)
    DistanceMatrix_HardData_Representatives = DistanceMatrix_HardData_Representatives.reshape((Conditioning_Amount, Representatives_Amount))
    DistanceMatrix_HardData_Representatives_List = np.ndarray.tolist(DistanceMatrix_HardData_Representatives)
    # calculate distance between conditioning pattern and training images
    DistanceMatrix_HardData_TrainingImages_List = Compute_Distance_HardData_TI(
        Distance_HD_Pattern_List = DistanceMatrix_HardData_Representatives_List,
        Representatives_TI_Index = Representatives_TI_Index,
        Num_conditioning = Conditioning_Amount,
        Num_Representatives = Representatives_Amount,
        TI_Amount = TI_Amount)
    # conduct a campaign to elect training images
    # SelectedTI_matrix = TrainingImage_Election_RandomSearching(Distance_HD_TI_List = DistanceMatrix_HardData_TrainingImages_List,
    #                                                           Num_Conditioning = Conditioning_Amount,
    #                                                           TI_Amount = TI_Amount)
    # Selected_TI = np.argwhere(SelectedTI_matrix[TI_Selected_Num] == 1).reshape(-1)
    # Selected_TI = TrainingImage_Election_ExhaustiveSearching(Distance_HD_TI_List = DistanceMatrix_HardData_TrainingImages_List,
    #                                                          Num_Conditioning = Conditioning_Amount,
    #                                                          TI_Amount = TI_Amount,
    #                                                          TI_Selected_Num = TI_Selected_Num)
    # print(f'The selected training images are #{Selected_TI}')
    Selected_TI = TrainingImage_Election_ParticleSwarmOptimization(
        Distance_HD_TI_List = DistanceMatrix_HardData_TrainingImages_List,
        Num_Conditioning = Conditioning_Amount,
        TI_Amount = TI_Amount,
        TI_Selected_Num = TI_Selected_Num)
    print(f'The selected training images are #{Selected_TI}')
    return Selected_TI
```

# Run

Load all training images

```
# input the candidate TIs
TI_Amount = 166
TI_Height = 200
TI_Width = 200
LowerBound = -9998
TrainingImages = np.loadtxt(fname='data/TI_Standardization_166.txt',dtype=float,delimiter=',')
TrainingImages = TrainingImages.reshape((TI_Amount, TI_Height, TI_Width))
TrainingImages_List = TrainingImages.tolist()

index_TI = 77
plt.imshow(TrainingImages[index_TI],vmin=0,vmax=1,cmap='terrain')
plt.colorbar()
```

Load radar line data (hard data)

```
# read hard data
LineData = np.load('data/LineData.npy')
LineBloc_space = np.load('data/DemoBloc_space.npy')

plt.figure(figsize=(8,2.5))
plt.imshow(LineData, interpolation='none', cmap='terrain')
plt.colorbar(fraction=0.01)
plt.title('Flight radar lines')
plt.show()

plt.figure(figsize=(8,2.5))
plt.imshow(LineBloc_space)
plt.colorbar(fraction=0.01)
plt.title('Flight radar lines areas')
plt.show()
```

Run main functions

```
# main function
# the parameter of investigating area
import time
start_time = time.time()

Num_Selected_TrainingImage = 3
Selected_TrainingImage_Index = np.zeros((0))
TI_selection_all = []
## The last local area is empty
for index_area in np.unique(LineBloc_space):
# for index_area in range(1):
    print(f'Local Area {index_area}')
    SG_height = np.argwhere([LineBloc_space==index_area])[:,1].max() - \
                np.argwhere([LineBloc_space==index_area])[:,1].min()+1
    SG_width = np.argwhere([LineBloc_space==index_area])[:,2].max() - \
               np.argwhere([LineBloc_space==index_area])[:,2].min()+1
    # slice research area
    HardData = LineData[LineBloc_space==index_area].reshape(SG_height, SG_width)
    plt.imshow(HardData,vmin=-2500,vmax=2000,cmap='jet')
    plt.colorbar()
    plt.title(f'Flight lines')
    plt.show()
    # rank TIs according to consistency
    Selected_TI = Main_ConsistencyCalculation_RepresentativeSet_ConditioningSet(
        TrainingImageSet = TrainingImages,
        TrainingImageSet_List = TrainingImages_List,
        TI_Amount = TI_Amount, TI_Height = TI_Height, TI_Width = TI_Width,
        SimulationGrid = HardData, SG_Height = SG_height, SG_Width = SG_width,
        TI_Selected_Num = Num_Selected_TrainingImage)
    TI_selection_all.append(Selected_TI)
    Selected_TrainingImage_Index = np.concatenate((Selected_TrainingImage_Index, Selected_TI))
    Selected_TrainingImage_Index = np.unique(Selected_TrainingImage_Index).astype(int)
    print(f'The resulting selected training images are {Selected_TrainingImage_Index}')

elapsed_time = time.time() - start_time
print(elapsed_time)

# The last local area LineData[-1] is empty and not saved
np.save('Selected_TrainingImage_Index', Selected_TrainingImage_Index)
np.save('TI_selection_all', TI_selection_all)
```
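For reference, the normalized pattern distance computed inside `Calculate_Consistency_Representative_Conditioning_PatternVSPattern` above can be sketched without NumPy or SciPy. This is a minimal illustration only; the function name `normalized_pattern_distance` and the example patterns below are hypothetical, not part of the code above.

```python
import math

def normalized_pattern_distance(conditioning, training, upper, lower):
    # Euclidean distance between two equal-length patterns, rescaled by
    # sqrt(n) times the value range so the result lies in [0, 1].
    d = math.sqrt(sum((t - c) ** 2 for t, c in zip(training, conditioning)))
    return d / (math.sqrt(len(conditioning)) * (upper - lower))

# Identical patterns are at distance 0; maximally different patterns
# (every value at opposite ends of the [lower, upper] range) are at distance 1.
print(normalized_pattern_distance([0.0, 0.0], [0.0, 0.0], 1.0, 0.0))  # 0.0
print(normalized_pattern_distance([0.0, 0.0], [1.0, 1.0], 1.0, 0.0))  # 1.0
```

Because the divisor grows with the neighbourhood size, distances computed from conditioning patterns with different numbers of neighbours remain comparable, which is what makes the minimum-over-representatives step in `Compute_Distance_HardData_TI` meaningful.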
# Grover's Algorithm

In this section, we introduce Grover's algorithm and how it can be used to solve unstructured search problems. We then implement the quantum algorithm using Qiskit, and run it on a simulator and a device.

## Contents

1. [Introduction](#introduction)
2. [Example: 2 Qubits](#2qubits)
   2.1 [Simulation](#2qubits-simulation)
   2.2 [Device](#2qubits-device)
3. [Example: 3 Qubits](#3qubits)
   3.1 [Simulation](#3qubits-simulation)
   3.2 [Device](#3qubits-device)
4. [Problems](#problems)
5. [Solving Sudoku using Grover's Algorithm](#sudoku)
6. [References](#references)

## 1. Introduction <a id='introduction'></a>

You may have heard that one of the many advantages a quantum computer has over a classical computer is its superior speed searching databases. Grover's algorithm demonstrates this capability. This algorithm can speed up an unstructured search problem quadratically, but its uses extend beyond that; it can serve as a general trick or subroutine to obtain quadratic run time improvements for a variety of other algorithms. This is called the amplitude amplification trick.

### Unstructured Search

Suppose you are given a large list of $N$ items. Among these items there is one winner $w$ that we wish to locate. Think of each item in the list as a box of a particular color. Say all items in the list are gray except the purple winner $w$.

![image1](images/grover_list.png)

To find the purple box (the *marked item*) using classical computation, one would have to check on average $N/2$ of these boxes, and in the worst case, all $N$ of them. On a quantum computer, however, we can find the marked item in roughly $\sqrt N$ steps with Grover's amplitude amplification trick. A quadratic speedup is indeed a substantial time-saver for finding marked items in long lists. Additionally, the algorithm does not use the list's internal structure, which makes it *generic*; this is why it immediately provides a quadratic quantum speed-up for many classical problems.

### Creating an Oracle

For the examples in this textbook, our 'database' is comprised of all the possible computational basis states our qubits can be in. For example, if we have 3 qubits, our list is the states $|000\rangle, |001\rangle, \dots |111\rangle$ (i.e. the states $|0\rangle \rightarrow |7\rangle$).

Grover's algorithm solves oracles that add a negative phase to the solution states. I.e. for any state $|x\rangle$ in the computational basis:

$$
U_\omega|x\rangle = \bigg\{
\begin{aligned}
\phantom{-}|x\rangle \quad \text{if} \; x \neq \omega \\
-|x\rangle \quad \text{if} \; x = \omega \\
\end{aligned}
$$

This oracle will be a diagonal matrix, where the entry that corresponds to the marked item will have a negative phase. For example, if we have three qubits and $\omega = \text{101}$, our oracle will have the matrix:

$$
U_\omega =
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\end{bmatrix}
\begin{aligned}
\\ \\ \\ \\ \\ \\
\leftarrow \omega = \text{101}\\
\\ \\ \\
\end{aligned}
$$

What makes Grover's algorithm so powerful is how easy it is to convert a problem to an oracle of this form. There are many computational problems in which it is difficult to _find_ a solution, but relatively easy to _verify_ a solution. For example, we can easily verify a solution to a [Sudoku](https://en.wikipedia.org/wiki/Sudoku) by checking all the rules are satisfied. For these problems, we can create a function $f$ that takes a proposed solution $x$, and returns $f(x) = 0$ if $x$ is not a solution ($x \neq \omega$) and $f(x) = 1$ for a valid solution ($x = \omega$). Our oracle can then be described as:

$$
U_\omega|x\rangle = (-1)^{f(x)}|x\rangle
$$

and the oracle's matrix will be a diagonal matrix of the form:

$$
U_\omega =
\begin{bmatrix}
(-1)^{f(0)} & 0 & \cdots & 0 \\
0 & (-1)^{f(1)} & \cdots & 0 \\
\vdots & 0 & \ddots & \vdots \\
0 & 0 & \cdots & (-1)^{f(2^n)} \\
\end{bmatrix}
$$

<details>
<summary>Circuit construction of a Grover oracle (click to expand)</summary>
<p>
If we have our classical function $f(x)$, we can convert it to a reversible circuit of the form:
</p><p>
<img alt="A Classical Reversible Oracle" src="images/grover_boolean_oracle.svg">
</p><p>
If we initialize the 'output' qubit in the state $|{-}\rangle$, the phase kickback effect turns this into a Grover oracle (similar to the workings of the Deutsch-Jozsa oracle):
</p><p>
<img alt="Grover Oracle Constructed from a Classical Reversible Oracle" src="images/grover_phase_oracle.svg">
</p><p>
We then ignore the auxiliary qubit ($|{-}\rangle$).
</p>
</details>

For the next part of this chapter, we aim to teach the core concepts of the algorithm. We will create example oracles where we know $\omega$ in advance, and not worry ourselves about whether these oracles are useful or not. At the end of the chapter, we will cover a short example where we create an oracle to solve a problem (sudoku).

### Amplitude Amplification

So how does the algorithm work? Before looking at the list of items, we have no idea where the marked item is. Therefore, any guess of its location is as good as any other, which can be expressed in terms of a uniform superposition: $|s \rangle = \frac{1}{\sqrt{N}} \sum_{x = 0}^{N -1} | x \rangle.$

If at this point we were to measure in the standard basis $\{ | x \rangle \}$, this superposition would collapse, according to the fifth quantum law, to any one of the basis states with the same probability of $\frac{1}{N} = \frac{1}{2^n}$. Our chances of guessing the right value $w$ are therefore $1$ in $2^n$, as could be expected. Hence, on average we would need to try about $N = 2^n$ times to guess the correct item.

Enter the procedure called amplitude amplification, which is how a quantum computer significantly enhances this probability. This procedure stretches out (amplifies) the amplitude of the marked item, which shrinks the other items' amplitudes, so that measuring the final state will return the right item with near-certainty.

This algorithm has a nice geometrical interpretation in terms of two reflections, which generate a rotation in a two-dimensional plane. The only two special states we need to consider are the winner $| w \rangle$ and the uniform superposition $| s \rangle$. These two vectors span a two-dimensional plane in the vector space
$\mathbb{C}^N$. They are not quite perpendicular because $| w \rangle$ occurs in the superposition with amplitude $N^{-1/2}$ as well. We can, however, introduce an additional state $|s'\rangle$ that is perpendicular to $| w \rangle$, obtained from $|s \rangle$ by removing $| w \rangle$ and rescaling.

**Step 1**: The amplitude amplification procedure starts out in the uniform superposition $| s \rangle$, which is easily constructed from $| s \rangle = H^{\otimes n} | 0 \rangle^n$.

![image2](images/grover_step1.jpg)

The left graphic corresponds to the two-dimensional plane spanned by $|w\rangle$ and $|s'\rangle$. The initial state is expressed as $|s\rangle = \sin \theta | w \rangle + \cos \theta | s' \rangle,$ where $\theta = \arcsin \langle s | w \rangle = \arcsin \frac{1}{\sqrt{N}}$. The right graphic is a bar graph of the amplitudes of the state $| s \rangle$ for the case $N = 2^2 = 4$. The average amplitude is indicated by a dashed line.

**Step 2**: We apply the oracle reflection $U_f$ to the state $|s\rangle$.

![image3](images/grover_step2.jpg)

Geometrically this corresponds to a reflection of the state $|s\rangle$ about $|s'\rangle$. This transformation means that the amplitude in front of the $|w\rangle$ state becomes negative, which in turn means that the average amplitude has been lowered. (Note how the dashed line in the right-hand graph has dropped.)

**Step 3**: We now apply an additional reflection ($U_s$) about the state $|s\rangle$: $U_s = 2|s\rangle\langle s| - \mathbb{1}$. This transformation maps the state to $U_s U_f| s \rangle$ and completes the transformation. (Note how the amplitude at $w$ in the right-hand graph has been amplified.)

![image4](images/grover_step3.jpg)

Two reflections always correspond to a rotation. The transformation $U_s U_f$ rotates the initial state $|s\rangle$ closer towards the winner $|w\rangle$ (see the left-hand graphic of step 3). The action of the reflection $U_s$ in the amplitude bar diagram can be understood as a reflection about the average amplitude. Since the average amplitude has been lowered by the first reflection, this transformation boosts the negative amplitude of $|w\rangle$ to roughly three times its original value, while it decreases the other amplitudes. We then go to **step 2** to repeat the application. This procedure will be repeated several times to zero in on the winner $w$.

After $t$ steps the state will have transformed to $| \psi_t \rangle = (U_s U_f)^t | s \rangle$.

How many times do we need to apply the rotation? It turns out that roughly $\sqrt{N}$ rotations suffice. This becomes clear when looking at the amplitudes of the state $| \psi \rangle$. We can see that the amplitude of $| w \rangle$ grows linearly with the number of applications ($\sim t N^{-1/2}$). However, since we are dealing with amplitudes and not probabilities, the vector space's dimension enters as a square root. Therefore it is the amplitude, and not just the probability, that is being amplified in this procedure.

In the case that there are multiple solutions, $M$, it can be shown that roughly $\sqrt{(N/M)}$ rotations will suffice.

![image5](images/grover_circuit_high_level.png)

## 2.
Example: 2 Qubits <a id='2qubits'></a>

Let's first have a look at Grover's algorithm for $N=4$, which is realized with 2 qubits. In this particular case, only <b>one rotation</b> is required to rotate the initial state $|s\rangle$ to the winner $|w\rangle$ [3]:
<ol>
<li>
Following the introduction above, in the case $N=4$ we have
$$\theta = \arcsin \frac{1}{2} = \frac{\pi}{6}.$$
</li>
<li>
After $t$ steps, we have
$$(U_s U_\omega)^t | s \rangle = \sin \theta_t | \omega \rangle + \cos \theta_t | s' \rangle ,$$
where
$$\theta_t = (2t+1)\theta.$$
</li>
<li>
In order to obtain $| \omega \rangle$ we need $\theta_t = \frac{\pi}{2}$. Plugging $\theta=\frac{\pi}{6}$ into the above equation gives $t=1$. This implies that after $t=1$ rotation the searched element is found.
</li>
</ol>

We will now follow through an example using a specific oracle.

#### Oracle for $\lvert \omega \rangle = \lvert 11 \rangle$

Let us look at the case $\lvert w \rangle = \lvert 11 \rangle$. The oracle $U_\omega$ in this case acts as follows:

$$U_\omega | s \rangle = U_\omega \frac{1}{2}\left( |00\rangle + |01\rangle + |10\rangle + |11\rangle \right) = \frac{1}{2}\left( |00\rangle + |01\rangle + |10\rangle - |11\rangle \right).$$

or:

$$
U_\omega =
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1 \\
\end{bmatrix}
$$

You may recognize this as the controlled-Z gate. That is, for this example, our oracle is simply the controlled-Z gate:

![image6](images/grover_circuit_2qbuits_oracle_11.svg)

#### Reflection $U_s$

In order to complete the circuit we need to implement the reflection $U_s = 2|s\rangle\langle s| - \mathbb{1}$. Since this is a reflection about $|s\rangle$, we want to add a negative phase to every state orthogonal to $|s\rangle$.

One way we can do this is to use the operation that transforms the state $|s\rangle \rightarrow |0\rangle$, which we can implement by applying a Hadamard gate to each qubit:

$$H^{\otimes n}|s\rangle = |0\rangle$$

Then we apply a circuit that adds a negative phase to the states orthogonal to $|0\rangle$:

$$U_0 \frac{1}{2}\left( \lvert 00 \rangle + \lvert 01 \rangle + \lvert 10 \rangle + \lvert 11 \rangle \right) = \frac{1}{2}\left( \lvert 00 \rangle - \lvert 01 \rangle - \lvert 10 \rangle - \lvert 11 \rangle \right)$$

i.e. the signs of each state are flipped except for $\lvert 00 \rangle$. As can easily be verified, one way of implementing $U_0$ is the following circuit:

![Circuit for reflection around |0>](images/grover_circuit_2qbuits_reflection_0.svg)

Finally, we perform the operation that transforms the state $|0\rangle \rightarrow |s\rangle$ (the Hadamard gates again):

$$H^{\otimes n}U_0 H^{\otimes n} = U_s$$

The complete circuit for $U_s$ looks like this:

![Circuit for reflection
around |s>](images/grover_circuit_2qbuits_reflection.svg)

#### Full Circuit for $\lvert w \rangle = |11\rangle$

Since in the particular case of $N=4$ only one rotation is required, we can combine the above components to build the full circuit for Grover's algorithm for the case $\lvert w \rangle = |11\rangle$:

![image10](images/grover_circuit_2qubits_full_11.svg)

### 2.1 Qiskit Implementation

We now implement Grover's algorithm for the above 2-qubit case with $\lvert w \rangle = |11\rangle$.

```
# initialization
import matplotlib.pyplot as plt
import numpy as np

# importing Qiskit
from qiskit import IBMQ, Aer, QuantumCircuit, ClassicalRegister, QuantumRegister, execute
from qiskit.providers.ibmq import least_busy
from qiskit.quantum_info import Statevector

# import basic plot tools
from qiskit.visualization import plot_histogram
```

We start by preparing a quantum circuit with two qubits:

```
n = 2
grover_circuit = QuantumCircuit(n)
```

Then we simply need to write out the commands for the circuit depicted above. First, we need to initialize the state $|s\rangle$. Let's create a general function (for any number of qubits) so we can use it again later:

```
def initialize_s(qc, qubits):
    """Apply a H-gate to 'qubits' in qc"""
    for q in qubits:
        qc.h(q)
    return qc

grover_circuit = initialize_s(grover_circuit, [0,1])
grover_circuit.draw()
```

Apply the oracle for $|w\rangle = |11\rangle$. This oracle is specific to 2 qubits:

```
grover_circuit.cz(0,1) # Oracle
grover_circuit.draw()
```

<span id="general_diffuser"></span>We now apply the diffuser ($U_s$). Later on, as with the circuit that initializes $|s\rangle$, we will create a general diffuser (for any number of qubits) so we can use it in other problems.

```
# Diffusion operator (U_s)
grover_circuit.h([0,1])
grover_circuit.z([0,1])
grover_circuit.cz(0,1)
grover_circuit.h([0,1])
grover_circuit.draw()
```

This is our finished circuit.

### 2.1.1 Experiment with Simulators <a id='2qubits-simulation'></a>

Let's run the circuit in simulation. First, we can verify that we have the correct statevector:

```
sv_sim = Aer.get_backend('statevector_simulator')
job_sim = execute(grover_circuit, sv_sim)
statevec = job_sim.result().get_statevector()
from qiskit_textbook.tools import vector2latex
vector2latex(statevec, pretext="|\\psi\\rangle =")
```

As expected, the amplitude of every state that is not $|11\rangle$ is 0, meaning we have a 100% chance of measuring $|11\rangle$:

```
grover_circuit.measure_all()

qasm_simulator = Aer.get_backend('qasm_simulator')
shots = 1024
results = \
execute(grover_circuit, backend=qasm_simulator, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
```

### 2.1.2 Experiment with Real Devices <a id='2qubits-device'></a>

We can run the circuit on a real device as below.

```
# Load IBM Q account and get the least busy backend device
provider = IBMQ.load_account()
device = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 3 and
                                      not x.configuration().simulator and x.status().operational==True))
print("Running on current least busy device: ", device)

# Run our circuit on the least busy backend. Monitor the execution of the job in the queue
from qiskit.tools.monitor import job_monitor
job = execute(grover_circuit, backend=device, shots=1024, optimization_level=3)
job_monitor(job, interval = 2)

# Get the results from the computation
results = job.result()
answer = results.get_counts(grover_circuit)
plot_histogram(answer)
```

We confirm that in most of the cases the state $|11\rangle$ is measured. The other results are due to errors in the quantum computation.

## 3. Example: 3 Qubits <a id='3qubits'></a>

We now go through the example of Grover's algorithm for 3 qubits with two marked states $\lvert101\rangle$ and $\lvert110\rangle$, following the implementation found in Reference [2].

The quantum circuit to solve the problem using a phase oracle is:

![image11](images/grover_circuit_3qubits.png)

<ol>
<li>
Apply Hadamard gates to the 3 qubits initialized to $\lvert000\rangle$ to create a uniform superposition:
$$\lvert \psi_1 \rangle = \frac{1}{\sqrt{8}} \left( \lvert000\rangle + \lvert001\rangle + \lvert010\rangle + \lvert011\rangle + \lvert100\rangle + \lvert101\rangle + \lvert110\rangle + \lvert111\rangle \right) $$
</li>
<li>
Mark the states $\lvert101\rangle$ and $\lvert110\rangle$ using a phase oracle:
$$\lvert \psi_2 \rangle = \frac{1}{\sqrt{8}} \left( \lvert000\rangle + \lvert001\rangle + \lvert010\rangle + \lvert011\rangle + \lvert100\rangle - \lvert101\rangle - \lvert110\rangle + \lvert111\rangle \right) $$
</li>
<li>
Perform the reflection around the average amplitude:
<ol>
<li>
Apply Hadamard gates
$$\lvert \psi_{3a} \rangle = \frac{1}{2} \left( \lvert000\rangle +\lvert011\rangle +\lvert100\rangle -\lvert111\rangle \right) $$
</li>
<li>
Apply X gates
$$\lvert \psi_{3b} \rangle = \frac{1}{2} \left( -\lvert000\rangle +\lvert011\rangle +\lvert100\rangle +\lvert111\rangle \right)
$$
</li>
<li>
Apply a doubly controlled Z gate (controls on qubits 1 and 2, target on qubit 3)
$$\lvert \psi_{3c} \rangle = \frac{1}{2} \left( -\lvert000\rangle +\lvert011\rangle +\lvert100\rangle -\lvert111\rangle \right) $$
</li>
<li>
Apply X gates
$$\lvert \psi_{3d} \rangle = \frac{1}{2} \left( -\lvert000\rangle +\lvert011\rangle +\lvert100\rangle -\lvert111\rangle \right) $$
</li>
<li>
Apply Hadamard gates
$$\lvert \psi_{3e} \rangle = \frac{1}{\sqrt{2}} \left( -\lvert101\rangle -\lvert110\rangle \right) $$
</li>
</ol>
</li>
<li>
Measure the 3 qubits to retrieve the states $\lvert101\rangle$ and $\lvert110\rangle$.
</li>
</ol>

Note that since there are 2 solutions among 8 possibilities, we only need to run one iteration (steps 2 and 3).

### 3.1 Qiskit Implementation <a id='3qubit-implementation'></a>

We now implement Grover's algorithm for the [above example](#3qubits) with $3$ qubits, searching for the two marked states $\lvert101\rangle$ and $\lvert110\rangle$.

Note: Remember that Qiskit orders its qubits the opposite way round to this reference, so the circuit appears flipped horizontally.

We create a phase oracle that will mark the states $\lvert101\rangle$ and $\lvert110\rangle$ (step 1):

```
qc = QuantumCircuit(3)
qc.cz(0, 2)
qc.cz(1, 2)
oracle_ex3 = qc.to_gate()
oracle_ex3.name = "U$_\omega$"
```

In the previous section, we used a diffuser specific to 2 qubits. In the cell below, we create a general diffuser for any number of qubits.

<details>
<summary>Details: Building a General Diffuser (click to expand)</summary>
Remember that we build $U_s$ from $U_0$:
$$
U_s = H^{\otimes n} U_0 H^{\otimes n}
$$
and that the multi-controlled-Z gate ($MCZ$) flips the phase of the state $|11\dots 1\rangle$:
$$
MCZ =
\begin{bmatrix}
1 & 0 & 0 & \cdots & 0 \\
0 & 1 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & -1 \\
\end{bmatrix}
\begin{aligned}
\\ \\ \\
\leftarrow \text{Add negative phase to} \; |11\dots 1\rangle\\
\end{aligned}
$$
Applying an X gate to each qubit performs the transformation:
$$
\begin{aligned}
|00\dots 0\rangle & \rightarrow |11\dots 1\rangle\\
|11\dots 1\rangle & \rightarrow |00\dots 0\rangle
\end{aligned}
$$
Thus:
$$
U_0 = X^{\otimes n} (MCZ) X^{\otimes n}
$$
Using these properties together, we can create $U_s$ using H gates, X gates and a single multi-controlled-Z gate:
$$
U_s = H^{\otimes n} U_0 H^{\otimes n} = H^{\otimes n} X^{\otimes n} (MCZ) X^{\otimes n} H^{\otimes n}
$$
Note that this circuit adds a global phase of -1.
</details>

```
def diffuser(nqubits):
    qc \
= QuantumCircuit(nqubits)
    # Apply transformation |s> -> |00..0> (H-gates)
    for qubit in range(nqubits):
        qc.h(qubit)
    # Apply transformation |00..0> -> |11..1> (X-gates)
    for qubit in range(nqubits):
        qc.x(qubit)
    # Do multi-controlled-Z gate
    qc.h(nqubits-1)
    qc.mct(list(range(nqubits-1)), nqubits-1)  # multi-controlled-toffoli
    qc.h(nqubits-1)
    # Apply transformation |11..1> -> |00..0>
    for qubit in range(nqubits):
        qc.x(qubit)
    # Apply transformation |00..0> -> |s>
    for qubit in range(nqubits):
        qc.h(qubit)
    # We will return the diffuser as a gate
    U_s = qc.to_gate()
    U_s.name = "$U_s$"
    return U_s
```

Next, we complete the circuit by creating the uniform superposition at the start and adding a measurement at the end. Note that since there are 2 solutions among 8 possibilities, we only need to run one iteration.

```
n = 3
grover_circuit = QuantumCircuit(n)
grover_circuit = initialize_s(grover_circuit, [0,1,2])
grover_circuit.append(oracle_ex3, [0,1,2])
grover_circuit.append(diffuser(n), [0,1,2])
grover_circuit.measure_all()
grover_circuit.draw()
```

### 3.1.1 Experiment with Simulators <a id='3qubits-simulation'></a>

We run the above circuit on the simulator.

```
backend = Aer.get_backend('qasm_simulator')
results = execute(grover_circuit, backend=backend, shots=1024).result()
answer = results.get_counts()
plot_histogram(answer)
```

As we can see, the algorithm discovers the marked states $\lvert101\rangle$ and $\lvert110\rangle$.

### 3.1.2 Experiment with Real Devices <a id='3qubits-device'></a>

We can run the circuit on a real device as below.

```
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 3 and
                                       not x.configuration().simulator and x.status().operational==True))
print("least busy backend: ", backend)

# Run our circuit on the least busy backend. Monitor the execution of the job in the queue
from qiskit.tools.monitor import job_monitor
job = execute(grover_circuit, backend=backend, shots=1024, optimization_level=3)
job_monitor(job, interval = 2)

# Get the results from the computation
results = job.result()
answer = results.get_counts(grover_circuit)
plot_histogram(answer)
```

(Hopefully) we measure $\lvert101\rangle$ and $\lvert110\rangle$ with high probability. The other results are due to errors in the quantum computation.

## 4.
Problems <a id='problems'></a> The function `grover_problem_oracle` below takes a number of qubits (`n`) and a `variant`, and returns an n-qubit oracle. The function always returns the same oracle for the same `n` and `variant`. You can see the solutions to each oracle by setting `print_solutions = True` when calling `grover_problem_oracle`. ``` from qiskit_textbook.problems import grover_problem_oracle ## Example usage n = 4 oracle = grover_problem_oracle(n, variant=1) # variant 1 of an n-qubit oracle qc = QuantumCircuit(n) qc.append(oracle, [0,1,2,3]) qc.draw() ``` 1. `grover_problem_oracle(4, variant=2)` uses 4 qubits and has one solution.<br> a. How many iterations do we need to have a greater-than-90% chance of measuring this solution? <br> b. Use Grover's algorithm to find this solution state.<br> c. What happens if we apply more iterations than the number we calculated in problem 1a above? Why?<br> 2. With 2 solutions and 4 qubits, how many iterations do we need for a greater-than-90% chance of measuring a solution? Test your answer using `grover_problem_oracle(4, variant=1)` (which has two solutions). 3. Create a function `grover_solver(oracle, iterations)` that takes as input: - a Grover oracle as a gate (`oracle`) - an integer number of iterations (`iterations`) and returns a `QuantumCircuit` that performs Grover's algorithm on the '`oracle`' gate with '`iterations`' iterations. ## 5. Solving Sudoku Using Grover's Algorithm <a id="sudoku"></a> The oracles used throughout this chapter so far have been created with prior knowledge of their solutions. We will now solve a simple problem using Grover's algorithm, for which we do not necessarily know the solution beforehand. The problem is a 2×2 binary sudoku, based on these two simple rules: - No column may contain the same value twice - No row may contain the same value twice We assign each square of the sudoku to a variable, as in the figure below: ![2×2 binary sudoku, with each square allocated to a different variable](images/binary_sudoku.png) and we want our circuit to output a solution to this sudoku. While using Grover's algorithm to solve this problem is not practical (you can most likely find the solution in your head!), the purpose of this example is to demonstrate the conversion of a classical [decision problem](https://en.wikipedia.org/wiki/Decision_problem) into an oracle for Grover's algorithm. ### 5.1 Turning the Problem into a Circuit We want to create an oracle to solve this problem. We start by creating a circuit that identifies a correct solution. Similar to how we created a classical adder using quantum circuits in [The Atoms of Computation](../ch-states/atoms-computation.html), we need to build a _classical_ function on a quantum circuit that checks whether the state of our variable bits is a valid solution. Since we need to check both columns and both rows, there are 4 conditions to check: ``` v0 ≠ v1 # check top row v2 ≠ v3 # check bottom row v0 ≠ v2 # check left column v1 ≠ v3 # check right column ``` Remember that we are comparing classical (computational basis) states. For convenience, we compile this set of comparisons into a list of clauses: ``` clause_list = [[0,1], [0,2], [1,3], [2,3]] ``` We will assign the value of each variable to a bit in our circuit. To check these clauses computationally, we will use the `XOR` gate (the `XOR` 
gate was covered in [The Atoms of Computation](../ch-states/atoms-computation.html)). ``` def XOR(qc, a, b, output): qc.cx(a, output) qc.cx(b, output) ``` Verify that the `output0` bit in the circuit below is flipped if and only if `input0 ≠ input1`: ``` # We will use separate registers to name the bits in_qubits = QuantumRegister(2, name='input') out_qubit = QuantumRegister(1, name='output') qc = QuantumCircuit(in_qubits, out_qubit) XOR(qc, in_qubits[0], in_qubits[1], out_qubit) qc.draw() ``` This circuit checks whether `input0 == input1` and stores the result in `output0`. To check each clause, we repeat this circuit for each pair in `clause_list` and store the output in a new bit: ``` # Create separate registers to name the bits var_qubits = QuantumRegister(4, name='v') # variable bits clause_qubits = QuantumRegister(4, name='c') # bits to store clause-checks # Create quantum circuit qc = QuantumCircuit(var_qubits, clause_qubits) # Use XOR gates to check each clause i = 0 for clause in clause_list: XOR(qc, clause[0], clause[1], clause_qubits[i]) i += 1 qc.draw() ``` If the assignment of `v0, v1, v2, v3` is a solution to the sudoku, the final state of the bits `c0, c1, c2, c3` will all be `1`. To complete our checking circuit, we want a single bit to be `1` if and only if all the clauses are satisfied; this way, we can look at just one bit to see whether the assignment is a solution. We can do this using a multi-controlled Toffoli gate. ``` # Create separate registers to name the bits var_qubits = QuantumRegister(4, name='v') clause_qubits = QuantumRegister(4, name='c') output_qubit = QuantumRegister(1, name='out') qc = QuantumCircuit(var_qubits, clause_qubits, output_qubit) # Compute clauses i = 0 for clause in clause_list: XOR(qc, clause[0], clause[1], clause_qubits[i]) i += 1 # Flip 'output' bit if all clauses are satisfied qc.mct(clause_qubits, output_qubit) qc.draw() ``` The circuit above takes the initial assignment of the bits `v0`, `v1`, `v2` and `v3` as input; all other bits should be initialized to `0`. After running the circuit, the state of the `out0` bit tells us whether this assignment is a solution or not; `out0 = 0` means the assignment is _not_ a solution, and `out0 = 1` means the assignment _is_ a solution. **Important:** Before you continue, make sure you fully understand this circuit and are convinced it works as stated in the paragraph above. ### 5.2 Uncomputing, and Completing the Oracle We can use phase kickback to turn this checking circuit into a Grover oracle. To recap, we have three registers: - a register that stores our sudoku variables ($x = v_3, v_2, v_1, v_0$) - a register that stores the clause results (this starts in the state $|0000\rangle$, which we will abbreviate to $|0\rangle$) - 
a single qubit ($|\text{out}_0\rangle$) that stores the output of our checking circuit. To create our oracle, we need a circuit ($U_\omega$) that performs the transformation: $$ U_\omega|x\rangle|0\rangle|\text{out}_0\rangle = |x\rangle|0\rangle|\text{out}_0\oplus f(x)\rangle $$ If we set the `out0` qubit to the superposition state $|{-}\rangle$, we have: $$ \begin{aligned} U_\omega|x\rangle|0\rangle|{-}\rangle &= U_\omega|x\rangle|0\rangle\otimes\tfrac{1}{\sqrt{2}}(|0\rangle - |1\rangle)\\ &= |x\rangle|0\rangle\otimes\tfrac{1}{\sqrt{2}}(|0\oplus f(x)\rangle - |1\oplus f(x)\rangle) \end{aligned} $$ If $f(x) = 0$, the state becomes: $$ \begin{aligned} &= |x\rangle|0\rangle\otimes \tfrac{1}{\sqrt{2}}(|0\rangle - |1\rangle)\\ &= |x\rangle|0\rangle|-\rangle\\ \end{aligned} $$ (i.e. no change.) But if $f(x) = 1$ (i.e. $x = \omega$), a negative phase is introduced on the $|{-}\rangle$ qubit: $$ \begin{aligned} &= \phantom{-}|x\rangle|0\rangle\otimes\tfrac{1}{\sqrt{2}}(|1\rangle - |0\rangle)\\ &= \phantom{-}|x\rangle|0\rangle\otimes -\tfrac{1}{\sqrt{2}}(|0\rangle - |1\rangle)\\ &= -|x\rangle|0\rangle|-\rangle\\ \end{aligned} $$ This is an oracle that works, using two auxiliary registers in the state $|0\rangle|{-}\rangle$: $$ U_\omega|x\rangle|0\rangle|{-}\rangle = \Bigg\{ \begin{aligned} \phantom{-}|x\rangle|0\rangle|-\rangle \quad \text{for} \; x \neq \omega \\ -|x\rangle|0\rangle|-\rangle \quad \text{for} \; x = \omega \\ \end{aligned} $$ To adapt our checking circuit into a Grover oracle, we need to guarantee that the bits in the second register (`c`) are always returned to the state $|0000\rangle$ after the computation. To do this, we repeat the part of the circuit that computes the clauses, which guarantees `c0 = c1 = c2 = c3 = 0` after our circuit has run. We call this step _uncomputation_. ``` var_qubits = QuantumRegister(4, name='v') clause_qubits = QuantumRegister(4, name='c') output_qubit = QuantumRegister(1, name='out') cbits = ClassicalRegister(4, name='cbits') qc = QuantumCircuit(var_qubits, clause_qubits, output_qubit, cbits) def sudoku_oracle(qc, clause_list, var_qubits, clause_qubits, cbits): # Compute clauses i = 0 for clause in clause_list: XOR(qc, clause[0], clause[1], clause_qubits[i]) i += 1 # Flip 'output' bit if all clauses are satisfied qc.mct(clause_qubits, output_qubit) # Uncompute clauses to reset clause-checking bits to 0 i = 0 for clause in clause_list: 
XOR(qc, clause[0], clause[1], clause_qubits[i]) i += 1 sudoku_oracle(qc, clause_list, var_qubits, clause_qubits, cbits) qc.draw() ``` In summary, the circuit above performs: $$ U_\omega|x\rangle|0\rangle|\text{out}_0\rangle = \Bigg\{ \begin{aligned} |x\rangle|0\rangle|\text{out}_0\rangle \quad \text{for} \; x \neq \omega \\ |x\rangle|0\rangle\otimes X|\text{out}_0\rangle \quad \text{for} \; x = \omega \\ \end{aligned} $$ and if the initial state of $|\text{out}_0\rangle$ is $|{-}\rangle$, we have: $$ U_\omega|x\rangle|0\rangle|{-}\rangle = \Bigg\{ \begin{aligned} \phantom{-}|x\rangle|0\rangle|-\rangle \quad \text{for} \; x \neq \omega \\ -|x\rangle|0\rangle|-\rangle \quad \text{for} \; x = \omega \\ \end{aligned} $$ ### 5.3 The Full Algorithm All that's left to do is to put this oracle into Grover's algorithm! ``` var_qubits = QuantumRegister(4, name='v') clause_qubits = QuantumRegister(4, name='c') output_qubit = QuantumRegister(1, name='out') cbits = ClassicalRegister(4, name='cbits') qc = QuantumCircuit(var_qubits, clause_qubits, output_qubit, cbits) # Initialize 'out0' in state |-> qc.initialize([1, -1]/np.sqrt(2), output_qubit) # Initialize qubits in state |s> qc.h(var_qubits) qc.barrier() # for visual separation ## First iteration # Apply our oracle sudoku_oracle(qc, clause_list, var_qubits, clause_qubits, cbits) qc.barrier() # for visual separation # Apply our diffuser qc.append(diffuser(4), [0,1,2,3]) ## Second iteration sudoku_oracle(qc, clause_list, var_qubits, clause_qubits, cbits) qc.barrier() # for visual separation # Apply our diffuser qc.append(diffuser(4), [0,1,2,3]) # Measure the variable qubits qc.measure(var_qubits, cbits) qc.draw() # Simulate and plot results qasm_simulator = Aer.get_backend('qasm_simulator') result = execute(qc, backend=qasm_simulator, shots=1024).result() plot_histogram(result.get_counts()) ``` There are two bit strings with a much higher probability of being measured than any of the others: `0110` and `1001`. These correspond to the assignments: ``` v0 = 0 v1 = 1 v2 = 1 v3 = 0 ``` and ``` v0 = 1 v1 = 0 v2 = 0 v3 = 1 ``` which are the two solutions to our sudoku! The aim of this section was to show how we can create a Grover oracle from a real problem; while this specific problem is trivial, the process can be applied (given a large enough circuit) to any decision problem. To recap, the steps to solve this problem were: 1. 
Create a reversible classical circuit that identifies a correct solution 2. Turn this circuit into an oracle using phase kickback and uncomputation 3. Use Grover's algorithm to solve this oracle ## 6. References <a id='references'></a> 1. L. K. Grover (1996), "A fast quantum mechanical algorithm for database search", Proceedings of the 28th Annual ACM Symposium on the Theory of Computing (STOC 1996), [doi:10.1145/237814.237866](http://doi.acm.org/10.1145/237814.237866), [arXiv:quant-ph/9605043](https://arxiv.org/abs/quant-ph/9605043) 2. C. Figgatt, D. Maslov, K. A. Landsman, N. M. Linke, S. Debnath & C. Monroe (2017), "Complete 3-Qubit Grover search on a programmable quantum computer", Nature Communications, Vol 8, Art 1918, [doi:10.1038/s41467-017-01904-7](https://doi.org/10.1038/s41467-017-01904-7), [arXiv:1703.10535](https://arxiv.org/abs/1703.10535) 3. I. Chuang & M. Nielsen, "Quantum Computation and Quantum Information", Cambridge: Cambridge University Press, 2000. ``` import qiskit qiskit.__qiskit_version__ ```
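As a classical cross-check for the iteration-count questions in section 4, recall that with $M$ solutions among $N = 2^n$ states, the probability of measuring a solution after $k$ Grover iterations is $\sin^2((2k+1)\theta)$ with $\theta = \arcsin\sqrt{M/N}$. A minimal sketch of this calculation (plain Python, no Qiskit required; the function names are our own):

```python
import math

def grover_success_probability(n_qubits, n_solutions, iterations):
    """Probability of measuring a marked state after `iterations` Grover steps."""
    N = 2 ** n_qubits
    theta = math.asin(math.sqrt(n_solutions / N))
    return math.sin((2 * iterations + 1) * theta) ** 2

def iterations_needed(n_qubits, n_solutions, target=0.9):
    """Smallest iteration count whose success probability exceeds `target`."""
    k = 0
    while grover_success_probability(n_qubits, n_solutions, k) < target:
        k += 1
    return k

# 3 qubits, 2 solutions: a single iteration already succeeds with certainty,
# matching the circuit in section 3
print(grover_success_probability(3, 2, 1))
# 4 qubits: iteration counts for 1 solution (problem 1a) and 2 solutions (problem 2)
print(iterations_needed(4, 1), iterations_needed(4, 2))
```

This is only a sanity check against the analytic formula; the problems still ask you to confirm the counts by running the circuits themselves.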
## 1. Numbers - Basic operators: `+, -, *, /` ``` print(2 + 2) print(50 - 5 * 6) print((50 - 5 * 6) / 4) # always return float type ``` - Exponentiation (`**`), floor division (`//`), remainder (`%`) ``` print(5 ** 2) print(17 // 3) print(17 % 3) ``` ## 2. Data Structures: Lists, Dictionaries, Tuples - Python supports several compound data types used to group other values together ### 2-1. Lists - A sequence type written as a list of comma-separated values between square brackets (`[]`); the items may be of different types - Like strings, lists can be indexed by number and sliced, and individual elements can be modified (mutable) - Concatenation is also supported - Key functions (methods): `append`, `pop` ref) https://docs.python.org/ko/3/tutorial/datastructures.html ``` squares1 = [1, 4, 9, 16, 25] print(squares1) squares1[0] = 0 print(squares1) squares2 = ['a', 1] print(squares2) print(squares1+squares2) squares1.append('36') print(squares1) print(squares2.pop()) print(squares2) ``` ### 2-2. Dictionaries - A set of key:value pairs, with the constraint that keys are unique - Defined by placing a comma-separated list of key:value pairs inside curly braces (`{}`) - Indexed by key (which must be immutable); the indexed value (mutable) can be modified - Key functions (methods): `del`, `items`, `keys`, `values` ``` tel1 = {'jack': 4098, 'sape': 4139} tel2 = dict(sape=4139, guido=4127, jack=4098) print(tel1, tel2) print(tel1['jack']) tel1['jack'] = 5098 print(tel1) tel1['guido'] = 4127 print(tel1) del(tel1['sape']) print(tel1) print(tel1.items()) print(tel1.keys()) ``` ### 2-3. Tuples - A sequence type consisting of several values separated by commas - Written as comma-separated elements inside parentheses (`()`) - Tuple elements can be accessed and used via indexing and unpacking, but because tuples are immutable, assigning to elements or slices is not allowed - Exception: if an individual element is itself a mutable type such as a list, its contents can be modified ``` t = (12345, 54321, 'hello!') print(t) print(t[0]) u = t, (1, 2, 3, 4, 5) print(u) v = (1, 2, 3, 4, 5) v[0] = 5 w = ([1, 2, 3], [3, 2, 1]) print(w) w[0][1] = 0 print(w) ``` ## 3. Control Flow: if, for ### 3-1. if - The condition after `if` is evaluated and, if true, the following block is executed - There can be zero or more elif parts, and the else part is optional - `if ... elif ...` substitutes for switch/case statements ``` x = int(input("Please enter an integer: ")) if x < 0: print('Negative') elif x == 0: print('Zero') elif x == 1: print('Single') else: print('More') ``` ### 3-2.
for - Used to iterate over the items of a sequence (such as a list or a string) in the order they appear in the sequence ``` words = ['cat', 'window', 'defenestrate'] for w in words: print(w, len(w)) for i, w in enumerate(words): print(i, w) ``` - To iterate over a sequence of numbers, use `range()`; `range` behaves like a list, but is not one - ex) `range(start, end, step)` ``` for i in range(5): print(i) for i in [0, 1, 2, 3, 4]: print(i) print(range(5)) for i in range(0, 10, 3): print(i) ``` ## 4. Functions - A function definition starts with the `def` keyword, followed by the function name and the parenthesized list of formal parameters; the function body starts on the next line, indented, and the return value follows `return` - All variable assignments in a function store the value in the local symbol table; variable references inside a function are looked up first in the local symbol table, then the global symbol table, then the table of built-in names - Function arguments: positional arguments, default argument values, keyword arguments - A function can `return` more than one value, e.g. `return a, b`: the return value is the tuple `(a, b)` ``` # positional argument, default argument value def add(a, b=2): c = a + b return c print(add(2)) print(add(2,4)) print(add(2,b=4)) ``` ## 5. Miscellaneous ### The with statement - Used to indicate that code runs within a special context (resource) - e.g. reading and writing files, querying data from a database - Typically, when accessing a particular resource such as a file or socket, the resource must be opened and later reclaimed and returned - A with statement restricts the use of the resource to the enclosing block and guarantees that the context (resource) is released when execution leaves the block ``` with open("x.txt", 'w') as f: f.write('Python') with open("x.txt") as f: data = f.read() print(data) ```
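One more way to see that `with` really releases the resource: a file object's `closed` attribute flips to `True` the moment execution leaves the block. A small demonstration (the temporary file path used here is our own choice, just for illustration):

```python
import os
import tempfile

# Create a throwaway path for the demonstration
path = os.path.join(tempfile.gettempdir(), "with_demo.txt")

with open(path, "w") as f:
    f.write("Python")
    print(f.closed)   # inside the block the file is still open -> False

print(f.closed)       # after the block, the context manager has closed it -> True
os.remove(path)       # clean up the demo file
```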
# BLU04- Learning Notebook - Part 3 of 3 - Time series modelling concepts With the multi-index, rolling windows and resampling methods we've shown you, there are already a lot of questions about historical data you can answer. But what about predicting the future values for stock markets, electricity demand, pollution levels, etc? To do that, we need to enter into the realm of **forecasting**. In BLU05 you will learn how to do this using classical models, and in BLU06 using ML models. But before that, it's important to understand the fundamental concepts behind time series modelling, which apply both to classical and ML models. This is the focus of this notebook. ## Time Series Concepts Time series can be thought of as a (linear or non-linear) composition of 4 components: **trend**, **cyclical**, **seasonal** and **irregular** $$Y_t = Trend + Cyclical + Seasonal + Irregular$$ Or $$Y_t = Trend \cdot Cyclical \cdot Seasonal \cdot Irregular$$ Or another non-linear combination of all four (in BLU05 we will tell you how to know whether the composition should be additive or multiplicative). Each one of the previous four components is also a time series. ### 1. Trend The trend is the component of the time series that allows us to see if, in general, the dependent variable we are observing is increasing, without taking into consideration local fluctuations. Usually, people look at the trend to see if the mean value of a series is (monotonically) increasing or decreasing. The trend can be modelled as a linear or non-linear process, even though people prefer to assume it is linear. 
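Before touching real data, it can help to see these components in a synthetic series. The sketch below hand-builds a monthly series as trend + seasonal + irregular (all numbers are arbitrary, chosen only for illustration; there is no cyclical component in this toy example):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
index = pd.date_range("2000-01-01", periods=48, freq="MS")   # 4 years, monthly

trend = np.linspace(100, 160, num=48)                    # slow linear increase
seasonal = 10 * np.sin(2 * np.pi * np.arange(48) / 12)   # repeats every 12 months
irregular = rng.normal(scale=2, size=48)                 # noise

# Additive composition: Y_t = Trend + Seasonal + Irregular
y = pd.Series(trend + seasonal + irregular, index=index)
print(y.head())
```

Plotting `y` (and each component separately) is a good way to train your eye for the decompositions discussed below.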
In order to understand the concept of trend, let's look at a dataset with [monthly totals of US airline passengers from 1949 to 1960](https://www.kaggle.com/chirag19/air-passengers) ``` from matplotlib import pyplot as plt %matplotlib inline import pandas as pd from utils import load_airlines_series data = load_airlines_series() data.plot(); plt.ylabel('Thousands of passengers') plt.title("Monthly thousands of US airline passengers from 1949 to 1960", size=14) ``` The first thing you probably notice is that, in general, the *number of passengers is increasing*, even though there are annual peaks. In order to visualize the trend, let's fit a simple linear regression that maps time to the number of passengers. The process is quite simple: ``` from sklearn.linear_model import LinearRegression ``` create an integer numpy array with the number of the time step ``` # don't worry if you don't understand this first step, it's quite simple: # X will just be the 0, 1, 2, 3 on the left instead of the dates X = data.reset_index().index.values.reshape(-1, 1) print(data.reset_index().head(5)) ``` fit a simple linear regression that maps the time step to the observed time series ``` slr = LinearRegression(fit_intercept=True) slr.fit(X, data) linear_trend = pd.Series(slr.predict(X), index=data.index) data.plot(label="original data") linear_trend.plot(label="linear trend") plt.ylabel('Thousands of passengers') plt.title("Monthly thousands of US airline passengers from 1949 to 1960", size=14) plt.legend(); ``` Let's also check the R² score of this simple model ``` slr.score(X, data) ``` Not bad for a really simple model! Another very common approach to trend estimation is the **moving average**. You already used this method in a previous notebook. Indeed, a moving average can be used for smoothing a signal but also for estimating its trend. With this method, you need to set the window size.
Let's start with a window size of 6 (months) ``` moving_avg_6_months = data.rolling(6).mean() moving_avg_6_months.head(10) moving_avg_6_months.plot(); plt.ylabel('Thousands of passengers') plt.title("Moving average over 6 months of thousands of US airline passengers from 1949 to 1960", size=14) ``` As you noticed, there are 5 NaNs in the moving average series. That is because the first 5 elements do not have enough data for a window of 6. We can "fix" this by setting `min_periods` to 0 ``` moving_avg_6_months_ = data.rolling(6, min_periods=0).mean() moving_avg_6_months_.head(10) ``` Setting `min_periods` to something smaller than 6 makes pandas average over however many values are available, so the first value of the moving average series is simply the first value of the original series. To see that this is the case, let's look at the original series ``` data.head(10) ``` Let's see how it looks in a plot ``` data.plot(label="original data") linear_trend.plot(label="linear trend") moving_avg_6_months_.plot(label="moving average (6 months) trend") plt.ylabel('Thousands of passengers') plt.title("Monthly thousands of US airline passengers from 1949 to 1960", size=14) plt.legend(); ``` We should note that setting `min_periods` to 0 is fine for visualization purposes, BUT once you start using trend estimation to prepare your time series for modelling, using `min_periods=0` is not ok at all. What if we used a larger window? Like, for example, 12 months ``` moving_avg_12_months_ = data.rolling(12, min_periods=0).mean() data.plot(label="original data") moving_avg_6_months_.plot(label="moving average (6 months) trend") moving_avg_12_months_.plot(label="moving average (12 months) trend") plt.ylabel('Thousands of passengers') plt.title("Monthly thousands of US airline passengers from 1949 to 1960", size=14) plt.legend(); ``` And what about 24 months?
``` moving_avg_24_months_ = data.rolling(24, min_periods=0).mean() data.plot(label="original data") moving_avg_6_months_.plot(label="moving average (6 months) trend") moving_avg_12_months_.plot(label="moving average (12 months) trend") moving_avg_24_months_.plot(label="moving average (24 months) trend") plt.ylabel('Thousands of passengers') plt.title("Monthly thousands of US airline passengers from 1949 to 1960", size=14) plt.legend(); ``` Every time we increase the window size for the moving average, we get a smoother and more stable time series. But... what is a good window size? The answer is: it depends on your business expectations OR what preprocessing you require for your time series. Also, a final note about trend estimation: several existing techniques for trend estimation include both the trend and cyclical components. ### 2. Cyclical The cyclical component corresponds to repeating patterns that occur at non-regular time intervals. For example, the performance of the world economy in the 20th century would exhibit a strong cyclical component with non-regular cycles (remember the crises of the 1980s and 2007). ### 3. Seasonal Unlike the cyclical component, the seasonal component repeats at a fixed rate. For example, the bookings in hotels in certain cities have well known maxima (Autumn) and minima (Summer). Our number of passengers dataset also has clear seasonality: ``` data.plot() plt.ylabel('Thousands of passengers') plt.title("Monthly thousands of US airline passengers from 1949 to 1960", size=14) ``` In this time series, we can see, at least, two things: (1) a positive trend and (2) minima and maxima with a regular pattern every year. To make (2) clearer, let's plot the time series for each year ``` from utils import plot_seasonality_for_airlines plot_seasonality_for_airlines() ``` Around June-July we get the peak, and around October we get the period with the fewest passengers. But how can we convey seasonality to a model?
Well, we can start by looking at lag scatter plots: a scatter plot between y(t) and y(t-lag). (In other words, what is the relationship between the number of passengers in a month and the number `lag` months before?) **Missing data... again?** Before the next step, we need to make sure that our dataset doesn't have missing dates; normally what we would do is: ``` data.isnull().sum() ``` But this only tells us that there are no NaNs in the dataset. What if our dataset simply doesn't contain the row for a certain month? We wouldn't detect it using isnull. To solve this we can use the resample method you learned in the last notebook. Since our data has monthly frequency, we can resample it monthly, so that if any month is missing it will be created. The asfreq method makes it so that any row created by the resample contains a NaN. ``` data = data.resample('MS').asfreq() data.isnull().sum() ``` Ok, our dataset has all consecutive months without any missing in between, so let's proceed! We can now use pandas' shift function, which shifts the values without realigning the index (basically a lag on the values). In the example below we can see that a negative shift of -1 means we're looking at y(t+1). Note that the last value will be NaN; this is not a problem for visualization, but it may be for modelling. ``` data = data.to_frame() data['lag_1'] = data['thousands of passengers'].shift(-1) data['lag_2'] = data['thousands of passengers'].shift(-2) data['lag_3'] = data['thousands of passengers'].shift(-3) data['lag_8'] = data['thousands of passengers'].shift(-8) data['lag_12'] = data['thousands of passengers'].shift(-12) ``` We can now test some different shifts on the data and see the relationship between y(t) and y(t-lag). For example, we can see that there seems to be some correlation between y(t) and y(t+1). It makes sense that the number of passengers the month before has some relation to the number of passengers in the present month.
It's also interesting to note that this correlation seems to decrease as the lag increases, for lags -2, -3 and -8 (note how the plots become wider) ``` plt.scatter(data['thousands of passengers'], data.lag_1) plt.ylabel('Thousands of passengers for lag -1') plt.xlabel('Thousands of passengers') plt.title("Monthly thousands of US airline passengers from 1949 to 1960", size=14) plt.show() plt.scatter(data['thousands of passengers'], data.lag_2) plt.ylabel('Thousands of passengers for lag -2') plt.xlabel('Thousands of passengers') plt.title("Monthly thousands of US airline passengers from 1949 to 1960", size=14) plt.show() plt.scatter(data['thousands of passengers'], data.lag_3) plt.ylabel('Thousands of passengers for lag -3') plt.xlabel('Thousands of passengers') plt.title("Monthly thousands of US airline passengers from 1949 to 1960", size=14) plt.show() plt.scatter(data['thousands of passengers'], data.lag_8) plt.ylabel('Thousands of passengers for lag -8') plt.xlabel('Thousands of passengers') plt.title("Monthly thousands of US airline passengers from 1949 to 1960", size=14) plt.show() ``` But now, for lag = -12 the correlation clearly increases, being even higher than for lag = -1. This seems to indicate that there is a strong yearly seasonality, corroborating what we saw in the previous plots. ``` plt.scatter(data['thousands of passengers'], data.lag_12) plt.ylabel('Thousands of passengers for lag -12') plt.xlabel('Thousands of passengers') plt.title("Monthly thousands of US airline passengers from 1949 to 1960", size=14) plt.show() ``` We can also look at the Pearson correlation using pandas corr() method, which confirms the suspicions from the correlation plots: There seems to be a strong yearly seasonality. 
``` data.corr()['thousands of passengers'] ``` **Note:** If our dataset had missing months, the most appropriate way to calculate the correlation would be to resample with asfreq first, so that the missing months show up as NaNs and are ignored in the calculation by corr(). If you don't resample first, the correlation will be wrong, because the shifts will not have the meaning they should due to the missing data. This analysis is hinting at a very important concept: auto-correlation. Indeed, auto-correlation is very important to convey information regarding seasonality to a model, and you will learn more about it in BLU05. ### 4. Irregular After accounting for all the previous components, the remaining component, called *irregular* or *residual*, won't have any pattern. This part of the time series is considered noise. You might be thinking that this component is useless. But, in fact, several modelling techniques analyze this component in order to check whether a better model can be created. ### 5. Example After introducing all 4 components, let's look at some more examples ![four example time series](images/ts-examples.png "") The upper left time series shows both a seasonal component for each year and a strong cyclical component that takes 6-10 years (imagine an arc connecting 1975 to 1981 and another one connecting 1981 and 1991), but no apparent trend. In the upper right corner, we have a strong negative trend but no visible seasonal nor cyclical behavior. This might be due to how short the time series is. The lower left corner shows both a strong positive trend and seasonality, but no cyclical component. Finally, the time series in the lower right corner looks like pure noise with some peaks. No clear pattern. You might be asking "Are there any tools to help me identify all 4 components? After identifying them, what can I do with them?".
The answers to these questions will be given in the next BLU ;) ### **Summary of the methods we have learnt in this unit:** * `shift` - shift allows you to shift the data without realigning the index of the time series ### **A few examples:** * Data of the previous time period : data['thousands of passengers'].shift(+1) * Data of the next time period : data['thousands of passengers'].shift(-1)
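As a final sanity check of the lag idea, here is a minimal sketch on a purely periodic toy series (our own synthetic data, not the airline dataset): after a shift of 12 the values line up almost exactly, so the lag-12 autocorrelation is close to 1 (`Series.autocorr(lag)` computes the Pearson correlation between the series and a lagged copy of itself):

```python
import numpy as np
import pandas as pd

# A toy series with an exact 12-step period plus a little noise
rng = np.random.default_rng(42)
t = np.arange(120)
s = pd.Series(np.sin(2 * np.pi * t / 12) + rng.normal(scale=0.05, size=t.size))

print(s.autocorr(lag=1))    # high-ish: neighbours move together
print(s.autocorr(lag=12))   # near 1: the series repeats every 12 steps
print(s.autocorr(lag=6))    # near -1: half a period out of phase
```

The same pattern (a bump in correlation at lag 12) is exactly what the airline lag plots and `corr()` showed above.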
``` import torch import torch.nn as nn import torchvision import torchvision.transforms as transforms import numpy as np import pandas as pd import matplotlib.pyplot as plt import plotly.graph_objects as go train_ds = torchvision.datasets.MNIST('.', train = True, transform = transforms.ToTensor(), download = True) test_ds = torchvision.datasets.MNIST('.', train = False, transform = transforms.ToTensor(), download = False) batch_size = 256 train_loader = torch.utils.data.DataLoader(train_ds, batch_size = batch_size, shuffle = True) test_loader = torch.utils.data.DataLoader(test_ds, batch_size = batch_size, shuffle = False) # model creation class ANN_v1(nn.Module): def __init__(self, criterion): super(ANN_v1, self).__init__() self.l1 = nn.Linear(784, 128) self.l2 = nn.ReLU() self.l3 = nn.Linear(128, 10) self.loss = criterion def forward(self, x): x = self.l1(x) x = self.l2(x) x = self.l3(x) return x # declare criterion criterion = nn.CrossEntropyLoss() # initialize the model ann_v1 = ANN_v1(criterion) # specify an optimizer optimizer = torch.optim.Adam(ann_v1.parameters(),lr = 0.001) # pick the device for computation and move the model to it device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") print(device) ann_v1.to(device) # define train and prediction functions import time def train(model, train_loader, test_loader, optimizer, epochs): """ Apply batch gradient descent to the model with given data loader. 
""" train_losses = np.zeros(epochs) test_losses = np.zeros(epochs) train_accuracies = np.zeros(epochs) test_accuracies = np.zeros(epochs) for i in range(epochs): t0 = time.time() tmp_train_losses = [] tmp_train_accuracies = [] tmp_test_losses = [] tmp_test_accuracies = [] # calculate the loss and do backprop with train data to train the model for inputs, targets in train_loader: inputs, targets = inputs.to(device), targets.to(device) inputs = inputs.view(-1, 784) # forward pass outputs = ann_v1.forward(inputs) loss = ann_v1.loss(outputs, targets) tmp_train_losses.append(loss.item()) # back propagation loss.backward() optimizer.step() optimizer.zero_grad() # calculate accuracy acc = get_accuracy(model, inputs, targets) tmp_train_accuracies.append(acc) # calculate the loss for test dataset for inputs, targets in test_loader: inputs, targets = inputs.to(device), targets.to(device) inputs = inputs.view(-1, 784) # forward pass outputs = ann_v1.forward(inputs) loss = ann_v1.loss(outputs, targets) tmp_test_losses.append(loss.item()) # calculate accuracy acc = get_accuracy(model, inputs, targets) tmp_test_accuracies.append(acc) avg_tr_loss = np.mean(tmp_train_losses) avg_te_loss = np.mean(tmp_test_losses) avg_tr_acc = np.mean(tmp_train_accuracies) avg_te_acc = np.mean(tmp_test_accuracies) train_losses[i] = avg_tr_loss test_losses[i] = avg_te_loss train_accuracies[i] = avg_tr_acc test_accuracies[i] = avg_te_acc t1 = time.time() print("-"*50) print(f"Epochs {i+1}/{epochs} \n took {t1-t0:.4f}s \n avg_losses per batch: train: {avg_tr_loss:.4f} | test: {avg_te_loss:.4f} \n avg_accs per batch: train: {avg_tr_acc:.4f} | test: {avg_te_acc:.4f}") print("-"*50) return train_losses, train_accuracies, test_losses, test_accuracies def get_accuracy(model, inputs, targets): # model outputs logits. So we need to convert them into label-based predictions. We can achieve this by simply getting the "maximum" value # of logit column since softmax is also increases if given input increases. 
with torch.no_grad(): _ , preds = torch.max(model(inputs), 1) accuracy = ((preds == targets).sum().item())/(targets.shape[0]) return accuracy def predict(model, data_loader): n_total = 0. n_correct = 0. with torch.no_grad(): for inputs, targets in data_loader: inputs, targets = inputs.to(device), targets.to(device) inputs = inputs.view(-1, 784) # make predictions _, preds = torch.max(model(inputs), 1) n_correct += (preds == targets).sum().item() n_total += targets.shape[0] return n_correct/n_total train_loss, train_acc, test_loss, test_acc = train(ann_v1, train_loader, test_loader, optimizer, 10) x = list(range(1, 11)) fig = go.Figure() fig.add_trace(go.Scatter(x = x, y = train_loss, mode = 'lines+markers', name = 'avg_train_loss')) fig.add_trace(go.Scatter(x = x, y = test_loss, mode = 'lines+markers', name = 'avg_test_loss')) fig.show() fig.data = [] # remove the traces above fig.add_trace(go.Scatter(x = x, y = train_acc, mode = 'lines+markers', name = 'avg_train_acc')) fig.add_trace(go.Scatter(x = x, y = test_acc, mode = 'lines+markers', name = 'avg_test_acc')) fig.show() # make prediction with test data and get the accuracy print("Make prediction with test data") predict(ann_v1,test_loader) # Plot confusion matrix (code snippet) from sklearn.metrics import confusion_matrix import itertools def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): """ This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. """ if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print("Normalized confusion matrix") else: print('Confusion matrix, without normalization') print(cm) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. 
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, format(cm[i, j], fmt), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') plt.show() x_test = test_ds.data.numpy() y_test = test_ds.targets.numpy() p_test = np.array([]) for inputs, targets in test_loader: inputs = inputs.to(device) inputs = inputs.view(-1, 784) outputs = ann_v1(inputs) _, preds = torch.max(outputs, 1) # torch.max returns the maximum value and its index (argmax); we only need the argmax, since it is the predicted label for the image. # concatenate preds with p_test # We use .cpu() to copy the predictions back to the CPU: the model computed them on the GPU, but the numpy operations below run on the CPU. p_test = np.concatenate((p_test, preds.cpu().numpy())) cm = confusion_matrix(y_test, p_test) plot_confusion_matrix(cm, list(range(10)), title = 'Confusion Matrix for MNIST Digit Recognition') misclassified_idx = np.where(p_test != y_test)[0] print('# of misclassified images: ',len(misclassified_idx)) print(f'{(len(misclassified_idx) / y_test.shape[0]) * 100:.2f}% of the images are misclassified.') i = np.random.choice(misclassified_idx) plt.imshow(x_test[i], cmap = 'gray') plt.title("True label %s | Predicted : %s" % (y_test[i], int(p_test[i]))); ```
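The logits-to-labels step used in `get_accuracy` and in the loop above is just an argmax over the class dimension. The same idea in plain NumPy, independent of PyTorch (the logits below are made up purely for illustration):

```python
import numpy as np

# Fake logits for 4 samples over 3 classes. Softmax would not change the
# winning class per row, so argmax on the raw logits is enough.
logits = np.array([[2.0, 0.1, -1.0],
                   [0.2, 3.5,  0.3],
                   [1.0, 1.2,  4.0],
                   [0.5, 0.4,  0.3]])
targets = np.array([0, 1, 2, 1])

preds = logits.argmax(axis=1)          # per-row winning class
accuracy = (preds == targets).mean()   # fraction of correct predictions
print(preds, accuracy)                 # [0 1 2 0] 0.75
```

`torch.max(outputs, 1)` plays exactly this role in the notebook, returning both the max value and its index.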
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from collections import Counter import matplotlib %matplotlib inline ``` ## See a few samples of data ``` !head ../data/vgsales.csv #!more ../data/vgsales.csv ``` ## Load data ``` # read data from csv file data = pd.read_csv("../data/vgsales.csv", warn_bad_lines = True, error_bad_lines = False, verbose = True) data.info() # .info() prints directly; wrapping it in print would also print "None" print "\n\n data shape: ", data.shape # drop first & last column data.drop(data.columns[[0, 11]], axis = 1, inplace = True) data["Year"] = data["Year"].astype('category') data.info() print data.shape # print data.head() ``` ## Missing value statistics ``` def count_missing(x): return sum(x.isnull()) / float(len(x)) # missing value info print "Missing Value Statistics" print data.apply(count_missing, axis = 0) ``` # Coherence Validation ``` data[(data['Global_Sales'] < (data['NA_Sales'] + data['EU_Sales'] + data['JP_Sales'] + data['Other_Sales']) - 0.02)] ``` ## Central Tendency ``` # basic descriptive statistics; the 50% percentile is just the median data.describe(percentiles = [0.25,0.5,0.75,0.997], include = 'all') ``` ## Spread ``` data.skew(axis = 0) data.kurt(axis = 0) ``` ## Box Plot ``` data.plot.box() data2 = data.loc[(data['NA_Sales'] <= 1) & (data['EU_Sales'] <= 1) & (data['JP_Sales'] <= 1) & (data['Other_Sales'] <= 1) & (data['Global_Sales'] <= 1)] data2.describe(percentiles = [0.25,0.5,0.75,0.997], include = 'all') data2.boxplot(figsize = (8,8)) ``` ## Histogram ``` data['NA_Sales'].hist(bins = 50) ``` ## Categorical Analysis ``` data['Genre'].unique() genre_cnt = data['Genre'].value_counts(sort = False) print genre_cnt genre_freq = genre_cnt / float(data.shape[0]) print genre_freq fig, ax = plt.subplots(figsize=(8,5)) genre_cnt.plot(kind = 'bar', ax = ax, rot = 90) plt.title('Genre Distribution', fontsize = 15) plt.xlabel('Genre', fontsize = 15) plt.ylabel('Count', fontsize = 15) fig, ax = plt.subplots(figsize=(18,15))
data.groupby(['Year', 'Platform']).sum().unstack().plot(y = 'Global_Sales', kind = 'bar', ax = ax, stacked = True, colormap = 'Paired') plt.show() ``` ## Top-N Count Analysis ``` # print data['Genre'].dropna().tolist() genre = Counter(data['Genre'].dropna().tolist()) total = sum(genre.values()) genre = genre.most_common() # parameter can be the N N = len(genre) print("%d categories with total %d records" % (N, total)) genre_name = [item[0] for item in genre] genre_counts = [item[1] / float(total) for item in genre] # print genre_counts fig, ax = plt.subplots(figsize=(18,15)) sns.barplot(x = genre_name, y = genre_counts, ax = ax) plt.title("Top-%d Genre" % (N), fontsize = 15) plt.xlabel("Genre", fontsize = 15) plt.ylabel("Top-%d Genre" % (N), fontsize = 15) ticks = plt.setp(ax.get_xticklabels(), fontsize = 15, rotation = 60) ``` ## Top-N Sales ``` platforms = data['Platform'].unique() platform_sales = [] for platform in platforms: platform_sales.append(data[data['Platform'] == platform]['Global_Sales'].dropna().sum()) fig, ax = plt.subplots(figsize = (18, 15)) sns.barplot(x = platforms, y = platform_sales, ax = ax, palette = sns.color_palette("PuBu", 10)) plt.title("Platform Sales", fontsize = 15) plt.xlabel("Platform Category", fontsize = 15) plt.ylabel("Total Sales", fontsize = 15) ticks = plt.setp(ax.get_xticklabels(), fontsize = 15, rotation = 60) ``` ## Heat Map ``` table_sales = pd.pivot_table(data, values = ['Global_Sales'], index = ['Platform'], columns = ['Genre'], aggfunc = np.mean, fill_value = 0, margins = False) fig, ax = plt.subplots(figsize = (18, 15)) sns.heatmap(table_sales, linewidth = .5, annot = True, vmin = 0.01, fmt = '.2f', cmap = 'PuBu') plt.title("Platform-Genre Sales", fontsize = 15) # ticks_y = plt.setp(ax.get_yticklabels(), fontsize = 15) ticks_x = plt.setp(ax.get_xticklabels(), rotation = 60) print(ax.get_xticklabels()) table_sales = pd.pivot_table(data, values = ['Global_Sales'], index = ['Platform'], columns = ['Genre'], aggfunc = 
'count', fill_value = 0, margins = False) # print table_sales fig, ax = plt.subplots(figsize = (18, 15)) sns.heatmap(table_sales, linewidth = .5, annot = True, vmin = 0, fmt = '2.0f', cmap = 'PuBu') plt.title("Platform-Genre Sales", fontsize = 15) # ticks_y = plt.setp(ax.get_yticklabels(), fontsize = 15) # ticks = plt.setp(ax.get_xticklabels(), fontsize = 15, rotation = 60) ``` ## Word Cloud ``` from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator from PIL import Image stopwords = set(STOPWORDS) for genre in data.Genre.unique(): # print data.Name[data.Genre == genre].to_string() if genre == "Sports": wc = WordCloud(background_color = "white", max_font_size = 40, max_words = 200, stopwords = stopwords, random_state = 42) wc.generate(data.Name[data.Genre == genre].to_string()) plt.imshow(wc) plt.title(genre) plt.axis("off") plt.show() ``` ## Pie Chart ``` fig, ax = plt.subplots(figsize = (8, 8)) publisher = data.groupby('Publisher').sum()['Global_Sales'] publisher.sort_values(ascending = False)[:10].plot.pie() ax.set_ylabel("") plt.tight_layout() # print publisher ```
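The Top-N count analysis earlier in this notebook is driven by `collections.Counter`; a minimal, self-contained sketch of the same pattern (using hypothetical genre values in place of the real column) looks like this:

```python
from collections import Counter

# Toy stand-in for data['Genre'].dropna().tolist() (hypothetical values)
genres = ["Action", "Sports", "Action", "Racing", "Action", "Sports"]

counts = Counter(genres)
total = sum(counts.values())

# most_common() returns (name, count) pairs sorted by count, descending
top = counts.most_common()
names = [name for name, _ in top]
freqs = [count / total for _, count in top]

print(names)  # ['Action', 'Sports', 'Racing']
```

The `(name, count)` pairs from `most_common()` are exactly what the notebook unpacks into the bar-plot's x values and normalised heights.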
# <center><b><h1>NeuralNetwork (First Run)</h1></b></center> ``` import itertools import matplotlib import matplotlib.pyplot as plt %matplotlib inline from math import cos, sin, atan import numpy as np import pandas as pd from sklearn import datasets import joblib from sklearn.feature_selection import SelectFromModel from sklearn.metrics import accuracy_score, r2_score, recall_score, auc, roc_auc_score, roc_curve from sklearn.metrics import classification_report,confusion_matrix, precision_score from sklearn.neural_network import MLPClassifier from scipy.stats import spearmanr, pearsonr pd.options.mode.chained_assignment = None # default='warn' ``` ## 1. Prepare the data ``` df_X_train = pd.read_csv("../../../Data/female_patients_no_menopause/starting_ratio_1/X_train.csv", index_col=0) df_X_train.shape df_y_train = pd.read_csv("../../../Data/female_patients_no_menopause/starting_ratio_1/y_train.csv", index_col=0) df_y_train.shape df_X_val = pd.read_csv("../../../Data/female_patients_no_menopause/X_val.csv", index_col=0) df_X_val.shape df_y_val = pd.read_csv("../../../Data/female_patients_no_menopause/y_val.csv", index_col=0) df_y_val.shape neural_network_name = 'NeuralNetwork - Female_Patients_No_Menopause - First Run - Base Ratio 1' ``` ## 2. 
Finding the best number of layers (between 1 and 2) and the best number of neurons ### 2.1 AUC based ``` max_n_neurons = df_X_train.shape[1] * 2 + 1 max_n_randomstate = 100 best_score_sl = actual_score = 0 best_i_sl = 0 for i in range(1,max_n_neurons,1): mlp = MLPClassifier(hidden_layer_sizes=(i,), max_iter=200000,verbose=False) mlp.fit(df_X_train,df_y_train['Class'].values) predictions = mlp.predict(df_X_val.values) fpr, tpr, thresholds = roc_curve(df_y_val['Class'].values, predictions, pos_label=1) actual_score = auc(fpr, tpr) if actual_score > best_score_sl: best_score_sl = actual_score best_i_sl = i print("I: ", i, "Best_I: ",best_i_sl,"Best_Score: ", best_score_sl,"Actual_Score: ", actual_score) print("Best_I: ",best_i_sl,"Best_Score: ", best_score_sl) best_score_twol = actual_score = 0 best_i_twol = best_j_twol = 0 for i in range(1,max_n_neurons,1): for j in range(1,max_n_neurons,1): mlp = MLPClassifier(hidden_layer_sizes=(i,j,), max_iter=200000,verbose=False) mlp.fit(df_X_train,df_y_train['Class'].values) predictions = mlp.predict(df_X_val.values) fpr, tpr, thresholds = roc_curve(df_y_val['Class'].values, predictions, pos_label=1) actual_score = auc(fpr, tpr) if actual_score > best_score_twol: best_score_twol = actual_score best_i_twol = i best_j_twol = j print("I,J: ", i,"-",j) print("Best_I: ", best_i_twol,"Best_J: ", best_j_twol,"Best_Score: ", best_score_twol,"Actual_Score: ", actual_score) print("Best_I: ",best_i_twol,"Best_I: ",best_j_twol,"Best_Score: ", best_score_twol) ``` ## 3 Find the best random state for both single layer and two layers ``` best_score_sl = actual_score = 0 best_random_state_sl = 0 for i in range(1,max_n_randomstate,1): mlp = MLPClassifier(hidden_layer_sizes=(best_i_sl,), max_iter=200000,verbose=False, random_state=i) mlp.fit(df_X_train,df_y_train['Class'].values) predictions = mlp.predict(df_X_val.values) fpr, tpr, thresholds = roc_curve(df_y_val['Class'].values, predictions, pos_label=1) actual_score = auc(fpr, tpr) if 
actual_score > best_score_sl: best_score_sl = actual_score best_random_state_sl = i print("I: ", i, "Best_Random_State: ",best_random_state_sl,"Best_Score: ", best_score_sl,"Actual_Score: ", actual_score) print("Best_Random_State: ",best_random_state_sl,"Best_Score: ", best_score_sl) best_score_twol = actual_score = 0 best_random_state_twol = 0 for i in range(1,max_n_randomstate,1): mlp = MLPClassifier(hidden_layer_sizes=(best_i_twol,best_j_twol), max_iter=200000,verbose=False, random_state=i) mlp.fit(df_X_train,df_y_train['Class'].values) predictions = mlp.predict(df_X_val.values) fpr, tpr, thresholds = roc_curve(df_y_val['Class'].values, predictions, pos_label=1) actual_score = auc(fpr, tpr) if actual_score > best_score_twol: best_score_twol = actual_score best_random_state_twol = i print("I: ", i, "Best_Random_State: ",best_random_state_twol,"Best_Score: ", best_score_twol,"Actual_Score: ", actual_score) print("Best_Random_State: ",best_random_state_twol,"Best_Score: ", best_score_twol) ``` ## 4. 
Compute metrics on the best architecture ``` if (best_score_sl > 0.5) and (best_score_sl > best_score_twol): best_architecture = "One Layer" best_neurons = [best_i_sl] mlp = MLPClassifier(hidden_layer_sizes=(best_i_sl,), max_iter=200000,verbose=False, random_state=best_random_state_sl) mlp.fit(df_X_train,df_y_train['Class'].values) elif best_score_twol > 0.5: best_architecture = "Two Layers" best_neurons = [best_i_twol, best_j_twol] mlp = MLPClassifier(hidden_layer_sizes=(best_i_twol,best_j_twol), max_iter=200000,verbose=False, random_state=best_random_state_twol) mlp.fit(df_X_train,df_y_train['Class'].values) else: print("No network with 1 or 2 hidden layers reached an AUC above the 0.5 threshold") predictions = mlp.predict(df_X_val.values) print("The best architecture is : ", best_architecture) layer = 0 for neuron in best_neurons: layer += 1 print("For the hidden layer ", layer, " the best number of neurons is : ", neuron) ``` ### 4.1 Confusion Matrix and Classification Report ``` conf_mat_base_folder = '../../../Data/confusion_matrix/neural_network/female_patients_no_menopause/base_ratio_1/' def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print("Normalized confusion matrix") else: print('Confusion matrix') print(cm) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. 
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, format(cm[i, j], fmt), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') # Compute confusion matrix conf_mat = confusion_matrix(df_y_val['Class'].values,predictions) tn, fp, fn, tp = conf_mat.ravel() conf_mat_df = pd.DataFrame([list(pd.Series([tn, fp, fn, tp]))],columns=['tn', 'fp', 'fn', 'tp']) conf_mat_df.columns.names = ['model'] conf_mat_df.rename(index={0: 'All_Patients'},inplace=True) conf_mat_df np.set_printoptions(precision=2) # Plot confusion matrix plt.figure() plot_confusion_matrix(conf_mat, classes=['Non-Fracture', "Fracture"], title='Confusion matrix') plt.savefig(conf_mat_base_folder+neural_network_name+'_confusion_matrix.png', bbox_inches="tight") # Plot normalized confusion matrix plt.figure() plot_confusion_matrix(conf_mat, classes=['Non-Fracture', "Fracture"], normalize=True, title='Normalized confusion matrix') plt.savefig(conf_mat_base_folder+neural_network_name+'_confusion_matrix_normalized.png', bbox_inches="tight") plt.show() print(classification_report(df_y_val['Class'].values,predictions,target_names=['Non-Fracture','Fracture'])) ``` ### 4.2 Accuracy ``` accuracy = (tp + tn) / float(tp+tn+fp+fn) print("Accuracy : ",accuracy) ``` ### 4.3 Recall (or Sensitivity) ``` recall = tp/(tp+fn) print("Recall : ", recall) ``` ### 4.4 Error ``` classification_error = (fp + fn) / float(tp+tn+fp+fn) print("Error : ",classification_error) ``` ### 4.5 Specificity ``` specificity = tn / (tn+fp) print(specificity) ``` ### 4.6 False Positive Rate: When the actual value is negative, how often is the prediction incorrect? ``` false_positive_rate = fp / float(tn+fp) print(false_positive_rate) print(1 - specificity) ``` ### 4.7 Precision: When a positive value is predicted, how often is the prediction correct? ``` precision = tp / float(tp+fp) print(precision) ``` ## 5. 
Metrics visualization ``` fpr, tpr, thresholds = roc_curve(df_y_val, predictions) plt.plot(fpr, tpr) plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.0]) plt.rcParams['font.size'] = 12 plt.title('ROC curve') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.grid(True) ``` ## 6. Neural Network Visualization ``` nn_model_base_folder = '../../../Models/Neural_Networks/female_patients_no_menopause/base_ratio_1/' class Neuron(): def __init__(self, x, y): self.x = x self.y = y def draw(self, neuron_radius): circle = plt.Circle((self.x, self.y), radius=neuron_radius, fill=False) plt.gca().add_patch(circle) class Layer(): def __init__(self, network, number_of_neurons, number_of_neurons_in_widest_layer): self.vertical_distance_between_layers = 6 self.horizontal_distance_between_neurons = 2 self.neuron_radius = 0.5 self.number_of_neurons_in_widest_layer = number_of_neurons_in_widest_layer self.previous_layer = self.__get_previous_layer(network) self.y = self.__calculate_layer_y_position() self.neurons = self.__intialise_neurons(number_of_neurons) def __intialise_neurons(self, number_of_neurons): neurons = [] x = self.__calculate_left_margin_so_layer_is_centered(number_of_neurons) for iteration in range(number_of_neurons): neuron = Neuron(x, self.y) neurons.append(neuron) x += self.horizontal_distance_between_neurons return neurons def __calculate_left_margin_so_layer_is_centered(self, number_of_neurons): return self.horizontal_distance_between_neurons * (self.number_of_neurons_in_widest_layer - number_of_neurons) / 2 def __calculate_layer_y_position(self): if self.previous_layer: return self.previous_layer.y + self.vertical_distance_between_layers else: return 0 def __get_previous_layer(self, network): if len(network.layers) > 0: return network.layers[-1] else: return None def __line_between_two_neurons(self, neuron1, neuron2): angle = atan((neuron2.x - neuron1.x) / float(neuron2.y - neuron1.y)) x_adjustment = self.neuron_radius * sin(angle) y_adjustment = 
self.neuron_radius * cos(angle) line = plt.Line2D((neuron1.x - x_adjustment, neuron2.x + x_adjustment), (neuron1.y - y_adjustment, neuron2.y + y_adjustment)) plt.gca().add_line(line) def draw(self, layerType=0): n_neurons = 0 for neuron in self.neurons: neuron.draw( self.neuron_radius ) n_neurons += 1 if self.previous_layer: for previous_layer_neuron in self.previous_layer.neurons: self.__line_between_two_neurons(neuron, previous_layer_neuron) # write Text x_text = self.number_of_neurons_in_widest_layer * self.horizontal_distance_between_neurons if layerType == 0: plt.text(x_text, self.y, 'Input Layer', fontsize = 12) elif layerType == -1: plt.text(x_text, self.y, 'Output Layer', fontsize = 12) else: plt.text(x_text, self.y, 'Hidden Layer '+str(layerType)+" - "+str(n_neurons)+" neurons", fontsize = 12) class NeuralNetwork(): def __init__(self, number_of_neurons_in_widest_layer): self.number_of_neurons_in_widest_layer = number_of_neurons_in_widest_layer self.layers = [] self.layertype = 0 def add_layer(self, number_of_neurons ): layer = Layer(self, number_of_neurons, self.number_of_neurons_in_widest_layer) self.layers.append(layer) def draw(self): plt.figure(figsize=(38,8), dpi=300) for i in range( len(self.layers) ): layer = self.layers[i] if i == len(self.layers)-1: i = -1 layer.draw( i ) plt.axis('scaled') plt.axis('off') plt.title( 'Neural Network architecture', fontsize=15 ) plt.savefig(nn_model_base_folder+neural_network_name+'_network.png', bbox_inches="tight") plt.show() class DrawNN(): def __init__( self, neural_network ): self.neural_network = neural_network def draw( self ): widest_layer = max( self.neural_network ) network = NeuralNetwork( widest_layer ) for l in self.neural_network: network.add_layer(l) network.draw() n_input = df_X_train.shape[1] n_output = 1 if best_score_sl > best_score_twol: nn_structure = [n_input, best_i_sl, n_output] else: nn_structure = [n_input, best_i_twol, best_j_twol, n_output] neural_network = DrawNN( nn_structure ) 
neural_network.draw() ``` ## 7. Creation new dataframe ``` mod_df = df_X_val.copy() mod_df['real_class'] = df_y_val mod_df['predicted_class'] = predictions mod_df.head() patients_to_change = mod_df[(mod_df['real_class'] == 0) & ( mod_df['predicted_class']==1)] patients_to_change.head() patients_to_change['possible_fracture_score'] = 0 costant_weight = -3.876 age_weight = 0.013 sex_weight =0.197 weight_weight = -0.004 height_weight = -0.019 hipx_weight = 2.396 smoking_weight = 0.28 rheumatoidarthritis_weight = 0.766 secondaryosteoporosis_weight = 0.338 for index,element in patients_to_change.iterrows(): possible_fracture_score = costant_weight + age_weight * mod_df.loc[index,'age'] +\ weight_weight * mod_df.loc[index,'weight'] + height_weight * mod_df.loc[index,'height'] +\ hipx_weight * mod_df.loc[index,'HIPX'] + smoking_weight * mod_df.loc[index,'smoking'] +\ rheumatoidarthritis_weight * mod_df.loc[index,'ReumatoidArthritis'] +\ secondaryosteoporosis_weight * mod_df.loc[index,'SecondaryOsteoporsis']# sex_weight * mod_df.loc[index,'sex'] +\ patients_to_change.loc[index,'possible_fracture_score'] = possible_fracture_score patients_to_change.drop(columns=['age', 'weight', 'height', 'HIPX', 'smoking', 'ReumatoidArthritis', 'SecondaryOsteoporsis', 'Alcohol', 'VitaminD', 'calcium', 'dose_walk', 'dose_moderate', 'dose_vigorous', 'real_class', 'predicted_class'], inplace=True) patients_to_change.sort_values('possible_fracture_score', ascending=False, inplace=True) std_patients = pd.read_csv('../../../Data/female_patients_no_menopause/standardized_patients.csv', index_col=0) std_patients.drop(columns=['sex','menopause','HRT'],axis=1,inplace=True) std_patients.shape for i in range(1,6): new_std_patients = std_patients n_patients = int(patients_to_change.shape[0] * i * 10 / 100) patients_percentage = patients_to_change.head(n_patients) for index,element in patients_percentage.iterrows(): new_std_patients.loc[index,'Class'] = 1 
new_std_patients.to_csv('../../../Data/female_patients_no_menopause/starting_ratio_1/'+str(i)+'0_percent/new_std_patients.csv') ``` ## 8. Save the model ``` joblib.dump(mlp, nn_model_base_folder+neural_network_name+'_model.pkl') conf_mat_df.to_csv(conf_mat_base_folder+neural_network_name+'.csv') ```
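The scalar metrics computed in sections 4.2–4.7 above all derive from the four confusion-matrix cells; a self-contained sketch with hypothetical counts (not the real validation results) shows how they relate:

```python
# Hypothetical confusion-matrix cell counts (tn, fp, fn, tp), as produced
# by confusion_matrix(...).ravel() in the notebook above.
tn, fp, fn, tp = 50, 10, 5, 35

total = tn + fp + fn + tp
accuracy = (tp + tn) / total                # fraction of correct predictions
classification_error = (fp + fn) / total    # 1 - accuracy
recall = tp / (tp + fn)                     # sensitivity / true positive rate
specificity = tn / (tn + fp)                # true negative rate
false_positive_rate = fp / (tn + fp)        # 1 - specificity
precision = tp / (tp + fp)                  # positive predictive value

# Sanity checks: the complementary pairs must sum to 1
assert abs(accuracy + classification_error - 1.0) < 1e-12
assert abs(specificity + false_positive_rate - 1.0) < 1e-12
```

Keeping the metrics tied to the raw cell counts like this makes it easy to check each printed value against the plotted confusion matrix.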
# Linear Support Vector Regressor with PolynomialFeatures ### Required Packages ``` import warnings import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as se from sklearn.preprocessing import PolynomialFeatures from sklearn.pipeline import make_pipeline from sklearn.model_selection import train_test_split from sklearn.svm import LinearSVR from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error warnings.filterwarnings('ignore') ``` ### Initialization Filepath of CSV file ``` #filepath file_path="" ``` List of features which are required for model training. ``` #x_values features=[] ``` Target feature for prediction. ``` #y_values target='' ``` ### Data fetching Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows. ``` df=pd.read_csv(file_path); df.head() ``` ### Feature Selection Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and the target/outcome to Y. ``` X=df[features] Y=df[target] ``` ### Data preprocessing Since the majority of the machine learning models in the sklearn library don't handle string category data or null values, we have to explicitly remove or replace them. The snippet below defines functions that remove null values, if any exist, and encode the string class data in the dataset as numeric classes. 
``` def NullClearner(df): if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])): df.fillna(df.mean(),inplace=True) return df elif(isinstance(df, pd.Series)): df.fillna(df.mode()[0],inplace=True) return df else:return df def EncodeX(df): return pd.get_dummies(df) ``` Calling preprocessing functions on the feature and target set. ``` x=X.columns.to_list() for i in x: X[i]=NullClearner(X[i]) X=EncodeX(X) Y=NullClearner(Y) X.head() ``` #### Correlation Map In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns. ``` f,ax = plt.subplots(figsize=(18, 18)) matrix = np.triu(X.corr()) se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix) plt.show() ``` ### Data Splitting The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data. ``` x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)#performing datasplitting ``` ### Model Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outliers detection. A Support Vector Machine is a discriminative classifier formally defined by a separating hyperplane. In other terms, for a given known/labelled data points, the SVM outputs an appropriate hyperplane that classifies the inputted new cases based on the hyperplane. In 2-Dimensional space, this hyperplane is a line separating a plane into two segments where each class or group occupied on either side. LinearSVR is similar to SVR with kernel=’linear’. It has more flexibility in the choice of tuning parameters and is suited for large samples. 
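LinearSVR minimises an epsilon-insensitive loss: residuals smaller than epsilon cost nothing, and beyond that the cost grows linearly. A toy sketch of that per-sample loss (an illustration of its shape, not scikit-learn's actual implementation) makes this concrete:

```python
# Epsilon-insensitive loss: residuals inside the epsilon "tube" are free;
# outside it, cost grows linearly with the residual.
def epsilon_insensitive(y_true, y_pred, epsilon=0.0):
    return max(0.0, abs(y_true - y_pred) - epsilon)

# Hypothetical values: with epsilon=0.5 a residual of 0.3 costs nothing,
# while a residual of 1.0 costs only the part outside the tube.
print(epsilon_insensitive(2.0, 2.3, epsilon=0.5))  # 0.0
print(epsilon_insensitive(2.0, 3.0, epsilon=0.5))  # 0.5
```

This tube-shaped loss is why LinearSVR is comparatively insensitive to small residuals and outliers near the fit.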
#### PolynomialFeatures: Generate polynomial and interaction features. Generate a new feature matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified degree. For example, if an input sample is two dimensional and of the form [a, b], the degree-2 polynomial features are [1, a, b, a^2, ab, b^2]. for more ... [click here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html) #### Model Tuning Parameters 1. epsilon : float, default=0.0 > Epsilon parameter in the epsilon-insensitive loss function. 2. loss : {‘epsilon_insensitive’, ‘squared_epsilon_insensitive’}, default=’epsilon_insensitive’ > Specifies the loss function. The epsilon-insensitive loss (standard SVR) is the L1 loss, while the squared epsilon-insensitive loss is the L2 loss. 3. C : float, default=1.0 > Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. 4. tol : float, default=1e-4 > Tolerance for stopping criteria. 5. dual : bool, default=True > Select the algorithm to either solve the dual or primal optimization problem. Prefer dual=False when n_samples > n_features. ``` model=make_pipeline(PolynomialFeatures(),LinearSVR()) model.fit(x_train, y_train) ``` #### Model Accuracy We will use the trained model to make a prediction on the test set. Then we use the predicted values to measure the accuracy of our model. > **score**: The **score** function returns the coefficient of determination <code>R<sup>2</sup></code> of the prediction. ``` print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100)) ``` > **r2_score**: The **r2_score** function computes the proportion of the variability in the target that is explained by our model. 
> **mae**: The **mean absolute error** function calculates the total error (the average absolute distance between the real data and the predicted data) of our model. > **mse**: The **mean squared error** function squares the errors, penalizing the model for large errors. ``` y_pred=model.predict(x_test) print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100)) print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred))) print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred))) ``` #### Prediction Plot First, we plot the first 20 actual test-set observations against their record number. We then overlay the model's predictions for the same records on the same axes. ``` plt.figure(figsize=(14,10)) plt.plot(range(20),y_test[0:20], color = "green") plt.plot(range(20),model.predict(x_test[0:20]), color = "red") plt.legend(["Actual","prediction"]) plt.title("Predicted vs True Value") plt.xlabel("Record number") plt.ylabel(target) plt.show() ``` #### Creator: Shreepad Nade, Github: [Profile](https://github.com/shreepad-nade)
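The PolynomialFeatures expansion used in the pipeline can be worked out by hand; assuming degree 2, the [a, b] -> [1, a, b, a^2, ab, b^2] mapping described earlier is:

```python
from itertools import combinations_with_replacement

def poly2_features(sample):
    """Degree-2 polynomial expansion of one feature vector, mirroring the
    column order PolynomialFeatures(degree=2) describes: bias, the
    original features, then all degree-2 products."""
    feats = [1.0]                                # bias column
    feats.extend(sample)                         # degree-1 terms
    for i, j in combinations_with_replacement(range(len(sample)), 2):
        feats.append(sample[i] * sample[j])      # degree-2 terms: a^2, ab, b^2, ...
    return feats

# [a, b] -> [1, a, b, a^2, ab, b^2]
print(poly2_features([2.0, 3.0]))  # [1.0, 2.0, 3.0, 4.0, 6.0, 9.0]
```

Note how the number of columns grows quadratically with the number of input features, which is why the expansion is usually paired with a linear model such as LinearSVR.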
<style>div.container { width: 100% }</style> <img style="float:left; vertical-align:text-bottom;" height="65" width="172" src="../assets/PyViz_logo_wm_line.png" /> <div style="float:right; vertical-align:text-bottom;"><h2>Tutorial 06. Network Graphs</h2></div> Visualizing and working with network graphs is a common problem in many different disciplines. HoloViews provides the ability to represent and visualize graphs very simply and easily with facilities for interactively exploring the nodes and edges of the graph, especially using the Bokeh plotting interface. It can also make use of Datashader for plotting large graphs, and NetworkX for some convenient graph functions: <div style="margin: 10px"> <a href="http://holoviews.org"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:150px" src="../assets/holoviews.png"/></a> <a href="http://bokeh.pydata.org"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:150px" src="../assets/bokeh.png"/></a> <a href="http://numpy.org"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:150px" src="../assets/numpy.png"/></a> <a href="http://pandas.pydata.org"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:140px" src="../assets/pandas.png"/></a> <a href="http://networkx.github.io"><img style="margin:8px; display:inline; object-fit:scale-down; max-height:140px" src="../assets/networkx.png"/></a> </div> ``` import numpy as np import pandas as pd import holoviews as hv import networkx as nx hv.extension('bokeh') %opts Graph [width=400 height=400] ``` The HoloViews ``Graph`` ``Element`` differs from other elements in HoloViews in that it consists of multiple sub-elements. The ``Graph`` element itself holds the data that indicates whether each node is connected to each other node. 
By default the element will automatically compute concrete ``x`` and ``y`` positions for the nodes and represent them using a ``Nodes`` element, which is stored on the Graph. The abstract edges and concrete node positions are sufficient to render the ``Graph`` by drawing straight-line edges between the nodes. In order to supply explicit edge paths we can also declare ``EdgePaths``, providing explicit coordinates for each edge to follow. To summarize, a ``Graph`` consists of three different components: * The ``Graph`` itself holds the abstract edges stored as a table of node index pairs. * The ``Nodes`` hold the concrete ``x`` and ``y`` positions of each node along with a node ``index``. The ``Nodes`` may also define any number of value dimensions, which can be revealed when hovering over the nodes or to color the nodes by. * The ``EdgePaths`` can optionally be supplied to declare explicit node paths. #### A simple Graph Let's start by declaring a very simple graph connecting one node to all others. 
If we simply supply the abstract connectivity of the ``Graph``, it will automatically compute a layout for the nodes using the ``layout_nodes`` operation, which defaults to a circular layout: ``` # Declare abstract edges N = 8 node_indices = np.arange(N) source = np.zeros(N) target = node_indices padding = dict(x=(-1.2, 1.2), y=(-1.2, 1.2)) simple_graph = hv.Graph(((source, target),)).redim.range(**padding) simple_graph ``` #### Accessing the nodes and edges We can easily access the ``Nodes`` and ``EdgePaths`` on the ``Graph`` element using the corresponding properties: ``` simple_graph.nodes + simple_graph.edgepaths ``` #### Supplying explicit paths Next we will extend this example by supplying explicit edges: ``` def bezier(start, end, control, steps=np.linspace(0, 1, 100)): return (1-steps)**2*start + 2*(1-steps)*steps*control+steps**2*end x, y = simple_graph.nodes.array([0, 1]).T paths = [] for node_index in node_indices: ex, ey = x[node_index], y[node_index] paths.append(np.column_stack([bezier(x[0], ex, 0), bezier(y[0], ey, 0)])) bezier_graph = hv.Graph(((source, target), (x, y, node_indices), paths)).redim.range(**padding) bezier_graph ``` ## Interactive features #### Hover and selection policies Thanks to Bokeh we can reveal more about the graph by hovering over the nodes and edges. The ``Graph`` element provides an ``inspection_policy`` and a ``selection_policy``, which define whether hovering and selection highlight edges associated with the selected node or nodes associated with the selected edge. These policies can be toggled by setting the policy to ``'nodes'`` (the default) or ``'edges'``. 
``` bezier_graph.options(inspection_policy='edges') ``` In addition to changing the policy, we can also change the colors used when hovering and selecting nodes: ``` %%opts Graph [tools=['hover', 'box_select']] (edge_hover_line_color='green' node_hover_fill_color='red') bezier_graph.options(inspection_policy='nodes') ``` #### Additional information We can also associate additional information with the nodes and edges of a graph. By constructing the ``Nodes`` explicitly we can declare additional value dimensions, which are revealed when hovering and/or can be mapped to the color by specifying the ``color_index``. Similarly, we can associate additional information with each *edge* by supplying a value dimension to the ``Graph`` itself. ``` %%opts Graph [color_index='Type'] (cmap='Set1') node_labels = ['Output']+['Input']*(N-1) edge_labels = list('ABCDEFGH') nodes = hv.Nodes((x, y, node_indices, node_labels), vdims='Type') graph = hv.Graph(((source, target, edge_labels), nodes, paths), vdims='Label').redim.range(**padding) graph + graph.options(inspection_policy='edges') ``` If you want to supply additional node information without speciying explicit node positions you may pass in a ``Dataset`` object consisting only of various value dimensions. ``` %%opts Graph [color_index='Label'] (cmap='Set1') node_info = hv.Dataset(node_labels, vdims='Label') hv.Graph(((source, target), node_info)).redim.range(**padding) ``` ## Working with NetworkX NetworkX is a very useful library when working with network graphs, and the Graph Element provides ways of importing a NetworkX Graph directly. 
Here we will load the Karate Club graph and use the ``circular_layout`` function provided by NetworkX to lay it out: ``` %%opts Graph [tools=['hover'] color_index='club'] (cmap='Set1') G = nx.karate_club_graph() hv.Graph.from_networkx(G, nx.layout.circular_layout).redim.range(**padding) ``` #### Animating graphs Like all other elements, ``Graph`` can be updated in a ``HoloMap`` or ``DynamicMap``. Here we animate how the Fruchterman-Reingold force-directed algorithm lays out the nodes in real time. ``` %%opts Graph [tools=['hover'] color_index='club'] (cmap='Set1') G = nx.karate_club_graph() def get_graph(iteration): np.random.seed(10) return hv.Graph.from_networkx(G, nx.spring_layout, iterations=iteration) hv.HoloMap({i: get_graph(i) for i in range(5, 30, 5)}, kdims='Iterations').redim.range(x=(-1.2, 1.2), y=(-1.2, 1.2)) ``` ## Real world graphs As a final example, let's look at a slightly larger graph. We will load a dataset of a Facebook network consisting of a number of friendship groups identified by their ``'circle'``. We will load the edge and node data using pandas and then color each node by their friendship group using many of the things we learned above. ``` %opts Nodes Graph [width=800 height=800 xaxis=None yaxis=None] %%opts Graph [color_index='circle'] %%opts Graph (node_size=10 edge_line_width=1) colors = ['#000000']+hv.Cycle('Category20').values edges_df = pd.read_csv('../data/fb_edges.csv') fb_nodes = hv.Nodes(pd.read_csv('../data/fb_nodes.csv')).sort() fb_graph = hv.Graph((edges_df, fb_nodes), label='Facebook Circles') fb_graph = fb_graph.redim.range(x=(-0.05, 1.05), y=(-0.05, 1.05)).options(cmap=colors) fb_graph ``` ## Bundling graphs Later, in [Working with Large Datasets](10_Working_with_Large_Datasets.ipynb) we will see how the [Datashader](http://datashader.org/) library allows us to render very large datasets efficiently. In this section, we use the algorithms for bundling the edges of large graphs that are available in datashader via HoloViews. 
``` from holoviews.operation.datashader import datashade, bundle_graph bundled = bundle_graph(fb_graph) bundled ``` ## Datashading graphs For graphs with a large number of edges we can datashade the paths and display the nodes separately. This loses some of the interactive features but will let you visualize quite large graphs. If the number of edges is much greater than the number of nodes, using datashader to render the edges still lets you interact with each node for hovering, even though the connections are now drawn as an image: ``` %%opts Nodes [color_index='circle'] (size=10 cmap=colors) Overlay [show_legend=False] datashade(bundled, normalization='linear', width=800, height=800) * bundled.nodes ``` ### Applying selections Alternatively we can select the nodes and edges by an attribute that resides on either. In this case we will select the nodes and edges for a particular circle and then overlay just the selected part of the graph on the datashaded plot. Note that selections on the ``Graph`` itself will select all nodes that connect to one of the selected nodes. In this way a smaller subgraph can be highlighted and the larger graph can be datashaded to reduce the file size. ``` %%opts Graph (node_fill_color='white') datashade(bundle_graph(fb_graph), normalization='linear', width=800, height=800) *\ bundled.select(circle='circle15') ``` To select just the nodes that are in 'circle15' set the ``selection_mode='nodes'`` overriding the default of 'edges': ``` bundled.select(circle='circle15', selection_mode='nodes') ``` # Onwards Having seen how to visualize and interactively explore graphical data, we now go on to demonstrate how to visualize and explore a specific domain: [Geographic Data](./07_Geographic_Data.ipynb). While domain specific, geographic data is both very common and typically awkward to handle.
# Visualisation Examples This notebook shows some of the visualisation utilities of our toolkit. The core packages for visualisation are: ### `rasterization` contains classes for getting visual data as multi-channel tensors and turning them into interpretable RGB images. Every class has at least a `rasterize` method to get the tensor and a `to_rgb` method to convert it into an image. A few examples are: - `BoxRasterizer`: this object renders agents (e.g. vehicles or pedestrians) as oriented 2D boxes - `SatelliteRasterizer`: this object renders an oriented crop from a satellite map ### `visualization` contains utilities to draw additional information (e.g. trajectories) onto RGB images. These utilities are commonly used after a `to_rgb` call to add other information to the final visualisation. One example is: - `draw_trajectory`: this function draws 2D trajectories from coordinates and yaw offsets on an image ``` import matplotlib.pyplot as plt import numpy as np from l5kit.data import ChunkedDataset, LocalDataManager from l5kit.dataset import EgoDataset, AgentDataset from l5kit.rasterization import build_rasterizer from l5kit.configs import load_config_data from l5kit.visualization import draw_trajectory, TARGET_POINTS_COLOR from l5kit.geometry import transform_points from tqdm import tqdm from collections import Counter from l5kit.data import LABELS from prettytable import PrettyTable import os ``` ### First, let's configure where our data lives! The data is expected to live in a folder that can be configured using the `L5KIT_DATA_FOLDER` env variable. Your data folder is expected to contain subfolders for the aerial and semantic maps as well as the scenes (`.zarr` files). In this example, the env variable is set to the local data folder. You should make sure the path points to the correct location for you. We built our code to work with a human-readable `yaml` config.
This config file holds much useful information; however, we will only focus on a few functionalities concerning loading and visualization here. ``` # set env variable for data os.environ["L5KIT_DATA_FOLDER"] = "PATH_TO_YOUR_DATA" # get config cfg = load_config_data("./visualisation_config.yaml") print(cfg) ``` ### We can look into our current configuration for interesting fields - when loaded in Python, the `yaml` file is converted into a Python `dict`. `raster_params` contains all the information related to the transformation of the 3D world onto an image plane: - `raster_size`: the image plane size - `pixel_size`: how many meters correspond to a pixel - `ego_center`: our raster is centered around an agent, we can move the agent in the image plane with this param - `map_type`: the rasterizer to be employed. We currently support a satellite-based and a semantic-based one. We will look at the differences further down in this script ``` print(f'current raster_param: {cfg["raster_params"]}') ``` ## Load the data The same config file is also used to load the data. Every split in the data has its own section, and multiple datasets can be used (as a whole or sliced). In this short example we will only use the first dataset from the `sample` set. You can change this by configuring the 'train_data_loader' variable in the config. You may also have noticed that we're building a `LocalDataManager` object. This will resolve relative paths from the config using the `L5KIT_DATA_FOLDER` env variable we have just set. ``` dm = LocalDataManager() dataset_path = dm.require(cfg["train_data_loader"]["datasets"][0]["key"]) zarr_dataset = ChunkedDataset(dataset_path) zarr_dataset.open() ``` ## Working with the raw data `.zarr` files support most of the traditional numpy array operations.
In the following cell we iterate over the frames to get a scatter plot of the AV locations: ``` frames = zarr_dataset.frames coords = np.zeros((len(frames), 2)) for idx_coord, idx_data in enumerate(tqdm(range(len(frames)), desc="getting centroid to plot trajectory")): frame = zarr_dataset.frames[idx_data] coords[idx_coord] = frame["ego_translation"][:2] plt.scatter(coords[:, 0], coords[:, 1], marker='.') axes = plt.gca() axes.set_xlim([-2500, 1600]) axes.set_ylim([-2500, 1600]) ``` Another easy thing to try is to get an idea of the agent type distribution. We can get all the agents' `label_probabilities` and take the argmax for each row. Because `.zarr` files map to numpy arrays, we can use all the traditional numpy operations and functions. ``` agents = zarr_dataset.agents probabilities = agents["label_probabilities"] labels_indexes = np.argmax(probabilities, axis=1) counts = [] for idx_label, label in enumerate(LABELS): counts.append(np.sum(labels_indexes == idx_label)) table = PrettyTable(field_names=["label", "counts"]) for count, label in zip(counts, LABELS): table.add_row([label, count]) print(table) ``` ## Working with data abstraction Even though it's absolutely fine to work with the raw data, we also provide classes that abstract data access to offer an easier way to generate inputs and targets. ### Core Objects Along with the `rasterizer`, our toolkit contains other classes you may want to use while you build your solution. The `dataset` package, for example, already implements `PyTorch` ready datasets, so you can hit the ground running and start coding immediately. ### Dataset package We will use two classes from the `dataset` package for this example. Both of them can be iterated and return multi-channel images from the rasterizer along with future trajectory offsets and other information.
- `EgoDataset`: this dataset iterates over the AV annotations - `AgentDataset`: this dataset iterates over other agents annotations Both support multi-threading (through PyTorch DataLoader) OOB. ``` rast = build_rasterizer(cfg, dm) dataset = EgoDataset(cfg, zarr_dataset, rast) ``` ## What if I want to visualise the Autonomous Vehicle (AV)? Let's get a sample from the dataset and use our `rasterizer` to get an RGB image we can plot. If we want to plot the ground truth trajectory, we can convert the dataset's `target_position` (displacements in meters in world coordinates) into pixel coordinates in the image space, and call our utility function `draw_trajectory` (note that you can use this function for the predicted trajectories, as well). ``` data = dataset[50] im = data["image"].transpose(1, 2, 0) im = dataset.rasterizer.to_rgb(im) target_positions_pixels = transform_points(data["target_positions"] + data["centroid"][:2], data["world_to_image"]) draw_trajectory(im, target_positions_pixels, data["target_yaws"], TARGET_POINTS_COLOR) plt.imshow(im[::-1]) plt.show() ``` ## What if I want to change the rasterizer? We can do so easily by building a new rasterizer and new dataset for it. In this example, we change the value to `py_satellite` which renders boxes on an aerial image. ``` cfg["raster_params"]["map_type"] = "py_satellite" rast = build_rasterizer(cfg, dm) dataset = EgoDataset(cfg, zarr_dataset, rast) data = dataset[50] im = data["image"].transpose(1, 2, 0) im = dataset.rasterizer.to_rgb(im) target_positions_pixels = transform_points(data["target_positions"] + data["centroid"][:2], data["world_to_image"]) draw_trajectory(im, target_positions_pixels, data["target_yaws"], TARGET_POINTS_COLOR) plt.imshow(im[::-1]) plt.show() ``` ## What if I want to visualise an agent? Glad you asked! We can just replace the `EgoDataset` with an `AgentDataset`. 
Now we're iterating over agents and not the AV anymore, and the first one happens to be the pace car (you will see this one around a lot in the dataset). ``` dataset = AgentDataset(cfg, zarr_dataset, rast) data = dataset[0] im = data["image"].transpose(1, 2, 0) im = dataset.rasterizer.to_rgb(im) target_positions_pixels = transform_points(data["target_positions"] + data["centroid"][:2], data["world_to_image"]) draw_trajectory(im, target_positions_pixels, data["target_yaws"], TARGET_POINTS_COLOR) plt.imshow(im[::-1]) plt.show() ``` ## System Origin and Orientation At this point you may have noticed that we flip the image on the **Y-axis** before plotting it. When moving from 3D to 2D we stick to a right-hand system, where the origin is in the bottom-left corner with positive x-values going right and positive y-values going up the image plane. The camera is facing down the negative z axis. However, both `opencv` and `pyplot` place the origin in the top-left corner with positive x going right and positive y going down in the image plane. The camera is facing down the positive z-axis. The flip done on the resulting image is for visualisation purposes to accommodate the difference in the two coordinate frames. Further, all our rotations are counter-clockwise for positive values of the angle. ## What does an entire scene look like? It's easy to visualise an individual scene using our toolkit. Both `EgoDataset` and `AgentDataset` provide 2 methods for getting interesting indices: - `get_frame_indices` returns the indices for a given frame. For the `EgoDataset` this matches a single observation, while more than one index could be available for the `AgentDataset`, as that given frame may contain more than one valid agent - `get_scene_indices` returns indices for a given scene.
For both datasets, these might return more than one index. In this example, we visualise the second scene from the ego's point of view: ``` from IPython.display import display, clear_output import PIL cfg["raster_params"]["map_type"] = "py_satellite" rast = build_rasterizer(cfg, dm) dataset = EgoDataset(cfg, zarr_dataset, rast) scene_idx = 1 indexes = dataset.get_scene_indices(scene_idx) images = [] for idx in indexes: data = dataset[idx] im = data["image"].transpose(1, 2, 0) im = dataset.rasterizer.to_rgb(im) target_positions_pixels = transform_points(data["target_positions"] + data["centroid"][:2], data["world_to_image"]) center_in_pixels = np.asarray(cfg["raster_params"]["ego_center"]) * cfg["raster_params"]["raster_size"] draw_trajectory(im, target_positions_pixels, data["target_yaws"], TARGET_POINTS_COLOR) clear_output(wait=True) display(PIL.Image.fromarray(im[::-1])) ```
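The `(C, H, W)` to `(H, W, C)` transpose and the `im[::-1]` flip used in every plotting cell above can be checked on a tiny array. This is a minimal numpy-only sketch (no l5kit required; the array values are made up):

```python
import numpy as np

# A fake rasterized image in channels-first (C, H, W) order,
# like the tensors returned in data["image"]
chw = np.arange(2 * 3 * 4).reshape(2, 3, 4)  # 2 channels, 3 rows, 4 columns

# Move channels last so pyplot/PIL can display it: (C, H, W) -> (H, W, C)
hwc = chw.transpose(1, 2, 0)
print(hwc.shape)  # (3, 4, 2)

# im[::-1] flips the rows (the Y-axis), converting from the bottom-left-origin
# world convention to the top-left-origin image convention used by pyplot
flipped = hwc[::-1]
print(np.array_equal(flipped[0], hwc[-1]))  # True: the last row becomes the first
```

Flipping twice recovers the original array, which is why the flip is purely a display-time convention.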
# Support Vector Regression with Normalize This code template is for regression analysis using the Support Vector Regressor (SVR), based on the Support Vector Machine algorithm, combined with the Normalize feature-rescaling technique. ### Required Packages ``` import warnings import numpy as np import pandas as pd import seaborn as se import matplotlib.pyplot as plt from sklearn.svm import SVR from sklearn.preprocessing import normalize from sklearn.model_selection import train_test_split from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error warnings.filterwarnings('ignore') ``` ### Initialization Filepath of the CSV file ``` file_path= "" ``` List of features required for model training. ``` features = [] ``` Target feature for prediction. ``` target='' ``` ### Data Fetching Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use the pandas library to read the CSV file from its storage path, and the head function to display the initial rows. ``` df=pd.read_csv(file_path) df.head() ``` ### Feature Selection This is the process of reducing the number of input variables when developing a predictive model. It is used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and the target/outcome to Y. ``` X=df[features] Y=df[target] ``` ### Data Preprocessing Since the majority of the machine learning models in the Sklearn library don't handle string category data and null values, we have to explicitly replace or remove them. The snippet below defines functions that fill null values (with the mean for numeric columns and the mode otherwise) and encode string categorical columns.
``` def NullClearner(df): if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])): df.fillna(df.mean(),inplace=True) return df elif(isinstance(df, pd.Series)): df.fillna(df.mode()[0],inplace=True) return df else:return df def EncodeX(df): return pd.get_dummies(df) ``` Calling preprocessing functions on the feature and target set. ``` x=X.columns.to_list() for i in x: X[i]=NullClearner(X[i]) X=EncodeX(X) Y=NullClearner(Y) X.head() ``` #### Correlation Map In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns. ``` f,ax = plt.subplots(figsize=(18, 18)) matrix = np.triu(X.corr()) se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix) plt.show() ``` ### Data Rescaling For rescaling the data, the **normalize** function of Sklearn is used. Normalization is the process of scaling individual samples to have unit norm. This process can be useful if you plan to use a quadratic form such as the dot-product or any other kernel to quantify the similarity of any pair of samples. The function normalize provides a quick and easy way to scale input vectors individually to unit norm (vector length). Note that the code below passes `axis=0`, so each feature column, rather than each sample, is scaled to unit norm. ##### For more information on normalize [click here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.normalize.html) ``` X_norm = normalize(X, axis=0) X=pd.DataFrame(X_norm,columns=x) X.head(5) ``` ### Data Splitting The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
``` x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123) ``` ### Model Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outlier detection. A Support Vector Machine is a discriminative classifier formally defined by a separating hyperplane. In other terms, for given known/labelled data points, the SVM outputs an appropriate hyperplane that classifies inputted new cases based on the hyperplane. In 2-dimensional space, this hyperplane is a line separating a plane into two segments, with each class or group on either side. Here we will use SVR; its implementation is based on libsvm. The fit time scales at least quadratically with the number of samples and may be impractical beyond tens of thousands of samples. #### Model Tuning Parameters 1. C : float, default=1.0 > Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. The penalty is a squared l2 penalty. 2. kernel : {‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’}, default=’rbf’ > Specifies the kernel type to be used in the algorithm. It must be one of ‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’ or a callable. If none is given, ‘rbf’ will be used. If a callable is given it is used to pre-compute the kernel matrix from data matrices; that matrix should be an array of shape (n_samples, n_samples). 3. gamma : {‘scale’, ‘auto’} or float, default=’scale’ > Gamma is a hyperparameter that we have to set before training the model. Gamma decides how much curvature we want in a decision boundary. 4. degree : int, default=3 > Degree of the polynomial kernel function (‘poly’). Ignored by all other kernels. Using degree 1 is similar to using a linear kernel. Also, increasing the degree parameter leads to higher training times.
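To make the effect of `gamma` concrete, here is a minimal numpy sketch (not part of the template above) of the RBF kernel that the default `kernel='rbf'` computes, k(x, y) = exp(-gamma * ||x - y||^2); the sample vectors and gamma values are made up for illustration:

```python
import numpy as np

def rbf_kernel(x, y, gamma):
    """RBF similarity between two sample vectors: exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

x = np.array([1.0, 2.0])
y = np.array([2.0, 3.0])  # squared distance from x is 2.0

print(rbf_kernel(x, y, gamma=0.1))   # ~0.819: small gamma, gentle fall-off
print(rbf_kernel(x, y, gamma=10.0))  # ~2e-9: large gamma, similarity drops off sharply
print(rbf_kernel(x, x, gamma=10.0))  # 1.0: a point is always maximally similar to itself
```

Higher gamma means each training point influences only a small neighbourhood, giving a more curved decision boundary; lower gamma produces a smoother one.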
``` model=SVR() model.fit(x_train,y_train) ``` #### Model Accuracy We will use the trained model to make a prediction on the test set, then use the predicted values to measure the accuracy of our model. > **score**: The **score** function returns the coefficient of determination <code>R<sup>2</sup></code> of the prediction. ``` print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100)) ``` > **r2_score**: The **r2_score** function computes the percentage of variability in the target explained by our model. > **mae**: The **mean absolute error** function calculates the total amount of error (the average absolute distance between the real data and the predicted data) of our model. > **mse**: The **mean squared error** function squares the errors, penalizing the model for large errors. ``` y_pred=model.predict(x_test) print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100)) print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred))) print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred))) ``` #### Prediction Plot First, we plot the first 20 actual test observations, with the record number on the x-axis and the true value on the y-axis. We then overlay the model's predictions for the same records. ``` plt.figure(figsize=(14,10)) plt.plot(range(20),y_test[0:20], color = "green") plt.plot(range(20),model.predict(x_test[0:20]), color = "red") plt.legend(["Actual","prediction"]) plt.title("Predicted vs True Value") plt.xlabel("Record number") plt.ylabel(target) plt.show() ``` #### Creator: Saharsh Laud , Github: [Profile](https://github.com/SaharshLaud)
# Commerce clickstream ML prediction #### Dataset download > [Ad impressions with clicks dataset](https://www.kaggle.com/c/avazu-ctr-prediction/data) The use case here I am taking is of a commerce company that has an ecommerce website as well as traditional retail stores. They want to analyse the online clickstream data to better understand their customers. We will use a sample clickstream dataset from the data science website Kaggle. We will start with the ingest and exploration of data. Next we create features and train and evaluate the ML model. We will join this data with the Dynamics products table to try to analyse whether products influence the ML model result. The goal of this workflow is to create a machine learning model that, given a new ad impression, predicts whether or not there will be a click. We will also do feature exploration to see which features influence the prediction most. We have a big dataset, so we will go with supervised learning, which relies on historical data to build a model to predict the result of the next observation. Clickstream data is data about how users interact with your ecommerce websites: what ads they click, what products they view, which pages they spend most time on. It is behavioural data that can give you insights into your products and customers so you can better market to your customer base. The notebook is written in PySpark and executed on Databricks. Note: in the dataset download from Kaggle, the train.csv given is 40 million rows, a 6 GB uncompressed file! Excel only shows 1 million, and since I wanted to add a product column, I saved from Excel as a smaller set of 1 million rows. I filled it with some random product numbers taken from Dynamics to be able to make joins.
``` # Reading clicks csv files in a dataframe file_path = "dbfs:/mnt/commercedata/clickstream-ad-ML/adtech/impression/csv/train_1M_p.csv" df_clicks = spark.read.csv(file_path, header=True, inferSchema=True) display(df_clicks.limit(10)) df_clicks.count() df_clicks.printSchema() display(df_clicks.describe()) # taking too long to execute, rewrite """ numeric_data = df_clicks.select(numeric_features).toPandas() axs = pd.plotting.scatter_matrix(numeric_data, figsize=(8, 8)); n = len(numeric_data.columns) for i in range(n): v = axs[i, 0] v.yaxis.label.set_rotation(0) v.yaxis.label.set_ha('right') v.set_yticks(()) h = axs[n-1, i] h.xaxis.label.set_ """ # create a sql view df_clicks.createOrReplaceTempView("vw_clicks") %sql describe vw_clicks display(dbutils.fs.ls("dbfs:/mnt/dynamics365-financeandoperations/d365commerce.sandbox.operations.dynamics.com/Tables/SupplyChain/ProductInformationManagement/Main/EcoResProduct")) # Reading product csv files in a dataframe df_product= spark.read.format("csv").option("header",False).load("dbfs:/mnt/dynamics365-financeandoperations/d365commerce.sandbox.operations.dynamics.com/Tables/SupplyChain/ProductInformationManagement/Main/EcoResProduct/ECORESPRODUCT_00001.csv") display(df_product.limit(10)) # select only relevant columns and create a new dataframe df_productSmall = df_product.selectExpr( '_c12 AS ProductId', '_c16 AS ProductName') display(df_productSmall.limit(10)) # create a view df_productSmall.createOrReplaceTempView("vw_Products") %sql select * from vw_Products limit 10 %sql -- join clicks and products view select s.*, p.ProductName as product_name from vw_clicks s left join vw_Products p on s.product = p.ProductId limit 10 ``` Next, let's do some exploratory data analysis, that is, analyse relationships between features to get a sense of what could be influencing someone to click an ad. ``` %sql -- different banner positions of ads. Where they are placed on a page.
We can see 8 types select banner_pos, count(1) from vw_clicks group by 1 order by 1 %sql -- total number of clicks vs no clicks for each banner pos select banner_pos, sum(case when click = 1 then 1 else 0 end) as click, sum(case when click = 0 then 1 else 0 end) as no_click from vw_clicks group by 1 order by 1 %sql -- CTR is the number of clicks that your ad receives divided by the number of times that your ad is shown: clicks ÷ impressions = CTR -- CTR value for each banner pos. Number 3 is empty, which means that position is never clicked; it could be faulty data too. The position with the highest CTR is a popular one. select banner_pos, sum(case when click = 1 then 1 else 0 end) / (count(1) * 1.0) as CTR from vw_clicks group by 1 order by 1 %sql -- different kinds of devices used -- Device type 1 is most used by people who visit the site. select device_type, count(1) from vw_clicks group by 1 order by 1 %sql -- total number of clicks vs no clicks for each device -- though device 1 is most used, it has the highest number of no-clicks too select device_type, sum(case when click = 1 then 1 else 0 end) as click, sum(case when click = 0 then 1 else 0 end) as no_click from vw_clicks group by 1 order by 1 %sql -- CTR value for each device type. Number 4 is lowest and number 0 is highest, so the best chances are with device 0. For number 4, maybe the company should stop showing ads and save some money. select device_type, sum(case when click = 1 then 1 else 0 end) / (count(1) * 1.0) as CTR from vw_clicks group by 1 order by 1 %sql -- product M0001 is really popular in this clickstream dataset. So customers are spending a lot of time looking at that product. Next are D0002 and M0010.
select product, count(1) as count from vw_clicks group by 1 having count > 200 order by count desc %sql -- total number of clicks vs no clicks for each product -- M0010 gets the highest number of clicks select product, sum(case when click = 1 then 1 else 0 end) as click, sum(case when click = 0 then 1 else 0 end) as no_click from vw_clicks group by 1 order by 3 desc %sql -- CTR of different products. M0006 has the highest CTR. Percentage-wise this product gets the most clicks, 30% select product, sum(case when click = 1 then 1 else 0 end) / (count(1) * 1.0) as CTR from vw_clicks group by 1 order by 2 desc %sql select substr(hour, 7) as hour, count(1) from vw_clicks group by 1 order by 1 %sql -- total number of clicks vs no clicks for hour of day select substr(hour, 7) as hour, sum(case when click = 1 then 1 else 0 end) as click, sum(case when click = 0 then 1 else 0 end) as no_click from vw_clicks group by 1 order by 1 %sql select substr(hour, 7) as hour, sum(case when click = 1 then 1 else 0 end) / (count(1) * 1.0) as CTR from vw_clicks group by 1 order by 1 %sql select count(1) as total, count(distinct C1) as C1, count(distinct banner_pos) as banner_pos, count(distinct site_id) as site_id, count(distinct site_domain) as site_domain, count(distinct site_category) as site_category, count(distinct product) as product, count(distinct app_id) as app_id, count(distinct app_domain) as app_domain, count(distinct app_category) as app_category, count(distinct device_id) as device_id, count(distinct device_ip) as device_ip, count(distinct device_model) as device_model, count(distinct device_type) as device_type, count(distinct device_conn_type) as device_conn_type, count(distinct C14) as C14, count(distinct C15) as C15, count(distinct C16) as C16, count(distinct C17) as C17, count(distinct C18) as C18, count(distinct C19) as C19, count(distinct C20) as C20, count(distinct C21) as C21 from vw_clicks display(df_clicks.describe()) # Drop site_category column # we have 1 to 1 mapping with our
product column so it's highly correlated. We want to avoid correlation and use features that have no bearing on each other to get the best prediction. df_clicks1 = df_clicks.drop('site_category') df_clicks1.printSchema() # extract exact hour from hour column into a new hr column # we will add hr as a new feature df_clicks1 = df_clicks1.selectExpr("*", 'substr(hour, 7) as hr') display(df_clicks1.limit(10)) from pyspark.sql.functions import * strCols = map(lambda t: t[0], __builtin__.filter(lambda t: t[1] == 'string', df_clicks1.dtypes)) intCols = map(lambda t: t[0], __builtin__.filter(lambda t: t[1] == 'int', df_clicks1.dtypes)) # [row_idx][json_idx] strColsCount = sorted(map(lambda c: (c, df_clicks1.select(countDistinct(c)).collect()[0][0]), strCols), key=lambda x: x[1], reverse=True) intColsCount = sorted(map(lambda c: (c, df_clicks1.select(countDistinct(c)).collect()[0][0]), intCols), key=lambda x: x[1], reverse=True) # distinct counts for str columns display(strColsCount) # distinct counts for int columns display(intColsCount) ``` The code below is taken from Databricks’ official site; it indexes each categorical column using the StringIndexer, then converts the indexed categories into one-hot encoded variables. The resulting output has the binary vectors appended to the end of each row. We use the StringIndexer again to encode our labels to label indices. Next, we use the VectorAssembler to combine all the feature columns into a single vector column. Once we have familiarized ourselves with our data, we proceed to the machine learning phase, where we convert our data into features for input to a machine learning algorithm and produce a trained model with which we can predict. Because Spark MLlib algorithms take a column of feature vectors of doubles as input, a typical feature engineering workflow includes: 1. Identifying numeric and categorical features 2. String indexing 3.
Assembling them all into a sparse vector In our use of GBTClassifier, we use the string indexer but do not apply one-hot encoding (OHE). When using StringIndexer, categorical features are kept as k-ary categorical features. A tree node will test if feature X has a value in {subset of categories}. With both StringIndexer + OHE: your categorical features are turned into a bunch of binary features. A tree node will test if feature X = category a vs. all the other categories (one vs. rest test). When using only StringIndexer, the benefits include: 1. There are fewer features to choose from 2. Each node’s test is more expressive than with binary 1-vs-rest features Therefore, for tree-based methods, it is preferable not to use OHE, as it is a less expressive test and it takes up more space. But for non-tree-based algorithms such as linear regression, you must use OHE or else the model will impose a false and misleading ordering on categories. ``` # Include PySpark Feature Engineering methods from pyspark.ml.feature import StringIndexer, VectorAssembler # All of the columns (string or integer) are categorical columns # except for the [click] column maxBins = 70 categorical = list(map(lambda c: c[0], __builtin__.filter(lambda c: c[1] <= maxBins, strColsCount))) categorical += list(map(lambda c: c[0], __builtin__.filter(lambda c: c[1] <= maxBins, intColsCount))) categorical.remove('click') # Apply string indexer to all of the categorical columns # And add _idx to the column name to indicate the index of the categorical value stringIndexers = list(map(lambda c: StringIndexer(inputCol = c, outputCol = c + "_idx"), categorical)) # Assemble the indexed columns as the input to the VectorAssembler # with the output being our features assemblerInputs = list(map(lambda c: c + "_idx", categorical)) vectorAssembler = VectorAssembler(inputCols = assemblerInputs, outputCol = "features") # The [click] column is our label labelStringIndexer = StringIndexer(inputCol = "click", outputCol =
"label") # The stages of our ML pipeline stages = stringIndexers + [vectorAssembler, labelStringIndexer] ``` We use Pipeline to chain multiple Transformers and Estimators together to specify our machine learning workflow. A Pipeline’s stages are specified as an ordered array. ``` from pyspark.ml import Pipeline # Create our pipeline pipeline = Pipeline(stages = stages) # create transformer to add features featurizer = pipeline.fit(df_clicks1) # dataframe with feature and intermediate transformation columns appended featurizedClicks = featurizer.transform(df_clicks1) selectedCols = ['label', 'features'] + df_clicks1.columns featurizedClicks = featurizedClicks.select(selectedCols) featurizedClicks.printSchema() pd.DataFrame(featurizedClicks.take(5), columns=featurizedClicks.columns).transpose() ``` As you can see, we now have a 'features' column and a 'label' column. ``` display(featurizedClicks.select('features', 'label').limit(10)) train, test = featurizedClicks \ .select(["label", "features", "hr"]) \ .randomSplit([0.7, 0.3], 42) train.cache() test.cache() print("Training Dataset Count: " + str(train.count())) print("Test Dataset Count: " + str(test.count())) ``` We will use the Gradient Boosted Tree classifier for our ML, as that is a popular one. There are others you can try: XGBoost, Random Forest, etc. The exact nature of these models is outside the scope of this demo.
``` from pyspark.ml.classification import GBTClassifier # Train our GBTClassifier model classifier = GBTClassifier(labelCol="label", featuresCol="features", maxBins=maxBins, maxDepth=10, maxIter=10) model = classifier.fit(train) # Execute our predictions predictions = model.transform(test) predictions.select('hr', 'label', 'rawPrediction', 'prediction', 'probability').show(10) from pyspark.ml.evaluation import BinaryClassificationEvaluator # Evaluate our GBTClassifier model using BinaryClassificationEvaluator() ev = BinaryClassificationEvaluator(rawPredictionCol="rawPrediction", metricName="areaUnderROC") print("Test Area Under ROC: " + str(ev.evaluate(predictions))) ``` An ROC curve (receiver operating characteristic curve) is a graph showing the performance of a classification model at all classification thresholds. AUC stands for "Area under the ROC Curve." That is, AUC measures the entire two-dimensional area underneath the entire ROC curve. With our predictions, we can evaluate the model according to an evaluation metric, like area under the ROC curve, which in this case is 72%. ``` # explanation of all parameters available print(classifier.explainParams()) import json features = map(lambda c: str(json.loads(json.dumps(c))['name']), \ list(predictions.schema['features'].metadata.get('ml_attr').get('attrs').values())[0]) # convert numpy.float64 to str for spark.createDataFrame() weights=map(lambda w: '%.10f' % w, model.featureImportances) weightedFeatures = sorted(zip(weights, features), key=lambda x: x[1], reverse=True) spark.createDataFrame(weightedFeatures).toDF("weight", "feature").createOrReplaceTempView('wf') %sql select feature, weight from wf order by weight desc ``` #### Product feature has 10% weight on the prediction. It's not very high, so it does not impact heavily on the result of whether a customer clicks an ad or not. The feature C21, though, is a different story at 53%. We should dig more into what that is and why it is influencing the result so much.
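The area-under-ROC metric evaluated above has a handy probabilistic reading: it is the probability that a randomly chosen clicked impression gets a higher score than a randomly chosen non-clicked one. As a minimal pure-Python sketch on made-up toy scores (no Spark needed; a quadratic pairwise count, not how the evaluator computes it at scale):

```python
def auc(labels, scores):
    """AUC as the fraction of (positive, negative) pairs ranked correctly.
    Ties count as half, matching the trapezoidal ROC definition."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    pairs = [(p, n) for p in pos for n in neg]
    correct = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return correct / len(pairs)

labels = [1, 1, 0, 0, 0]            # 1 = click, 0 = no click
scores = [0.9, 0.4, 0.6, 0.3, 0.2]  # hypothetical model scores per impression

print(auc(labels, scores))  # 5 of 6 positive/negative pairs ordered correctly: ~0.833
```

An AUC of 0.5 corresponds to random ranking and 1.0 to a perfect one, which is why the 72% above is meaningfully better than chance.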
``` # create a sql view predictions.createOrReplaceTempView("predictions") %sql describe predictions %sql select sum(case when prediction = label then 1 else 0 end) / (count(1) * 1.0) as accuracy from predictions ``` ### The AUC for our model is 72% and accuracy is 84%. Both are high enough. We evaluated two metrics, AUC and accuracy. There are other metrics too, like precision, recall, and F1 score. Choosing the right metric needs some thinking. Sometimes it depends on the dataset, whether it's balanced or not, what kind of problem you are solving, or what kind of ML model you are using. Again, that is outside the scope of this notebook. The Product feature has a 10% weight on the prediction. It's not very high, which means it does not heavily impact the result: what product a customer sees has little effect on the probability an ad will be clicked. The feature C21, though, is a different story at 53%. We should dig more into what it is and why it influences the result so much. Hope you got a taste of what kind of data analysis and ML models we can build on clickstream data and Dynamics data. Thank you.
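The precision/recall/F1 metrics mentioned above follow from the same confusion-matrix counts as the SQL accuracy query. A small sketch with toy labels and predictions (not the Spark output) shows the arithmetic:

```python
# Toy labels/predictions to illustrate accuracy, precision, recall and F1.
import numpy as np

labels = np.array([1, 0, 1, 1, 0, 0, 1, 0])
preds  = np.array([1, 0, 0, 1, 0, 1, 1, 0])

tp = int(np.sum((preds == 1) & (labels == 1)))  # true positives
fp = int(np.sum((preds == 1) & (labels == 0)))  # false positives
fn = int(np.sum((preds == 0) & (labels == 1)))  # false negatives

accuracy  = float(np.mean(preds == labels))     # same arithmetic as the SQL query above
precision = tp / (tp + fp)                      # of predicted clicks, how many were real
recall    = tp / (tp + fn)                      # of real clicks, how many were caught
f1 = 2 * precision * recall / (precision + recall)
```

On an imbalanced click dataset, accuracy alone can be misleading (predicting "no click" everywhere scores high), which is why precision/recall are worth checking.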
# WHY NUMPY Faster (reads fewer bytes of memory); Plotting (Matplotlib); backend for Pandas; Machine Learning ``` # import numpy library import numpy as np # 1 dimensional a = np.array([1,2,3],dtype='int16') print(a) # 2 dimensional b = np.array([[9.0,8.0,5.0],[6.0,5.0,4.0]]) print(b) # Get dimension a.ndim b.ndim # Get shape a.shape b.shape # Get type a.dtype b.dtype # Get item size (bytes per element) a.itemsize b.itemsize # Get total number of elements a.size b.size ``` # Accessing/Changing specific elements, rows, columns, etc. ``` a = np.array([[1,2,3,4,5,6,7],[8,9,10,11,12,13,14]]) a.shape a.ndim # Get specific elements [r,c] a[1,5] # Get specific row a[0,:] # Get specific columns a[:,2] # Get elements between [startindex:endindex:stepsize] a[0,1:6:2] a[0,1:6] # Changing values a[1,5]=20 print(a) a[:,2]=[1,2] print(a) a[:,2]=5 print(a) # 3 dimension examples b = np.array([[[1,2],[3,4]],[[5,6],[7,8]]]) print(b) # Get specific elements b[0,1,:] b[0,1,1] # replace b[:,1,:]=[[9,9],[9,9]] print(b) # All 0s matrix a=np.zeros((2,3)) print(a) # All 1s matrix np.ones((4,2,2),dtype='int32') # Any other constant value np.full((2,2),99) # Any other number, copying the shape of another array (full_like) a = np.array([[1,2,3,4,5,6,7],[8,9,10,11,12,13,14]]) np.full_like(a,4) # Random decimal numbers np.random.rand(4,2,3) # Random numbers with the shape of another array np.random.random_sample(a.shape) # Random integer values np.random.randint(4,7,size=(3,3)) # Random integer values np.random.randint(7,size=(3,3)) # identity matrix np.identity(3) # repeat array arr = np.array([[1,2,3]]) r1 = np.repeat(arr,3,axis=1) print(r1) # repeat array arr = np.array([[1,2,3]]) r1 = np.repeat(arr,3,axis=0) print(r1) # Example problem on merging different arrays output = np.ones((5,5)) print(output) z = np.zeros((3,3)) z[1,1]=9 print(z) output[1:4,1:4] = z print(output) ``` #### Be careful while copying arrays ``` a = np.array([1,2,3]) b = a print(b) b[0] = 100
print(b) print(a) ``` #### Mathematics ``` a = np.array([1,2,3,4]) print(a) a +=1 print(a) a - 2 a*2 a / 2 a**2 b = np.array([1,0,1,0]) a+b # Take the sin np.sin(a) np.cos(b) ``` #### Linear Algebra ``` a = np.ones((2,3)) a b = np.full((3,2),2) b # Matrix multiplication function np.matmul(a,b) c = np.identity(3) np.linalg.det(c) ``` #### Statistics ``` stats = np.array([[1,2,3],[4,5,6]]) stats np.min(stats,axis=0) np.max(stats,axis=1) np.sum(stats,axis=0) np.sum(stats) np.sum(stats,axis=1) ``` #### Reorganising arrays ``` before = np.array([[1,2,3,4],[5,6,7,8]]) before after = before.reshape(8,1) print(after) ``` #### Vertically stacking vectors ``` # Vertically stacking vectors v1 = np.array([1,2,3,4]) v2 = np.array([5,6,7,8]) np.vstack([v1,v2,v2,v1]) ``` # Miscellaneous #### Load data from File ``` filedata = np.genfromtxt('numpyexample.txt',delimiter=',') filedata = filedata.astype('int32') filedata ``` #### Boolean masking and Advanced indexing ``` filedata > 50 filedata < 50 filedata[filedata > 50] filedata[filedata < 50] np.any(filedata > 50,axis=0) ((filedata > 50) & (filedata < 100)) # You can index with a list in numpy a = np.array([1,2,3,4,5,6,7,8,9]) a[[1,2,8]] ```
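One more pitfall in the same family as the `b = a` copy warning above: basic slices are *views* into the same memory, not copies, so writing through a slice changes the original array too. A small sketch (my addition, not from the tutorial):

```python
# Basic slicing returns a view; use .copy() for an independent array.
import numpy as np

a = np.arange(10)
s = a[2:5]                            # a view, not a copy
s[0] = 99                             # writes through to a as well
shares = np.may_share_memory(a, s)    # True: same underlying buffer

c = a[2:5].copy()                     # an independent copy
c[0] = -1                             # leaves a untouched
```

`np.may_share_memory` is a handy way to check whether two arrays are backed by the same buffer when you are unsure.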
<center> <h1> Multigrid Methods </h1> <h2> Exercises </h2> <h3> Robert Speck & Dieter Moser, Summer Semester 2016 </h3> </center> ___ ### The Helmholtz Problem ### 1. Implement the 1D Helmholtz problem $-\Delta u - \sigma u = 0$, $u(0) = u(1) = 0$ as a new problem class. Document your code appropriately using the Sphinx framework. Use the Poisson problem as a template. 1. Write a script that, for fixed $h$ but varying $\sigma\in[0,50]$, runs V-cycles until a residual tolerance of $10^{-10}$ is reached. Does the multigrid algorithm always converge? Plot the number of iterations against $\sigma$ and interpret the result. 1. Now choose $\sigma\in[-50,0]$. What do you observe? Hint: In the `bin` folder there is a file `mg_poisson_test.py` in which the MG solver is called as a test. Create a release (which also contains the completed notebook) and submit the link to the release. Upon submission, this notebook is executed and an HTML version is generated, which is then used for grading. Therefore, make sure the notebook runs without errors and contains all required solutions/explanations. ### The FMG Cycle ("Live Exercise") ### Implement either the recursive or the non-recursive version of the FMG predictor with subsequent V-cycles. Add this functionality to the `MyMultigrid` class. Visualize the evolution of the error for the Poisson problem by plotting, for each level (e.g. after the coarse-grid correction), the error against the degrees of freedom. ### On Circulant Matrices (Part 1) Let $A$ be of the form $$A:=\begin{pmatrix} a_0&a_{n-1}&a_{n-2}&\ldots&a_1\\ a_1&a_0&a_{n-1}&\ldots&a_2\\ a_2&a_1&a_0&\ldots&a_3\\ &\ddots&\ddots&\ddots\\ a_{n-1}&a_{n-2}&a_{n-3}&\ldots&a_0\end{pmatrix}. $$ 1.
Compute the characteristic polynomial, the eigenvalues, the eigenvectors, and the determinant of $A$. 1. Let $B$ and $C$ be commuting circulant matrices with eigenvalues $\{\beta_k\},\{\zeta_k\}$. What are the eigenvalues of $BC$? 1. Show that the space of circulant matrices is a linear subspace of $K^{N \times N}$. ### Fourier Series Let $\{t_k\}_{-\infty}^{\infty}$ be an absolutely summable sequence, i.e. $\sum_{k=-\infty}^{\infty}|t_k| < \infty$. Furthermore, let $f(\lambda) = \lim_{n \to \infty} \sum_{k=-n}^{n}t_k e^{ik\lambda}$. 1. Show that $S_n(\lambda) = \sum_{k=-n}^{n}t_k e^{ik\lambda}$ converges uniformly to $f(\lambda)$. 1. Conclude that $f(\lambda)$ is Riemann integrable and bounded on $[0,2\pi]$. 1. Using the inverse Fourier transform, find a representation of $t_k$ in terms of $f(\lambda)$. We call $f(\lambda)$ a function of the *Wiener class*.
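The eigenvalue structure asked for in the circulant-matrix exercise can be checked numerically. The sketch below (not part of the exercise sheet; the entries $a_k$ are made up) compares the well-known closed form $\lambda_j=\sum_k a_k\,\omega^{jk}$ with $\omega=e^{2\pi i/n}$ against a direct numerical eigenvalue computation:

```python
# Numerical check of the circulant eigenvalue formula on a toy example.
import numpy as np

a = np.array([4.0, 1.0, 0.0, 1.0])   # made-up first column of A
n = len(a)
# Column j of a circulant matrix is the first column rolled down by j,
# i.e. A[i, j] = a[(i - j) mod n], matching the matrix displayed above.
A = np.column_stack([np.roll(a, j) for j in range(n)])

omega = np.exp(2j * np.pi / n)
lam_formula = np.array([np.sum(a * omega ** (j * np.arange(n))) for j in range(n)])
lam_numeric = np.linalg.eigvals(A)

# Both spectra agree up to ordering
assert np.allclose(np.sort_complex(lam_formula), np.sort_complex(lam_numeric))
```

The eigenvectors are the columns of the DFT matrix, which is why two circulant matrices always commute and why the eigenvalues of $BC$ multiply pairwise.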
# Introduction to INDRA ## Case study: modelling p53 oscillations Here we demonstrate building a dynamical model of a molecular mechanism automatically from natural language. We look at a small system describing the oscillatory dynamics of p53 upon double-stranded DNA damage, as described in [Purvis and Lahav (2013)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3707615/), Figure 5B. Our goal will be to describe the mechanisms in this diagram in English and automatically read and assemble it into a model. The diagram in Purvis et al. simply shows activation and inhibition effects. Here we specified detailed mechanisms that are not explicitly included in the figure. Interestingly, we found that we needed to include some additional mechanisms that were not in the original diagram to reproduce the oscillatory dynamics, an example being the negative regulation of Mdm2 and Wip1, which is left out for visual clarity but still plays a major role in the dynamics. Below is the text describing the mechanisms we want to model. ``` model_text = \ ''' Active ATM phosphorylates ATM, and phosphorylated ATM is active. Active ATM activates p53. p53 is transcribed and active p53 transcribes MDM2. MDM2 is degraded. Active p53 activates Wip1. Active Wip1 inactivates p53. Active Wip1 dephosphorylates ATM. MDM2 ubiquitinates p53 and ubiquitinated p53 is degraded. HIPK2 inactivates Wip1. ''' ``` ## Processing the text using TRIPS We use INDRA's API to the TRIPS natural language processing system (http://trips.ihmc.us/parser/cgi/drum) developed by IHMC to read sentences describing molecular mechanisms and extract them as INDRA statements. ``` from indra.sources import trips ``` We can pass the block of text defined above to the TRIPS processor: ``` tp = trips.process_text(model_text) ``` Here `tp` is a TripsProcessor object which contains the extracted INDRA Statements as a list. We can inspect the statements extracted by TRIPS to make sure that all of the information was extracted.
``` tp.statements ``` ## Assembling a PySB model INDRA Statements can be assembled into a number of formats and models for simulation or visualization. Next, we will assemble the statements into a PySB model which we will parametrize and run. To do this, we start by importing INDRA's PysbAssembler ``` from indra.assemblers.pysb import PysbAssembler ``` Next, we instantiate a PySB assembler object. ``` pa = PysbAssembler() ``` The assembler takes a list of INDRA Statements as input in order to build a model. ``` pa.add_statements(tp.statements) ``` We finally call the assembler's `make_model` method to produce the PySB model. ``` model = pa.make_model() model.name = 'p53_DSB_model' ``` ## Simulating the model We next parameterize the model for dynamical simulation and set up active p53 as an observable that we will plot. ``` from pysb import Parameter, Observable ``` We add some initial active ATM to start off the reaction network. ``` model.add_component(Parameter('ATMa_0', 1)) atm_atr_m = model.monomers['ATM'] model.initial(atm_atr_m(phospho='p'),model.parameters['ATMa_0']) ``` Below are the parameters we define for the simulation (these override the nominal parameters automatically defined by INDRA's PySB Assembler). ``` parameters = { "kf_aa_phosphorylation_1": 5e-07, "kf_pa_dephosphorylation_1": 1e-05, "kf_mt_ubiquitination_1": 1e-06, "kf_at_act_1": 1e-07, "kf_tp_act_1": 1e-07, "kf_pt_act_1": 5e-07, "kf_hp_act_1": 1e-07, "kf_m_deg_1": 0.08, "kf_t_deg_1": 2e-05, "kf_t_synth_1": 2.0, "kf_tm_synth_1": 0.02, "HIPK2_0": 10000.0, "MDM2_0": 0, "ATM_0": 10000.0, "TP53_0": 10000.0, "PPM1D_0": 10000.0, "ATMa_0": 1.0, } for name, value in parameters.items(): model.parameters[name].value = value ``` Now we set up an observable which monitors the amount of active p53 over time in order to then be able to plot this quantity. 
``` # Add active p53 observable p53 = model.monomers['TP53'] obs = Observable('p53_active', p53(activity='active')) model.add_component(obs) ``` We want to simulate the model over a relevant length of time: 24 hours, defined in seconds. ``` import numpy as np sim_hours = 24 ts = np.linspace(0, sim_hours*3600, sim_hours*60) ``` We now instantiate a numerical ODE solver and run it with the model for the specified time span. ``` from pysb.integrate import Solver solver = Solver(model, ts) solver.run() ``` Finally, we plot the time course of active p53. ``` import matplotlib.pyplot as plt %matplotlib inline plt.figure() plt.plot(ts, solver.yobs['p53_active'], 'r') plt.xticks([]) plt.xlabel('Time (a.u.)') plt.ylabel('Active p53') plt.yticks([]) ```
``` import os import pandas as pd import numpy as np import pareto from matplotlib.ticker import StrMethodFormatter import matplotlib.pyplot as plt %matplotlib inline plt.style.use('seaborn-bright') plt.rcParams['figure.figsize'] = [15, 9] plt.rcParams['font.size'] = 12 pd.set_option('display.max_columns', None) # pd.set_option('display.max_rows', None) cwd = os.getcwd() join = os.path.join norm = os.path.normpath dynamic_path = norm(join(cwd, '../dynamic/runs/dynamic_stats.csv')) static_path = norm(join(cwd, '../static/runs/static_stats.csv')) mlaa_path = norm(join(cwd, '../MLAA-Bernier/runs/MLAA_stats.csv')) liu_path = norm(join(cwd, '../Hierarchical-Liu/runs/Hierarchical_stats.csv')) cenk_path = norm(join(cwd, '../Yavuzturk/runs/Yavuzturk_stats.csv')) df_d = pd.read_csv(dynamic_path, index_col=[0]) df_s = pd.read_csv(static_path, index_col=[0]) df_m = pd.read_csv(mlaa_path, index_col=[0]) df_l = pd.read_csv(liu_path, index_col=[0]) df_c = pd.read_csv(cenk_path, index_col=[0]) df_d.head(2) def find_claesson(load, year): _df = df_d.loc[(df_d['load'] == load) & (df_d['sim time'] == year) & (df_d['start width'] == 5) & (df_d['end width'] == 5) & (df_d['exp_rate'] == 2)] x = _df['rmse'].values y = _df['run time fraction'].values return float(x), float(y) def plot_all_methods_runtimefrac_vs_rmse(dfs, names, load, year): fig = plt.figure(figsize=(7, 5), dpi=200) ax = fig.add_subplot(1, 1, 1) markers = ['X', 'D', 'v', 'h', '+', '*'] for idx, df in enumerate(dfs): mask = (df['load'] == load) & (df['sim time'] == year) x = df.loc[mask]['rmse'] y = df.loc[mask]['run time fraction'] ax.scatter(x, y, label=names[idx], marker=markers[idx]) x, y = find_claesson(load, year) ax.scatter(x, y, label='Claesson', marker=markers[-1]) plt.xlabel('RMSE MFT [C]') plt.ylabel('Runtime Fraction') plt.title('{} {}'.format(load.title(), year)) plt.legend() plt.grid(True) # plt.savefig('{}_{}.pdf'.format(load, year), bbox_inches='tight') plt.show() plot_all_methods_runtimefrac_vs_rmse([df_d, 
df_s, df_m, df_l, df_c], ['Dynamic', 'Static', 'Bernier', 'Liu', 'Yavuzturk'], 'balanced', 1) plot_all_methods_runtimefrac_vs_rmse([df_d, df_s, df_m, df_l, df_c], ['Dynamic', 'Static', 'Bernier', 'Liu', 'Yavuzturk'], 'imbalanced', 1) plot_all_methods_runtimefrac_vs_rmse([df_d, df_s, df_m, df_l, df_c], ['Dynamic', 'Static', 'Bernier', 'Liu', 'Yavuzturk'], 'balanced', 5) plot_all_methods_runtimefrac_vs_rmse([df_d, df_s, df_m, df_l, df_c], ['Dynamic', 'Static', 'Bernier', 'Liu', 'Yavuzturk'], 'imbalanced', 5) plot_all_methods_runtimefrac_vs_rmse([df_d, df_s, df_m, df_l, df_c], ['Dynamic', 'Static', 'Bernier', 'Liu', 'Yavuzturk'], 'balanced', 10) plot_all_methods_runtimefrac_vs_rmse([df_d, df_s, df_m, df_l, df_c], ['Dynamic', 'Static', 'Bernier', 'Liu', 'Yavuzturk'], 'imbalanced', 10) def define_pareto(df_in): df = pd.DataFrame.from_records(pareto.eps_sort([list(df_in.itertuples(False))], [4, 5]), columns=list(df_in.columns.values)) df.sort_values(by=['rmse'], inplace=True) return df def plot_pareto_with_data(df, load, year, data_label, ymax=None, ymin=None): mask = (df['sim time'] == year) & (df['load'] == load) df_pareto = define_pareto(df.loc[mask]) fig = plt.figure(figsize=(7, 5), dpi=200) ax = fig.add_subplot(1, 1, 1) x = df['rmse'].loc[mask].values y = df['run time fraction'].loc[mask].values plt.scatter(x, y, label=data_label) plt.plot(df_pareto['rmse'].values, df_pareto['run time fraction'].values, c='r', label='Pareto') plt.xlabel('RMSE MFT [C]') plt.ylabel('Runtime Fraction') plt.title('{} {}'.format(load.title(), year)) if ymax: plt.gca().set_ylim(top=ymax) if ymin: plt.gca().set_ylim(bottom=ymin) plt.legend() plt.grid(True) plt.savefig('{}_{}_pareto.pdf'.format(load, year), bbox_inches='tight') plt.show() plot_pareto_with_data(df_d, 'balanced', 1, 'Dynamic', ymax = 0.035, ymin=0.023) plot_pareto_with_data(df_d, 'imbalanced', 1, 'Dynamic', ymax=0.035, ymin=0.023) plot_pareto_with_data(df_d, 'balanced', 5, 'Dynamic', ymax=0.013, ymin=0.005) 
plot_pareto_with_data(df_d, 'imbalanced', 5, 'Dynamic', ymax=0.013, ymin=0.005) plot_pareto_with_data(df_d, 'balanced', 10, 'Dynamic', ymax=0.008, ymin=0.003) plot_pareto_with_data(df_d, 'imbalanced', 10, 'Dynamic', ymax=0.008, ymin=0.003) df_d.head(1) def get_all_paretos(df, loads, years): df_ret = pd.DataFrame(columns=df.columns) for load in loads: for year in years: mask = (df['load'] == load) & (df['sim time'] == year) df_pareto = define_pareto(df.loc[mask]) df_ret = pd.concat([df_ret, df_pareto]) return df_ret df_all_pareto = get_all_paretos(df_d, ['balanced', 'imbalanced'], [1, 5, 10]) df_all_pareto.head(1) df_all_pareto[['depth', 'end width', 'exp_rate', 'rmse', 'run time', 'run time fraction', 'run time stdev', 'sample count', 'sim time', 'start width']] = df_all_pareto[['depth', 'end width', 'exp_rate', 'rmse', 'run time', 'run time fraction', 'run time stdev', 'sample count', 'sim time', 'start width']].apply(pd.to_numeric) df_all_pareto.hist() def make_hist(series, label, save_name): times = [1, 5, 10] # https://community.modeanalytics.com/gallery/python_histogram/ ax = df_all_pareto.hist(column=series, by='sim time', bins=10, sharex=True, layout=(3, 1), figsize=(7, 5), zorder=2, rwidth=0.9) for i,x in enumerate(ax): # Despine x.spines['right'].set_visible(False) x.spines['top'].set_visible(False) x.spines['left'].set_visible(False) # Switch off ticks x.tick_params(axis="both", which="both", bottom=False, top=False, labelbottom=True, left=False, right=False, labelleft=True) # Draw horizontal axis lines vals = x.get_yticks() for tick in vals: x.axhline(y=tick, linestyle='dashed', alpha=0.4, color='#eeeeee', zorder=1) # Set x-axis label x.set_xlabel(label, labelpad=20, size=14) # Set y-axis label if i == 1: x.set_ylabel("Frequency", labelpad=50, size=14) # Format y-axis label x.yaxis.set_major_formatter(StrMethodFormatter('{x:,g}')) x.tick_params(axis='x', rotation=0) x.set_title('{} Years'.format(times[i])) fig = ax[0].get_figure() fig.savefig(save_name, 
bbox_inches='tight') make_hist('rmse', 'RMSE MFT [C]', 'hist_rmse.pdf') make_hist('start width', '$N_{b,1}$', 'hist_num_first_level.pdf') make_hist('end width', '$N_{b,n}$', 'hist_num_last_level.pdf') def make_scatter_with_color_bar(df, color_name, color_label, title, ymax=None, ymin=None, save_name=None): fig = plt.figure(figsize=(7, 5), dpi=200) ax = fig.add_subplot(1, 1, 1) c = df[color_name].values sc = ax.scatter(df['rmse'].values, df['run time fraction'].values, c=c, cmap='jet', label='Dynamic') cb = plt.colorbar(sc) cb.set_label(color_label) plt.xlabel('RMSE MFT [C]') plt.ylabel('Runtime Fraction') plt.title(title) if ymax: plt.gca().set_ylim(top=ymax) if ymin: plt.gca().set_ylim(bottom=ymin) plt.legend() plt.grid(True) if save_name: plt.savefig('{}.pdf'.format(save_name), bbox_inches='tight') plt.show() df_d['sw-ew'] = df_d['start width'] - df_d['end width'] load = 'balanced' year = 1 mask = (df_d['load'] == load) & (df_d['sim time'] == year) make_scatter_with_color_bar(df_d.loc[mask], 'exp_rate', 'Expansion Rate', '{} {}'.format(load.title(), year), save_name='{}_{}_exp_rate'.format(load, year)) load = 'imbalanced' year = 1 mask = (df_d['load'] == load) & (df_d['sim time'] == year) make_scatter_with_color_bar(df_d.loc[mask], 'exp_rate', 'Expansion Rate', '{} {}'.format(load.title(), year), save_name='{}_{}_exp_rate'.format(load, year)) load = 'balanced' year = 5 mask = (df_d['load'] == load) & (df_d['sim time'] == year) make_scatter_with_color_bar(df_d.loc[mask], 'exp_rate', 'Expansion Rate', '{} {}'.format(load.title(), year), save_name='{}_{}_exp_rate'.format(load, year)) load = 'imbalanced' year = 5 mask = (df_d['load'] == load) & (df_d['sim time'] == year) make_scatter_with_color_bar(df_d.loc[mask], 'exp_rate', 'Expansion Rate', '{} {}'.format(load.title(), year), save_name='{}_{}_exp_rate'.format(load, year)) load = 'balanced' year = 10 mask = (df_d['load'] == load) & (df_d['sim time'] == year) make_scatter_with_color_bar(df_d.loc[mask], 'exp_rate', 
'Expansion Rate', '{} {}'.format(load.title(), year), save_name='{}_{}_exp_rate'.format(load, year)) load = 'imbalanced' year = 10 mask = (df_d['load'] == load) & (df_d['sim time'] == year) make_scatter_with_color_bar(df_d.loc[mask], 'exp_rate', 'Expansion Rate', '{} {}'.format(load.title(), year), save_name='{}_{}_exp_rate'.format(load, year)) df_d.head(2) def make_scatter_with_color_bar_and_markers(df, color_name, color_label, marker_data, marker_names, title, ymax=None, ymin=None, save_name=None): fig = plt.figure(figsize=(7, 5), dpi=200) ax = fig.add_subplot(1, 1, 1) markers = ['X', 'D', 'v', 'h', '+', '*'] for idx, m in enumerate(marker_names): mask = df[marker_data] == marker_names[idx] c = df[color_name].loc[mask].values sc = ax.scatter(df['rmse'].loc[mask].values, df['run time fraction'].loc[mask].values, c=c, cmap='jet', label=str(m), marker=markers[idx]) cb = plt.colorbar(sc) cb.set_label(color_label) plt.xlabel('RMSE MFT [C]') plt.ylabel('Runtime Fraction') plt.title(title) if ymax: plt.gca().set_ylim(top=ymax) if ymin: plt.gca().set_ylim(bottom=ymin) plt.legend() plt.grid(True) if save_name: plt.savefig('{}.pdf'.format(save_name), bbox_inches='tight') plt.show() load = 'balanced' year = 1 mask_1 = (df_d['load'] == load) & (df_d['sim time'] == year) mask_2 = (df_d['exp_rate'] == 1.25) | (df_d['exp_rate'] == 1.50) | (df_d['exp_rate'] == 1.62) | (df_d['exp_rate'] == 1.75) mask = mask_1 & mask_2 make_scatter_with_color_bar(df_d.loc[mask], 'start width', '$N_{b,1}$', '{} {}'.format(load.title(), year), save_name='{}_{}_125-to-175_exp_rate_start_width'.format(load, year)) load = 'imbalanced' year = 1 mask_1 = (df_d['load'] == load) & (df_d['sim time'] == year) mask_2 = (df_d['exp_rate'] == 1.25) | (df_d['exp_rate'] == 1.50) | (df_d['exp_rate'] == 1.62) | (df_d['exp_rate'] == 1.75) mask = mask_1 & mask_2 make_scatter_with_color_bar(df_d.loc[mask], 'start width', '$N_{b,1}$', '{} {}'.format(load.title(), year), 
save_name='{}_{}_125-to-175_exp_rate_start_width'.format(load, year)) load = 'balanced' year = 5 mask_1 = (df_d['load'] == load) & (df_d['sim time'] == year) mask_2 = (df_d['exp_rate'] == 1.25) | (df_d['exp_rate'] == 1.50) | (df_d['exp_rate'] == 1.62) | (df_d['exp_rate'] == 1.75) mask = mask_1 & mask_2 make_scatter_with_color_bar(df_d.loc[mask], 'start width', '$N_{b,1}$', '{} {}'.format(load.title(), year), save_name='{}_{}_125-to-175_exp_rate_start_width'.format(load, year)) load = 'imbalanced' year = 5 mask_1 = (df_d['load'] == load) & (df_d['sim time'] == year) mask_2 = (df_d['exp_rate'] == 1.25) | (df_d['exp_rate'] == 1.50) | (df_d['exp_rate'] == 1.62) | (df_d['exp_rate'] == 1.75) mask = mask_1 & mask_2 make_scatter_with_color_bar(df_d.loc[mask], 'start width', '$N_{b,1}$', '{} {}'.format(load.title(), year), save_name='{}_{}_125-to-175_exp_rate_start_width'.format(load, year)) load = 'imbalanced' year = 5 mask_1 = (df_d['load'] == load) & (df_d['sim time'] == year) mask_2 = (df_d['exp_rate'] == 1.25) | (df_d['exp_rate'] == 1.50) | (df_d['exp_rate'] == 1.62) | (df_d['exp_rate'] == 1.75) mask = mask_1 & mask_2 make_scatter_with_color_bar(df_d.loc[mask], 'start width', '$N_{b,1}$', '{} {}'.format(load.title(), year), save_name='{}_{}_125-to-175_exp_rate_start_width'.format(load, year)) load = 'balanced' year = 10 mask_1 = (df_d['load'] == load) & (df_d['sim time'] == year) mask_2 = (df_d['exp_rate'] == 1.25) | (df_d['exp_rate'] == 1.50) | (df_d['exp_rate'] == 1.62) | (df_d['exp_rate'] == 1.75) mask = mask_1 & mask_2 make_scatter_with_color_bar(df_d.loc[mask], 'start width', '$N_{b,1}$', '{} {}'.format(load.title(), year), save_name='{}_{}_125-to-175_exp_rate_start_width'.format(load, year)) load = 'imbalanced' year = 10 mask_1 = (df_d['load'] == load) & (df_d['sim time'] == year) mask_2 = (df_d['exp_rate'] == 1.25) | (df_d['exp_rate'] == 1.50) | (df_d['exp_rate'] == 1.62) | (df_d['exp_rate'] == 1.75) mask = mask_1 & mask_2 
make_scatter_with_color_bar(df_d.loc[mask], 'start width', '$N_{b,1}$', '{} {}'.format(load.title(), year), save_name='{}_{}_125-to-175_exp_rate_start_width'.format(load, year)) load = 'balanced' year = 1 mask_1 = (df_d['load'] == load) & (df_d['sim time'] == year) & (df_d['start width'] == df_d['end width']) mask_2 = (df_d['exp_rate'] == 1.25) | (df_d['exp_rate'] == 1.50) | (df_d['exp_rate'] == 1.62) | (df_d['exp_rate'] == 1.75) mask = mask_1 & mask_2 make_scatter_with_color_bar(df_d.loc[mask], 'start width', '$N_{b}$', '{} {}'.format(load.title(), year), save_name='{}_{}_125-to-175_exp_rate_uniform_start_width_end_width'.format(load, year)) load = 'imbalanced' year = 1 mask_1 = (df_d['load'] == load) & (df_d['sim time'] == year) & (df_d['start width'] == df_d['end width']) mask_2 = (df_d['exp_rate'] == 1.25) | (df_d['exp_rate'] == 1.50) | (df_d['exp_rate'] == 1.62) | (df_d['exp_rate'] == 1.75) mask = mask_1 & mask_2 make_scatter_with_color_bar(df_d.loc[mask], 'start width', '$N_{b}$', '{} {}'.format(load.title(), year), save_name='{}_{}_125-to-175_exp_rate_uniform_start_width_end_width'.format(load, year)) load = 'balanced' year = 5 mask_1 = (df_d['load'] == load) & (df_d['sim time'] == year) & (df_d['start width'] == df_d['end width']) mask_2 = (df_d['exp_rate'] == 1.25) | (df_d['exp_rate'] == 1.50) | (df_d['exp_rate'] == 1.62) | (df_d['exp_rate'] == 1.75) mask = mask_1 & mask_2 make_scatter_with_color_bar(df_d.loc[mask], 'start width', '$N_{b}$', '{} {}'.format(load.title(), year), save_name='{}_{}_125-to-175_exp_rate_uniform_start_width_end_width'.format(load, year)) load = 'imbalanced' year = 5 mask_1 = (df_d['load'] == load) & (df_d['sim time'] == year) & (df_d['start width'] == df_d['end width']) mask_2 = (df_d['exp_rate'] == 1.25) | (df_d['exp_rate'] == 1.50) | (df_d['exp_rate'] == 1.62) | (df_d['exp_rate'] == 1.75) mask = mask_1 & mask_2 make_scatter_with_color_bar(df_d.loc[mask], 'start width', '$N_{b}$', '{} {}'.format(load.title(), year), 
save_name='{}_{}_125-to-175_exp_rate_uniform_start_width_end_width'.format(load, year)) load = 'balanced' year = 10 mask_1 = (df_d['load'] == load) & (df_d['sim time'] == year) & (df_d['start width'] == df_d['end width']) mask_2 = (df_d['exp_rate'] == 1.25) | (df_d['exp_rate'] == 1.50) | (df_d['exp_rate'] == 1.62) | (df_d['exp_rate'] == 1.75) mask = mask_1 & mask_2 make_scatter_with_color_bar(df_d.loc[mask], 'start width', '$N_{b}$', '{} {}'.format(load.title(), year), save_name='{}_{}_125-to-175_exp_rate_uniform_start_width_end_width'.format(load, year)) load = 'imbalanced' year = 10 mask_1 = (df_d['load'] == load) & (df_d['sim time'] == year) & (df_d['start width'] == df_d['end width']) mask_2 = (df_d['exp_rate'] == 1.25) | (df_d['exp_rate'] == 1.50) | (df_d['exp_rate'] == 1.62) | (df_d['exp_rate'] == 1.75) mask = mask_1 & mask_2 make_scatter_with_color_bar(df_d.loc[mask], 'start width', '$N_{b}$', '{} {}'.format(load.title(), year), save_name='{}_{}_125-to-175_exp_rate_uniform_start_width_end_width'.format(load, year)) load = 'balanced' year = 1 mask_1 = (df_d['load'] == load) & (df_d['sim time'] == year) & (df_d['start width'] == 10) mask_2 = (df_d['exp_rate'] == 1.25) | (df_d['exp_rate'] == 1.50) | (df_d['exp_rate'] == 1.62) | (df_d['exp_rate'] == 1.75) mask = mask_1 & mask_2 make_scatter_with_color_bar(df_d.loc[mask], 'end width', '$N_{b,n}$', '{} {}'.format(load.title(), year), save_name='{}_{}_125-to-175_exp_rate_end_width'.format(load, year)) load = 'imbalanced' year = 1 mask_1 = (df_d['load'] == load) & (df_d['sim time'] == year) & (df_d['start width'] == 10) mask_2 = (df_d['exp_rate'] == 1.25) | (df_d['exp_rate'] == 1.50) | (df_d['exp_rate'] == 1.62) | (df_d['exp_rate'] == 1.75) mask = mask_1 & mask_2 make_scatter_with_color_bar(df_d.loc[mask], 'end width', '$N_{b,n}$', '{} {}'.format(load.title(), year), save_name='{}_{}_125-to-175_exp_rate_end_width'.format(load, year)) load = 'balanced' year = 5 mask_1 = (df_d['load'] == load) & (df_d['sim time'] == 
year) & (df_d['start width'] == 10) mask_2 = (df_d['exp_rate'] == 1.25) | (df_d['exp_rate'] == 1.50) | (df_d['exp_rate'] == 1.62) | (df_d['exp_rate'] == 1.75) mask = mask_1 & mask_2 make_scatter_with_color_bar(df_d.loc[mask], 'end width', '$N_{b,n}$', '{} {}'.format(load.title(), year), save_name='{}_{}_125-to-175_exp_rate_end_width'.format(load, year)) load = 'imbalanced' year = 5 mask_1 = (df_d['load'] == load) & (df_d['sim time'] == year) & (df_d['start width'] == 10) mask_2 = (df_d['exp_rate'] == 1.25) | (df_d['exp_rate'] == 1.50) | (df_d['exp_rate'] == 1.62) | (df_d['exp_rate'] == 1.75) mask = mask_1 & mask_2 make_scatter_with_color_bar(df_d.loc[mask], 'end width', '$N_{b,n}$', '{} {}'.format(load.title(), year), save_name='{}_{}_125-to-175_exp_rate_end_width'.format(load, year)) load = 'balanced' year = 10 mask_1 = (df_d['load'] == load) & (df_d['sim time'] == year) & (df_d['start width'] == 10) mask_2 = (df_d['exp_rate'] == 1.25) | (df_d['exp_rate'] == 1.50) | (df_d['exp_rate'] == 1.62) | (df_d['exp_rate'] == 1.75) mask = mask_1 & mask_2 make_scatter_with_color_bar(df_d.loc[mask], 'end width', '$N_{b,n}$', '{} {}'.format(load.title(), year), save_name='{}_{}_125-to-175_exp_rate_end_width'.format(load, year)) load = 'imbalanced' year = 10 mask_1 = (df_d['load'] == load) & (df_d['sim time'] == year) & (df_d['start width'] == 10) mask_2 = (df_d['exp_rate'] == 1.25) | (df_d['exp_rate'] == 1.50) | (df_d['exp_rate'] == 1.62) | (df_d['exp_rate'] == 1.75) mask = mask_1 & mask_2 make_scatter_with_color_bar(df_d.loc[mask], 'end width', '$N_{b,n}$', '{} {}'.format(load.title(), year), save_name='{}_{}_125-to-175_exp_rate_end_width'.format(load, year)) make_fig(pareto_i1, exp_rates, 'exp_rate', '1-year Imbalanced') make_fig(pareto_i2, exp_rates, 'exp_rate', '2-year Imbalanced') make_fig(pareto_i3, exp_rates, 'exp_rate', '3-year Imbalanced') make_fig(pareto_i4, exp_rates, 'exp_rate', '4-year Imbalanced') make_fig(pareto_i5, exp_rates, 'exp_rate', '5-year Imbalanced') 
make_fig(pareto_i6, exp_rates, 'exp_rate', '6-year Imbalanced') def make_fig_with_annotation(df_in, mask_series, mask_col_name, annotate_col_name, title=None): fig = plt.figure() ax = fig.add_subplot(1, 1, 1) for idx, mask in enumerate(reversed(mask_series)): s = df_in[mask_col_name] == float(mask) x = df_in.loc[s]['rmse'] y = df_in.loc[s]['run time'] m = markers[idx] ax.scatter(x, y, marker=m, label=mask, s=60) for i, txt in enumerate(df_in.loc[s][annotate_col_name].values): ax.annotate(txt, (x.values[i], y.values[i])) if title: plt.title(title) plt.legend() plt.show() exp_rate_mask = df['exp_rate'] == 1.75 start_widths = range(1, 6) make_fig_with_annotation(df.loc[m_b1 & exp_rate_mask], start_widths, 'start width', 'end width', '1-year Balanced') make_fig_with_annotation(df.loc[m_b2 & exp_rate_mask], start_widths, 'start width', 'end width', '2-year Balanced') make_fig_with_annotation(df.loc[m_b3 & exp_rate_mask], start_widths, 'start width', 'end width', '3-year Balanced') make_fig_with_annotation(df.loc[m_b4 & exp_rate_mask], start_widths, 'start width', 'end width', '4-year Balanced') make_fig_with_annotation(df.loc[m_b5 & exp_rate_mask], start_widths, 'start width', 'end width', '5-year Balanced') make_fig_with_annotation(df.loc[m_b6 & exp_rate_mask], start_widths, 'start width', 'end width', '6-year Balanced') make_fig_with_annotation(df.loc[m_i1 & exp_rate_mask], start_widths, 'start width', 'end width', '1-year Imbalanced') make_fig_with_annotation(df.loc[m_i2 & exp_rate_mask], start_widths, 'start width', 'end width', '2-year Imbalanced') make_fig_with_annotation(df.loc[m_i3 & exp_rate_mask], start_widths, 'start width', 'end width', '3-year Imbalanced') make_fig_with_annotation(df.loc[m_i4 & exp_rate_mask], start_widths, 'start width', 'end width', '4-year Imbalanced') make_fig_with_annotation(df.loc[m_i5 & exp_rate_mask], start_widths, 'start width', 'end width', '5-year Imbalanced') make_fig_with_annotation(df.loc[m_i6 & exp_rate_mask], start_widths, 
'start width', 'end width', '6-year Imbalanced') def make_some_plot(*args): fig = plt.figure() ax = fig.add_subplot(1, 1, 1) args = args[0] for s in args: print(args) try: ax.plot(s['x'], s['y'], label=s['label']) except KeyError: ax.plot(s['x'], s['y']) plt.grid() plt.legend() plt.show() make_some_plot([a, b, c]) ```
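The `pareto.eps_sort` call in `define_pareto` above selects the non-dominated rows over the two objective columns (RMSE and runtime fraction). A minimal pure-NumPy sketch of the same dominance idea, with toy points and no epsilon tolerance, looks like this:

```python
# Toy (rmse, runtime-fraction) points; both objectives are minimized.
import numpy as np

points = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])

def pareto_front(pts):
    """Keep each point that no other point dominates
    (<= in every column and < in at least one)."""
    keep = []
    for i, p in enumerate(pts):
        dominated = any((q <= p).all() and (q < p).any()
                        for j, q in enumerate(pts) if j != i)
        if not dominated:
            keep.append(p)
    return np.array(keep)

front = pareto_front(points)   # [3.0, 4.0] is dominated by [2.0, 3.0]
```

This brute-force version is O(n²) but fine for the few hundred parameter combinations plotted here; `pareto.eps_sort` additionally supports epsilon tolerances for coarsening the front.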
``` flex_subtitle = "built using jupyter-flex" flex_external_link = "https://github.com/danielfrg/jupyter-flex/blob/master/examples/plots/altair.ipynb" flex_title = "Altair plots" flex_show_source = True ``` # Simple charts ### Simple Scatter Plot with Tooltips ``` import numpy as np import pandas as pd import altair as alt from vega_datasets import data alt.renderers.set_embed_options(actions=False) np.random.seed(42) source = data.cars() plot = alt.Chart(source).mark_circle(size=60).encode( x='Horsepower', y='Miles_per_Gallon', color='Origin', tooltip=['Name', 'Origin', 'Horsepower', 'Miles_per_Gallon'] ) plot plot.properties( width='container', height='container' ).interactive() ``` ## Col 2 ### Simple bar chart ``` source = pd.DataFrame({ 'a': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I'], 'b': [28, 55, 43, 91, 81, 53, 19, 87, 52] }) plot = alt.Chart(source).mark_bar().encode( x='a', y='b' ) plot plot.properties( width='container', height='container' ) ``` ### Simple Heatmap ``` # Compute x^2 + y^2 across a 2D grid x, y = np.meshgrid(range(-5, 5), range(-5, 5)) z = x ** 2 + y ** 2 # Convert this grid to columnar data expected by Altair source = pd.DataFrame({'x': x.ravel(), 'y': y.ravel(), 'z': z.ravel()}) plot = alt.Chart(source).mark_rect().encode( x='x:O', y='y:O', color='z:Q' ) plot plot.properties( width='container', height='container' ) ``` # Bar Charts ### Bar Chart with Negative Values ``` source = data.us_employment() alt.Chart(source).mark_bar().encode( x="month:T", y="nonfarm_change:Q", color=alt.condition( alt.datum.nonfarm_change > 0, alt.value("steelblue"), # The positive color alt.value("orange") # The negative color ) ).properties( width="container", height="container" ) ``` ### Horizontal Bar Chart ``` source = data.wheat() alt.Chart(source).mark_bar().encode( x='wheat:Q', y="year:O" ).properties( width="container", height="container" ) ``` ## Col 2 ### Stacked Bar Chart ``` source = data.barley() alt.Chart(source).mark_bar().encode( 
x='variety', y='sum(yield)', color='site' ).properties( width="container", height="container" ) ``` # Line and Area Charts ### Filled Step Chart ``` source = data.stocks() alt.Chart(source).mark_area( color="lightblue", interpolate='step-after', line=True ).encode( x='date', y='price' ).transform_filter(alt.datum.symbol == 'GOOG').properties( width="container", height="container" ) ``` ### Multi Series Line Chart ``` source = data.stocks() alt.Chart(source).mark_line().encode( x='date', y='price', color='symbol' ).properties( width="container", height="container" ) ``` ## Col 2 ### Cumulative Count Chart ``` source = data.movies.url alt.Chart(source).transform_window( cumulative_count="count()", sort=[{"field": "IMDB_Rating"}], ).mark_area().encode( x="IMDB_Rating:Q", y="cumulative_count:Q" ).properties( width="container", height="container" ) ``` ### Stacked Density Estimates ``` source = data.iris() alt.Chart(source).transform_fold( ['petalWidth', 'petalLength', 'sepalWidth', 'sepalLength'], as_ = ['Measurement_type', 'value'] ).transform_density( density='value', bandwidth=0.3, groupby=['Measurement_type'], extent= [0, 8], counts = True, steps=200 ).mark_area().encode( alt.X('value:Q'), alt.Y('density:Q', stack='zero'), alt.Color('Measurement_type:N') ).properties( width="container", height="container" ) ``` # Scatter and Maps ### Binned Scatterplot ``` source source = data.movies.url alt.Chart(source).mark_circle().encode( alt.X('IMDB_Rating:Q', bin=True), alt.Y('Rotten_Tomatoes_Rating:Q', bin=True), size='count()' ) ``` ### Multifeature Scatter Plot ``` source = data.iris() alt.Chart(source).mark_circle().encode( alt.X('sepalLength', scale=alt.Scale(zero=False)), alt.Y('sepalWidth', scale=alt.Scale(zero=False, padding=1)), color='species', size='petalWidth' ).properties( width="container", height="container" ) ``` ## Col 2 ### Choropleth Map ``` from vega_datasets import data counties = alt.topo_feature(data.us_10m.url, 'counties') source = 
data.unemployment.url alt.Chart(counties).mark_geoshape().encode( color='rate:Q' ).transform_lookup( lookup='id', from_=alt.LookupData(source, 'id', ['rate']) ).project( type='albersUsa' ).properties( width="container", height="container" ) ``` ### Layered Histogram ``` # Generating Data source = pd.DataFrame({ 'Trial A': np.random.normal(0, 0.8, 1000), 'Trial B': np.random.normal(-2, 1, 1000), 'Trial C': np.random.normal(3, 2, 1000) }) alt.Chart(source).transform_fold( ['Trial A', 'Trial B', 'Trial C'], as_=['Experiment', 'Measurement'] ).mark_area( opacity=0.3, interpolate='step' ).encode( alt.X('Measurement:Q', bin=alt.Bin(maxbins=100)), alt.Y('count()', stack=None), alt.Color('Experiment:N') ).properties( width="container", height="container" ) ``` # Scatter Matrix ``` source = data.cars() alt.Chart(source).mark_circle().encode( alt.X(alt.repeat("column"), type='quantitative'), alt.Y(alt.repeat("row"), type='quantitative'), color='Origin:N' ).repeat( row=['Horsepower', 'Acceleration', 'Miles_per_Gallon'], column=['Miles_per_Gallon', 'Acceleration', 'Horsepower'] ).interactive() ``` # Faceted Density Estimates ``` source = data.iris() alt.Chart(source).transform_fold( ['petalWidth', 'petalLength', 'sepalWidth', 'sepalLength'], as_ = ['Measurement_type', 'value'] ).transform_density( density='value', bandwidth=0.3, groupby=['Measurement_type'], extent= [0, 8] ).mark_area().encode( alt.X('value:Q'), alt.Y('density:Q'), alt.Row('Measurement_type:N') ).properties(width=600, height=180) ``` # Interactive ### Interactive Crossfilter ``` source = alt.UrlData( data.flights_2k.url, format={'parse': {'date': 'date'}} ) brush = alt.selection(type='interval', encodings=['x']) # Define the base chart, with the common parts of the # background and highlights base = alt.Chart().mark_bar().encode( x=alt.X(alt.repeat('column'), type='quantitative', bin=alt.Bin(maxbins=20)), y='count()' ).properties( width=200, height=300 ) # gray background with selection background = 
base.encode( color=alt.value('#ddd') ).add_selection(brush) # blue highlights on the transformed data highlight = base.transform_filter(brush) # layer the two charts & repeat alt.layer( background, highlight, data=source ).transform_calculate( "time", "hours(datum.date)" ).repeat(column=["distance", "delay", "time"]) ``` ### Scatter Plot and Histogram with Interval Selection ``` x = np.random.normal(size=100) y = np.random.normal(size=100) m = np.random.normal(15, 1, size=100) source = pd.DataFrame({"x": x, "y":y, "m":m}) # interval selection in the scatter plot pts = alt.selection(type="interval", encodings=["x"]) # left panel: scatter plot points = alt.Chart().mark_point(filled=True, color="black").encode( x='x', y='y' ).transform_filter( pts ).properties( width=300, height=300 ) # right panel: histogram mag = alt.Chart().mark_bar().encode( x='mbin:N', y="count()", color=alt.condition(pts, alt.value("black"), alt.value("lightgray")) ).properties( width=300, height=300 ).add_selection(pts) # build the chart: alt.hconcat( points, mag, data=source ).transform_bin( "mbin", field="m", bin=alt.Bin(maxbins=20) ) ``` ## Col 2 ### Interactive average ``` source = data.seattle_weather() brush = alt.selection(type='interval', encodings=['x']) bars = alt.Chart().mark_bar().encode( x='month(date):O', y='mean(precipitation):Q', opacity=alt.condition(brush, alt.OpacityValue(1), alt.OpacityValue(0.7)), ).add_selection( brush ).properties(width=700, height=300) line = alt.Chart().mark_rule(color='firebrick').encode( y='mean(precipitation):Q', size=alt.SizeValue(3) ).transform_filter( brush ).properties(width=700, height=300) alt.layer(bars, line, data=source) ``` ### Interactive Legend ``` source = data.unemployment_across_industries.url selection = alt.selection_multi(fields=['series'], bind='legend') alt.Chart(source).mark_area().encode( alt.X('yearmonth(date):T', axis=alt.Axis(domain=False, format='%Y', tickSize=0)), alt.Y('sum(count):Q', stack='center', axis=None), 
alt.Color('series:N', scale=alt.Scale(scheme='category20b')), opacity=alt.condition(selection, alt.value(1), alt.value(0.2)) ).properties( width="container", height="container" ).add_selection( selection ) ```
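The grid-to-columnar conversion used by the Simple Heatmap panel can be checked on its own, independently of Altair. This sketch repeats the `meshgrid`/`ravel` step from that cell:

```python
import numpy as np
import pandas as pd

# Compute z = x**2 + y**2 across a 2D grid, as in the heatmap example.
x, y = np.meshgrid(range(-5, 5), range(-5, 5))
z = x ** 2 + y ** 2

# Altair expects long-form ("columnar") data: one row per grid cell,
# so the 10x10 grid becomes 100 (x, y, z) rows.
source = pd.DataFrame({'x': x.ravel(), 'y': y.ravel(), 'z': z.ravel()})
print(source.shape)
```

Each `mark_rect` in the chart then corresponds to exactly one row of `source`.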
<a href="https://qworld.net" target="_blank" align="left"><img src="../qworld/images/header.jpg" align="left"></a> $ \newcommand{\bra}[1]{\langle #1|} $ $ \newcommand{\ket}[1]{|#1\rangle} $ $ \newcommand{\braket}[2]{\langle #1|#2\rangle} $ $ \newcommand{\dot}[2]{ #1 \cdot #2} $ $ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $ $ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $ $ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $ $ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $ $ \newcommand{\mypar}[1]{\left( #1 \right)} $ $ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $ $ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $ $ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $ $ \newcommand{\onehalf}{\frac{1}{2}} $ $ \newcommand{\donehalf}{\dfrac{1}{2}} $ $ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $ $ \newcommand{\vzero}{\myvector{1\\0}} $ $ \newcommand{\vone}{\myvector{0\\1}} $ $ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $ $ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $ $ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $ $ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $ $ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $ $ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $ $ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $ $ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $ $ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ $ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $ $ \newcommand{\greenbit}[1] {\mathbf{{\color{green}#1}}} $ $ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}#1}}} $ $ \newcommand{\redbit}[1] {\mathbf{{\color{red}#1}}} $ 
$ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}#1}}} $ $ \newcommand{\blackbit}[1] {\mathbf{{\color{black}#1}}} $ <font style="font-size:28px;" align="left"><b> Quantum Coin Flipping </b></font> <br> _prepared by Abuzer Yakaryilmaz_ <br><br> [<img src="../qworld/images/watch_lecture.jpg" align="left">](https://youtu.be/ZfMYKIbuXVw) <br><br> $ \newcommand{\bra}[1]{\langle #1|} $ $ \newcommand{\ket}[1]{|#1\rangle} $ $ \newcommand{\braket}[2]{\langle #1|#2\rangle} $ $ \newcommand{\dot}[2]{ #1 \cdot #2} $ $ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $ $ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $ $ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $ $ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $ $ \newcommand{\mypar}[1]{\left( #1 \right)} $ $ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $ $ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $ $ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $ $ \newcommand{\onehalf}{\frac{1}{2}} $ $ \newcommand{\donehalf}{\dfrac{1}{2}} $ $ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $ $ \newcommand{\vzero}{\myvector{1\\0}} $ $ \newcommand{\vone}{\myvector{0\\1}} $ $ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $ $ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $ $ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $ $ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $ $ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $ $ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $ $ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $ $ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ We explain a series of experiments and try to understand basic 
behaviors of "particles". <h3> The first experiment</h3> We will trace the behavior of a photon. For quantum coin-flipping, we use a beam splitter. For measurements, we use two photon detectors. <ul> <li> The photon is our coin. </li> <li> The beam splitter flips the photon. </li> <li> The photon detectors are our eyes. </li> </ul> <h4> The setup </h4> We send photons to a beam splitter as shown below. We expect two behaviors: the beam splitter either transmits or reflects the photon. <img src="images/photon1.jpg" width="50%"> <hr> <center><font style="color:blue;"> You may do these experiments by using an open-source interactive tool <a href="https://quantumgame.io/level/0" target="_blank">quantumgame</a> (requires an internet connection). </font></center> <hr> <h4> Experimental results </h4> After many experiments, we observe the photons in each photon detector almost evenly ($ \approx \% 50 $ and $ \approx \% 50 $). <img src="images/photon2.jpg" width="50%"> <h4> The first interpretation </h4> So, a beam splitter behaves like a fair coin. <ul> <li> Head (state 0): Transmitted </li> <li> Tail (state 1): Reflected </li> </ul> <h4> Modeling </h4> We describe our first experiment by a single (probabilistic) bit. We start in state 0. With half probability, the photon transmits, and the state does not change. With half probability, the photon is reflected, and the state is flipped. <img src="images/photon3.jpg" width="50%"> <h3> The second experiment </h3> We extend our experiment with two mirrors and another beam splitter. Then, we try to validate our <u>interpretation</u> and <u>model</u>. <img src="images/photon4.jpg" width="60%"> In this setup, we have three photon detectors. By using our model described above, we expect to observe a photon <ul> <li> in $ A $ with probability $ 0.5 $, </li> <li> and in $ B1 $ and $ B2 $ with probabilities $ 0.25 $. 
</li> </ul> Thus, our predictions for the frequencies of observing the photons in $ A $, $ B1 $, and $ B2 $ are respectively $$ \approx \% 50, \approx \% 25, \mbox{ and } \approx \% 25. $$ <h4> Experimental results </h4> Experiments confirm our predictions. Our model explains the second experiment. <img src="images/photon5.jpg" width="65%"> <h3> The third experiment </h3> In the third experiment, we remove the photon detector $ A $. So we have only the detectors $ B1 $ and $ B2 $. <img src="images/photon6.jpg" width="65%"> <h4> Our prediction </h4> The third setup is similar to flipping a fair coin twice. Our prediction is to observe the photons in $ B1 $ and $ B2 $ almost evenly ($ \approx \% 50 $ and $ \approx \% 50 $). <h4>Math for our prediction</h4> 0) At the initial step, we are in state $ 0 $. If we use our vector representation, it is $$ v_0 = \myvector{1 \\ 0}. $$ 1) We flip a fair coin. The new probabilistic state is expected to be in both states ($0$ and $1$) with half probability ($ \frac{1}{2} = 0.5 $). $$ v_1 = \myvector{\frac{1}{2} \\ \frac{1}{2}} = \mymatrix{cc}{ \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} } \myvector{1 \\ 0}. $$ Here the transitions of a fair coin can be represented by the matrix (table): $ \mymatrix{cc}{ \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} } $ . 2) Then, we flip a fair coin again. The new probabilistic state will be the same: $$ v_2 = \myvector{\frac{1}{2} \\ \frac{1}{2}} = \mymatrix{cc}{ \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} } \myvector{\frac{1}{2} \\ \frac{1}{2}}. $$ <b><i> Our prediction is explained by a mathematical calculation. </i></b> <img src="images/prediction1.jpg" width="50%"> <h4> Experimental results </h4> <b style="color:red;">However, the experiment results do not confirm our prediction.</b> <img src="images/photon7.jpg" width="65%"> We observe the photons <b>only</b> in the detector $ B1 $, and we <b>never</b> observe any photon in the detector $ B2 $. 
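The two-step fair-coin calculation above can be reproduced numerically; this is a minimal sketch (the names `coin`, `v0` are ours, not from the lecture):

```python
import numpy as np

# Transition matrix of a fair coin:
# each state moves to 0 or 1 with probability 1/2.
coin = np.array([[0.5, 0.5],
                 [0.5, 0.5]])

v0 = np.array([1.0, 0.0])   # we start in state 0
v1 = coin @ v0              # after the first beam splitter
v2 = coin @ v1              # after the second beam splitter

print(v2)
```

Matrix-vector multiplication gives $v_1 = v_2 = (1/2, 1/2)$, i.e. the classical 50/50 prediction that the third experiment contradicts.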
<b> How could this be possible?</b> We may conclude that classical (Newtonian) mechanics fails to explain the behavior of particles. We need a new (mathematical) model. We can explain our experiments by using <u>quantum mechanics</u>.
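As a numerical sketch of the quantum explanation (using the Hadamard operator defined in the preamble as the beam-splitter model, which is a common choice rather than something derived here), the state evolves by amplitudes instead of probabilities, and the two splitters cancel:

```python
import numpy as np

# Hadamard operator: the quantum model of a 50/50 beam splitter.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)

state = np.array([1.0, 0.0])   # photon starts in state 0
state = H @ (H @ state)        # two beam splitters in sequence

# Measurement probabilities are the squared amplitudes.
probs = state ** 2
print(probs)
```

Because $H \cdot H = I$, all amplitude returns to state 0: the photon is always observed in $B1$ and never in $B2$, matching the experiment.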
``` import numpy as np import pandas as pd import os as os import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap from sklearn import neighbors, datasets from sklearn.metrics import confusion_matrix %matplotlib inline n_neighbors = 1 data = pd.read_csv(os.path.join('..','Data','CrowdstormingDataJuly1st.csv')) data = data.dropna() X = data[['height','weight','goals','yellowCards','yellowReds','redCards']].values y = data[['position']].values posd = pd.Series(data.position,dtype="category") posd.unique() y = posd.cat.rename_categories(range(0,12)) y = y.to_frame() y = y.values indices = np.random.permutation(data.shape[0]) training_idx, test_idx = indices[:int(round(.8*len(indices)))],indices[int(round(.8*len(indices))):] training_X = X[training_idx,:] training_y = y[training_idx].reshape(len(training_idx),) test_X = X[test_idx,:] test_y = y[test_idx].reshape(len(test_idx),) data.keys() weights = 'uniform' clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights) clf.fit(training_X, training_y) predicted = clf.predict(test_X) np.mean(predicted == test_y) weights = 'distance' clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights) clf.fit(training_X, training_y) predicted = clf.predict(test_X) np.mean(predicted == test_y) # Redo in 2 dimensions X = data[['height','weight']].values y = data[['position']].values posd = pd.Series(data.position,dtype="category") posd.unique() y = posd.cat.rename_categories(range(0,12)) y = y.to_frame() y = y.values indices = np.random.permutation(data.shape[0]) training_idx, test_idx = indices[:int(round(.8*len(indices)))],indices[int(round(.8*len(indices))):] training_X = X[training_idx,:] training_y = y[training_idx].reshape(len(training_idx),) test_X = X[test_idx,:] test_y = y[test_idx].reshape(len(test_idx),) weights = 'uniform' weights = 'distance' clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights) clf.fit(training_X, training_y) # Plot the decision boundary. 
For that, we will assign a color to each # point in the mesh [x_min, m_max]x[y_min, y_max]. X = test_X[:,0:2] y = test_y h = .02 x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) # Create color maps cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF','#FFAAAA', '#ff0000', '#ff8000','#ffff00', '#40ff00', '#00ffff','#bf00ff', '#ff0080', '#ff0000']) cmap_bold = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF','#FFAAAA', '#ff0000', '#ff8000','#ffff00', '#40ff00', '#00ffff','#bf00ff', '#ff0080', '#ff0000']) # Put the result into a color plot plt.figure() plt.pcolormesh(xx, yy, Z, cmap=cmap_light) # Plot also the training points plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold) plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.title("12-Class classification (k = %i, weights = '%s')" % (n_neighbors, weights)) plt.show() predicted = clf.predict(test_X) np.mean(predicted == test_y) def plot_confusion_matrix(cm, names, title='Confusion matrix', cmap=plt.cm.Blues): plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(names)) plt.xticks(tick_marks, names, rotation=45) plt.yticks(tick_marks, names) plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') # Compute confusion matrix cm = confusion_matrix(test_y, predicted) np.set_printoptions(precision=2) print('Confusion matrix, without normalization') print(cm) names = posd.unique() plt.figure(figsize=(12,12)) plot_confusion_matrix(cm,names,title = 'Confusion matrix') # Normalize the confusion matrix by row (i.e by the number of samples # in each class) cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print('Normalized confusion matrix') print(cm_normalized) plt.figure(figsize=(12,12)) 
plot_confusion_matrix(cm_normalized,names, title='Normalized confusion matrix') plt.show() ``` ## Other classification models Linear SVM ``` from sklearn.multiclass import OneVsRestClassifier from sklearn.svm import LinearSVC # Linear SVM clf = OneVsRestClassifier(LinearSVC(random_state=0)) clf.fit(training_X, training_y) predicted = clf.predict(test_X) np.mean(predicted == test_y) # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, m_max]x[y_min, y_max]. X = test_X[:,0:2] y = test_y h = .02 x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) # Create color maps cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF','#FFAAAA', '#ff0000', '#ff8000','#ffff00', '#40ff00', '#00ffff','#bf00ff', '#ff0080', '#ff0000']) cmap_bold = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF','#FFAAAA', '#ff0000', '#ff8000','#ffff00', '#40ff00', '#00ffff','#bf00ff', '#ff0080', '#ff0000']) # Put the result into a color plot plt.figure() plt.pcolormesh(xx, yy, Z, cmap=cmap_light) # Plot also the training points plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold) plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.title("3-Class classification (k = %i, weights = '%s')" % (n_neighbors, weights)) plt.show() # Naive Bayesian classifier from sklearn.naive_bayes import GaussianNB clf = GaussianNB() clf.fit(training_X, training_y) # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, m_max]x[y_min, y_max]. 
X = test_X[:,0:2] y = test_y h = .02 x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) # Create color maps cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF','#FFAAAA', '#ff0000', '#ff8000','#ffff00', '#40ff00', '#00ffff','#bf00ff', '#ff0080', '#ff0000']) cmap_bold = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF','#FFAAAA', '#ff0000', '#ff8000','#ffff00', '#40ff00', '#00ffff','#bf00ff', '#ff0080', '#ff0000']) # Put the result into a color plot plt.figure() plt.pcolormesh(xx, yy, Z, cmap=cmap_light) # Plot also the training points plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold) plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.title("3-Class classification (k = %i, weights = '%s')" % (n_neighbors, weights)) plt.show() ```
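The row normalization applied to the confusion matrix above can be illustrated with a small made-up matrix, independent of the classifier:

```python
import numpy as np

# A made-up 3-class confusion matrix:
# rows are true labels, columns are predicted labels.
cm = np.array([[8, 1, 1],
               [2, 6, 2],
               [0, 5, 5]])

# Normalize each row by the number of true samples in that class,
# exactly as done in the notebook above.
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print(cm_normalized)
```

After normalization each row sums to 1, so the diagonal entries read directly as per-class recall.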
``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import tensorflow as tf import tensorlayer as tl import torch.nn as nn from SubpixelConv2D import SubpixelConv2d from Subpixel_conv2D import SubpixelConv2D def get_gen(input_shape): w_init = tf.random_normal_initializer(stddev=0.02) g_init = tf.random_normal_initializer(1., 0.02) nin = tf.keras.layers.Input(input_shape) n = tf.keras.layers.Conv2D(64, (3,3), (1,1), padding='same', kernel_initializer=w_init,activation='relu')(nin) temp=n # B residual blocks for i in range(16): nn = tf.keras.layers.Conv2D(64, (3,3), (1,1), padding='same', kernel_initializer=w_init)(n) nn = tf.keras.layers.BatchNormalization(gamma_initializer=g_init)(nn) nn = tf.keras.activations.relu(nn) nn = tf.keras.layers.Conv2D(64, (3,3), (1,1), padding='same', kernel_initializer=w_init)(nn) nn = tf.keras.layers.BatchNormalization(gamma_initializer=g_init)(nn) nn = tf.keras.layers.Add()([n, nn]) n=nn n = tf.keras.layers.Conv2D(64, (3,3), (1,1), padding='same', kernel_initializer=w_init,bias_initializer=None)(n) n = tf.keras.layers.BatchNormalization(gamma_initializer=g_init)(n) n = tf.keras.layers.Add()([n, temp]) n = tf.keras.layers.Conv2D(256, (3,3), (1,1), padding='same', kernel_initializer=w_init)(n) # n = tl.layers.SubpixelConv2d(scale=2, n_out_channels=None)(n) # n = SubpixelConv2d(scale=2,n_out_channels=None)(n) n = SubpixelConv2D(upsampling_factor=2)(n) n = tf.keras.layers.Conv2D(256, (3,3), (1,1), padding='same', kernel_initializer=w_init)(n) # n = tl.layers.SubpixelConv2d(scale=2, n_out_channels=None)(n) n = SubpixelConv2D(upsampling_factor=2)(n) nn = tf.keras.layers.Conv2D(3, (1,1), (1,1), padding='same', kernel_initializer=w_init,activation='tanh')(n) generator = tf.keras.models.Model(inputs=nin, outputs=nn, name='generator') return generator def get_Dis(input_shape): w_init = tf.random_normal_initializer(stddev=0.02) gamma_initializer = tf.random_normal_initializer(1., 0.02) df_dim = 64 nin = 
tf.keras.layers.Input(input_shape) n = tf.keras.layers.Conv2D(df_dim, (4,4), (1,1), padding='same', kernel_initializer=w_init,activation='LeakyReLU')(nin) n = tf.keras.layers.Conv2D(df_dim, (4,4), (2,2), padding='same', kernel_initializer=w_init,activation='LeakyReLU',bias_initializer=None)(n) n = tf.keras.layers.BatchNormalization(gamma_initializer=gamma_initializer)(n) n = tf.keras.layers.Conv2D(df_dim*2, (4,4), (1,1), padding='same', kernel_initializer=w_init,activation='LeakyReLU',bias_initializer=None)(n) n = tf.keras.layers.BatchNormalization(gamma_initializer=gamma_initializer)(n) n = tf.keras.layers.Conv2D(df_dim*2, (4,4), (2,2), padding='same', kernel_initializer=w_init,activation='LeakyReLU',bias_initializer=None)(n) n = tf.keras.layers.BatchNormalization(gamma_initializer=gamma_initializer)(n) n = tf.keras.layers.Conv2D(df_dim*4, (4,4), (1,1), padding='same', kernel_initializer=w_init,activation='LeakyReLU',bias_initializer=None)(n) n = tf.keras.layers.BatchNormalization(gamma_initializer=gamma_initializer)(n) n = tf.keras.layers.Conv2D(df_dim*4, (4,4), (2,2), padding='same', kernel_initializer=w_init,activation='LeakyReLU',bias_initializer=None)(n) n = tf.keras.layers.BatchNormalization(gamma_initializer=gamma_initializer)(n) n = tf.keras.layers.Conv2D(df_dim*8, (4,4), (1,1), padding='same', kernel_initializer=w_init,activation='LeakyReLU',bias_initializer=None)(n) n = tf.keras.layers.BatchNormalization(gamma_initializer=gamma_initializer)(n) n = tf.keras.layers.Conv2D(df_dim*8, (4,4), (2,2), padding='same', kernel_initializer=w_init,activation='LeakyReLU',bias_initializer=None)(n) n = tf.keras.layers.BatchNormalization(gamma_initializer=gamma_initializer)(n) n = tf.keras.layers.Flatten()(n) n = tf.keras.layers.Dense(1024, kernel_initializer=w_init,activation='LeakyReLU')(n) n = tf.keras.layers.Dense(1, kernel_initializer=w_init,activation='sigmoid')(n) discriminator = tf.keras.models.Model(inputs=nin, outputs=n, name='discriminator') return 
discriminator def get_Dis2(input_shape): w_init = tf.random_normal_initializer(stddev=0.02) gamma_initializer = tf.random_normal_initializer(1., 0.02) df_dim = 64 nin = tf.keras.layers.Input(input_shape) n = tf.keras.layers.Conv2D(df_dim, (4,4), (2,2), padding='same', kernel_initializer=w_init,activation='LeakyReLU')(nin) n = tf.keras.layers.Conv2D(df_dim*2, (4,4), (2,2), padding='same', kernel_initializer=w_init,activation='LeakyReLU',bias_initializer=None)(n) n = tf.keras.layers.BatchNormalization(gamma_initializer=gamma_initializer)(n) n = tf.keras.layers.Conv2D(df_dim*4, (4,4), (2,2), padding='same', kernel_initializer=w_init,activation='LeakyReLU',bias_initializer=None)(n) n = tf.keras.layers.BatchNormalization(gamma_initializer=gamma_initializer)(n) n = tf.keras.layers.Conv2D(df_dim*8, (4,4), (2,2), padding='same', kernel_initializer=w_init,activation='LeakyReLU',bias_initializer=None)(n) n = tf.keras.layers.BatchNormalization(gamma_initializer=gamma_initializer)(n) n = tf.keras.layers.Conv2D(df_dim*16, (4,4), (2,2), padding='same', kernel_initializer=w_init,activation='LeakyReLU',bias_initializer=None)(n) n = tf.keras.layers.BatchNormalization(gamma_initializer=gamma_initializer)(n) n = tf.keras.layers.Conv2D(df_dim*32, (4,4), (2,2), padding='same', kernel_initializer=w_init,activation='LeakyReLU',bias_initializer=None)(n) n = tf.keras.layers.BatchNormalization(gamma_initializer=gamma_initializer)(n) n = tf.keras.layers.Conv2D(df_dim*16, (1,1), (1,1), padding='same', kernel_initializer=w_init,activation='LeakyReLU',bias_initializer=None)(n) n = tf.keras.layers.BatchNormalization(gamma_initializer=gamma_initializer)(n) n = tf.keras.layers.Conv2D(df_dim*8, (1,1), (1,1), padding='same', kernel_initializer=w_init,activation='LeakyReLU',bias_initializer=None)(n) nn = tf.keras.layers.BatchNormalization(gamma_initializer=gamma_initializer)(n) n = 
tf.keras.layers.Conv2D(df_dim*2,(1,1),(1,1),padding='same',kernel_initializer=w_init,activation='LeakyReLU',bias_initializer=None)(nn) n = tf.keras.layers.BatchNormalization(gamma_initializer=gamma_initializer)(n) n = tf.keras.layers.Conv2D(df_dim*2,(3,3),(1,1),padding='same',kernel_initializer=w_init,activation='LeakyReLU',bias_initializer=None)(n) n = tf.keras.layers.BatchNormalization(gamma_initializer=gamma_initializer)(n) n = tf.keras.layers.Conv2D(df_dim*8,(3,3),(1,1),padding='same',kernel_initializer=w_init,activation='LeakyReLU',bias_initializer=None)(n) n = tf.keras.layers.BatchNormalization(gamma_initializer=gamma_initializer)(n) n = tf.keras.layers.Add()([nn,n]) n = tf.keras.layers.Flatten()(n) n = tf.keras.layers.Dense(1,kernel_initializer=w_init,activation='LeakyReLU')(n) discriminator = tf.keras.models.Model(inputs=nin, outputs=n, name='discriminator') return discriminator def load(image_file): image = tf.io.read_file(image_file) image = tf.image.decode_png(image) image = tf.cast(image, tf.float32) return image def resize(image,height,width): image = tf.image.resize(image,[height,width],method=tf.image.ResizeMethod.NEAREST_NEIGHBOR) return image def normalize(image): image = (image/127.5)-1 return image def random_crop(image): image = tf.image.random_crop(image,[384,384,3]) return image def random_crop_LR(image): image = tf.image.random_crop(image,[96,96,3]) return image @tf.function def random_jitter_LR(image): image = resize(image,256,256) image = random_crop_LR(image) if tf.random.uniform(()) > 0.5: image = tf.image.flip_left_right(image) return image @tf.function def random_jitter(image): image = resize(image,390,390) image = random_crop(image) if tf.random.uniform(())>0.5: image = tf.image.flip_left_right(image) return image def load_image_train(image_file): image = load(image_file) image = random_jitter(image) image = normalize(image) return image def load_image_train_LR(image_file): image = load(image_file) image = random_jitter_LR(image) image = normalize(image) return image def 
load_image_test(image_file): image = load(image_file) image = resize(image,390,390) image = normalize(image) return image LR = tf.data.Dataset.list_files(r'C:\Users\kc510\Documents\Projects\Projects_MLOps\Project_SuperResolution\data\train\LR'+'\\*.png',shuffle=False) LR = LR.map(load_image_train_LR,num_parallel_calls=tf.data.experimental.AUTOTUNE) HR = tf.data.Dataset.list_files(r'C:\Users\kc510\Documents\Projects\Projects_MLOps\Project_SuperResolution\data\train\HR'+'\\*.png',shuffle=False) HR = HR.map(load_image_train,num_parallel_calls=tf.data.experimental.AUTOTUNE) ds = tf.data.Dataset.zip((LR,HR)) ds = ds.shuffle(buffer_size=800).batch(4) for inp,out in ds.take(1): display_list = [inp[0],out[0]] for i in range(2): plt.subplot(1,2,i+1) plt.imshow(display_list[i]*0.5) plt.axis('off') plt.show() for n, (inp,out) in ds.enumerate(): print(inp.shape,out.shape) break LAMBDA = 1e-3 def generator_loss(disc_generated_output,gen_output,target): valid = tf.ones_like(disc_generated_output) gan_loss = tf.keras.losses.MSE(valid,disc_generated_output) gen_features = feature_extractor(gen_output) real_features = feature_extractor(target) l1_loss = tf.reduce_mean(tf.abs(gen_features-real_features)) total_gen_loss = gan_loss + (LAMBDA*l1_loss) return total_gen_loss,gan_loss,l1_loss def discriminator_loss(disc_real_output,disc_generated_output): real_loss = tf.keras.losses.MSE(tf.ones_like(disc_real_output),disc_real_output) generated_loss = tf.keras.losses.MSE(tf.zeros_like(disc_generated_output),disc_generated_output) total_disc_loss = (real_loss + generated_loss) / 2 return total_disc_loss # criterion_GAN = nn.MSELoss() # criterion_content = nn.L1Loss() # Adversarial truth from tensorflow.keras.applications.vgg19 import VGG19 vgg19_model = VGG19(weights='imagenet',include_top=False) feature_extractor = tf.keras.models.Sequential(*[(vgg19_model.layers)[:18]]) def gen_loss(disc_generated_output,target,output): valid = tf.ones_like(disc_generated_output) loss_gan = 
tf.keras.losses.MSE(valid,disc_generated_output) # Content loss gen_features = feature_extractor(output,training=True) real_features = feature_extractor(target,training=True) l1_loss = tf.reduce_mean(tf.abs(real_features-gen_features)) total_gen_loss = loss_gan + 1e-3*l1_loss return total_gen_loss, loss_gan, l1_loss def disc_loss(disc_real_output,disc_generated_output): real_loss = tf.keras.losses.MSE(tf.ones_like(disc_real_output),disc_real_output) generated_loss = tf.keras.losses.MSE(tf.zeros_like(disc_generated_output),disc_generated_output) total_disc_loss = (real_loss + generated_loss) / 2 return total_disc_loss generator_optimizer = tf.keras.optimizers.Adam(2e-4,0.5) discriminator_optimizer = tf.keras.optimizers.Adam(2e-4,0.5) def train(input_image,target,epoch): with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape: gen_output = generator(input_image,training=True) disc_real_output = discriminator(target,training=True) disc_generated_output = discriminator(gen_output,training=True) gen_total_loss, gen_loss_gan, gen_loss_l1 = generator_loss(disc_generated_output,gen_output,target) disc_loss = discriminator_loss(disc_real_output,disc_generated_output) generator_gradients = gen_tape.gradient(gen_total_loss,generator.trainable_variables) discriminator_gradients = disc_tape.gradient(disc_loss,discriminator.trainable_variables) generator_optimizer.apply_gradients(zip(generator_gradients,generator.trainable_variables)) discriminator_optimizer.apply_gradients(zip(discriminator_gradients,discriminator.trainable_variables)) def fit(train_ds,epochs): for epoch in range(epochs): start = time.time() for n, (input_image,target) in train_ds.enumerate(): print(".",end="") train(input_image,target,epoch) if n % 100 == 0: print() print() if(epoch+1)%5 == 0: checkpoint.save(file_prefix = checkpoint_prefix) generator.save() discriminator.save() print('Time taken for epoch {} is {} sec'.format(epoch+1,time.time()-start)) import time generator = 
get_gen(input_shape=(96,96,3)) discriminator = get_Dis(input_shape=(384,384,3)) fit(ds,5) ```
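The `discriminator_loss` used above is the least-squares GAN objective: MSE of the discriminator's scores against all-ones for real patches and all-zeros for generated ones, averaged. A self-contained NumPy sketch of that arithmetic (a stand-in for the `tf.keras.losses.MSE` calls, not the actual TF graph):

```python
import numpy as np

def lsgan_disc_loss(disc_real, disc_fake):
    """Least-squares discriminator loss: push D(real) toward 1 and D(fake) toward 0."""
    real_loss = np.mean((1.0 - disc_real) ** 2)   # MSE against an all-ones target
    fake_loss = np.mean((0.0 - disc_fake) ** 2)   # MSE against an all-zeros target
    return (real_loss + fake_loss) / 2

# A perfect discriminator scores real patches 1 and generated patches 0.
perfect = lsgan_disc_loss(np.ones(4), np.zeros(4))
# An undecided discriminator that outputs 0.5 everywhere pays 0.25.
undecided = lsgan_disc_loss(np.full(4, 0.5), np.full(4, 0.5))
print(perfect, undecided)  # 0.0 0.25
```

The `/ 2` matches the averaging in `discriminator_loss`; the generator side adds the VGG feature (content) loss on top of the same MSE term.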
```
%load_ext autoreload
%autoreload 2
import torch
from UnarySim.sw.kernel.sqrt import UnarySqrt
from UnarySim.sw.stream.gen import RNG, SourceGen, BSGen
from UnarySim.sw.metric.metric import ProgressiveError
import matplotlib.pyplot as plt
import time
import math
import numpy as np

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

def test(rng="Sobol", mode="unipolar", bitwidth=8, jk_trace=False, emit=True, total_cnt=100,
         depth_kernel=1, depth_sr=2, savepdf=False):
    stype = torch.float
    rtype = torch.float

    print("========================================================")
    print(mode)
    print("========================================================")

    # all input values are non-negative
    low_bound = 0
    if mode == "unipolar":
        up_bound = 2**bitwidth
    elif mode == "bipolar":
        low_bound = 0
        up_bound = 2**(bitwidth-1)

    input_list = []
    for input_val in range(low_bound, up_bound+1, 1):
        input_list.append(input_val)

    input = torch.tensor(input_list).type(torch.float).div(up_bound).to(device)
    output = torch.sqrt(input).to(device)

    result_pe_total = []
    for rand_idx in range(1, total_cnt+1):
        outputPE = ProgressiveError(output, mode=mode).to(device)
        inputPE = ProgressiveError(input, mode=mode).to(device)
        inputSRC = SourceGen(input, bitwidth, mode=mode, rtype=rtype)().to(device)

        dut_sqrt = UnarySqrt(mode=mode, jk_trace=jk_trace, depth_kernel=depth_kernel,
                             rng="Sobol", rng_dim=4, emit=emit, depth_sr=depth_sr,
                             stype=torch.float).to(device)

        inputRNG = RNG(bitwidth, rand_idx, rng, rtype)().to(device)
        inputBS = BSGen(inputSRC, inputRNG, stype).to(device)
        with torch.no_grad():
            start_time = time.time()
            for i in range(2**bitwidth):
                input_bs = inputBS(torch.tensor([i]))
                inputPE.Monitor(input_bs)
                output_bs = dut_sqrt(input_bs)
                outputPE.Monitor(output_bs)
        # get the result for different rng
        result_pe = outputPE()[1].cpu().numpy()
        result_pe_total.append(result_pe)
    # get the result for different rng
    result_pe_total = np.array(result_pe_total)

    #######################################################################
    # check the error of all simulation
    #######################################################################
    print("RMSE:{:1.4}".format(math.sqrt(np.mean(result_pe_total**2))))
    print("MAE: {:1.4}".format(np.mean(np.abs(result_pe_total))))
    print("bias:{:1.4}".format(np.mean(result_pe_total)))
    print("max: {:1.4}".format(np.max(result_pe_total)))
    print("min: {:1.4}".format(np.min(result_pe_total)))

    #######################################################################
    # check the error according to input value
    #######################################################################
    max_total = np.max(result_pe_total, axis=0)
    min_total = np.min(result_pe_total, axis=0)
    avg_total = np.mean(result_pe_total, axis=0)

    axis_len = outputPE()[1].size()[0]
    input_x_axis = []
    for axis_index in range(axis_len):
        input_x_axis.append((axis_index/(axis_len-1)*(up_bound-low_bound)+low_bound)/up_bound)

    fig, ax = plt.subplots()
    ax.fill_between(input_x_axis, max_total, avg_total, facecolor="red", alpha=0.75)
    ax.fill_between(input_x_axis, avg_total, min_total, facecolor="blue", alpha=0.75)
    ax.plot(input_x_axis, avg_total, label='Avg error', color="black", linewidth=0.3)
    plt.tight_layout()
    plt.xlabel('Input value')
    plt.ylabel('Output error')
    plt.xticks(np.arange(0, 1.1, step=0.5))
    # ax.xaxis.set_ticklabels([])
    plt.xlim(0, 1)
    plt.yticks(np.arange(-0.2, 0.4, step=0.2))
    # ax.yaxis.set_ticklabels([])
    plt.ylim(-0.3, 0.55)
    plt.grid(b=True, which="both", axis="y", linestyle="--", color="grey", linewidth=0.3)
    fig.set_size_inches(4, 4)
    ax.spines['right'].set_visible(False)
    ax.spines['top'].set_visible(False)

    if savepdf is True:
        if emit is True:
            plt.savefig("sqrt-"+mode+"-bw"+str(bitwidth)+"-bit"+"-emitting"+"-sr"+str(depth_sr)+"-k"+str(depth_kernel)+".pdf",
                        dpi=300, bbox_inches='tight')
        else:
            if jk_trace is True:
                plt.savefig("sqrt-"+mode+"-bw"+str(bitwidth)+"-bit"+"-inserting-JK"+".pdf",
                            dpi=300, bbox_inches='tight')
            else:
                plt.savefig("sqrt-"+mode+"-bw"+str(bitwidth)+"-bit"+"-inserting-IS"+"-k"+str(depth_kernel)+".pdf",
                            dpi=300, bbox_inches='tight')
    plt.show()
    plt.close()
```

# performance comparison

## bit-inserting and bit-emitting

```
print("unipolar, bit-emitting")
test(mode="unipolar", bitwidth=8, emit=True, jk_trace=False, depth_kernel=1, depth_sr=2, savepdf=False)
test(mode="unipolar", bitwidth=8, emit=True, jk_trace=False, depth_kernel=1, depth_sr=4, savepdf=False)
test(mode="unipolar", bitwidth=8, emit=True, jk_trace=False, depth_kernel=1, depth_sr=8, savepdf=False)

print("unipolar, bit-inserting-JK")
test(mode="unipolar", bitwidth=8, emit=False, jk_trace=True, depth_kernel=1, depth_sr=2, savepdf=False)

print("unipolar, bit-inserting-IS")
test(mode="unipolar", bitwidth=8, emit=False, jk_trace=False, depth_kernel=1, depth_sr=2, savepdf=False)

print("bipolar, bit-emitting")
test(mode="bipolar", bitwidth=8, emit=True, jk_trace=False, depth_kernel=1, depth_sr=2, savepdf=False)
test(mode="bipolar", bitwidth=8, emit=True, jk_trace=False, depth_kernel=1, depth_sr=4, savepdf=False)
test(mode="bipolar", bitwidth=8, emit=True, jk_trace=False, depth_kernel=1, depth_sr=8, savepdf=False)

print("bipolar, bit-inserting-JK")
test(mode="bipolar", bitwidth=8, emit=False, jk_trace=True, depth_kernel=1, depth_sr=2, savepdf=False)

print("bipolar, bit-inserting-IS")
test(mode="bipolar", bitwidth=8, emit=False, jk_trace=False, depth_kernel=1, depth_sr=2, savepdf=False)
```
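The statistics block in `test` reduces the `(total_cnt, n_inputs)` error matrix `result_pe_total` to scalar summaries (RMSE, MAE, bias) and per-input envelopes for the plot. A minimal NumPy sketch of the same reductions on toy data (the numbers here are made up, not UnarySim results):

```python
import numpy as np

# Toy stand-in for result_pe_total: rows are simulation runs, columns are input values.
errors = np.array([[ 0.02, -0.01, 0.03],
                   [-0.02,  0.01, 0.01]])

rmse = np.sqrt(np.mean(errors ** 2))   # overall root-mean-square error
mae  = np.mean(np.abs(errors))         # mean absolute error
bias = np.mean(errors)                 # signed average error (cancellation allowed)
max_total = np.max(errors, axis=0)     # per-input worst case, as shaded in the plot
print(round(rmse, 4), round(mae, 4), round(bias, 4), max_total)
# -> 0.0183 0.0167 0.0067 [0.02 0.01 0.03]
```

Note that bias is much smaller than MAE here because positive and negative errors cancel in the plain mean; that is exactly why the notebook reports all three.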
``` # default_exp schema ``` # schema > checking dictionaries based on json schema ``` #export import jsonschema, requests, yaml from types import SimpleNamespace ``` # Get schema from path ``` #export import dpath.util def getSchemaPath(schemaUrl:str, path:str='/', isYaml = True): ''' get a nested schema from path \n schemaUrl: str: url of the schema \n path: str: path of the schema, if root then path='/' \n isYaml: Bool: is the schema yaml (false indicates that the schema is json), default = True ''' if isYaml: schema = yaml.load(requests.get(schemaUrl).text, Loader= yaml.Loader) else: schema = requests.get(schemaUrl).json() return dpath.util.get(schema, path) testSchema = 'https://gist.githubusercontent.com/thanakijwanavit/e2720d091ae0cef710a49b57c0c9cd4c/raw/ed2d322eac4900ee0f95b431d0f9067a40f3e0f0/squirrelOpenApiV0.0.3.yaml' path = 'components/schemas/Location' getSchemaPath(testSchema, path) ``` # Validate Url ``` #export def validateUrl(url,input_, format_ = 'json', headers = {'Cache-Control': 'no-cache'}, path = '/'): ''' verifies whether the input_ is valid under the schema located at path in the url \n url: str: url where the schema file is located \n input_: the input to be validated \n format_: str: the format of the schema; can be 'yaml' or 'json', default = 'json' \n headers: dict: dictionary of HTTP headers to send with the get request to retrieve the schema \n path: str: path of the schema within the file, if root then path='/' ''' if format_ == 'yaml': schema = getSchemaPath(url, path = path, isYaml = True) elif format_ == 'json': schema = getSchemaPath(url, path = path, isYaml = False) else: print('invalid schema format, using json') schema = requests.get(url).json() res = jsonschema.validate(input_,schema) return SimpleNamespace(**input_) ``` ### test json ``` url = 'https://raw.githubusercontent.com/thanakijwanavit/villaMasterSchema/master/Product.json' input_ = {'iprcode': 4, 'cprcode': 123 , 'oprCode': '123'} validateUrl(url, input_) ``` #### 
error json ``` errorProduct = [{ 'cprcode': '0171670', 'iprcode': '0171670', 'oprcode': '0171670', 'ordertype': 'Y', 'pr_abb': 'JIRAPAT YOUNG KALE 2', 'pr_active': 'Y', 'pr_cgcode': '05', 'pr_code': '0171670', 'pr_dpcode': '19', 'pr_engname': 'JIRAAT YOUNG KALE 200 G.', 'pr_ggcode': '057', 'pr_market': 'JIRAPAT ยอดคะน้า 200 G.', 'pr_name': 'JIRAPAT ยอดคะน้า 200 G.', 'pr_puqty': '1', 'pr_sa_method': '1', 'pr_sucode1': 'CM845', 'pr_suref3': 'A', 'prtype': 'I', 'psqty': '1', 'pstype': '1'}] #ProductDatabase.valueUpdate({'items':sampleProducts}) url = 'https://raw.githubusercontent.com/thanakijwanavit/villaMasterSchema/master/valueUpdate.json' try: validateUrl(url, errorProduct) except Exception as e: print(f'{e}') ``` ### test yaml ``` ##success url = 'https://gist.githubusercontent.com/thanakijwanavit/241c4cc443f39ea096820f5dfb84017d/raw/61694a0c5fbac3f6408fbb11217cc4265d38e38d/sampleYaml.yaml' input_ = {'iprcode': 4, 'cprcode': 123 , 'oprCode': '123'} print(validateUrl(url, input_, format_='yaml')) ``` #### error yaml ``` ## failure try: print(validateUrl(url, errorProduct, format_='yaml')) except Exception as e: print(f'{e}') ``` ## convert type to comply with json schema ``` #export typeMap = {'string': str, 'number': float, 'integer': int, 'object': dict, 'array': list, 'boolean': bool, 'null': None} def getTypes(schemaUrl:str, typeMap:dict=typeMap)->dict: ''' get python types from json schema \n schemaUrl: str: url where the schema file is located \n typeMap: dict: the dictionary that matches the key to its corresponding data type ''' r = requests.get(schemaUrl) s = yaml.load(r.text, Loader=yaml.FullLoader) properties = s['properties'] dtypes = {k: typeMap.get(v['type']) for k,v in properties.items()} return dtypes url = 'https://raw.githubusercontent.com/thanakijwanavit/villaMasterSchema/dev-manual/inventory/inventory.yaml' getTypes(url) #export def typeMapJsonSchema(url:str, input_:dict = {}, typeMap:dict = typeMap, defaultType=str): ''' try to map the 
datatype into the one specified in url of json schema. \n if type is not found, the defaultType is used \n url: str where the schema file is located \n typeMap: dict: the dictionary that matches the key to its corresponding data type \n defaultType: set the default type if a type is not specified ''' typesDict = getTypes(url, typeMap=typeMap) # get dtype from schema url print(f'typesDict is: {typesDict}') convertedInput = {k: (typesDict.get(k) or defaultType)(v) for k,v in input_.items()} return convertedInput url = 'https://raw.githubusercontent.com/thanakijwanavit/villaMasterSchema/dev-manual/inventory/inventory.yaml' inv = { 'iprcode': '0000009', 'brcode': '1000', 'ib_cf_qty': '50', 'new_ib_vs_stock_cv': '27', 'onlineflag': True } typeMapJsonSchema(url, input_=inv) typeMapJsonSchema(url, input_=inv) ``` # experimental code ``` #hide from pprint import pprint pprint("[{'cprcode':[1,2,3],'groupId':1171,'groupame':'Vegetables','level':1,'user':'1234'},{'cprcode':[7,8],'groupId':1191,'groupame':'Carbohydrates','subGroup':'[1,2,3]','user':'929292'},{'cprcode':[11,12],'groupId':1251,'groupame':'Proteis','user':'29420'},{'baerType':0,'categoryLv1':'hello','cprcode':[7,8],'descriptio':'hi','eabled':1,'edDate':23,'groupId':1271,'groupame':'Fruits','isBaer':1,'level':3,'slotIdex':2,'startDate':10,'subGroup':'[1,2,3]','user':'929292'}]isotoftype'object'Failedvalidatig'type'ischema:{'properties':{'baerType':{'type':'iteger'},'categoryLv1':{'type':'strig'},'cprcode':{'items':{'type':'iteger'},'type':'array'},'descriptio':{'type':'strig'},'eabled':{'type':'iteger'},'edDate':{'type':'iteger'},'groupId':{'type':'iteger'},'groupame':{'type':'strig'},'isBaer':{'type':'iteger'},'level':{'type':'iteger'},'slotIdex':{'type':'iteger'},'startDate':{'type':'iteger'},'subGroup':{'type':'strig'},'user':{'type':'strig'}},'required':['cprcode','groupId','groupame','user'],'type':'object'}Oistace:[{'cprcode':[1,2,3],'groupId':1171,'groupame':'Vegetables','level':1,'user':'1234'},{'cprcode':[
7,8],'groupId':1191,'groupame':'Carbohydrates','subGroup':'[1,2,3]','user':'929292'},{'cprcode':[11,12],'groupId':1251,'groupame':'Proteis','user':'29420'},{'baerType':0,'categoryLv1':'hello','cprcode':[7,8],'descriptio':'hi','eabled':1,'edDate':23,'groupId':1271,'groupame':'Fruits','isBaer':1,'level':3,'slotIdex':2,'startDate':10,'subGroup':'[1,2,3]','user':'929292'})") ```
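The dict comprehension inside `typeMapJsonSchema` is the whole coercion step; it can be exercised offline with a hand-written `typesDict` (a hypothetical stand-in for the table `getTypes` fetches from the schema URL, so no network is needed):

```python
# Hand-written stand-in for the schema-derived type table.
typesDict = {'iprcode': str, 'ib_cf_qty': int, 'onlineflag': bool}

def coerce(input_, typesDict, defaultType=str):
    # Unknown keys fall back to defaultType, mirroring `typesDict.get(k) or defaultType`.
    return {k: (typesDict.get(k) or defaultType)(v) for k, v in input_.items()}

inv = {'iprcode': 9, 'ib_cf_qty': '50', 'brcode': 1000}
print(coerce(inv, typesDict))
# -> {'iprcode': '9', 'ib_cf_qty': 50, 'brcode': '1000'}
```

One caveat of the `or`-fallback: a key whose schema type maps to `None` (JSON `null` in `typeMap`) would also fall through to `defaultType`, since `None` is falsy.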
#### Regular Expressions - Regex
- A syntax used when you want to process strings according to a specific pattern
- Regular expression functions
    - match: find the pattern matching from the very start of the string
    - search: find the first matching pattern in the string
    - findall: find all matching patterns
    - split: split the string on a specific pattern
    - sub: substitute a specific pattern in the string
- pattern

```
import re

s = 'fast campus datascience fighting.\
datascience fighting. fast campus fighting.'
```

#### 1. match
- Find the pattern matching from the very start of the string

```
result1 = re.match(pattern='fast', string=s)
result2 = re.match(pattern='campus', string=s)
print('result1:', result1)
print('result2:', result2)
```

#### 2. search
- Find the first matching pattern in the string

```
result3 = re.search('fast', s)
result4 = re.search('campus', s)
result3, result4
```

#### 3. findall
- Find all matching patterns
- Returned as a list

```
result5 = re.findall('fast', s)
result6 = re.findall('fighting', s)
result5, result6
```

#### 4. split
- Split the string on a specific pattern

```
s1 = 'fast campus datascience school fighting!'
result7 = re.split('ca', s1)
result7
```

#### 5. sub
- Replace matching patterns; similar to a replacement operation

```
s2 = 'slow campus datascience school fighting! slow campus \
datascience school fighting!'
result8 = re.sub('slow', 'fast', s2)
result8

s2 = 'fast campus fighting'
re.sub('fast', 'Jay', s2)
```

#### 6. Pattern
- Set a pattern to find or modify matching data in string data

#### 6.1 Characters
- \d: digits (think "decimal")
- \D: everything except digits
- \w: digits, letters, _ (think "word")
- \W: everything except digits, letters, _
- \s: whitespace characters (space)
- \S: everything except whitespace characters

#### List every usable character
----------------------------------

```
import string
pt = string.printable
pt
```

#### Find all digits and non-digits

```
result = re.findall('\d', pt)  # digits
''.join(result)

result1 = re.findall('\D', pt)  # non-digits
''.join(result1)

result2 = re.findall('\w', pt)  # digits + letters + _
''.join(result2)

result3 = re.findall('\W', pt)  # everything except digits, letters, _
''.join(result3)

# whitespace characters
result4 = re.findall('\s', pt)
''.join(result4)

# everything except whitespace characters
result5 = re.findall('\S', pt)
''.join(result5)
```

#### 6.2 Specifiers
- `[]`: character class
- `-`: range
- `.`: any single character
- `?`: repeat 0 or 1 time
- `*`: repeat 0 or more times
- `+`: repeat 1 or more times
- `{m,n}`: repeat m to n times
- `()`: grouping

#### 6.2.1 Range `-`
- [0-9]: all digits

```
result = re.findall('[0-9]', pt)
''.join(result)
```

- [a-z]: lowercase letters

```
result = re.findall('[a-z]', pt)
''.join(result)
```

- [a-zA-Z]: all letters

```
result = re.findall('[a-zA-Z]', pt)
''.join(result)
```

- [012345] = [0-5]

```
result = re.findall('[012345]', pt)
''.join(result)

result = re.findall('[0-5]', pt)
''.join(result)
```

- [234789] = [2-47-9]

```
result = re.findall('[234789]', pt)
''.join(result)

result = re.findall('[2-47-9]', pt)
''.join(result)
```

#### [bcde] == [bc-e] == [b-e]

```
result = re.findall('[bcde]', pt)
''.join(result)

result = re.findall('[b-e]', pt)
''.join(result)

result = re.findall('[b-de]', pt)
''.join(result)
```

#### 6.2.2 Any single character `.`

```
ls = ['aab', 'a0b', 'abc']
for s in ls:
    result = re.findall('a.b', s)  # a + any single character + b
    # so 'aab' and 'a0b' should match
    print(s, result)
```

#### 6.2.3 Repeat 0 or 1 time - `?`

```
l = ['aab', 'a3b', 'abc', 'accb']
for s in l:
    result = re.findall('a.?b', s)  # a + zero or one of any character + b
    print(s, result)
# expected return: 'aab', 'a3b', 'abc'

l = ['aab', 'a3b', 'abc', 'accb']
for s in l:
    result = re.findall('a.?b', s)  # a + zero or one character + b
    print(result, s)

l = ['aab', 'a3b', 'abc', 'accb']
for s in l:
    result = re.findall('a[0-4]?b', s)  # a + zero or one of [0-4] + b
    print(s, result)
# expected: 'aab', 'a3b', 'abc'
```

#### 6.2.4 Repeat 0 or more times - `*`

```
l = ['ac', 'abc', 'abbbbc', 'a3bec']
for s in l:
    result = re.findall('ab*c', s)  # a + b repeated zero or more times + c
    print(s, result)
# expected: 'abc', 'abbbbc', 'ac' -> included

l = ['ac', 'abc', 'abbbbc', 'a3bec']
for s in l:
    result = re.findall('ab*c', s)  # a + b repeated zero or more times + c
    print(s, result)
# expected: 'ac', 'abc', 'abbbbc'
```

#### 6.2.5 Repeat 1 or more times - `+`

```
l = ['ac', 'abc', 'abbbbc', 'a3bec']
for s in l:
    result = re.findall('ab+c', s)  # a + b repeated one or more times + c
    print(s, result)
# expected: 'abc', 'abbbbc'
```

#### 6.2.6 Repeat m to n times - `{m,n}`

```
l = ['ac', 'abcasd', 'abbc', 'abbbc', 'abbbbbbbc']
for s in l:
    result = re.findall('ab{1,3}c', s)  # a + b repeated 1 to 3 times + c
    print(s, result)
# expected: 'abcasd', 'abbc', 'abbbc'

l = ["ac", "abcasd", "abbc", "abbbc", "abbbbbbc"]
for s in l:
    result = re.findall('ab{1,3}c', s)  # a + b repeated 1 to 3 times + c
    print(s, result)
```

#### 6.2.7 Grouping - `()`

```
l = ['aaa5.djfi', 'abdddc5', '1abbbbc', 'a3.bec']
for s in l:
    result = re.findall('([0-9]+)[.]([\w]{2})', s)  # one or more digits + . + two word characters
    print(s, result)

l = ['aaa5.djfi', 'abdddc5', '1abbbbc', 'a3.bec']
# one or more digits + . + two word characters
for s in l:
    result = re.findall('([0-9]+)[.]([\w]{2})', s)
    print(s, result)
```

#### Examples - 7.1 Find email addresses

```
s = '저의 이메일 주소는 pdj1224@daum.com입니다.\
또한 radajin1224@gmail.com역시 마찬가지 입니다.'

# set the pattern
p = "[0-9a-zA-Z]+[@][0-9a-zA-Z]+[.][a-zA-Z]+"
result = True if re.search(p, s) else False
print(result, re.findall(p, s))

s = '저의 이메일 주소는 pdj1224@daum.com입니다.\
또한 radajin1224@gmail.com역시 마찬가지 입니다.'

# set the pattern
p = '[a-zA-Z0-9]+[@][a-zA-Z]+[.][a-zA-Z]+'
result = True if re.search(p, s) else False
print(result, re.findall(p, s))
```

#### 7.2 Resident registration number: split into groups and replace

```
s = '저의 전화번호는 010-1111-2222이고 주민등록번호는 \
871211-4029348 입니다'
p = '([0-9]{6})[-]([0-9]{7})'
re.sub(p, '\g<1>-*******', s)  # \g<1>: use the first grouped match

import re
s = '저의 전화번호는 010-1111-2222이고 주민등록번호는 871211-4029348 입니다'
p = '([0-9]{6})[-]([0-9]{7})'
re.sub(p, '\g<2>-******', s)
```
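The `\g<1>` back-reference in example 7.2 can also be written with named groups, which survive pattern edits better; a small sketch (the `front`/`back` group names are our own, not from the notebook):

```python
import re

# Same masking idea as example 7.2, but with named groups instead of numbered ones.
s = 'id 871211-4029348'
p = r'(?P<front>[0-9]{6})[-](?P<back>[0-9]{7})'
masked = re.sub(p, r'\g<front>-*******', s)
print(masked)  # id 871211-*******
```

`(?P<name>...)` and `\g<name>` are standard `re` syntax, so the numbered and named forms are interchangeable here.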
<a href="https://colab.research.google.com/github/chavgova/My-AI/blob/master/emotion_recognition_18_female_cleanCode.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> **Voice Emotion Recognition** ``` import librosa import librosa.display import numpy as np import matplotlib.pyplot as plt import tensorflow as tf from keras.models import model_from_json from matplotlib.pyplot import specgram from matplotlib.axis import Axis import keras from keras.preprocessing import sequence from keras.models import Sequential from keras.layers import Dense, Embedding from keras.layers import LSTM from keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences from keras.utils import to_categorical from keras.layers import Input, Flatten, Dropout, Activation from keras.layers import Conv1D, MaxPooling1D, AveragePooling1D from keras.models import Model from keras.callbacks import ModelCheckpoint from sklearn.metrics import confusion_matrix from keras import regularizers import os import pandas as pd from google.colab import drive from sklearn.preprocessing import MinMaxScaler from sklearn.utils import shuffle from array import * import re from sklearn.preprocessing import MinMaxScaler from keras.utils import np_utils from sklearn.preprocessing import LabelEncoder import seaborn as sns dataset_path = '/content/drive/My Drive/My_AI/RawData' # https://drive.google.com/drive/folders/19vC20XHt-_yhsobePchy7K3PcBHD1oCu?usp=sharing model_path = '/content/drive/My Drive/My_AI/MY MODELS/' model_name = 'Emotion_Voice_Detection_CNN_model_18_FEMALE_8features_normalized_tanh_regularized' ``` # LABLES & FEATURES ``` dataset_files_list = [] dataset_files_list = os.listdir(dataset_path) print(len(dataset_files_list)) emo_labels_list=[] # EMO LABELS dataset = '' count = 0 for item in dataset_files_list: file_label = item[6:-16] try: file_label = int(file_label) dataset = 'RAVDESS' except: if 
(item[:1] == 'Y') or (item[:1] == 'O'): file_label = re.split('_|\.', item)[2] dataset = 'TESS' else: try: item = item[:-4] int(item[-3:]) dataset = 'SER_v4' except: dataset = 'SAVEE' if dataset == 'RAVDESS': if int(item[18:-4])%2==0: #female if file_label == 1: emo_labels_list.append('female_neutral') elif file_label == 2: emo_labels_list.append('female_calm') elif file_label == 3: emo_labels_list.append('female_joy') elif file_label == 4: emo_labels_list.append('female_sadness') elif file_label == 5: emo_labels_list.append('female_anger') elif file_label == 6: emo_labels_list.append('female_fear') elif file_label == 7: emo_labels_list.append('female_disgust') elif file_label == 8: emo_labels_list.append('female_surprise') else: if file_label== 1: emo_labels_list.append('male_neutral') elif file_label == 2: emo_labels_list.append('male_calm') elif file_label == 3: emo_labels_list.append('male_joy') elif file_label == 4: emo_labels_list.append('male_sadness') elif file_label == 5: emo_labels_list.append('male_anger') elif file_label == 6: emo_labels_list.append('male_fear') elif file_label == 7: emo_labels_list.append('male_disgust') elif file_label == 8: emo_labels_list.append('male_surprise') elif dataset == 'TESS': if file_label == 'neutral': emo_labels_list.append('female_neutral') elif file_label == 'angry': emo_labels_list.append('female_anger') elif file_label == 'disgust': emo_labels_list.append('female_disgust') elif file_label == 'ps': emo_labels_list.append('female_surprise') elif file_label == 'happy': emo_labels_list.append('female_joy') elif file_label == 'sad': emo_labels_list.append('female_sadness') elif file_label == 'fear': emo_labels_list.append('female_fear') elif dataset == 'SER_v4': if int(item[-3:])%2 == 1: file_label = item[:-3] if file_label == 'neutral': emo_labels_list.append('male_neutral') elif file_label == 'anger': emo_labels_list.append('male_anger') elif file_label == 'disgust': emo_labels_list.append('male_disgust') elif 
file_label == 'surprise': emo_labels_list.append('male_surprise') elif file_label == 'happy': emo_labels_list.append('male_joy') elif file_label == 'sad': emo_labels_list.append('male_sadness') elif file_label == 'fear': emo_labels_list.append('male_fear') else: file_label = item[:-3] if file_label == 'neutral': emo_labels_list.append('female_neutral') elif file_label == 'anger': emo_labels_list.append('female_anger') elif file_label == 'disgust': emo_labels_list.append('female_disgust') elif file_label == 'surprise': emo_labels_list.append('female_surprise') elif file_label == 'happy': emo_labels_list.append('female_joy') elif file_label == 'sad': emo_labels_list.append('female_sadness') elif file_label == 'fear': emo_labels_list.append('female_fear') elif dataset == 'SAVEE': if item[:1]=='a': emo_labels_list.append('male_anger') elif item[:1]=='f': emo_labels_list.append('male_fear') elif item[:1]=='h': emo_labels_list.append('male_joy') elif item[:1]=='n': emo_labels_list.append('male_neutral') elif item[:2]=='sa': emo_labels_list.append('male_sadness') elif item[:2]=='su': emo_labels_list.append('male_surprise') elif item[:1]=='d': emo_labels_list.append('male_disgust') labels = pd.DataFrame(emo_labels_list) labels ``` Getting the features of audio files using librosa ``` def reshape_feature(arr): # reshapes to 10 values per feature shape_arr = arr.shape[0] r = shape_arr%10 arr = arr[:(len(arr)-r)] d = int(shape_arr/10) arr = np.mean(arr.reshape(-1, d), axis=1) return arr def extract_feature(current_file, **kwargs): mfcc = kwargs.get("mfcc") chroma = kwargs.get("chroma") mel = kwargs.get("mel") contrast = kwargs.get("contrast") tonnetz = kwargs.get("tonnetz") rolloff = kwargs.get("rolloff") centroids = kwargs.get("centroids") rms = kwargs.get("rms") X, sample_rate = librosa.core.load(current_file) if chroma or contrast: stft = np.abs(librosa.stft(X)) result = np.array([]) if mfcc: mfccs = np.mean(librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=40).T, axis=0)# 
(n=40,t) -> 40 values result = np.hstack((result, mfccs)) if rms: rms = np.mean(librosa.feature.rms(X),axis=0) rms = reshape_feature(rms) # (1,t) - > 10 values (avg) result = np.hstack((result, rms)) if mel: mel = np.mean(librosa.feature.melspectrogram(X, sr=sample_rate).T,axis=0) # (128,t) -> 128 values result = np.hstack((result, mel)) if tonnetz: tonnetz = np.mean(librosa.feature.tonnetz(y=librosa.effects.harmonic(X), sr=sample_rate).T,axis=0) # (6,t) -> 6 values result = np.hstack((result, tonnetz)) if chroma: chroma = np.mean(librosa.feature.chroma_stft(S=stft, n_chroma = 14, sr=sample_rate).T,axis=0) # (n=14,t) -> 14 values result = np.hstack((result, chroma)) if contrast: contrast = np.mean(librosa.feature.spectral_contrast(S=stft, sr=sample_rate).T,axis=0) # (7,t) -> 7 values result = np.hstack((result, contrast)) if rolloff: rolloff = np.mean(librosa.feature.spectral_rolloff(X+0.01, sr=sample_rate),axis=0) rolloff = reshape_feature(rolloff) # (1,t) - > 10 values (avg) result = np.hstack((result, rolloff)) if centroids: centroids = np.mean(librosa.feature.spectral_centroid(X, sr=sample_rate),axis=0) centroids = reshape_feature(centroids) # (1,t) - > 10 values (avg) result = np.hstack((result, centroids)) return result f = os.fspath(dataset_path +'/03-01-04-01-02-02-01.wav') a = extract_feature(f, mel=True, mfcc=True, contrast=True, chroma=True, tonnetz=True, rolloff=True, centroids=True, rms=True) print(a, a.shape) ``` EXTRACT FEATURES FROM THE FILES IN THE DATASETS ``` data_frame = pd.DataFrame(columns=['all_features']) bookmark=0 for index,y in enumerate(dataset_files_list): all_features_ndarray = extract_feature(dataset_path + y, mel=True, mfcc=True, contrast=True, chroma=True, tonnetz=True, rolloff=True, centroids=True, rms=True) data_frame.loc[bookmark] = [all_features_ndarray] bookmark=bookmark+1 if bookmark%1000==0: print(bookmark) print(pd.DataFrame(data_frame['all_features']).shape) data_frame = 
pd.DataFrame(data_frame['all_features'].values.tolist()) data_frame_labels = pd.concat([data_frame,labels], axis=1) data_frame_labels = data_frame_labels.rename(index=str, columns={"0": "label"}) data_frame_labels ``` # SAVE DATASET FEATURES AND LABELS ``` with open((model_path + model_name + '_dataFrame.pkl'), 'wb') as f: pickle.dump(data_frame_labels, f) ``` # LOAD DATASET FEATURES AND LABELS ``` with open((model_path + model_name + '_dataFrame.pkl'), 'rb') as f: data_frame_labels = pickle.load(f) data_frame_labels data_array = data_frame_labels.iloc[:,:(data_frame_labels.shape[1]-1)].to_numpy() print(data_array) ``` # **PREPROCESSING - SCALING** ``` scaler = MinMaxScaler() data_array = scaler.fit_transform(data_array) data_array scaler.n_samples_seen_ ''' scaler = MinMaxScaler() scaler.min_ = array('d',[ 1.30154070e+00, 2.97715849e-02, 5.41415007e-01, 3.64280680e-01, 5.84953937e-01, 6.50669604e-01, 7.27531168e-01, 7.54978988e-01, 7.55184765e-01, 4.88008099e-01, 6.53292330e-01, 4.84929694e-01, 5.48758173e-01, 3.57267987e-01, 5.15166394e-01, 5.19290142e-01, 5.70538579e-01, 3.53063620e-01, 4.84194260e-01, 3.27457909e-01, 3.24381379e-01, 1.84661186e-01, 2.68046396e-01, 2.58805806e-01, 2.89444124e-01, 1.86745819e-01, 2.83281155e-01, 1.70330988e-01, 3.32254964e-01, 3.27732848e-01, 4.58707382e-01, 2.67222570e-01, 3.33327481e-01, 2.20059486e-01, 1.84829931e-01, 1.31825284e-01, 1.89310349e-01, 1.45575957e-01, 2.30085276e-01, 1.46747188e-01, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, -4.99015658e-04, -7.13259542e-05, -4.71385459e-05, -4.16212144e-04, -9.93093845e-05, -5.81941623e-06, 0.00000000e+00, -3.73939930e-09, -3.20601331e-09, -1.83520580e-09, -1.86189596e-09, -9.11656485e-10, -1.27284442e-09, -8.09480752e-10, -1.01175668e-09, -5.27261649e-09, -6.84602210e-08, -1.08224763e-07, -7.29590747e-08, -5.07011447e-08, -6.89153954e-08, -3.01679720e-07, -1.69732639e-07, -1.01042934e-07, -5.13562357e-08, -1.20337624e-08, -9.23787980e-09, -1.72962363e-08, -4.11257864e-08, 
-5.43108529e-08, -3.82925048e-08, -3.21115953e-08, -2.93470125e-08, -3.22921611e-08, -7.28703668e-08, -8.43160813e-08, -5.33754934e-08, -3.48581916e-08, -3.02516076e-08, -3.31239966e-08, -1.44645930e-08, -2.41520860e-08, -3.56238387e-08, -2.76142199e-08, -3.06727509e-08, -3.44345189e-08, -3.43187931e-08, -1.01301426e-08, -8.80770140e-09, -2.49743116e-08, -9.22353098e-08, -7.74890261e-08, -1.09284695e-07, -4.87964742e-08, -2.36972767e-08, -6.49535514e-08, -8.46584064e-08, -1.14254250e-07, -9.76232645e-08, -1.84039716e-07, -1.26451834e-07, -2.34784315e-07, -3.18204391e-07, -1.23961745e-07, -5.94465285e-08, -1.82377738e-07, -1.32080732e-07, -2.07906121e-07, -1.80756105e-07, -4.33149701e-07, -2.69355898e-07, -2.52788617e-07, -1.92414574e-07, -1.55175231e-07, -2.00444796e-07, -1.43011866e-07, -1.23685146e-07, -9.21234069e-08, -3.75060442e-08, -2.94232908e-08, -1.02231839e-07, -8.30043446e-08, -7.54564086e-08, -7.07511467e-08, -1.10602201e-07, -9.30265645e-08, -7.58011852e-08, -8.27770407e-08, -1.15673950e-07, -1.76792014e-07, -6.56155012e-08, -1.38688341e-07, -1.93922299e-07, -2.61136975e-07, -2.83564652e-07, -2.45926574e-07, -6.01926362e-07, -1.40350091e-07, -2.87268152e-07, -2.35565227e-07, -2.36514564e-07, -4.46589549e-07, -1.17599844e-07, -1.39270641e-07, -2.80638901e-07, -5.00816068e-07, -6.55967909e-07, -1.52268463e-06, -1.93650705e-06, -8.10495124e-07, -8.34085153e-07, -7.96100307e-07, -4.29721896e-07, -3.26402869e-07, -4.76817284e-07, -6.44593047e-07, -4.72604145e-07, -2.85124273e-07, -1.83989751e-07, -1.49900636e-07, -2.55917055e-07, -3.14224497e-07, -2.04816020e-07, -1.58638069e-07, -1.28591689e-07, -1.24634399e-07, -7.53818049e-08, -5.44385256e-08, -4.15725204e-08, -6.20036927e-08, -1.10522477e-07, -9.13172066e-08, -8.93674133e-08, -6.64198305e-08, -1.50288986e-07, 5.11835480e-01, 2.98677029e-01, 4.69133950e-01, 5.07910873e-01, 4.13857479e-01, 4.43363388e-01, -2.43311642e-01, -2.77122596e-01, -2.80790126e-01, -2.30427055e-01, -2.08248104e-01, -2.10005500e-01, 
...])
# (The remaining hard-coded fitted-scaler attributes are elided here:
#  scaler.data_range_, scaler.data_min_, scaler.data_max_ and scaler.scale_
#  are each 225-element arrays dumped from a previous fit.)
scaler.n_samples_seen_ = 7198
scaler.transform(data_array)
'''
```

REPLACING OLD VALUES WITH THE SCALED ONES

```
# Write the scaled feature values back into the labelled data frame.
for w in range(0, data_array.shape[0]):
    data_frame_labels.iloc[w, :data_array.shape[1]] = data_array[w]
print(data_frame_labels.shape)
data_frame_labels
```

# **SAVE NORMALIZED DATAFRAME**

```
with open((model_path + model_name + '_normalizedDataFrame.pkl'), 'wb') as f:
    pickle.dump(data_frame_labels, f)
```

# **LOAD NORMALIZED DATAFRAME**

```
with open((model_path + model_name + '_normalizedDataFrame.pkl'), 'rb') as f:
    data_frame_normalized = pickle.load(f)
```

SHUFFLE DATAFRAME

```
data_frame_normalized = shuffle(data_frame_normalized)
data_frame_normalized
```

# Dividing the data into test and train

```
data_frame_normalized.rename(columns={'0': 'labels'}, inplace=True)
data_frame_normalized = data_frame_normalized.dropna(axis=1)
columns_arr = [i for i in range(225)]
columns_arr.append('labels')
data_frame_normalized.columns = columns_arr
print(data_frame_normalized)

# Keep only the female classes: drop all male_* rows.
male_classes = ['male_neutral', 'male_calm', 'male_fear', 'male_surprise',
                'male_joy', 'male_sadness', 'male_anger', 'male_disgust']
for c in male_classes:
    data_frame_normalized = data_frame_normalized[data_frame_normalized.labels != c]
print(data_frame_normalized)

# Random 80/20 train/test split.
data_frame_normalized_set = np.random.rand(len(data_frame_normalized)) < 0.8
train = data_frame_normalized[data_frame_normalized_set]
test = data_frame_normalized[~data_frame_normalized_set]
train_features = train.iloc[:, :-1]
train_label = train.iloc[:, -1:]
test_features = test.iloc[:, :-1]
test_label = test.iloc[:, -1:]
test_label
```

# Labels

```
X_train = np.array(train_features)
y_train = np.array(train_label)
X_test = np.array(test_features)
y_test = np.array(test_label)

lb = LabelEncoder()
y_train = np_utils.to_categorical(lb.fit_transform(y_train))
# Use transform (not fit_transform) on the test labels, so the test
# encoding reuses the class ordering learned from the training labels.
y_test = np_utils.to_categorical(lb.transform(y_test))
lb.classes_
```

Changing dimension for CNN model

```
x_traincnn = np.expand_dims(X_train, axis=2)
x_testcnn = np.expand_dims(X_test, axis=2)
print(x_testcnn)
```

# **MODEL**

```
model = Sequential()
model.add(Conv1D(225, kernel_size=5, padding='same', activation='tanh',
                 kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4),
                 input_shape=(225, 1)))
model.add(Conv1D(128, kernel_size=5, padding='same', activation='tanh',
                 kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4)))
model.add(Conv1D(128, kernel_size=5, padding='same', activation='tanh',
                 kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4)))
model.add(Dropout(0.2))
model.add(Conv1D(64, kernel_size=5, padding='same', activation='tanh',
                 kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4)))
model.add(Dense(units=64, activation='tanh',
                kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4)))
model.add(Dropout(0.2))
model.add(Conv1D(64, kernel_size=5, padding='same', activation='tanh',
                 kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4)))
model.add(Flatten())
model.add(Dense(units=32, activation='tanh',
                kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4)))
model.add(Dense(8))
model.add(Activation('softmax'))

opt = tf.keras.optimizers.Adam(learning_rate=0.0001)
model.summary()
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
cnnhistory = model.fit(x_traincnn, y_train, batch_size=32, epochs=50,
                       validation_data=(x_testcnn, y_test))
```

# **PLOTTING**

```
plt.figure(figsize=(10, 6))
plt.plot(cnnhistory.history['loss'], 'm', linewidth=3)
plt.plot(cnnhistory.history['val_loss'], 'b', linewidth=3)
plt.legend(['Loss', 'Validation Loss'], fontsize=13)
plt.xlabel('epochs')
plt.ylabel('loss', fontsize=12)
plt.grid(True)
plt.show()

plt.figure(figsize=(10, 6), frameon=True)
plt.plot(cnnhistory.history['accuracy'], 'g', linewidth=3)
plt.plot(cnnhistory.history['val_accuracy'], 'r', linewidth=3)
plt.title('Model Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy', fontsize=12)
plt.legend(['Accuracy', 'Validation Accuracy'], loc='upper left', fontsize=13)
plt.grid(True)
plt.show()

tf.keras.utils.plot_model(
    model,
    to_file="img_model.png",
    show_shapes=False,
    show_layer_names=True,
    rankdir="TB",
    expand_nested=False,
    dpi=96,
)
dot_img_file = (model_path + 'img_cnn_model_' + model_name + '.png')
tf.keras.utils.plot_model(model, to_file=dot_img_file, show_shapes=True)
```

# **SAVING THE MODEL**

```
model.save(os.path.join(model_path, (model_name + '.h5')))
print('Saved trained model at %s ' % model_path)
model_json = model.to_json()
with open((model_path + model_name + '.json'), "w") as json_file:
    json_file.write(model_json)
```

# **LOADING THE MODEL**

```
json_file = open(model_path + model_name + '.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
loaded_model.load_weights(model_path + model_name + '.h5')
print("Loaded model from disk")

opt = tf.keras.optimizers.Adam(learning_rate=0.0001)
loaded_model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
score = loaded_model.evaluate(x_testcnn, y_test, verbose=0)
print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1] * 100))
```

# **Predicting emotions on the test data**

```
# Reuse the LabelEncoder fitted on the training labels; re-creating an
# unfitted encoder here would make inverse_transform fail.
predicted = loaded_model.predict(x_testcnn, batch_size=32, verbose=1)
predicted = predicted.argmax(axis=1).astype(int).flatten()
predictions = pd.DataFrame({'predictedvalues': lb.inverse_transform(predicted)})

actual = y_test.argmax(axis=1).astype(int).flatten()
actual_labels = pd.DataFrame({'actualvalues': lb.inverse_transform(actual)})

prediction_df = actual_labels.join(predictions)
prediction_df[100:110]

prediction_df.groupby('actualvalues').count()
prediction_df.groupby('predictedvalues').count()
prediction_df.to_csv(model_path + model_name + '.csv', index=False)

from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
import seaborn as sns  # used by print_confusion_matrix below

classes = prediction_df.actualvalues.unique()
classes.sort()
print(classification_report(prediction_df.actualvalues,
                            prediction_df.predictedvalues,
                            target_names=classes))

def print_confusion_matrix(confusion_matrix, class_names, figsize=(10, 7), fontsize=14):
    df_cm = pd.DataFrame(
        confusion_matrix, index=class_names, columns=class_names,
    )
    fig = plt.figure(figsize=figsize)
    try:
        heatmap = sns.heatmap(df_cm, annot=True, fmt="d")
    except ValueError:
        raise ValueError("Confusion matrix values must be integers.")
    heatmap.yaxis.set_ticklabels(heatmap.yaxis.get_ticklabels(),
                                 rotation=0, ha='right', fontsize=fontsize)
    heatmap.xaxis.set_ticklabels(heatmap.xaxis.get_ticklabels(),
                                 rotation=45, ha='right', fontsize=fontsize)
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

def gender(row):
    # Membership test: the original `row == 'female_disgust' or 'female_fear' or ...`
    # was always True, because each bare string in the chain is truthy.
    female_classes = ('female_disgust', 'female_fear', 'female_joy',
                      'female_sadness', 'female_surprise', 'female_neutral',
                      'female_anger', 'female_calm')
    if row in female_classes:
        return 'female'

prediction_df = pd.read_csv(model_path + model_name + '.csv')
classes = prediction_df.actualvalues.unique()
classes.sort()
c = confusion_matrix(prediction_df.actualvalues, prediction_df.predictedvalues)
# print(accuracy_score(prediction_df.actualvalues, prediction_df.predictedvalues))
print_confusion_matrix(c, class_names=classes)
```
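The label round trip used above (`LabelEncoder` → one-hot → `argmax` → `inverse_transform`) can be checked on a handful of toy labels. This is a sketch with made-up label values; `np.eye(...)[codes]` stands in for `np_utils.to_categorical`, which produces the same one-hot layout:

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder

labels = np.array(['female_anger', 'female_joy', 'female_sadness', 'female_joy'])
lb = LabelEncoder()
encoded = lb.fit_transform(labels)           # integer codes 0..n_classes-1
one_hot = np.eye(len(lb.classes_))[encoded]  # same layout as to_categorical output
decoded = lb.inverse_transform(one_hot.argmax(axis=1))
assert list(decoded) == list(labels)
```

Because `inverse_transform` depends on the class ordering learned by `fit`, the encoder must be the same fitted instance used for encoding, which is why the prediction cell reuses `lb` rather than creating a fresh one.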
# This code implements segmentation of pathological regions from retinal images using a U-net model with depth 4 and tensorflow 2.x versions. ## This code implements binary classification ## This model is adapted from the original codebase in https://github.com/HZCTony/U-net-with-multiple-classification ``` #This code snippet helps if your computer has RTX 2070 GPU. If not then comment this cell. from tensorflow.compat.v1 import ConfigProto from tensorflow.compat.v1 import InteractiveSession config = ConfigProto() config.gpu_options.allow_growth = True session = InteractiveSession(config=config) ``` # A. Lets start by stepwise defining all libraries and functions needed to generate the model and pre-process the data ``` #Step 1: Load libraries for the U-net Model import numpy as np import os import skimage.io as io import skimage.transform as trans import numpy as np from tensorflow.keras.models import * from tensorflow.keras.layers import * from tensorflow.keras.optimizers import * from tensorflow.keras.callbacks import ModelCheckpoint, LearningRateScheduler from tensorflow.keras import backend as keras #from tensorflow import keras import tensorflow as tf img_size=(256,256) def dice_coef(y_true, y_pred, smooth=1): intersection = keras.sum(y_true * y_pred, axis=[1,2,3]) union = keras.sum(y_true, axis=[1,2,3]) + keras.sum(y_pred, axis=[1,2,3]) return keras.mean( (2. 
* intersection + smooth) / (union + smooth), axis=0) def dice_coef_loss(y_true, y_pred): return -dice_coef(y_true, y_pred) #Step 2: Define the U-net model with Depth=4 def unet(pretrained_weights = None,input_size = (256,256,1)): inputs = tf.keras.Input(shape=input_size) conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs) conv1 = BatchNormalization()(conv1) conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1) conv1 = BatchNormalization()(conv1) pool1 = MaxPooling2D(pool_size=(2, 2))(conv1) conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1) conv2 = BatchNormalization()(conv2) conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2) conv2 = BatchNormalization()(conv2) pool2 = MaxPooling2D(pool_size=(2, 2))(conv2) conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2) conv3 = BatchNormalization()(conv3) conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv3) conv3 = BatchNormalization()(conv3) pool3 = MaxPooling2D(pool_size=(2, 2))(conv3) conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool3) conv4 = BatchNormalization()(conv4) conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv4) conv4 = BatchNormalization()(conv4) drop4 = Dropout(0.5)(conv4, training=True) pool4 = MaxPooling2D(pool_size=(2, 2))(drop4) conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4) conv5 = BatchNormalization()(conv5) conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv5) conv5 = BatchNormalization()(conv5) drop5 = Dropout(0.5)(conv5, training=True) up6 = Conv2D(512, 2, activation = 'relu', padding = 'same', 
kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop5)) merge6 = concatenate([drop4,up6], axis = 3) conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge6) conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv6) up7 = Conv2D(256, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv6)) merge7 = concatenate([conv3,up7], axis = 3) conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge7) conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv7) up8 = Conv2D(128, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv7)) merge8 = concatenate([conv2,up8], axis = 3) conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge8) conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv8) up9 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv8)) merge9 = concatenate([conv1,up9], axis = 3) conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge9) conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9) #conv9 = Conv2D(2, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9) conv10 = Conv2D(1, 1, activation = 'sigmoid')(conv9) model = tf.keras.Model(inputs = inputs, outputs = conv10) model.compile(optimizer = Adam(lr = 0.0001), loss = dice_coef_loss, metrics = dice_coef) if(pretrained_weights): model=keras.models.load_model(pretrained_weights) return model n_class=2 #Hamorrhages vs background #Step 3:Define functions for pre-processing data from tensorflow.keras.preprocessing.image import ImageDataGenerator 
import skimage.io as io import skimage.transform as trans import matplotlib.pyplot as plt import scipy.misc as sc def adjustData(img,mask,flag_multi_class,n_class): if(flag_multi_class): img /= 255 mask = mask[:,:,:,0] if(len(mask.shape) == 4) else mask[:,:,0] new_mask = np.zeros(mask.shape + (n_class,)) for i in range(n_class): new_mask[mask == i,i] = 1 new_mask = np.reshape(new_mask,(new_mask.shape[0],new_mask.shape[1]*new_mask.shape[2],new_mask.shape[3])) if flag_multi_class else np.reshape(new_mask,(new_mask.shape[0]*new_mask.shape[1],new_mask.shape[2])) mask = new_mask elif(np.max(img)>1): img = img / 255 mask = mask /255 mask[mask > 0.1] = 1 mask[mask <= 0.1] = 0 #print(np.shape(mask),np.shape(img)) return (img,mask) def trainGenerator(batch_size,train_path,image_folder,mask_folder,aug_dict,image_color_mode = "grayscale", mask_color_mode = "grayscale",image_save_prefix = "image",mask_save_prefix = "mask", flag_multi_class = False,n_class = n_class,save_to_dir = None,target_size = img_size,seed = 1): ''' can generate image and mask at the same time use the same seed for image_datagen and mask_datagen to ensure the transformation for image and mask is the same if you want to visualize the results of generator, set save_to_dir = "your path" ''' image_datagen = ImageDataGenerator(**aug_dict) mask_datagen = ImageDataGenerator(**aug_dict) image_generator = image_datagen.flow_from_directory( #'./', train_path, classes = [image_folder], color_mode = image_color_mode, target_size = target_size, batch_size = batch_size, save_to_dir = save_to_dir, save_prefix = image_save_prefix, class_mode=None, seed = seed) mask_generator = mask_datagen.flow_from_directory( train_path, classes = [mask_folder], color_mode = mask_color_mode, target_size = target_size, batch_size = batch_size, save_to_dir = save_to_dir, save_prefix = mask_save_prefix, class_mode=None, seed = seed) train_generator = zip(image_generator, mask_generator) for (img,mask) in train_generator: img,mask = 
adjustData(img,mask,flag_multi_class,n_class) yield (img,mask) def testGenerator(test_path,target_size = img_size,flag_multi_class = True,as_gray = True): files=sorted(os.listdir(test_path)) num_image=len(files) for i in range(num_image): img = io.imread(os.path.join(test_path,files[i]),as_gray = as_gray) print(files[i]) img = trans.resize(img,target_size) img = np.reshape(img,img.shape+(1,)) if (not flag_multi_class) else img img = np.reshape(img,(1,)+img.shape) #print(np.max(img)) yield img #Step 4: Define function to save the test images def labelVisualize(num_class,color_dict,img): img = img[:,:,0] if len(img.shape) == 3 else img img_out = np.zeros(img.shape + (3,)) for i in range(num_class): img_out[img == i] = color_dict[i] return img_out def saveResult(img_path,save_path,npyfile,flag_multi_class = False,num_class = 2): files=os.listdir(img_path) #print(len(img_path)) #print(len(npyfile)) for i,item in enumerate(npyfile): img = labelVisualize(num_class,COLOR_DICT,item) if flag_multi_class else item[:,:,0] #img1=np.array(((img - np.min(img))/np.ptp(img))>0.6).astype(float) img[img>0.5]=1 img[img<=0.5]=0 io.imsave(os.path.join(save_path, files[i]),img) def SaveResultwImage(img_path,save_path,npyfile,target_size=img_size,flag_multi_class = False,num_class = 2): files=os.listdir(img_path) #print(len(img_path)) #print(len(npyfile)) for i,item in enumerate(npyfile): img = labelVisualize(num_class,COLOR_DICT,item) if flag_multi_class else item[:,:,0] #img1=np.array(((img - np.min(img))/np.ptp(img))>0.6).astype(float) img[img>0.5]=1 img[img<=0.5]=0 I = io.imread(os.path.join(img_path,files[i])) I = trans.resize(I,target_size) I[:,:,0]=np.true_divide((I[:,:,0]+img),2) io.imsave(os.path.join(save_path, files[i]),I) #Step 5: Define functions to evaluate the output import sklearn.metrics as sm def get_confusion_matrix_elements(groundtruth_list, predicted_list): """returns confusion matrix elements i.e TN, FP, FN, TP as floats See example code for helper function 
definitions """ tn, fp, fn, tp = sm.confusion_matrix(groundtruth_list, predicted_list,labels=[0,1]).ravel() tn, fp, fn, tp = np.float64(tn), np.float64(fp), np.float64(fn), np.float64(tp) return tn, fp, fn, tp def get_prec_rec_IoU_accuracy(groundtruth_list, predicted_list): """returns precision, recall, IoU and accuracy metrics """ tn, fp, fn, tp = get_confusion_matrix_elements(groundtruth_list, predicted_list) total = tp + fp + fn + tn accuracy = (tp + tn) / total prec=tp/(tp+fp) rec=tp/(tp+fn) IoU=tp/(tp+fp+fn) return prec,rec,IoU,accuracy def get_f1_score(groundtruth_list, predicted_list): """Return f1 score covering edge cases""" tn, fp, fn, tp = get_confusion_matrix_elements(groundtruth_list, predicted_list) f1_score = (2 * tp) / ((2 * tp) + fp + fn) return f1_score def get_validation_metrics(groundtruth,predicted): """Return all output metrics. Input is binary images""" u,v=np.shape(groundtruth) groundtruth_list=np.reshape(groundtruth,(u*v,)) predicted_list=np.reshape(predicted,(u*v,)) prec,rec,IoU,acc=get_prec_rec_IoU_accuracy(groundtruth_list, predicted_list) f1_score=get_f1_score(groundtruth_list, predicted_list) # print("Precision=",prec, "Recall=",rec, "IoU=",IoU, "acc=",acc, "F1=",f1_score) return prec,rec,IoU,acc,f1_score def evalResult(gth_path,npyfile,target_size=img_size,flag_multi_class = False,num_class = 2): files=sorted(os.listdir(gth_path)) #print(files) prec=0 rec=0 acc=0 IoU=0 f1_score=0 for i,item in enumerate(npyfile): img = item[:,:,0] gth = io.imread(os.path.join(gth_path,files[i])) gth = trans.resize(gth,target_size) if(np.sum(img)>0): img1=np.array(((img - np.min(img))/np.ptp(img))>0.1).astype(float) if(np.sum(gth)>0): gth1=np.array(((gth - np.min(gth))/np.ptp(gth))>0.1).astype(float) gth1=(gth1>0.1).astype(int) p,r,I,a,f=get_validation_metrics(gth1,img1) prec=prec+p rec=rec+r acc=acc+a IoU=IoU+I f1_score=f1_score+f print("Precision=",prec/(i+1), "Recall=",rec/(i+1), "IoU=",IoU/(i+1), "acc=",acc/(i+1), "F1=",f1_score/(i+1)) ``` # All 
definitions are now done! Let's start using the functions now... # B. Call to image data generator, model initialization, followed by model fitting. ``` #Step 1: Call to image data generator in keras data_gen_args = dict(rotation_range=0.3, width_shift_range=0.1, height_shift_range=0.1, shear_range=0.1, zoom_range=[0.7,1], horizontal_flip=True, fill_mode='nearest') PATH='./train/' if not os.path.exists(PATH+'aug_bin'): os.makedirs(PATH+'aug_bin') if not os.path.exists(PATH+'pred'): os.makedirs(PATH+'pred') data_gen = trainGenerator(3,PATH,'images','GT',data_gen_args, save_to_dir = None) for e in range(5): print('Epoch', e) batches = 0 for x_batch, y_batch in data_gen: #print(np.max(x_batch)) for i in range(0, 2): plt.subplot(330+1 + i) plt.imshow(y_batch[i], cmap=plt.get_cmap('gray')) plt.show() break #Step 2: Initialize the model. Train from scratch! model = unet() model.summary() #Step 3: Initialize Tensorboard to monitor changes in Model Loss import datetime %load_ext tensorboard log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S") tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1) #Visualize on tensorboard (move this above) %tensorboard --logdir logs/fit #Step 4: Fit the u-net model #model_checkpoint = tf.keras.callbacks.ModelCheckpoint('unet_DB1_bin.hdf5', monitor='loss',verbose=0)# save_best_only=True) model.fit(data_gen,steps_per_epoch=20,epochs=60,verbose=1,callbacks=[tensorboard_callback])#,model_checkpoint]) ``` # Final trained model is saved as unet_DB1.hdf5 # C.
Run the trained model on test images and save the outputs, and evaluate pixel-level segmentation performance ``` #Step 1: Run model on test images and save the images #number of test images n_i=len(os.listdir('./test/images/')) #Call test generator test_gen = testGenerator('./test/images/') #Return model outcome for each test image results = model.predict_generator(test_gen,n_i,verbose=1) #saveResult('./test/images/',PATH+'pred/',results) SaveResultwImage('./test/images/',PATH+'pred/',results) evalResult('./test/GT/',results) ``` # Now we have replicated code for binary semantic segmentation for hemorrhages, specifically. Next, let's look at multi-class
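The evaluation helpers above all reduce to the four confusion-matrix counts per image. As a sanity check of those formulas, here is the same pixel-level precision, recall, IoU and F1 computed with plain NumPy on a pair of tiny hand-made binary masks (toy arrays, not data from this notebook), where the counts can be verified by eye:

```python
import numpy as np

def toy_metrics(gt, pred):
    """Pixel-level metrics from two binary masks (same counts as
    get_prec_rec_IoU_accuracy above, but with plain NumPy)."""
    gt, pred = gt.ravel(), pred.ravel()
    tp = np.sum((gt == 1) & (pred == 1))
    fp = np.sum((gt == 0) & (pred == 1))
    fn = np.sum((gt == 1) & (pred == 0))
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return prec, rec, iou, f1

gt = np.array([[1, 1, 0],
               [0, 1, 0]])
pred = np.array([[1, 0, 0],
                 [0, 1, 1]])
# By inspection: tp=2, fp=1, fn=1, so precision=recall=2/3 and IoU=2/4=0.5
print(toy_metrics(gt, pred))
```

On real predictions, the thresholding step in `saveResult` (`img[img>0.5]=1`) produces exactly this kind of binary mask, so the same per-image counts apply.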
<table style="float:left; border:none"> <tr style="border:none; background-color: #ffffff"> <td style="border:none"> <a href="http://bokeh.pydata.org/"> <img src="assets/bokeh-transparent.png" style="width:50px" > </a> </td> <td style="border:none"> <h1>Bokeh Tutorial</h1> </td> </tr> </table> <div style="float:right;"><h2>06. Linking and Interactions</h2></div> ``` from bokeh.io import output_notebook, show from bokeh.plotting import figure output_notebook() ``` Now that we know from the previous chapter how multiple plots can be placed together in a layout, we can start to look at how different plots can be linked together, or how plots can be linked to widgets. # Linked Interactions It is possible to link various interactions between different Bokeh plots. For instance, the ranges of two (or more) plots can be linked, so that when one of the plots is panned (or zoomed, or otherwise has its range changed) the other plots will update in unison. It is also possible to link selections between two plots, so that when items are selected on one plot, the corresponding items on the second plot also become selected. ## Linked panning Linked panning (when multiple plots have ranges that stay in sync) is simple to spell with Bokeh. You simply share the appropriate range objects between two (or more) plots.
The example below shows how to accomplish this by linking the ranges of three plots in various ways: ``` from bokeh.layouts import gridplot x = list(range(11)) y0, y1, y2 = x, [10-i for i in x], [abs(i-5) for i in x] plot_options = dict(width=250, plot_height=250, tools='pan,wheel_zoom') # create a new plot s1 = figure(**plot_options) s1.circle(x, y0, size=10, color="navy") # create a new plot and share both ranges s2 = figure(x_range=s1.x_range, y_range=s1.y_range, **plot_options) s2.triangle(x, y1, size=10, color="firebrick") # create a new plot and share only one range s3 = figure(x_range=s1.x_range, **plot_options) s3.square(x, y2, size=10, color="olive") p = gridplot([[s1, s2, s3]]) # show the results show(p) # EXERCISE: create two plots in a gridplot, and link their ranges ``` ## Linked brushing Linking selections is accomplished in a similar way, by sharing data sources between plots. Note that normally with ``bokeh.plotting`` and ``bokeh.charts`` creating a default data source for simple plots is handled automatically. However to share a data source, we must create them by hand and pass them explicitly. This is illustrated in the example below: ``` from bokeh.models import ColumnDataSource x = list(range(-20, 21)) y0, y1 = [abs(xx) for xx in x], [xx**2 for xx in x] # create a column data source for the plots to share source = ColumnDataSource(data=dict(x=x, y0=y0, y1=y1)) TOOLS = "box_select,lasso_select,help" # create a new plot and add a renderer left = figure(tools=TOOLS, width=300, height=300) left.circle('x', 'y0', source=source) # create another new plot and add a renderer right = figure(tools=TOOLS, width=300, height=300) right.circle('x', 'y1', source=source) p = gridplot([[left, right]]) show(p) # EXERCISE: create two plots in a gridplot, and link their data sources ``` # Hover Tools Bokeh has a Hover Tool that allows additional information to be displayed in a popup whenever the user hovers over a specific glyph. 
Basic hover tool configuration amounts to providing a list of ``(name, format)`` tuples. The full details can be found in the User's Guide [here](http://bokeh.pydata.org/en/latest/docs/user_guide/tools.html#hovertool). The example below shows some basic usage of the Hover tool with a circle glyph, with the hover information defined inline: ``` from bokeh.models import HoverTool source = ColumnDataSource( data=dict( x=[1, 2, 3, 4, 5], y=[2, 5, 8, 2, 7], desc=['A', 'b', 'C', 'd', 'E'], ) ) hover = HoverTool( tooltips=[ ("index", "$index"), ("(x,y)", "($x, $y)"), ("desc", "@desc"), ] ) p = figure(plot_width=300, plot_height=300, tools=[hover], title="Mouse over the dots") p.circle('x', 'y', size=20, source=source) show(p) ``` # Widgets Bokeh supports direct integration with a small basic widget set. These can be used in conjunction with a Bokeh Server, or with ``CustomJS`` models to add more interactive capability to your documents. You can see a complete list, with example code in the [Adding Widgets](http://bokeh.pydata.org/en/latest/docs/user_guide/interaction.html#adding-widgets) section of the User's Guide.
To use the widgets, include them in a layout like you would a plot object: ``` from bokeh.layouts import widgetbox from bokeh.models.widgets import Slider slider = Slider(start=0, end=10, value=1, step=.1, title="foo") show(widgetbox(slider)) # EXERCISE: create and show a Select widget ``` # CustomJS Callbacks ``` from bokeh.models import TapTool, CustomJS, ColumnDataSource callback = CustomJS(code="alert('hello world')") tap = TapTool(callback=callback) p = figure(plot_width=600, plot_height=300, tools=[tap]) p.circle(x=[1, 2, 3, 4, 5], y=[2, 5, 8, 2, 7], size=20) show(p) ``` ## Lots of places to add callbacks * Widgets - Button, Toggle, Dropdown, TextInput, AutocompleteInput, Select, Multiselect, Slider, (DateRangeSlider), DatePicker, * Tools - TapTool, BoxSelectTool, HoverTool, * Selection - ColumnDataSource, AjaxDataSource, BlazeDataSource, ServerDataSource * Ranges - Range1d, DataRange1d, FactorRange ## Callbacks for widgets Widgets that have values associated can have small JavaScript actions attached to them. These actions (also referred to as "callbacks") are executed whenever the widget's value is changed. In order to make it easier to refer to specific Bokeh models (e.g., a data source, or a glyph) from JavaScript, the ``CustomJS`` object also accepts a dictionary of "args" that map names to Python Bokeh models. The corresponding JavaScript models are made available automatically to the ``CustomJS`` code.
The example below shows an action attached to a slider that updates a data source whenever the slider is moved: ``` from bokeh.layouts import column from bokeh.models import CustomJS, ColumnDataSource, Slider x = [x*0.005 for x in range(0, 201)] source = ColumnDataSource(data=dict(x=x, y=x)) plot = figure(plot_width=400, plot_height=400) plot.line('x', 'y', source=source, line_width=3, line_alpha=0.6) slider = Slider(start=0.1, end=6, value=1, step=.1, title="power") update_curve = CustomJS(args=dict(source=source, slider=slider), code=""" var data = source.get('data'); var f = slider.value; x = data['x'] y = data['y'] for (i = 0; i < x.length; i++) { y[i] = Math.pow(x[i], f) } source.change.emit(); """) slider.js_on_change('value', update_curve) show(column(slider, plot)) ``` ## Callbacks for selections It's also possible to make JavaScript actions that execute whenever a user selection (e.g., box, point, lasso) changes. This is done by attaching the same kind of CustomJS object to whatever data source the selection is made on.
The example below is a bit more sophisticated, and demonstrates updating one glyph's data source in response to another glyph's selection: ``` from random import random x = [random() for x in range(500)] y = [random() for y in range(500)] color = ["navy"] * len(x) s = ColumnDataSource(data=dict(x=x, y=y, color=color)) p = figure(plot_width=400, plot_height=400, tools="lasso_select", title="Select Here") p.circle('x', 'y', color='color', size=8, source=s, alpha=0.4) s2 = ColumnDataSource(data=dict(xm=[0,1],ym=[0.5, 0.5])) p.line(x='xm', y='ym', color="orange", line_width=5, alpha=0.6, source=s2) s.callback = CustomJS(args=dict(s2=s2), code=""" var inds = cb_obj.get('selected')['1d'].indices; var d = cb_obj.get('data'); var ym = 0 if (inds.length == 0) { return; } for (i = 0; i < d['color'].length; i++) { d['color'][i] = "navy" } for (i = 0; i < inds.length; i++) { d['color'][inds[i]] = "firebrick" ym += d['y'][inds[i]] } ym /= inds.length s2.get('data')['ym'] = [ym, ym] cb_obj.trigger('change'); s2.trigger('change'); """) show(p) ``` # More For more interactions, see the User Guide - http://bokeh.pydata.org/en/latest/docs/user_guide/interaction.html
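Stripped of the Bokeh plumbing, the selection callback above does two things: it recolors the selected indices and places a horizontal line at the mean of their y values. The same logic in plain NumPy (toy data and a hypothetical index list, for intuition only):

```python
import numpy as np

y = np.array([0.2, 0.9, 0.4, 0.7])
color = np.array(["navy"] * len(y), dtype=object)

inds = [1, 3]              # indices a lasso selection might report
color[inds] = "firebrick"  # recolor the selected glyphs
ym = y[inds].mean()        # height of the orange line

print(list(color), ym)
```

In the CustomJS version this computation runs in the browser on every selection change, which is why the data sources, not Python, own the state.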
``` #%load /Users/gully/astroML/bombcat/DFM_GP_example.py ``` ##Author: gully ##Date: Mar 24 16:35:28 2014 ###Updated: Jan 6, 2015 #Desc: Gaussian Process example Originals available at: https://github.com/dfm/gp-tutorial & https://speakerdeck.com/dfm/an-astronomers-introduction-to-gaussian-processes ``` %pylab inline # Auto imports np and plt %config InlineBackend.figure_format = 'retina' from matplotlib import rcParams #rcParams["savefig.dpi"] = 150 import emcee # http://dan.iel.fm/emcee import triangle # https://github.com/dfm/triangle.py import numpy as np import matplotlib.pyplot as plt from astroML.plotting import hist from astroML.stats.random import bivariate_normal from matplotlib.patches import Ellipse import timeit #from IPython.display import display, Math, Latex import seaborn as sns sns.set_context("notebook", font_scale=1.5, rc={"lines.linewidth": 2.5}) cmap = sns.cubehelix_palette(light=1, as_cmap=True) sns.palplot(sns.cubehelix_palette(light=1)) ``` ## First we construct the data. 50 ($x$, $y$) points with errors $\epsilon$. We also construct and apply the covariance matrix. ``` np.random.seed(123456) #First, build the "true" dataset with N=50 datapoints from a line model y=mx+b. true_m, true_b = 0.5, -0.25 N = 50 x = np.linspace(-5, 5, N) y = true_m * x + true_b #Introduce some noise with both measurement uncertainties # and non-trivial correlated errors.
yerr = 0.1 + 0.4 * np.random.rand(N) yerr_hom = 0.4*np.ones(N) hom_cov = np.diag(yerr_hom ** 2) iid_cov = np.diag(yerr ** 2) true_cov = 0.5 * np.exp(-0.5 * (x[:, None]-x[None, :])**2 / 1.3**2) + np.diag(yerr ** 2) y = np.random.multivariate_normal(y, true_cov) #y = np.random.multivariate_normal(y, iid_cov) #plt.hist(yerr, normed=True, histtype='stepfilled', alpha=0.4); #plt.xlabel('$\epsilon$'); ``` ##Plot II: Make a heatmap of the covariance matrix The key idea about the covariance matrix is that the on-diagonal terms are the variances (*i.e.* $\sigma^2$) of the data points, whereas the off-diagonal terms demonstrate the extent to which neighboring values are correlated. ``` plt.pcolormesh(hom_cov, cmap=cmap); plt.colorbar(); plt.title('Homoscedastic, no covariance') plt.pcolormesh(iid_cov, cmap=cmap); plt.colorbar(); plt.title('Heteroscedastic, no covariance') #Visualize the covariance plt.pcolormesh(true_cov, cmap=cmap); plt.colorbar(); plt.title('Heteroscedastic and covariance') ``` ## Plot III: Data vs 'truth' And plot the data with the observational uncertainties. The true line is plotted in black. 
``` x0 = np.linspace(-6, 6, 1000) plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0) plt.plot(x0, true_m * x0 + true_b, "-k", lw=2, alpha=0.8) plt.xlabel('$x$'); plt.ylabel('$y$'); plt.title('Data with covariance'); ``` ## Do the linear regression in Matrix form ``` A = np.vander(x, 2) AT= A.T C = iid_cov C_inv = np.linalg.inv(C) S_inv = np.dot( np.dot(AT, C_inv), A) S= np.linalg.inv(S_inv) ls_m, ls_b = np.linalg.solve(S_inv, np.dot(A.T, np.linalg.solve(iid_cov, y))) ls_S = np.linalg.inv(S_inv) A ``` ###Plot the least squares solution as a band ``` rand_params=np.random.multivariate_normal([ls_m, ls_b], ls_S, size=5000) samples = np.dot(np.vander(x0, 2), rand_params.T) ls_mu = np.mean(samples, axis=1) ls_std = np.std(samples, axis=1) plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0) plt.plot(x0, true_m * x0 + true_b, "k", lw=2, alpha=0.8); plt.fill_between(x0, ls_mu+ls_std, ls_mu-ls_std, color="r", alpha=0.3) plt.ylim(-4, 4) plt.xlabel('$x$'); plt.ylabel('$y$'); ``` Okay, now let's use the True Covariance matrix we have and do the same least squares. This is sort of cheating, since the covariance matrix was used in drawing the data in the first place. It's important to note that we don't get back exactly the true line when we stick in the covariance matrix though. The y's were merely DRAWN FROM a multivariate normal (MVN) whose mean values were the true y's (=true_m x + true_b), and the covariance matrix of the MVN was this True Covariance matrix, true_cov. In the limit that we fit tons of iterations of the least squares, one of the fits will be close to true_m, true_b, but there will be ample (honest) scatter.
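Before handing the true covariance matrix to the fit, it is worth pausing on the squared-exponential term used to build `true_cov` above. A tiny standalone check (same functional form, made-up points) shows the properties we rely on:

```python
import numpy as np

# Same functional form as the off-diagonal part of true_cov above,
# with made-up points and the same a=0.5 amplitude and s=1.3 length scale.
x = np.array([0.0, 1.0, 3.0])
a, s = 0.5, 1.3
K = a * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / s**2)

print(K)
```

Two properties matter for what follows: `K` is symmetric with the full amplitude `a` on the diagonal, and the covariance between two points decays with their separation, so the nearby pair (0, 1) covaries more strongly than the distant pair (0, 3).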
``` S_inv1 = np.dot(AT, np.linalg.solve(true_cov, A)) corr_m, corr_b = np.linalg.solve(S_inv1, np.dot(AT, np.linalg.solve(true_cov, y))) corr_S = np.linalg.inv(S_inv1) rand_paramsCorr=np.random.multivariate_normal([corr_m, corr_b], corr_S, size=5000) ``` Visualize the [m, b] derived from independent error Least Squares AND true Correlation matrix least squares, and the true [m, b] Put the distribution of parameters as a Hess diagram Really, we could skip this and just plot the contours ``` #Pick m, b at random HC, xbinsC, ybinsC = np.histogram2d(rand_paramsCorr[:,0], rand_paramsCorr[:,1], bins=(np.linspace(0.0,1.0, 50), np.linspace(-1.0,1.0, 50))) # Create a black and white color map where bad data (NaNs) are white cmap.set_bad('w', 1.) # Use the image display function imshow() to plot the result fig, ax = plt.subplots(figsize=(8, 6)) HC[HC == 0] = 1 # prevent warnings in log10 ax.imshow(np.log10(HC).T, origin='lower', extent=[xbinsC[0], xbinsC[-1], ybinsC[0], ybinsC[-1]], cmap=cmap, interpolation='nearest', aspect='auto') ax.plot([ls_m], [ls_b], 'rx') ax.plot([corr_m], [corr_b], 'b+') ax.plot([true_m], [true_b], 'go') #From astroML book page 110 eq. 3.82 ang1=0.5*np.arctan(2.0*ls_S[1,0]/(ls_S[0,0]-ls_S[1,1])) ang2=0.5*np.arctan(2.0*corr_S[1,0]/(corr_S[0,0]-corr_S[1,1])) for N in (1, 2, 3): ax.add_patch(Ellipse([corr_m, corr_b], N * sqrt(corr_S[0,0]), N*sqrt(corr_S[1,1]), angle=ang2 * 180. / np.pi, lw=1, ec='b', fc='none')) for N in (1, 2, 3): ax.add_patch(Ellipse(np.array([ls_m, ls_b]), N * sqrt(ls_S[0,0]), N * sqrt(ls_S[1,1]), angle=ang1 * 180. 
/ np.pi, lw=1, ec='r', fc='none')) ax.set_xlabel(r'$m$') ax.set_ylabel(r'$b$') ax.set_xlim(0.0, 1.0) ax.set_ylim(-1, 1); ``` Plot a nice big blue band over the data- the band is the uncertainty surrounding a line fit, for a range of m's and b's drawn from the MVN centered on the linear regression with the true covariance matrix ``` samples = np.dot(np.vander(x0, 2), rand_paramsCorr.T) corr_mu = np.mean(samples, axis=1) corr_std = np.std(samples, axis=1) fig = plt.figure(figsize=(6, 6)) fig.subplots_adjust(left=0.11, right=0.95, wspace=0.3, bottom=0.17, top=0.9) ax = fig.add_subplot(111) ax.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0) ax.plot(x0, true_m * x0 + true_b, "k", lw=2, alpha=0.8); ax.fill_between(x0, corr_mu+corr_std, corr_mu-corr_std, color="b", alpha=0.3) ax.fill_between(x0, ls_mu+ls_std, ls_mu-ls_std, color="r", alpha=0.3) plt.ylim(-4, 4) plt.xlabel('$x$'); plt.ylabel('$y$'); ``` Now we move on to applying a Kernel to model the off-diagonal elements for an unknown covariance matrix! To do that we should define some convenient functions ##Constructing a guess at the covariance matrix The off-diagonal terms are characterized by parameters $a$ and $s$. The term $a$ controls the strength of the interaction. The term $s$ controls the range of correlation. First we define a natural log likelihood function. You hand it an $m$, a $b$, an $a$, and an $s$ and it gives you the likelihood of the data: $\ln{p}=\ln{p(y|m,b,a,s)}$ Note: x is not passed to this function. The way Python works, `x`, `y`, and `iid_cov` are searched for in the local namespace, then global namespace. ``` def lnlike(m, b, lna, lns): a, s = np.exp(lna), np.exp(lns) off_diag_terms = a * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / s**2) C = iid_cov + off_diag_terms s, logdet = np.linalg.slogdet(C) if s <= 0: return -np.inf r = y - (m*x + b) return -0.5 * (np.dot(r, np.linalg.solve(C, r)) + logdet) # Apply a uniform prior over some range. 
# Shape params are uniform in log space def lnprior(m, b, lna, lns): if not (-2 < m < 2 and -2 < b < 2 and -5 < lna < 5 and -5 < lns < 5): return -np.inf return 0.0 def lnprob(p): lp = lnprior(*p) if not np.isfinite(lp): return -np.inf return lp + lnlike(*p) ``` ## Run `emcee`! ``` ndim, nwalkers = 4, 32 p0 = np.array([true_m, true_b, np.log(0.5), np.log(1.3)]) #WTF this is cheating!! pos = [p0 + 1.0e-2 * np.random.randn(ndim) for i in range(nwalkers)] sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob) # This is the burn-in pos, lp, state = sampler.run_mcmc(pos, 300) sampler.reset() pos, lp, state = sampler.run_mcmc(pos, 5000) chain = sampler.chain fig, axes = plt.subplots(4, 1, figsize=(5, 6), sharex=True) fig.subplots_adjust(left=0.1, bottom=0.1, right=0.96, top=0.98, wspace=0.0, hspace=0.05) [a.plot(np.arange(chain.shape[1]), chain[:, :, i].T, "k", alpha=0.5) for i, a in enumerate(axes)] [a.set_ylabel("${0}$".format(l)) for a, l in zip(axes, ["m", "b", "\ln a", "\ln s"])] axes[-1].set_xlim(0, chain.shape[1]) axes[-1].set_xlabel("iteration"); fig = triangle.corner(sampler.flatchain[::5], labels=map("${0}$".format, ["m", "b", "\ln a", "\ln s"])) ```
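`emcee`'s ensemble sampler moves many walkers with affine-invariant proposals, but the acceptance rule at its core is ordinary Metropolis. A minimal single-walker sketch (standard-normal target, independent of the notebook's model) makes the role of a `lnprob`-style function concrete:

```python
import numpy as np

def log_prob(x):
    # log-probability known only up to an additive constant,
    # which is exactly the contract a sampler needs
    return -0.5 * x**2

rng = np.random.default_rng(42)
x, chain = 0.0, []
for _ in range(20000):
    proposal = x + rng.normal(scale=1.0)
    # accept with probability min(1, p(proposal) / p(x))
    if np.log(rng.random()) < log_prob(proposal) - log_prob(x):
        x = proposal
    chain.append(x)

chain = np.array(chain[2000:])  # discard burn-in, as with sampler.reset()
print(chain.mean(), chain.std())  # should land near 0 and 1
```

The ensemble of walkers in `emcee` sidesteps hand-tuning the proposal scale, which is the weak point of this single-walker version.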
SAM008 - Spark using azdata =========================== Description ----------- ### Parameters ``` spark_statement = "2+2" max_tries_for_ready_state = 50 ``` ### Common functions Define helper functions used in this notebook. ``` # Define `run` function for transient fault handling, suggestions on error, and scrolling updates on Windows import sys import os import re import json import platform import shlex import shutil import datetime from subprocess import Popen, PIPE from IPython.display import Markdown retry_hints = {} # Output in stderr known to be transient, therefore automatically retry error_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help install_hint = {} # The SOP to help install the executable if it cannot be found first_run = True rules = None debug_logging = False def run(cmd, return_output=False, no_output=False, retry_count=0): """Run shell command, stream stdout, print stderr and optionally return output NOTES: 1. Commands that need this kind of ' quoting on Windows e.g.: kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name} Need to actually pass in as '"': kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='"'data-pool'"')].metadata.name} The ' quote approach, although correct when pasting into Windows cmd, will hang at the line: `iter(p.stdout.readline, b'')` The shlex.split call does the right thing for each platform, just use the '"' pattern for a ' """ MAX_RETRIES = 5 output = "" retry = False global first_run global rules if first_run: first_run = False rules = load_rules() # When running `azdata sql query` on Windows, replace any \n in """ strings, with " ", otherwise we see: # # ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)') # if platform.system() == "Windows" and cmd.startswith("azdata sql query"): cmd = cmd.replace("\n", " ") # shlex.split is required on 
bash and for Windows paths with spaces # cmd_actual = shlex.split(cmd) # Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries # user_provided_exe_name = cmd_actual[0].lower() # When running python, use the python in the ADS sandbox ({sys.executable}) # if cmd.startswith("python "): cmd_actual[0] = cmd_actual[0].replace("python", sys.executable) # On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail # with: # # UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128) # # Setting it to a default value of "en_US.UTF-8" enables pip install to complete # if platform.system() == "Darwin" and "LC_ALL" not in os.environ: os.environ["LC_ALL"] = "en_US.UTF-8" # When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc` # if cmd.startswith("kubectl ") and "AZDATA_OPENSHIFT" in os.environ: cmd_actual[0] = cmd_actual[0].replace("kubectl", "oc") # To aid supportabilty, determine which binary file will actually be executed on the machine # which_binary = None # Special case for CURL on Windows. The version of CURL in Windows System32 does not work to # get JWT tokens, it returns "(56) Failure when receiving data from the peer". If another instance # of CURL exists on the machine use that one. 
(Unfortunately the curl.exe in System32 is almost # always the first curl.exe in the path, and it can't be uninstalled from System32, so here we # look for the 2nd installation of CURL in the path) if platform.system() == "Windows" and cmd.startswith("curl "): path = os.getenv('PATH') for p in path.split(os.path.pathsep): p = os.path.join(p, "curl.exe") if os.path.exists(p) and os.access(p, os.X_OK): if p.lower().find("system32") == -1: cmd_actual[0] = p which_binary = p break # Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this # seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound) # # NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split. # if which_binary == None: which_binary = shutil.which(cmd_actual[0]) if which_binary == None: if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None: display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.')) raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") else: cmd_actual[0] = which_binary start_time = datetime.datetime.now().replace(microsecond=0) print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)") print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})") print(f" cwd: {os.getcwd()}") # Command-line tools such as CURL and AZDATA HDFS commands output # scrolling progress bars, which causes Jupyter to hang forever, to # workaround this, use no_output=True # # Work around a infinite hang when a notebook generates a non-zero return code, break out, and do not wait # wait = True try: if no_output: p = Popen(cmd_actual) else: p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1) with p.stdout: for line in iter(p.stdout.readline, b''): 
line = line.decode() if return_output: output = output + line else: if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file regex = re.compile(' "(.*)"\: "(.*)"') match = regex.match(line) if match: if match.group(1).find("HTML") != -1: display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"')) else: display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"')) wait = False break # otherwise infinite hang, have not worked out why yet. else: print(line, end='') if rules is not None: apply_expert_rules(line) if wait: p.wait() except FileNotFoundError as e: if install_hint is not None: display(Markdown(f'HINT: Use {install_hint} to resolve this issue.')) raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait() if not no_output: for line in iter(p.stderr.readline, b''): try: line_decoded = line.decode() except UnicodeDecodeError: # NOTE: Sometimes we get characters back that cannot be decoded(), e.g. # # \xa0 # # For example see this in the response from `az group create`: # # ERROR: Get Token request returned http error: 400 and server # response: {"error":"invalid_grant",# "error_description":"AADSTS700082: # The refresh token has expired due to inactivity.\xa0The token was # issued on 2018-10-25T23:35:11.9832872Z # # which generates the exception: # # UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte # print("WARNING: Unable to decode stderr line, printing raw bytes:") print(line) line_decoded = "" pass else: # azdata emits a single empty line to stderr when doing an hdfs cp, don't # print this empty "ERR:" as it confuses. 
# if line_decoded == "": continue print(f"STDERR: {line_decoded}", end='') if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"): exit_code_workaround = 1 # inject HINTs to next TSG/SOP based on output in stderr # if user_provided_exe_name in error_hints: for error_hint in error_hints[user_provided_exe_name]: if line_decoded.find(error_hint[0]) != -1: display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.')) # apply expert rules (to run follow-on notebooks), based on output # if rules is not None: apply_expert_rules(line_decoded) # Verify if a transient error, if so automatically retry (recursive) # if user_provided_exe_name in retry_hints: for retry_hint in retry_hints[user_provided_exe_name]: if line_decoded.find(retry_hint) != -1: if retry_count < MAX_RETRIES: print(f"RETRY: {retry_count} (due to: {retry_hint})") retry_count = retry_count + 1 output = run(cmd, return_output=return_output, retry_count=retry_count) if return_output: return output else: return elapsed = datetime.datetime.now().replace(microsecond=0) - start_time # WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so # don't wait here, if success known above # if wait: if p.returncode != 0: raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n') else: if exit_code_workaround !=0 : raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n') print(f'\nSUCCESS: {elapsed}s elapsed.\n') if return_output: return output def load_json(filename): """Load a json file from disk and return the contents""" with open(filename, encoding="utf8") as json_file: return json.load(json_file) def load_rules(): """Load any 'expert rules' from the metadata of this notebook (.ipynb) that 
should be applied to the stderr of the running executable""" try: # Load this notebook as json to get access to the expert rules in the notebook metadata. # j = load_json("sam008-spark-using-azdata.ipynb") except: pass # If the user has renamed the book, we can't load ourself. NOTE: Is there a way in Jupyter, to know your own filename? else: if "metadata" in j and \ "azdata" in j["metadata"] and \ "expert" in j["metadata"]["azdata"] and \ "rules" in j["metadata"]["azdata"]["expert"]: rules = j["metadata"]["azdata"]["expert"]["rules"] rules.sort() # Sort rules, so they run in priority order (the [0] element). Lowest value first. # print (f"EXPERT: There are {len(rules)} rules to evaluate.") return rules def apply_expert_rules(line): """Determine if the stderr line passed in, matches the regular expressions for any of the 'expert rules', if so inject a 'HINT' to the follow-on SOP/TSG to run""" global rules for rule in rules: # rules that have 9 elements are the injected (output) rules (the ones we want). Rules # with only 8 elements are the source (input) rules, which are not expanded (i.e. TSG029, # not ../repair/tsg029-nb-name.ipynb) if len(rule) == 9: notebook = rule[1] cell_type = rule[2] output_type = rule[3] # i.e. stream or error output_type_name = rule[4] # i.e. ename or name output_type_value = rule[5] # i.e. SystemExit or stdout details_name = rule[6] # i.e. evalue or text expression = rule[7].replace("\\*", "*") # Something escaped *, and put a \ in front of it! 
if debug_logging: print(f"EXPERT: If rule '{expression}' satisfied', run '{notebook}'.") if re.match(expression, line, re.DOTALL): if debug_logging: print("EXPERT: MATCH: name = value: '{0}' = '{1}' matched expression '{2}', therefore HINT '{4}'".format(output_type_name, output_type_value, expression, notebook)) match_found = True display(Markdown(f'HINT: Use [{notebook}]({notebook}) to resolve this issue.')) print('Common functions defined successfully.') # Hints for binary (transient fault) retry, (known) error and install guide # retry_hints = {'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond'], 'azdata': ['Endpoint sql-server-master does not exist', 'Endpoint livy does not exist', 'Failed to get state for cluster', 'Endpoint webhdfs does not exist', 'Adaptive Server is unavailable or does not exist', 'Error: Address already in use']} error_hints = {'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'], ['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb']], 'azdata': [['azdata login', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['The token is expired', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Reason: Unauthorized', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Max retries exceeded with url: /api/v1/bdc/endpoints', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Look at the controller logs for more details', 'TSG027 - Observe cluster deployment', '../diagnose/tsg027-observe-bdc-create.ipynb'], ['provided port is already allocated', 'TSG062 - Get tail of all previous container 
logs for pods in BDC namespace', '../log-files/tsg062-tail-bdc-previous-container-logs.ipynb'], ['Create cluster failed since the existing namespace', 'SOP061 - Delete a big data cluster', '../install/sop061-delete-bdc.ipynb'], ['Failed to complete kube config setup', 'TSG067 - Failed to complete kube config setup', '../repair/tsg067-failed-to-complete-kube-config-setup.ipynb'], ['Error processing command: "ApiError', 'TSG110 - Azdata returns ApiError', '../repair/tsg110-azdata-returns-apierror.ipynb'], ['Error processing command: "ControllerError', 'TSG036 - Controller logs', '../log-analyzers/tsg036-get-controller-logs.ipynb'], ['ERROR: 500', 'TSG046 - Knox gateway logs', '../log-analyzers/tsg046-get-knox-logs.ipynb'], ['Data source name not found and no default driver specified', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ["Can't open lib 'ODBC Driver 17 for SQL Server", 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Control plane upgrade failed. Failed to upgrade controller.', 'TSG108 - View the controller upgrade config map', '../diagnose/tsg108-controller-failed-to-upgrade.ipynb']]} install_hint = {'kubectl': ['SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb'], 'azdata': ['SOP063 - Install azdata CLI (using package manager)', '../install/sop063-packman-install-azdata.ipynb']} ``` ### Get the Kubernetes namespace for the big data cluster Get the namespace of the Big Data Cluster use the kubectl command line interface . **NOTE:** If there is more than one Big Data Cluster in the target Kubernetes cluster, then either: - set \[0\] to the correct value for the big data cluster. - set the environment variable AZDATA\_NAMESPACE, before starting Azure Data Studio. 
``` # Place Kubernetes namespace name for BDC into 'namespace' variable if "AZDATA_NAMESPACE" in os.environ: namespace = os.environ["AZDATA_NAMESPACE"] else: try: namespace = run(f'kubectl get namespace --selector=MSSQL_CLUSTER -o jsonpath={{.items[0].metadata.name}}', return_output=True) except: from IPython.display import Markdown print(f"ERROR: Unable to find a Kubernetes namespace with label 'MSSQL_CLUSTER'. SQL Server Big Data Cluster Kubernetes namespaces contain the label 'MSSQL_CLUSTER'.") display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.')) display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.')) display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.')) raise print(f'The SQL Server Big Data Cluster Kubernetes namespace is: {namespace}') ``` ### Get the controller username and password Get the controller username and password from the Kubernetes Secret Store and place in the required AZDATA\_USERNAME and AZDATA\_PASSWORD environment variables. 
``` # Place controller secret in AZDATA_USERNAME/AZDATA_PASSWORD environment variables import os, base64 os.environ["AZDATA_USERNAME"] = run(f'kubectl get secret/controller-login-secret -n {namespace} -o jsonpath={{.data.username}}', return_output=True) os.environ["AZDATA_USERNAME"] = base64.b64decode(os.environ["AZDATA_USERNAME"]).decode('utf-8') os.environ["AZDATA_PASSWORD"] = run(f'kubectl get secret/controller-login-secret -n {namespace} -o jsonpath={{.data.password}}', return_output=True) os.environ["AZDATA_PASSWORD"] = base64.b64decode(os.environ["AZDATA_PASSWORD"]).decode('utf-8') print(f"Controller username '{os.environ['AZDATA_USERNAME']}' and password stored in environment variables") ``` ### Create a Spark Session ``` import os import secrets import json session_name = secrets.token_urlsafe(16).replace("-", "_") # session name can't start with a '-' (when passed in with azdata) print(session_name) session_create = run(f'azdata bdc spark session create --name "{session_name}" --session-kind pyspark', return_output=True) print(session_create) session_create_json = json.loads(session_create) print(session_create_json) ``` ### Wait for Spark Session to finish starting ``` import json session_id = session_create_json["id"] state = "starting" counter = 0 while state == "starting": session_state = run(f'azdata bdc spark session state --session-id {session_id}', return_output=True) print(session_state) session_state_json = json.loads(session_state) print (session_state_json) state = session_state_json["state"] counter = counter + 1 if counter == max_tries_for_ready_state: raise SystemExit(f'Session has not moved out of starting state (after {max_tries_for_ready_state} attempts)') if state == "dead" or state == "killed": display(Markdown(f'HINT: Use [TSG034 - Livy logs](../log-analyzers/tsg034-get-livy-logs.ipynb) to resolve this issue.')) raise SystemExit(f"Session moved from 'starting' to '{state}' state") print (f"Session successfully moved out of 'starting' 
state to '{state}'") ``` ### Create a Spark Statement ``` import json statement_create = run(f'azdata bdc spark statement create --code "{spark_statement}" --session-id {session_id}', return_output=True) statement_create_json = json.loads(statement_create) print (statement_create_json) statement_id = statement_create_json["id"] ``` ### Wait for Spark Statement to complete ``` import json statement_state = "waiting" counter = 0 while statement_state == "waiting": statement_info = run(f'azdata bdc spark statement info --session-id {session_id} --statement-id {statement_id}', return_output=True) print(statement_info) statement_info_json = json.loads(statement_info) print (statement_info_json) statement_state = statement_info_json["state"] counter = counter + 1 if counter == 25: raise SystemExit('Statement has not moved out of waiting state') print(f'Statement completed successfully. Output: {statement_info_json["output"]["data"]["text/plain"]}') ``` ### Get the Spark log for the session ``` run(f"azdata bdc spark session log --session-id {session_id}") ``` ### Delete the Spark session ``` run(f"azdata bdc spark session delete --session-id {session_id}") print('Notebook execution complete.') ```
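The two wait loops above (the session's `starting` state and the statement's `waiting` state) follow the same poll-until-the-state-changes pattern. A generic sketch of that pattern — the `wait_for_state` helper and the toy state sequence are illustrative, not part of the notebook:

```python
import time

def wait_for_state(get_state_json, leave_state, max_tries, delay=0.0):
    """Poll until the reported 'state' field is no longer leave_state."""
    for _ in range(max_tries):
        state = get_state_json()["state"]
        if state != leave_state:
            return state
        time.sleep(delay)
    raise SystemExit(f"Still in '{leave_state}' state after {max_tries} attempts")

# Toy stand-in for the `azdata bdc spark session state` call:
# the third poll reports the session has left the 'starting' state.
states = iter([{"state": "starting"}, {"state": "starting"}, {"state": "idle"}])
print(wait_for_state(lambda: next(states), "starting", max_tries=5))  # → idle
```

The real loops additionally branch on the terminal `dead`/`killed` states before declaring success.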
``` %config IPCompleter.greedy=True from __future__ import absolute_import from __future__ import division from __future__ import print_function import numpy as np import tensorflow as tf import variational_autoencoder as vae import weapon_data as weapons print("Tensor Flow version {}".format(tf.__version__)) ``` # Test Utils ``` def print_decoded_tensors_as_dict(weapon_data, array_of_tensors): genDict = {} for tensor in array_of_tensors: decoded, _ = weapon_data.decode_processed_tensor(tensor) for key, value in decoded.items(): if key not in genDict: genDict[key] = [] genDict[key].append(value) for key, value in genDict.items(): print(key, "=", value) def get_weapon_data(): return weapons.get_data() ``` # Initial VAE Training Initializes all network hyperparameters and shows training debug messages of the training epoch and cost. Trains and saves the trained model in the specified folder. ``` network_architecture = \ dict(n_input=0, #set it in with scope n_hidden_1=26, n_hidden_2=12, n_z=2) learning_rate = 0.01 optimizer = tf.train.RMSPropOptimizer(learning_rate) transfer_fct = tf.nn.elu num_epochs = 70 batch_size = 4 epoch_debug_step = 1 saved_model_folder = "trained_vae/" saved_model_full_path = saved_model_folder + "model.ckpt" with tf.Session() as sess: train_data, test_data = get_weapon_data() network_architecture['n_input'] = train_data.num_features network = vae.get_new_trained(sess, train_data, network_architecture, optimizer, transfer_fct, batch_size, num_epochs, epoch_debug_step, trained_model_save_path=saved_model_folder) ``` # Encode and Decode Testing #1 Tests the encoding and decoding functionality and outputs the inputted and generated values. This case uses the same size as the training batch_size. 
``` with tf.Session(graph=tf.Graph()) as sess: network = vae.get_untrained(sess, network_architecture, optimizer, transfer_fct, batch_size) network = vae.restore(network, saved_model_full_path) train_data, test_data = get_weapon_data() samples = test_data.next_batch(batch_size) x_reconstructed = network.encode_and_decode(samples, True) print_decoded_tensors_as_dict(test_data, np.concatenate((samples,x_reconstructed), axis=0)) ``` # Encode and Decode Testing #2 Tests the encoding and decoding functionality and outputs the inputted and generated values. This case does not use the same size as the training batch_size. ``` with tf.Session(graph=tf.Graph()) as sess: network = vae.get_untrained(sess, network_architecture, optimizer, transfer_fct, batch_size) network = vae.restore(network, saved_model_full_path) train_data, test_data = get_weapon_data() samples = test_data.next_batch(1) x_reconstructed_mean = network.encode_and_decode(samples, False) print_decoded_tensors_as_dict(test_data, np.concatenate((samples,[x_reconstructed_mean]), axis=0)) ``` # Latent Space Visualization ``` import matplotlib.pyplot as plt %matplotlib inline def show_z_distribution(vae_model, title, z_mean=True): all_z = np.zeros((batch_size,network_architecture['n_z'])) train_data, test_data = get_weapon_data() total_batch = int(train_data.num_examples / batch_size) # Loop over all batches for i in range(total_batch): batch = train_data.next_batch(batch_size) z_dist = vae_model.calculate_z(batch) if z_mean: z_dist = vae_model.calculate_z_mean(batch) all_z = np.vstack((all_z, z_dist)) plt.figure(figsize=(15,5)) plt.subplot(1,2,1) plt.scatter(all_z[:,0], all_z[:,1]) plt.xlim(-3,3) plt.ylim(-3,3) plt.title(title) plt.subplot(1,2,2) plt.hist2d(all_z[:,0], all_z[:,1], (50, 50), cmap=plt.cm.jet) plt.colorbar() plt.title(title) with tf.Session(graph=tf.Graph()) as sess: network = vae.get_untrained(sess, network_architecture, optimizer, transfer_fct, batch_size) show_z_distribution(network, "Untrained 
Latent Space", z_mean=True) network = vae.restore(network, "trained_vae/model.ckpt") show_z_distribution(network, "Trained Latent Space - Z Mean", z_mean=True) show_z_distribution(network, "Trained Latent Space - Z", z_mean=False) ``` # Random Input Decoding Test #1 This tests the decoding from latent space functionality with random input. This case does not use the same size as the training batch_size. ``` with tf.Session(graph=tf.Graph()) as sess: network = vae.get_untrained(sess, network_architecture, optimizer, transfer_fct, batch_size) network = vae.restore(network, saved_model_full_path) generated = [] random_val = np.random.normal(size=(1,network_architecture["n_z"])) x_test = network.decode_from_latent_space(random_val, False) #[generated.append(x) for x in x_test] generated.append(x_test) train_data, test_data = get_weapon_data() print_decoded_tensors_as_dict(train_data, generated) ``` # Random Input Decoding Test #2 This tests the decoding from latent space functionality with random input. This case uses the same size as the training batch_size. ``` with tf.Session(graph=tf.Graph()) as sess: network = vae.get_untrained(sess, network_architecture, optimizer, transfer_fct, batch_size) network = vae.restore(network, saved_model_full_path) generated = [] random_val = np.random.normal(size=(batch_size,network_architecture["n_z"])) x_test = network.decode_from_latent_space(random_val, True) [generated.append(x) for x in x_test] train_data, test_data = get_weapon_data() print_decoded_tensors_as_dict(train_data, generated) ```
# Convolutional Neural Networks: Step by Step Welcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation. **Notation**: - Superscript $[l]$ denotes an object of the $l^{th}$ layer. - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters. - Superscript $(i)$ denotes an object from the $i^{th}$ example. - Example: $x^{(i)}$ is the $i^{th}$ training example input. - Subscript $i$ denotes the $i^{th}$ entry of a vector. - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$, assuming this is a fully connected (FC) layer. - $n_H$, $n_W$ and $n_C$ denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer $l$, you can also write $n_H^{[l]}$, $n_W^{[l]}$, $n_C^{[l]}$. - $n_{H_{prev}}$, $n_{W_{prev}}$ and $n_{C_{prev}}$ denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer $l$, this could also be denoted $n_H^{[l-1]}$, $n_W^{[l-1]}$, $n_C^{[l-1]}$. We assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started! ## 1 - Packages Let's first import all the packages that you will need during this assignment. - [numpy](http://www.numpy.org) is the fundamental package for scientific computing with Python. - [matplotlib](http://matplotlib.org) is a library to plot graphs in Python. - np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. 
``` import numpy as np import h5py import matplotlib.pyplot as plt %matplotlib inline plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' %load_ext autoreload %autoreload 2 np.random.seed(1) ``` ## 2 - Outline of the Assignment You will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed: - Convolution functions, including: - Zero Padding - Convolve window - Convolution forward - Convolution backward (optional) - Pooling functions, including: - Pooling forward - Create mask - Distribute value - Pooling backward (optional) This notebook will ask you to implement these functions from scratch in `numpy`. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model: <img src="images/model.png" style="width:800px;height:300px;"> **Note** that for every forward function, there is its corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation. ## 3 - Convolutional Neural Networks Although programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. A convolution layer transforms an input volume into an output volume of different size, as shown below. <img src="images/conv_nn.png" style="width:350px;height:200px;"> In this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself. 
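The forward/cache/backward pairing described in the outline above can be illustrated with a toy layer — a simple scale-by-`w` operation rather than a real convolution (this example is only a sketch of the pattern, not part of the assignment):

```python
# Toy illustration of the forward/cache/backward pattern:
# the forward pass stashes exactly what its backward pass will need.
def forward(x, w):
    out = x * w
    cache = (x, w)                 # saved for backpropagation
    return out, cache

def backward(dout, cache):
    x, w = cache
    dx = dout * w                  # gradient w.r.t. the input
    dw = dout * x                  # gradient w.r.t. the parameter
    return dx, dw

out, cache = forward(3.0, 2.0)
dx, dw = backward(1.0, cache)
print(out, dx, dw)  # → 6.0 2.0 3.0
```

The conv and pool layers below follow the same contract, with `cache` holding the inputs and hyperparameters.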
### 3.1 - Zero-Padding Zero-padding adds zeros around the border of an image: <img src="images/PAD.png" style="width:600px;height:400px;"> <caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **Zero-Padding**<br> Image (3 channels, RGB) with a padding of 2. </center></caption> The main benefits of padding are the following: - It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the "same" convolution, in which the height/width is exactly preserved after one layer. - It helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels at the edges of an image. **Exercise**: Implement the following function, which pads all the images of a batch of examples X with zeros. [Use np.pad](https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html). Note if you want to pad the array "a" of shape $(5,5,5,5,5)$ with `pad = 1` for the 2nd dimension, `pad = 3` for the 4th dimension and `pad = 0` for the rest, you would do: ```python a = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values = (..,..)) ``` ``` # GRADED FUNCTION: zero_pad def zero_pad(X, pad): """ Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image, as illustrated in Figure 1. 
Argument: X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images pad -- integer, amount of padding around each image on vertical and horizontal dimensions Returns: X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C) """ ### START CODE HERE ### (≈ 1 line) X_pad = np.pad(X, ((0,0), (pad,pad), (pad,pad), (0,0)), 'constant', constant_values = (0,0)) ### END CODE HERE ### return X_pad np.random.seed(1) x = np.random.randn(4, 3, 3, 2) x_pad = zero_pad(x, 2) print ("x.shape =", x.shape) print ("x_pad.shape =", x_pad.shape) print ("x[1,1] =", x[1,1]) print ("x_pad[1,1] =", x_pad[1,1]) fig, axarr = plt.subplots(1, 2) axarr[0].set_title('x') axarr[0].imshow(x[0,:,:,0]) axarr[1].set_title('x_pad') axarr[1].imshow(x_pad[0,:,:,0]) ``` **Expected Output**: <table> <tr> <td> **x.shape**: </td> <td> (4, 3, 3, 2) </td> </tr> <tr> <td> **x_pad.shape**: </td> <td> (4, 7, 7, 2) </td> </tr> <tr> <td> **x[1,1]**: </td> <td> [[ 0.90085595 -0.68372786] [-0.12289023 -0.93576943] [-0.26788808 0.53035547]] </td> </tr> <tr> <td> **x_pad[1,1]**: </td> <td> [[ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.]] </td> </tr> </table> ### 3.2 - Single step of convolution In this part, implement a single step of convolution, in which you apply the filter to a single position of the input. 
This will be used to build a convolutional unit, which: - Takes an input volume - Applies a filter at every position of the input - Outputs another volume (usually of different size) <img src="images/Convolution_schematic.gif" style="width:500px;height:300px;"> <caption><center> <u> <font color='purple'> **Figure 2** </u><font color='purple'> : **Convolution operation**<br> with a filter of 2x2 and a stride of 1 (stride = amount you move the window each time you slide) </center></caption> In a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output. Later in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation. **Exercise**: Implement conv_single_step(). [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.sum.html). ``` # GRADED FUNCTION: conv_single_step def conv_single_step(a_slice_prev, W, b): """ Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation of the previous layer. Arguments: a_slice_prev -- slice of input data of shape (f, f, n_C_prev) W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev) b -- Bias parameters contained in a window - matrix of shape (1, 1, 1) Returns: Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data """ ### START CODE HERE ### (≈ 2 lines of code) # Element-wise product between a_slice and W. Do not add the bias yet. s = np.multiply(a_slice_prev,W) # Sum over all entries of the volume s. Z = np.sum(s) # Add bias b to Z. 
Cast b to a float() so that Z results in a scalar value. Z = float(Z + b) ### END CODE HERE ### return Z np.random.seed(1) a_slice_prev = np.random.randn(4, 4, 3) W = np.random.randn(4, 4, 3) b = np.random.randn(1, 1, 1) Z = conv_single_step(a_slice_prev, W, b) print("Z =", Z) ``` **Expected Output**: <table> <tr> <td> **Z** </td> <td> -6.99908945068 </td> </tr> </table> ### 3.3 - Convolutional Neural Networks - Forward pass In the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume: <center> <video width="620" height="440" src="images/conv_kiank.mp4" type="video/mp4" controls> </video> </center> **Exercise**: Implement the function below to convolve the filters W on an input activation A_prev. This function takes as input A_prev, the activations output by the previous layer (for a batch of m inputs), F filters/weights denoted by W, and a bias vector denoted by b, where each filter has its own (single) bias. Finally you also have access to the hyperparameters dictionary which contains the stride and the padding. **Hint**: 1. To select a 2x2 slice at the upper left corner of a matrix "a_prev" (shape (5,5,3)), you would do: ```python a_slice_prev = a_prev[0:2,0:2,:] ``` This will be useful when you will define `a_slice_prev` below, using the `start/end` indexes you will define. 2. To define a_slice you will need to first define its corners `vert_start`, `vert_end`, `horiz_start` and `horiz_end`. This figure may be helpful for you to find how each of the corners can be defined using h, w, f and s in the code below. <img src="images/vert_horiz_kiank.png" style="width:400px;height:300px;"> <caption><center> <u> <font color='purple'> **Figure 3** </u><font color='purple'> : **Definition of a slice using vertical and horizontal start/end (with a 2x2 filter)** <br> This figure shows only a single channel. 
</center></caption> **Reminder**: The formulas relating the output shape of the convolution to the input shape is: $$ n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$ $$ n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$ $$ n_C = \text{number of filters used in the convolution}$$ For this exercise, we won't worry about vectorization, and will just implement everything with for-loops. ``` # GRADED FUNCTION: conv_forward def conv_forward(A_prev, W, b, hparameters): """ Implements the forward propagation for a convolution function Arguments: A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev) W -- Weights, numpy array of shape (f, f, n_C_prev, n_C) b -- Biases, numpy array of shape (1, 1, 1, n_C) hparameters -- python dictionary containing "stride" and "pad" Returns: Z -- conv output, numpy array of shape (m, n_H, n_W, n_C) cache -- cache of values needed for the conv_backward() function """ ### START CODE HERE ### # Retrieve dimensions from A_prev's shape (≈1 line) (m, n_H_prev, n_W_prev, n_C_prev) = np.shape(A_prev) # Retrieve dimensions from W's shape (≈1 line) (f, f, n_C_prev, n_C) = np.shape(W) # Retrieve information from "hparameters" (≈2 lines) stride = hparameters["stride"] pad = hparameters["pad"] # Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines) n_H = int(((n_H_prev-f+2*pad)/(stride))+1) n_W = int(((n_W_prev-f+2*pad)/(stride))+1) # Initialize the output volume Z with zeros. 
(≈1 line) Z = np.zeros((m, n_H, n_W, n_C)) # Create A_prev_pad by padding A_prev A_prev_pad = zero_pad(A_prev, pad) for i in range(m): # loop over the batch of training examples a_prev_pad = A_prev_pad[i] # Select ith training example's padded activation for h in range(n_H): # loop over vertical axis of the output volume for w in range(n_W): # loop over horizontal axis of the output volume for c in range(n_C): # loop over channels (= #filters) of the output volume # Find the corners of the current "slice" (≈4 lines) vert_start = h *stride vert_end = vert_start + f horiz_start = w *stride horiz_end = horiz_start + f # Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line) a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] # Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line) Z[i, h, w, c] = conv_single_step(a_slice_prev, W[...,c], b[...,c]) ### END CODE HERE ### # Making sure your output shape is correct assert(Z.shape == (m, n_H, n_W, n_C)) # Save information in "cache" for the backprop cache = (A_prev, W, b, hparameters) return Z, cache np.random.seed(1) A_prev = np.random.randn(10,4,4,3) W = np.random.randn(2,2,3,8) b = np.random.randn(1,1,1,8) hparameters = {"pad" : 2, "stride": 2} Z, cache_conv = conv_forward(A_prev, W, b, hparameters) print("Z's mean =", np.mean(Z)) print("Z[3,2,1] =", Z[3,2,1]) print("cache_conv[0][1][2][3] =", cache_conv[0][1][2][3]) ``` **Expected Output**: <table> <tr> <td> **Z's mean** </td> <td> 0.0489952035289 </td> </tr> <tr> <td> **Z[3,2,1]** </td> <td> [-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437 5.18531798 8.75898442] </td> </tr> <tr> <td> **cache_conv[0][1][2][3]** </td> <td> [-0.20075807 0.18656139 0.41005165] </td> </tr> </table> Finally, CONV layer should also contain an activation, in which case we would add the following line of code: ```python # Convolve the window to get back one output neuron 
Z[i, h, w, c] = ... # Apply activation A[i, h, w, c] = activation(Z[i, h, w, c]) ``` You don't need to do it here. ## 4 - Pooling layer The pooling (POOL) layer reduces the height and width of the input. It helps reduce computation, as well as helps make feature detectors more invariant to position in the input. The two types of pooling layers are: - Max-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output. - Average-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output. <table> <td> <img src="images/max_pool1.png" style="width:500px;height:300px;"> <td> <td> <img src="images/a_pool.png" style="width:500px;height:300px;"> <td> </table> These pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size $f$. This specifies the height and width of the $f \times f$ window you would compute a max or average over. ### 4.1 - Forward Pooling Now, you are going to implement MAX-POOL and AVG-POOL, in the same function. **Exercise**: Implement the forward pass of the pooling layer. Follow the hints in the comments below. 
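Before writing the full function, the two pooling modes can be previewed on a single window of a toy array (the values below are chosen arbitrarily for illustration):

```python
import numpy as np

# Hypothetical 4x4 single-channel input; take the top-left 2x2 window.
x = np.array([[1., 2., 5., 6.],
              [3., 4., 7., 8.],
              [9., 8., 1., 2.],
              [7., 6., 3., 4.]])
window = x[0:2, 0:2]           # [[1, 2], [3, 4]]
print(np.max(window))          # max-pool output for this window  → 4.0
print(np.mean(window))         # average-pool output              → 2.5
```

`pool_forward` below does exactly this, sliding the window over every valid position of every example and channel.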
**Reminder**: As there's no padding, the formulas binding the output shape of the pooling to the input shape is: $$ n_H = \lfloor \frac{n_{H_{prev}} - f}{stride} \rfloor +1 $$ $$ n_W = \lfloor \frac{n_{W_{prev}} - f}{stride} \rfloor +1 $$ $$ n_C = n_{C_{prev}}$$ ``` # GRADED FUNCTION: pool_forward def pool_forward(A_prev, hparameters, mode = "max"): """ Implements the forward pass of the pooling layer Arguments: A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev) hparameters -- python dictionary containing "f" and "stride" mode -- the pooling mode you would like to use, defined as a string ("max" or "average") Returns: A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C) cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters """ # Retrieve dimensions from the input shape (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape # Retrieve hyperparameters from "hparameters" f = hparameters["f"] stride = hparameters["stride"] # Define the dimensions of the output n_H = int(1 + (n_H_prev - f) / stride) n_W = int(1 + (n_W_prev - f) / stride) n_C = n_C_prev # Initialize output matrix A A = np.zeros((m, n_H, n_W, n_C)) ### START CODE HERE ### for i in range(m): # loop over the training examples for h in range(n_H): # loop on the vertical axis of the output volume for w in range(n_W): # loop on the horizontal axis of the output volume for c in range (n_C): # loop over the channels of the output volume # Find the corners of the current "slice" (≈4 lines) vert_start = h *stride vert_end = vert_start + f horiz_start = w *stride horiz_end = horiz_start + f # Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line) a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c] # Compute the pooling operation on the slice. Use an if statment to differentiate the modes. Use np.max/np.mean. 
if mode == "max": A[i, h, w, c] = np.max(a_prev_slice) elif mode == "average": A[i, h, w, c] = np.mean(a_prev_slice) ### END CODE HERE ### # Store the input and hparameters in "cache" for pool_backward() cache = (A_prev, hparameters) # Making sure your output shape is correct assert(A.shape == (m, n_H, n_W, n_C)) return A, cache np.random.seed(1) A_prev = np.random.randn(2, 4, 4, 3) hparameters = {"stride" : 2, "f": 3} A, cache = pool_forward(A_prev, hparameters) print("mode = max") print("A =", A) print() A, cache = pool_forward(A_prev, hparameters, mode = "average") print("mode = average") print("A =", A) ``` **Expected Output:** <table> <tr> <td> A = </td> <td> [[[[ 1.74481176 0.86540763 1.13376944]]] [[[ 1.13162939 1.51981682 2.18557541]]]] </td> </tr> <tr> <td> A = </td> <td> [[[[ 0.02105773 -0.20328806 -0.40389855]]] [[[-0.22154621 0.51716526 0.48155844]]]] </td> </tr> </table> Congratulations! You have now implemented the forward passes of all the layers of a convolutional network. The remainder of this notebook is optional, and will not be graded. ## 5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED) In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. If you wish however, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like. When in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in convolutional neural networks you need to calculate the derivatives with respect to the cost in order to update the parameters. 
The backprop equations are not trivial and we did not derive them in lecture, but we briefly present them below. ### 5.1 - Convolutional layer backward pass Let's start by implementing the backward pass for a CONV layer. #### 5.1.1 - Computing dA: This is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example: $$ dA += \sum _{h=0} ^{n_H} \sum_{w=0} ^{n_W} W_c \times dZ_{hw} \tag{1}$$ Where $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). Note that each time, we multiply the same filter $W_c$ by a different dZ when updating dA. We do so mainly because when computing the forward propagation, each filter is dotted and summed by a different a_slice. Therefore when computing the backprop for dA, we are just adding the gradients of all the a_slices. In code, inside the appropriate for-loops, this formula translates into: ```python da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c] ``` #### 5.1.2 - Computing dW: This is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss: $$ dW_c += \sum _{h=0} ^{n_H} \sum_{w=0} ^ {n_W} a_{slice} \times dZ_{hw} \tag{2}$$ Where $a_{slice}$ corresponds to the slice which was used to generate the activation $Z_{ij}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$. 
In code, inside the appropriate for-loops, this formula translates into: ```python dW[:,:,:,c] += a_slice * dZ[i, h, w, c] ``` #### 5.1.3 - Computing db: This is the formula for computing $db$ with respect to the cost for a certain filter $W_c$: $$ db = \sum_h \sum_w dZ_{hw} \tag{3}$$ As you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost. In code, inside the appropriate for-loops, this formula translates into: ```python db[:,:,:,c] += dZ[i, h, w, c] ``` **Exercise**: Implement the `conv_backward` function below. You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above. ``` def conv_backward(dZ, cache): """ Implement the backward propagation for a convolution function Arguments: dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C) cache -- cache of values needed for the conv_backward(), output of conv_forward() Returns: dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev), numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev) dW -- gradient of the cost with respect to the weights of the conv layer (W) numpy array of shape (f, f, n_C_prev, n_C) db -- gradient of the cost with respect to the biases of the conv layer (b) numpy array of shape (1, 1, 1, n_C) """ ### START CODE HERE ### # Retrieve information from "cache" (A_prev, W, b, hparameters) = cache # Retrieve dimensions from A_prev's shape (m, n_H_prev, n_W_prev, n_C_prev) = np.shape(A_prev) # Retrieve dimensions from W's shape (f, f, n_C_prev, n_C) = np.shape(W) # Retrieve information from "hparameters" stride = hparameters["stride"] pad = hparameters["pad"] # Retrieve dimensions from dZ's shape (m, n_H, n_W, n_C) = np.shape(dZ) # Initialize dA_prev, dW, db with the correct shapes dA_prev = 
None dW = None db = None # Pad A_prev and dA_prev A_prev_pad = None dA_prev_pad = None for i in range(None): # loop over the training examples # select ith training example from A_prev_pad and dA_prev_pad a_prev_pad = None da_prev_pad = None for h in range(None): # loop over vertical axis of the output volume for w in range(None): # loop over horizontal axis of the output volume for c in range(None): # loop over the channels of the output volume # Find the corners of the current "slice" vert_start = None vert_end = None horiz_start = None horiz_end = None # Use the corners to define the slice from a_prev_pad a_slice = None # Update gradients for the window and the filter's parameters using the code formulas given above da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += None dW[:,:,:,c] += None db[:,:,:,c] += None # Set the ith training example's dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :]) dA_prev[i, :, :, :] = None ### END CODE HERE ### # Making sure your output shape is correct assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev)) return dA_prev, dW, db np.random.seed(1) dA, dW, db = conv_backward(Z, cache_conv) print("dA_mean =", np.mean(dA)) print("dW_mean =", np.mean(dW)) print("db_mean =", np.mean(db)) ``` **Expected Output:** <table> <tr> <td> **dA_mean** </td> <td> 1.45243777754 </td> </tr> <tr> <td> **dW_mean** </td> <td> 1.72699145831 </td> </tr> <tr> <td> **db_mean** </td> <td> 7.83923256462 </td> </tr> </table> ## 5.2 Pooling layer - backward pass Next, let's implement the backward pass for the pooling layer, starting with the MAX-POOL layer. Even though a pooling layer has no parameters for backprop to update, you still need to backpropagate the gradient through the pooling layer in order to compute gradients for layers that came before the pooling layer.
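As a quick sanity check on formula (3) (a hypothetical snippet, not part of the graded exercise): the per-filter bias gradient accumulated by the loop update `db[:,:,:,c] += dZ[i, h, w, c]` is just a sum of `dZ` over examples, heights, and widths, so a single vectorised `sum` call must agree with it:

```python
import numpy as np

# Sanity check for formula (3): db is the sum of dZ over examples,
# heights and widths, leaving one value per filter/channel.
np.random.seed(0)
m, n_H, n_W, n_C = 2, 3, 3, 4
dZ = np.random.randn(m, n_H, n_W, n_C)

# Loop version, mirroring the in-loop update db[:,:,:,c] += dZ[i, h, w, c]
db_loop = np.zeros((1, 1, 1, n_C))
for i in range(m):
    for h in range(n_H):
        for w in range(n_W):
            for c in range(n_C):
                db_loop[:, :, :, c] += dZ[i, h, w, c]

# Vectorised version: one sum over the first three axes
db_vec = dZ.sum(axis=(0, 1, 2)).reshape(1, 1, 1, n_C)

assert np.allclose(db_loop, db_vec)
```

The same loop-vs-vectorised comparison is a cheap way to double-check your `dW` and `dA_prev` updates once you have filled in the exercise.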
### 5.2.1 Max pooling - backward pass Before jumping into the backpropagation of the pooling layer, you are going to build a helper function called `create_mask_from_window()` which does the following: $$ X = \begin{bmatrix} 1 && 3 \\ 4 && 2 \end{bmatrix} \quad \rightarrow \quad M =\begin{bmatrix} 0 && 0 \\ 1 && 0 \end{bmatrix}\tag{4}$$ As you can see, this function creates a "mask" matrix which keeps track of where the maximum of the matrix is. True (1) indicates the position of the maximum in X, the other entries are False (0). You'll see later that the backward pass for average pooling will be similar to this but using a different mask. **Exercise**: Implement `create_mask_from_window()`. This function will be helpful for pooling backward. Hints: - `np.max()` may be helpful. It computes the maximum of an array. - If you have a matrix X and a scalar x: `A = (X == x)` will return a matrix A of the same size as X such that: ``` A[i,j] = True if X[i,j] = x A[i,j] = False if X[i,j] != x ``` - Here, you don't need to consider cases where there are several maxima in a matrix. ``` def create_mask_from_window(x): """ Creates a mask from an input matrix x, to identify the max entry of x. Arguments: x -- Array of shape (f, f) Returns: mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x. """ ### START CODE HERE ### (≈1 line) mask = None ### END CODE HERE ### return mask np.random.seed(1) x = np.random.randn(2,3) mask = create_mask_from_window(x) print('x = ', x) print("mask = ", mask) ``` **Expected Output:** <table> <tr> <td> **x =** </td> <td> [[ 1.62434536 -0.61175641 -0.52817175] <br> [-1.07296862 0.86540763 -2.3015387 ]] </td> </tr> <tr> <td> **mask =** </td> <td> [[ True False False] <br> [False False False]] </td> </tr> </table> Why do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost.
Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will "propagate" the gradient back to this particular input value that had influenced the cost. ### 5.2.2 - Average pooling - backward pass In max pooling, for each input window, all the "influence" on the output came from a single input value: the max. In average pooling, every element of the input window has equal influence on the output. So to implement backprop, you will now implement a helper function that reflects this. For example, if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like: $$ dZ = 1 \quad \rightarrow \quad dZ =\begin{bmatrix} 1/4 && 1/4 \\ 1/4 && 1/4 \end{bmatrix}\tag{5}$$ This implies that each position in the $dZ$ matrix contributes equally to the output because in the forward pass, we took an average. **Exercise**: Implement the function below to equally distribute a value dz through a matrix of dimension `shape`.
[Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ones.html) ``` def distribute_value(dz, shape): """ Distributes the input value in the matrix of dimension shape Arguments: dz -- input scalar shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz Returns: a -- Array of size (n_H, n_W) for which we distributed the value of dz """ ### START CODE HERE ### # Retrieve dimensions from shape (≈1 line) (n_H, n_W) = None # Compute the value to distribute on the matrix (≈1 line) average = None # Create a matrix where every entry is the "average" value (≈1 line) a = None ### END CODE HERE ### return a a = distribute_value(2, (2,2)) print('distributed value =', a) ``` **Expected Output**: <table> <tr> <td> distributed value = </td> <td> [[ 0.5 0.5] <br> [ 0.5 0.5]] </td> </tr> </table> ### 5.2.3 Putting it together: Pooling backward You now have everything you need to compute backward propagation on a pooling layer. **Exercise**: Implement the `pool_backward` function in both modes (`"max"` and `"average"`). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an `if/elif` statement to see if the mode is equal to `'max'` or `'average'`. If it is equal to `'average'` you should use the `distribute_value()` function you implemented above to create a matrix of the same shape as `a_slice`. Otherwise, the mode is equal to `'max'`, and you will create a mask with `create_mask_from_window()` and multiply it by the corresponding value of dA.
``` def pool_backward(dA, cache, mode = "max"): """ Implements the backward pass of the pooling layer Arguments: dA -- gradient of cost with respect to the output of the pooling layer, same shape as A cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters mode -- the pooling mode you would like to use, defined as a string ("max" or "average") Returns: dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev """ ### START CODE HERE ### # Retrieve information from cache (≈1 line) (A_prev, hparameters) = None # Retrieve hyperparameters from "hparameters" (≈2 lines) stride = None f = None # Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines) m, n_H_prev, n_W_prev, n_C_prev = None m, n_H, n_W, n_C = None # Initialize dA_prev with zeros (≈1 line) dA_prev = None for i in range(None): # loop over the training examples # select training example from A_prev (≈1 line) a_prev = None for h in range(None): # loop on the vertical axis for w in range(None): # loop on the horizontal axis for c in range(None): # loop over the channels (depth) # Find the corners of the current "slice" (≈4 lines) vert_start = None vert_end = None horiz_start = None horiz_end = None # Compute the backward propagation in both modes. if mode == "max": # Use the corners and "c" to define the current slice from a_prev (≈1 line) a_prev_slice = None # Create the mask from a_prev_slice (≈1 line) mask = None # Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line) dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += None elif mode == "average": # Get the value a from dA (≈1 line) da = None # Define the shape of the filter as fxf (≈1 line) shape = None # Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. 
(≈1 line) dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += None ### END CODE ### # Making sure your output shape is correct assert(dA_prev.shape == A_prev.shape) return dA_prev np.random.seed(1) A_prev = np.random.randn(5, 5, 3, 2) hparameters = {"stride" : 1, "f": 2} A, cache = pool_forward(A_prev, hparameters) dA = np.random.randn(5, 4, 2, 2) dA_prev = pool_backward(dA, cache, mode = "max") print("mode = max") print('mean of dA = ', np.mean(dA)) print('dA_prev[1,1] = ', dA_prev[1,1]) print() dA_prev = pool_backward(dA, cache, mode = "average") print("mode = average") print('mean of dA = ', np.mean(dA)) print('dA_prev[1,1] = ', dA_prev[1,1]) ``` **Expected Output**: mode = max: <table> <tr> <td> **mean of dA =** </td> <td> 0.145713902729 </td> </tr> <tr> <td> **dA_prev[1,1] =** </td> <td> [[ 0. 0. ] <br> [ 5.05844394 -1.68282702] <br> [ 0. 0. ]] </td> </tr> </table> mode = average: <table> <tr> <td> **mean of dA =** </td> <td> 0.145713902729 </td> </tr> <tr> <td> **dA_prev[1,1] =** </td> <td> [[ 0.08485462 0.2787552 ] <br> [ 1.26461098 -0.25749373] <br> [ 1.17975636 -0.53624893]] </td> </tr> </table> ### Congratulations! Congratulations on completing this assignment. You now understand how convolutional neural networks work. You have implemented all the building blocks of a neural network. In the next assignment you will implement a ConvNet using TensorFlow.
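For reference once you have completed the exercises above, the two pooling-backward helpers can each be sketched in a line or two of plain NumPy (one possible implementation, assuming each window has a unique maximum):

```python
import numpy as np

# One possible implementation of the two pooling-backward helpers,
# assuming the window has a single maximum.
def create_mask_from_window(x):
    # True only at the position of the maximum entry
    return x == np.max(x)

def distribute_value(dz, shape):
    # Spread dz evenly over an (n_H, n_W) matrix
    n_H, n_W = shape
    return np.full(shape, dz / (n_H * n_W))

m = create_mask_from_window(np.array([[1, 3], [4, 2]]))
assert m.tolist() == [[False, False], [True, False]]   # max is the 4

d = distribute_value(2.0, (2, 2))
assert np.allclose(d, 0.5)   # 2.0 spread over 4 cells
```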
# Test notebook Meteorites ``` from pathlib import Path import numpy as np import pandas as pd import requests from IPython.display import display from IPython.utils.capture import capture_output import pandas_profiling from pandas_profiling.utils.cache import cache_file file_name = cache_file( "meteorites.csv", "https://data.nasa.gov/api/views/gh4g-9sfh/rows.csv?accessType=DOWNLOAD", ) df = pd.read_csv(file_name) # Note: Pandas does not support dates before 1880, so we ignore these for this analysis df["year"] = pd.to_datetime(df["year"], errors="coerce") # Example: Constant variable df["source"] = "NASA" # Example: Boolean variable df["boolean"] = np.random.choice([True, False], df.shape[0]) # Example: Mixed with base types df["mixed"] = np.random.choice([1, "A"], df.shape[0]) # Example: Highly correlated variables df["reclat_city"] = df["reclat"] + np.random.normal(scale=5, size=(len(df))) # Example: Duplicate observations duplicates_to_add = pd.DataFrame(df.iloc[0:10]) duplicates_to_add["name"] = duplicates_to_add["name"] + " copy" df = df.append(duplicates_to_add, ignore_index=True) # Inline report without saving with capture_output() as out: pr = df.profile_report( sort="None", html={"style": {"full_width": True}}, progress_bar=False, minimal=True, ) display(pr) assert len(out.outputs) == 2 assert out.outputs[0].data["text/plain"] == "<IPython.core.display.HTML object>" assert all( s in out.outputs[0].data["text/html"] for s in ["<iframe", "Profile report generated with the `pandas-profiling`"] ) assert out.outputs[1].data["text/plain"] == "" # There should also be 2 progress bars in minimal mode with capture_output() as out: pfr = df.profile_report( html={"style": {"full_width": True}}, minimal=True, progress_bar=True, lazy=False, ) assert all( any(v in s.data["text/plain"] for v in ["%|", "FloatProgress"]) for s in out.outputs ) assert len(out.outputs) == 2 # Write to a file with capture_output() as out: pfr.to_file("/tmp/example.html") assert all( any(v in
s.data["text/plain"] for v in ["%|", "FloatProgress"]) for s in out.outputs ) assert len(out.outputs) == 2 # Print existing ProfileReport object inline with capture_output() as out: display(pfr) assert len(out.outputs) == 2 assert out.outputs[0].data["text/plain"] == "<IPython.core.display.HTML object>" assert all( s in out.outputs[0].data["text/html"] for s in ["<iframe", "Profile report generated with the `pandas-profiling`"] ) assert out.outputs[1].data["text/plain"] == "" ```
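The capture-and-assert pattern used throughout this notebook can be mimicked with nothing but the standard library, which helps show what the `capture_output` assertions are doing (a simplified sketch; `render_report` is a stand-in function, not part of pandas-profiling):

```python
import io
from contextlib import redirect_stdout

# Hypothetical stand-in for code that renders a report to the display.
def render_report(name):
    print(f"Profile report for {name}")

# Capture whatever the call prints, then assert on the captured text,
# just like the capture_output() blocks above assert on out.outputs.
buf = io.StringIO()
with redirect_stdout(buf):
    render_report("meteorites")

out = buf.getvalue()
assert "Profile report" in out
assert out.strip().endswith("meteorites")
```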
### We want to download images of bees and ants from the internet. ### imports ``` from bs4 import BeautifulSoup as bs import requests import cv2 import os, re ``` ### Data Mining - Web Scraping ``` class Insects: bees_path = r"data/bees_raw" ants_path = r"data/ants_raw" bee = 'bee' ant = 'ant' ants = [ 'https://www.google.com/search?q=african+ants&tbm=isch&ved=2ahUKEwiQybO93MPwAhWF4oUKHdYxBHwQ2-cCegQIABAA&oq=african+ants&gs_lcp=CgNpbWcQAzICCAAyAggAMgIIADICCAAyBggAEAUQHjIGCAAQBRAeMgYIABAFEB4yBggAEAUQHjIGCAAQBRAeMgYIABAIEB46BAgjECc6BAgAEEM6CAgAELEDEIMBOgUIABCxA1CJ8AFYvfUBYOn4AWgAcAB4AIAB0AKIAegLkgEFMi0zLjKYAQCgAQGqAQtnd3Mtd2l6LWltZ8ABAQ&sclient=img&ei=a5GbYNDnF4XFlwTW45DgBw&bih=741&biw=1499&rlz=1C1CHZN_enZA940ZA940&hl=en-GB', 'https://www.google.com/search?q=african+ants&tbm=isch&hl=en-GB&chips=q:african+ants,g_1:nest:Hr0V1HOmzjY%3D,online_chips:garden:9kocmP6T1vk%3D&rlz=1C1CHZN_enZA940ZA940&sa=X&ved=2ahUKEwiUtMLV3MPwAhUC44UKHSaNDYYQ4lYoAXoECAEQHg&biw=1499&bih=741', "https://www.google.com/search?q=ants&rlz=1C1CHZN_enZA940ZA940&sxsrf=ALeKk0149Kvr4uk6ZY4iLqnjx8evw81iNw:1620806376575&source=lnms&tbm=isch&sa=X&ved=2ahUKEwie2cyi1sPwAhULUBUIHQtgDVMQ_AUoAXoECAEQAw&biw=1517&bih=741" ,'https://www.google.com/search?q=black%20ants&tbm=isch&rlz=1C1CHZN_enZA940ZA940&hl=en-GB&sa=X&ved=0CFsQrNwCKAFqFwoTCLiYwqrWw_ACFQAAAAAdAAAAABAD&biw=1499&bih=741', 'https://www.google.com/search?q=black+ants&tbm=isch&hl=en-GB&chips=q:black+ants,g_1:queen:Hx4PQ3GRb1g%3D&rlz=1C1CHZN_enZA940ZA940&sa=X&ved=2ahUKEwjku9fL2cPwAhUjgXMKHeylDWAQ4lYoAHoECAEQGQ&biw=1499&bih=741', 'https://www.google.com/search?q=black+ants&tbm=isch&hl=en-GB&chips=q:black+ants,g_1:house:jmgdCUFwSk8%3D&rlz=1C1CHZN_enZA940ZA940&sa=X&ved=2ahUKEwjku9fL2cPwAhUjgXMKHeylDWAQ4lYoAXoECAEQGw&biw=1499&bih=741', 'https://www.google.com/search?q=black+ants&tbm=isch&hl=en-GB&chips=q:black+ants,g_1:small:KzyxLk-F-q4%3D&rlz=1C1CHZN_enZA940ZA940&sa=X&ved=2ahUKEwjku9fL2cPwAhUjgXMKHeylDWAQ4lYoAnoECAEQHQ&biw=1499&bih=741',
'https://www.google.com/search?q=black+ants&tbm=isch&hl=en-GB&chips=q:black+ants,g_1:male:LgWj8FjHdsY%3D&rlz=1C1CHZN_enZA940ZA940&sa=X&ved=2ahUKEwjku9fL2cPwAhUjgXMKHeylDWAQ4lYoB3oECAEQJw&biw=1499&bih=741', 'https://www.google.com/search?q=black+ants&tbm=isch&hl=en-GB&chips=q:black+ants,g_1:male:LgWj8FjHdsY%3D,online_chips:garden:nQ_5U4dJs-A%3D&rlz=1C1CHZN_enZA940ZA940&sa=X&ved=2ahUKEwi2ut6e2sPwAhXI_IUKHdo5BEoQ4lYoBHoECAEQJA&biw=1499&bih=741', 'https://www.google.com/search?q=black+ants&tbm=isch&hl=en-GB&chips=q:black+ants,g_1:male:LgWj8FjHdsY%3D,online_chips:garden:nQ_5U4dJs-A%3D,online_chips:south+africa:rkNoyMcmWFg%3D&rlz=1C1CHZN_enZA940ZA940&sa=X&ved=2ahUKEwiYjomu2sPwAhXFwIUKHb84A9cQ4lYoA3oECAEQJA&biw=1499&bih=741', 'https://www.google.com/search?q=black+ants&tbm=isch&hl=en-GB&chips=q:black+ants,g_1:male:LgWj8FjHdsY%3D,online_chips:garden:nQ_5U4dJs-A%3D,online_chips:south+africa:rkNoyMcmWFg%3D,online_chips:driver:tYYbOBKazcI%3D&rlz=1C1CHZN_enZA940ZA940&sa=X&ved=2ahUKEwjblsq72sPwAhWPwIUKHWO_BpkQ4lYoAXoECAEQIg&biw=1499&bih=741', 'https://www.google.com/search?q=black+ants&tbm=isch&hl=en-GB&chips=q:black+ants,g_1:male:LgWj8FjHdsY%3D,online_chips:garden:nQ_5U4dJs-A%3D,online_chips:south+africa:rkNoyMcmWFg%3D,online_chips:driver:tYYbOBKazcI%3D,online_chips:dorylus:FpIWkR2t4Ug%3D&rlz=1C1CHZN_enZA940ZA940&sa=X&ved=2ahUKEwiToKPK2sPwAhUPxOAKHfPQB6cQ4lYoAXoECAEQJA&biw=1499&bih=741' ] bees = [ 'https://www.google.com/search?q=african+bees&tbm=isch&hl=en-GB&chips=q:african+bees,g_1:swarm:WVcV9S6VyL8%3D&rlz=1C1CHZN_enZA940ZA940&sa=X&ved=2ahUKEwi98JW53MPwAhUx5IUKHdTFBTwQ4lYoAXoECAEQGw&biw=1499&bih=741', 
'https://www.google.com/search?q=bees&tbm=isch&hl=en-GB&chips=q:bees,g_1:honey:mZv9-3ENAPI%3D,g_1:honeycomb:3MuUSpLY4jk%3D,online_chips:hive:EmRRMSChhOE%3D,online_chips:queen:GhcOaSpO6I8%3D,online_chips:swarm:pCkeZ6d4cCI%3D,online_chips:winter:-zxSUp6c-LI%3D,online_chips:cells:fq4ryvixFNE%3D,online_chips:nest:m8_ZoCwfzzE%3D,online_chips:brood+nest:HhlgMyEIaKw%3D&rlz=1C1CHZN_enZA940ZA940&sa=X&ved=2ahUKEwjFncXm28PwAhVCnRoKHRvLDpwQ4lYoAHoECAEQKg&biw=1499&bih=741', 'https://www.google.com/search?q=bees&tbm=isch&ved=2ahUKEwiP56be2sPwAhVLwIUKHW1TAiYQ2-cCegQIABAA&oq=bees&gs_lcp=CgNpbWcQAzIHCAAQsQMQQzIECAAQQzIECAAQQzIECAAQQzIECAAQQzIECAAQQzIECAAQQzIECAAQQzIECAAQQzIECAAQQzoECCMQJzoFCAAQsQM6AggAOggIABCxAxCDAVCfxAJYssgCYITMAmgAcAB4AIABoQSIAaYMkgEJMi0xLjEuMS4xmAEAoAEBqgELZ3dzLXdpei1pbWfAAQE&sclient=img&ei=l4-bYM-_H8uAlwTtpomwAg&bih=741&biw=1499&rlz=1C1CHZN_enZA940ZA940&hl=en-GB', 'https://www.google.com/search?q=bees&tbm=isch&hl=en-GB&chips=q:bees,g_1:queen:Fm3LdkJsBq8%3D&rlz=1C1CHZN_enZA940ZA940&sa=X&ved=2ahUKEwiP7rzz2sPwAhUR_hoKHb2YD04Q4lYoAXoECAEQGw&biw=1499&bih=741', 'https://www.google.com/search?q=bees&tbm=isch&hl=en-GB&chips=q:bees,g_1:honey:mZv9-3ENAPI%3D&rlz=1C1CHZN_enZA940ZA940&sa=X&ved=2ahUKEwiP7rzz2sPwAhUR_hoKHb2YD04Q4lYoAHoECAEQGQ&biw=1499&bih=741', 'https://www.google.com/search?q=bees&tbm=isch&hl=en-GB&chips=q:bees,g_1:honey:mZv9-3ENAPI%3D,g_1:honeycomb:3MuUSpLY4jk%3D&rlz=1C1CHZN_enZA940ZA940&sa=X&ved=2ahUKEwiSr_2S28PwAhUi4uAKHYJgC9oQ4lYoAHoECAEQGw&biw=1499&bih=741', 'https://www.google.com/search?q=bees&tbm=isch&hl=en-GB&chips=q:bees,g_1:honey:mZv9-3ENAPI%3D,g_1:honeycomb:3MuUSpLY4jk%3D,online_chips:hive:EmRRMSChhOE%3D&rlz=1C1CHZN_enZA940ZA940&sa=X&ved=2ahUKEwiex72c28PwAhUD8xoKHVnaDFEQ4lYoAXoECAEQHg&biw=1499&bih=741', 
'https://www.google.com/search?q=bees&tbm=isch&hl=en-GB&chips=q:bees,g_1:honey:mZv9-3ENAPI%3D,g_1:honeycomb:3MuUSpLY4jk%3D,online_chips:hive:EmRRMSChhOE%3D,online_chips:queen:GhcOaSpO6I8%3D&rlz=1C1CHZN_enZA940ZA940&sa=X&ved=2ahUKEwiv2tms28PwAhUNRhoKHSuiCm0Q4lYoAXoECAEQIg&biw=1499&bih=741', 'https://www.google.com/search?q=bees&tbm=isch&hl=en-GB&chips=q:bees,g_1:honey:mZv9-3ENAPI%3D,g_1:honeycomb:3MuUSpLY4jk%3D,online_chips:hive:EmRRMSChhOE%3D,online_chips:queen:GhcOaSpO6I8%3D,online_chips:swarm:pCkeZ6d4cCI%3D&rlz=1C1CHZN_enZA940ZA940&sa=X&ved=2ahUKEwj34ou828PwAhULZhoKHcC4D3QQ4lYoAnoECAEQJg&biw=1499&bih=741', 'https://www.google.com/search?q=bees&tbm=isch&hl=en-GB&chips=q:bees,g_1:honey:mZv9-3ENAPI%3D,g_1:honeycomb:3MuUSpLY4jk%3D,online_chips:hive:EmRRMSChhOE%3D,online_chips:queen:GhcOaSpO6I8%3D,online_chips:swarm:pCkeZ6d4cCI%3D,online_chips:winter:-zxSUp6c-LI%3D&rlz=1C1CHZN_enZA940ZA940&sa=X&ved=2ahUKEwjfxrzK28PwAhWXgM4BHaPpB5sQ4lYoAXoECAEQJg&biw=1499&bih=741', 'https://www.google.com/search?q=bees&tbm=isch&hl=en-GB&chips=q:bees,g_1:honey:mZv9-3ENAPI%3D,g_1:honeycomb:3MuUSpLY4jk%3D,online_chips:hive:EmRRMSChhOE%3D,online_chips:queen:GhcOaSpO6I8%3D,online_chips:swarm:pCkeZ6d4cCI%3D,online_chips:winter:-zxSUp6c-LI%3D,online_chips:cells:fq4ryvixFNE%3D&rlz=1C1CHZN_enZA940ZA940&sa=X&ved=2ahUKEwiyj__S28PwAhW0gM4BHQN5Cc0Q4lYoAHoECAEQJg&biw=1499&bih=741', 'https://www.google.com/search?q=bees&tbm=isch&hl=en-GB&chips=q:bees,g_1:honey:mZv9-3ENAPI%3D,g_1:honeycomb:3MuUSpLY4jk%3D,online_chips:hive:EmRRMSChhOE%3D,online_chips:queen:GhcOaSpO6I8%3D,online_chips:swarm:pCkeZ6d4cCI%3D,online_chips:winter:-zxSUp6c-LI%3D,online_chips:cells:fq4ryvixFNE%3D,online_chips:nest:m8_ZoCwfzzE%3D&rlz=1C1CHZN_enZA940ZA940&sa=X&ved=2ahUKEwiJo5nb28PwAhUQfBoKHQztCU4Q4lYoAHoECAEQKA&biw=1499&bih=741', ] len(bees), len(ants) if os.path.exists(Insects.bees_path) == False: os.makedirs(Insects.bees_path) def mineData(): for _ in range(2): filtered_imgs = [] try: if _ == 0: for url in bees: res = requests.get(url) soup = bs(res.content) all_imgs = soup.findAll('img') all_imgs_urls = [img['src'] for img in all_imgs] for img in all_imgs_urls: match = re.match(r'^https?://', img) if match is None: continue else: filtered_imgs.append(img) else: for url in ants: res = requests.get(url) soup = bs(res.content) all_imgs = soup.findAll('img') all_imgs_urls = [img['src'] for img in all_imgs] for img in all_imgs_urls: match = re.match(r'^https?://', img) if match is None: continue else: filtered_imgs.append(img) current_path = Insects.bees_path if _ == 0 else Insects.ants_path current_insect = Insects.bee if _ == 0 else Insects.ant if os.path.exists(current_path) == False: os.makedirs(current_path) for i, insect in enumerate(filtered_imgs): insect_ = requests.get(insect) file = open(f'{current_path}/{current_insect}{i+1}.png', 'wb') print(f"Downloading {current_insect}{i+1}.png") file.write(insect_.content) file.close() except Exception as e: print(e) pass mineData() ``` > Done fetching `240` images of bees and `240` images of ants
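The URL-filtering step inside `mineData()` is written out twice, once for bees and once for ants; it can be factored into a small helper (a hypothetical refactor, demonstrated with made-up example URLs):

```python
import re

# Hypothetical helper mirroring the filtering loop inside mineData():
# keep only absolute http(s) URLs, drop relative paths and data URIs.
def filter_image_urls(urls):
    return [u for u in urls if re.match(r'^https?://', u)]

candidates = [
    'https://example.com/bee.png',   # kept: absolute https URL
    '/static/thumb.png',             # dropped: relative path
    'data:image/png;base64,AAAA',    # dropped: inline data URI
    'http://example.com/ant.jpg',    # kept: absolute http URL
]
assert filter_image_urls(candidates) == [
    'https://example.com/bee.png',
    'http://example.com/ant.jpg',
]
```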
# Fundamentals of Computer Science for Neuroengineering # Lecture 3a: Numpy ## Learning Goals: - What is a Numpy array, and why use them? (motivation) - Importing and Generating Data - Getting insight about the Data (type, dimension, size, etc.) - Manipulating the array (arithmetic operations, transpose, etc.) - Slicing and Masking - Combining arrays - Saving data --- ## Motivation NumPy arrays are a bit like Python lists, but still very much different at the same time. ``` my_list = [11.7, 21.2, 13.5, 17.0, 19.9] my_list + 1 import numpy as np my_list = [11.7, 21.2, 13.5, 17.0, 19.9] my_arr = np.array(my_list) my_arr + 1 ``` <div style="text-align:center"> <img src ="img/numpy-diagram.png" height="600" width="600"/> </div> <br> ## Import Data (from text file) Creating arrays with the help of initial placeholders or with some example data is a great way of getting started with `numpy`. But when you want to get started with data analysis, you’ll need to load data from text files. With what you have seen up until now, you won’t really be able to do much. Make use of some specific functions to load data from your files, such as `loadtxt()` or `genfromtxt()`. ``` x = np.loadtxt('data/data1.txt') x ``` <br> In the code above, you use `loadtxt()` to load the data in your environment. You see that the only argument the function takes is the text file data.txt. And it returns the data as a 2D array. However, there are other arguments that give us more freedom in defining how we want to import the data. For instance, we might want to store each column as a single variable: to do that we can add `unpack=True` to our function. Since we have three columns, we should also provide three variable names: ``` x, y, z = np.loadtxt('data/data1.txt', unpack=True) x, y, z ``` <br> Note that, in case you have comma-delimited data or if you want to specify the data type, there are also the arguments `delimiter` and `dtype` that you can add to the `loadtxt()` arguments.
``` x = np.loadtxt('data/data2.txt', delimiter=',') x ``` <br> Now, let's try `data3.txt` ``` x = np.loadtxt('data/data3.txt', delimiter=',') x ``` <br> Have a look at the data file and try to figure out what happened. <br> What is happening is that we have data points with different types in our data file. And `loadtxt()` can only handle a single data format. Instead of `loadtxt()` we can use `genfromtxt()`. This function would define anything that is not a number as `nan` (i.e., Not A Number). ``` x = np.genfromtxt('data/data3.txt') x ``` ## Create a numpy array - **zeros** : Return a new array setting values to zero. - **ones** : Return a new array setting values to one. - **random.random** : Returns a new array containing random values. - **empty** : Return a new uninitialized array. - **full** : Returns an array with the given dimension, with all elements set to the given scalar value (i.e., fill_value). - **full_like** : Return a new array with shape of input (another array) filled with value. - **eye** : Returns a diagonal matrix. - **identity** : Returns identity matrix (only square matrices, so you can specify only one dimension). ``` my_list = [1,2,3,4,5,6,7] my_arr = np.array(my_list) my_arr my_range = np.arange(10) my_linspace = np.linspace(1, 10, 10) my_range my_linspace zeros_arr = np.zeros(10) # (10, 10) ones_arr = np.ones((2, 10)) random_arr = np.random.random((2, 10)) empty_arr = np.empty((2, 10)) full_arr = np.full((2, 10), fill_value=10) full_like_arr = np.full_like(full_arr, 0) eye_arr = np.eye(4) identity_arr = np.identity(4) ``` ## Data Inspection ``` data = np.random.random((5, 5)) data data.dtype data.ndim data.shape data.size data.strides ``` On a structural level, an array is basically nothing but pointers.
It’s a combination of a **memory address**, a **data type**, a **shape** and **strides**: - The **`data` pointer** indicates the memory address of the first byte in the array, - The data type or **`dtype`** pointer describes the kind of elements that are contained within the array, - The **`shape`** indicates the shape of the array, and - The **`strides`** are the number of bytes that should be skipped in memory to go to the next element. If your strides are (10,1), you need to proceed one byte to get to the next column and 10 bytes to locate the next row. So in our case we need to jump 8 bytes to go to the next column and jump 40 (5 x 8) bytes to go to the next row. Let's explore this a bit more with an array that only has one element: ``` # when defining an array, we can specify the datatype x = np.array([10], dtype='float16') x.strides ``` We can explicitly get the byte size of the elements of the array ``` data.itemsize ``` Or the number of bytes for the whole array ``` data.nbytes data = np.random.random((5, 2)) data data.sum() data.sum(axis=0) # compute the sum for each column data.sum(axis=1) data.min() # axis data.max() data.mean() data.std() data.cumsum() x = np.arange(11) x, x.cumsum() ``` ## Array Transformation Data Transformation is essentially the application of any kind of operation on your data so that you transform your data from one representation to another, making it ready for upcoming analysis. Numpy provides a handful of functions that can be used to transform the data. Let's go through some of them. Let's start by creating a 2D array: ``` arr = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16], [17, 18, 19, 20]] data = np.array(arr) data data.shape ``` <br> Seems like we have 5 rows and 4 columns.
What if we wanted to change our 2D array so that the rows become columns and the columns become rows (i.e., transpose)? ``` data.T ``` <br> We can also change the shape of the whole array using `reshape`: ``` data data.reshape(2, 10) ``` <br> We can do more with reshape. Let's say you do not know the dimensions of your data exactly, but you want to have a fixed number of rows, and you don't care about the number of columns (or the other way around) ``` data.reshape(10, -1) # in this case we are fixing the number of rows and don't care about the number of columns ``` <br> With `reshape` we preserve all the data points in our array. What if we know that we want the first N (in this example 8) elements and we want them in a specific shape? ``` np.resize(data, (4, 2)) ``` <br> What if we want to add a new dimension to our array? The application of this is when you have a system that accepts an input with specific dimensions - a clear application of this is actually in deep learning! ``` np.expand_dims(data, axis=2).shape np.ravel(data) # flattens the input ``` <br> ### Exercise Let's try and explore the `np.pad()` function. <br> 1. Go through the documentation (docstring) of the `np.pad()` function and try to understand how it works. 2. Create the following matrices using Numpy's `pad()` function <div style="text-align:center"> <img src ="img/padding_ex_1.png" height="300" width="300" style="border:0px;margin:50px"/> <img src ="img/padding_ex_2.png" height="200" width="200" style="border:0px;margin:90px"/> </div> <div style="text-align:center"> </div> You have 10 minutes to solve this task. After that we'll take a 10-minute break.
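If you get stuck reading the docstring, here is a minimal demonstration of `np.pad()` in its default `'constant'` mode (not the exercise solution itself):

```python
import numpy as np

# Surround a 2x2 block of ones with a border of zeros,
# one element wide on every side.
core = np.ones((2, 2), dtype=int)
padded = np.pad(core, pad_width=1, constant_values=0)

assert padded.shape == (4, 4)
assert padded[1:3, 1:3].sum() == 4   # the original block survives
assert padded.sum() == 4             # the border is all zeros
```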
``` import time mins = .1 finish_time = time.time() + mins * 60 print("Start time: " + time.ctime().split(' ')[3] + "\n") while time.time() < finish_time: print("\rCurrent time: " + time.ctime().split(' ')[3], end="") print("\n\n====Done====") ``` ### Your Answer <br><br> ## Data Transformation Besides changing the shape of our data, we can also play around on the level of data elements (i.e., applying functions on them) ``` a = np.arange(11, 21) b = np.arange(1, 11) print("a is", a) print("b is", b) a_plus_b = np.add(a, b) # a + b a_minus_b = np.subtract(a, b) # a - b a_mult_b = np.multiply(a, b) # a * b a_div_b = np.divide(a, b) # a / b a_remain_b = np.remainder(a, b) a_remain_b = np.remainder(a[1:], b[1:]) a_remain_b b_exp = np.exp(b) b_exp np.log(b_exp) ``` ## Slicing <div style="text-align:center"> <img src ="img/indexing.png" height="700" width="700"/> </div> ``` a = np.arange(11) a a[0] a[1] a[-1] a[-2] a[0:3] # returns index 0, 1, and 2 a[:3] a[1:3] # returns index 1 and 2 a[1:] ``` Note that we are always jumping by 1 index (starting from index 1, and going through all of them till the end) ``` a[1:9:2] ``` <br><br><br><br> ## Masking The basic idea of masking is to index your data not by explicitly using the index values, but rather to use another array to select parts of the data. We can also think of this as conditional slicing.
``` a[[True, False, False, False, True, False, False, False, False, False, False]] a > 5 a[a > 5] ``` <br><br><br><br><br> ## Combining Arrays <div style="text-align:center"> <img src ="img/split_stack.png" height="600" width="600"/> </div> ``` a, b np.append(a, b) # axis np.vstack((a, b)).shape np.hstack((a, b)).shape data data_left, data_right = np.hsplit(data, 2) data_left data_right data_up, data_down = np.vsplit(data, 2) data np.vsplit(data, (3, 3)) # the indices: left: upper bound for the rows starting from 0, right: lower bound for the rows till the end aa = np.vsplit(data, (3, 3)) ``` <br><br> ### Exercise Create a function that gets any 2D array and performs either a vertical or a horizontal split (divide in half) depending on the mode ('horizontal' or 'vertical'). Here is how your function definition should look: ``` python def split_2d(arr, mode): ###################### ### Your code here ### ###################### return array_1, array_2 ``` Here is how one would use the function: ``` python array_up, array_down = split_2d(example_arr, mode='vertical') array_left, array_right = split_2d(example_arr, mode='horizontal') ``` ### Your Answer <br><br><br><br> ## Saving the Array - **save()**: saves data in .npy format - **savez()**: Save several arrays into an uncompressed .npz archive - **savez_compressed()**: Save several arrays into a compressed .npz archive - **savetxt()**: saves the data in the given format (e.g., txt, csv, etc.) And you probably want to load the data as well?
We can use `np.load()` ``` # save() example x = np.arange(10) outfile = 'test_save' np.save(outfile, x) # import the .npy file np.load(outfile + '.npy') ``` <br><br><br><br><br> ``` # savez() example x = np.arange(10) y = np.exp(x) outfile = 'test_savez' np.savez(outfile, x, y) # import the .npz file npzfile = np.load(outfile + '.npz') npzfile.files npzfile['arr_0'] npzfile['arr_1'] ``` <br><br><br><br><br> ``` # savez_compressed() example x = np.arange(10) y = np.exp(x) outfile = 'test_savez_compressed' np.savez_compressed(outfile, x, y) ``` Note that this file has a smaller size than the file we saved with `np.savez()`. ``` # import the .npz file npzfile = np.load(outfile + '.npz') npzfile.files npzfile['arr_0'] ``` <br><br><br><br><br> ``` # savetxt() example x = np.arange(10) outfile = 'test_savetxt.txt' np.savetxt(outfile, x, delimiter=',') # saves the data in the given format (e.g., txt, csv, etc.) np.loadtxt(outfile) ``` Note that since we are dealing with a text file, we have to use `np.loadtxt()`. <br><br><br><br> --- ### References - https://www.datacamp.com/community/tutorials/python-numpy-tutorial#visualize - the [cheat sheet](https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Numpy_Python_Cheat_Sheet.pdf)
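As a small aside the lecture does not cover: `np.savez()` also accepts keyword arguments, so the stored arrays get meaningful names instead of the default `arr_0`, `arr_1` (the sketch below assumes a writable temporary directory):

```python
import os
import tempfile
import numpy as np

# savez() with keyword arguments: arrays are stored under the given
# names instead of arr_0, arr_1, ...
x = np.arange(5)
y = np.exp(x)

path = os.path.join(tempfile.mkdtemp(), 'named.npz')
np.savez(path, inputs=x, outputs=y)

with np.load(path) as npz:
    assert sorted(npz.files) == ['inputs', 'outputs']
    assert np.allclose(npz['inputs'], x)
    assert np.allclose(npz['outputs'], y)
```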
``` import os os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'prepare/mesolitica-tpu.json' from google.cloud import storage client = storage.Client() bucket = client.bucket('mesolitica-tpu-general') !rm -rf t5-small-bahasa !mkdir t5-small-bahasa blob = bucket.blob('t5-small/model.ckpt-510000.data-00000-of-00002') blob.download_to_filename('t5-small-bahasa/model.ckpt-510000.data-00000-of-00002') blob = bucket.blob('t5-small/model.ckpt-510000.data-00001-of-00002') blob.download_to_filename('t5-small-bahasa/model.ckpt-510000.data-00001-of-00002') blob = bucket.blob('t5-small/model.ckpt-510000.index') blob.download_to_filename('t5-small-bahasa/model.ckpt-510000.index') blob = bucket.blob('t5-small/model.ckpt-510000.meta') blob.download_to_filename('t5-small-bahasa/model.ckpt-510000.meta') blob = bucket.blob('t5-small/checkpoint') blob.download_to_filename('t5-small-bahasa/checkpoint') blob = bucket.blob('t5-small/operative_config.gin') blob.download_to_filename('t5-small-bahasa/operative_config.gin') # blob = bucket.blob('t5-base/events.out.tfevents.1589423125.general') # blob.download_to_filename('t5-base-bahasa/events.out.tfevents.1589423125.general') # !cp sp10m.cased.t5* t5-small-bahasa # !pip3 install transformers -U from transformers import T5Config, T5Model, load_tf_weights_in_t5 import os out = 't5-small-bahasa-cased' os.makedirs(out, exist_ok=True) config = T5Config( vocab_size = 32128, n_positions=1024, d_ff = 2048, d_kv = 64, d_model = 512, dropout_rate = 0.1, inputs_length = 1024, num_heads = 8, num_layers = 6, decoder_start_token_id = 0, eos_token_id = 1, pad_token_id = 0) print(config) config.save_pretrained(out) model = T5Model(config) load_tf_weights_in_t5(model, config, 't5-small-bahasa/model.ckpt-510000') from transformers import CONFIG_NAME, WEIGHTS_NAME CONFIG_NAME, WEIGHTS_NAME import torch torch.save(model.state_dict(), out + '/' + WEIGHTS_NAME) from transformers import T5Config, T5Model, T5Tokenizer tokenizer = T5Tokenizer('sp10m.cased.t5.model') 
tokenizer.save_pretrained(out) tokenizer = T5Tokenizer.from_pretrained('./t5-small-bahasa-cased', lower = False) config = T5Config.from_pretrained('./t5-small-bahasa-cased') model = T5Model.from_pretrained('./t5-small-bahasa-cased/pytorch_model.bin', config = config) model.save_pretrained(out) # !transformers-cli upload ./t5-small-bahasa-cased from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained('./t5-small-bahasa-cased') model = T5ForConditionalGeneration.from_pretrained('./t5-small-bahasa-cased') input_ids = tokenizer.encode('soalan: gunung apakah yang paling tinggi didunia?', return_tensors = 'pt') outputs = model.generate(input_ids) tokenizer.decode(outputs[0]) # https://www.hmetro.com.my/mutakhir/2020/05/580438/peletakan-jawatan-tun-m-ditolak-bukan-lagi-isu # original title, Peletakan jawatan Tun M ditolak, bukan lagi isu string = 'PELETAKAN jawatan Tun Dr Mahathir Mohamad sebagai Pengerusi Parti Pribumi Bersatu Malaysia (Bersatu) ditolak di dalam mesyuarat khas Majlis Pimpinan Tertinggi (MPT) pada 24 Februari lalu. Justeru, tidak timbul soal peletakan jawatan itu sah atau tidak kerana ia sudah pun diputuskan pada peringkat parti yang dipersetujui semua termasuk Presiden, Tan Sri Muhyiddin Yassin. Bekas Setiausaha Agung Bersatu Datuk Marzuki Yahya berkata, pada mesyuarat itu MPT sebulat suara menolak peletakan jawatan Dr Mahathir. "Jadi ini agak berlawanan dengan keputusan yang kita sudah buat. Saya tak faham bagaimana Jabatan Pendaftar Pertubuhan Malaysia (JPPM) kata peletakan jawatan itu sah sedangkan kita sudah buat keputusan di dalam mesyuarat, bukan seorang dua yang buat keputusan. "Semua keputusan mesti dibuat melalui parti. Walau apa juga perbincangan dibuat di luar daripada keputusan mesyuarat, ini bukan keputusan parti. "Apa locus standy yang ada pada Setiausaha Kerja untuk membawa perkara ini kepada JPPM. 
Seharusnya ia dibawa kepada Setiausaha Agung sebagai pentadbir kepada parti," katanya kepada Harian Metro. Beliau mengulas laporan media tempatan hari ini mengenai pengesahan JPPM bahawa Dr Mahathir tidak lagi menjadi Pengerusi Bersatu berikutan peletakan jawatannya di tengah-tengah pergolakan politik pada akhir Februari adalah sah. Laporan itu juga menyatakan, kedudukan Muhyiddin Yassin memangku jawatan itu juga sah. Menurutnya, memang betul Dr Mahathir menghantar surat peletakan jawatan, tetapi ditolak oleh MPT. "Fasal yang disebut itu terpakai sekiranya berhenti atau diberhentikan, tetapi ini mesyuarat sudah menolak," katanya. Marzuki turut mempersoal kenyataan media yang dibuat beberapa pimpinan parti itu hari ini yang menyatakan sokongan kepada Perikatan Nasional. "Kenyataan media bukanlah keputusan rasmi. Walaupun kita buat 1,000 kenyataan sekali pun ia tetap tidak merubah keputusan yang sudah dibuat di dalam mesyuarat. Kita catat di dalam minit apa yang berlaku di dalam mesyuarat," katanya.' len(string.split()) input_ids = tokenizer.encode(f'tajuk: {string}', return_tensors = 'pt') outputs = model.generate(input_ids) tokenizer.decode(outputs[0]) !tar -czvf t5-small-bahasa.gz t5-small-bahasa ```
``` import micropip await micropip.install(['bqplot==0.12.30', 'ipyleaflet==0.14.0']) import os import io import json from urllib.request import urlopen from datetime import datetime import numpy as np import pandas as pd from js import fetch from ipywidgets import Dropdown from bqplot import Lines, Figure, LinearScale, DateScale, Axis from ipyleaflet import Map, GeoJSON, WidgetControl URL = "https://raw.githubusercontent.com/jupyter-widgets/ipyleaflet/master/examples/nations.json" resp = await fetch(URL) text = io.BytesIO((await resp.arrayBuffer()).to_py()) data = pd.read_json(text) def clean_data(data): for column in ['income', 'lifeExpectancy', 'population']: data = data.drop(data[data[column].apply(len) <= 4].index) return data def extrap_interp(data): data = np.array(data) x_range = np.arange(1800, 2009, 1.) y_range = np.interp(x_range, data[:, 0], data[:, 1]) return y_range def extrap_data(data): for column in ['income', 'lifeExpectancy', 'population']: data[column] = data[column].apply(extrap_interp) return data data = clean_data(data) data = extrap_data(data) data date_start = datetime(1800, 12, 31) date_end = datetime(2009, 12, 31) date_scale = DateScale(min=date_start, max=date_end) date_data = pd.date_range(start=date_start, end=date_end, freq='A', normalize=True) country_name = 'Angola' data_name = 'income' x_data = data[data.name == country_name][data_name].values[0] x_scale = LinearScale() lines = Lines(x=date_data, y=x_data, scales={'x': date_scale, 'y': x_scale}) ax_x = Axis(label='Year', scale=date_scale, num_ticks=10, tick_format='%Y') ax_y = Axis(label=data_name.capitalize(), scale=x_scale, orientation='vertical', side='left') figure = Figure(axes=[ax_x, ax_y], title=country_name, marks=[lines], animation_duration=500, layout={'max_height': '250px', 'max_width': '400px'}) def update_figure(country_name, data_name): try: lines.y = data[data.name == country_name][data_name].values[0] ax_y.label = data_name.capitalize() figure.title = country_name 
except IndexError: pass URL = "https://raw.githubusercontent.com/jupyter-widgets/ipyleaflet/master/examples/countries.geo.json" resp = await fetch(URL) text = io.BytesIO((await resp.arrayBuffer()).to_py()) countries = json.loads(text.read()) m = Map(zoom=3) geo = GeoJSON(data=countries, style={'fillColor': 'white', 'weight': 0.5}, hover_style={'fillColor': '#1f77b4'}, name='Countries') m.add_layer(geo) widget_control1 = WidgetControl(widget=figure, position='bottomright') m.add_control(widget_control1) def on_hover(event, feature, **kwargs): global country_name country_name = feature['properties']['name'] update_figure(country_name, data_name) geo.on_hover(on_hover) dropdown = Dropdown( options=['income', 'population', 'lifeExpectancy'], value=data_name, description='Plotting:' ) def on_click(change): global data_name data_name = change['new'] update_figure(country_name, data_name) dropdown.observe(on_click, 'value') widget_control2 = WidgetControl(widget=dropdown, position='bottomleft') m.add_control(widget_control2) m ```
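The `extrap_interp` helper above leans on `np.interp`, which linearly interpolates inside the known range and clamps to the edge values outside it. A minimal standalone sketch with made-up (year, value) pairs:

```python
import numpy as np

# Sparse (year, value) observations, like one country's income series.
data = np.array([[1800, 100.0], [1900, 200.0], [2000, 400.0]])

# Resample onto a dense yearly grid; np.interp interpolates linearly
# between known points and holds the edge values outside them.
x_range = np.arange(1800, 2009, 1.)
y_range = np.interp(x_range, data[:, 0], data[:, 1])

print(y_range[0], y_range[100], y_range[-1])  # 100.0 200.0 400.0
```

Note the last value: the year 2008 lies beyond the final observation (2000), so `np.interp` does not extrapolate the trend — it simply repeats the last known value, 400.0.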
``` import numpy as np import pandas as pd from tqdm.notebook import tqdm_notebook import warnings warnings.filterwarnings("ignore") from numba import jit import pickle from skopt import gp_minimize from skopt.space import Integer, Real, Categorical from skopt.utils import use_named_args ``` Data must be provided as alternating date/close column pairs, one pair per ticker: datesA, closeA, datesB, closeB, datesC, closeC, ... Including historical constituents this way prevents the survivorship bias one would get by selecting only the current members of the S&P 500 index (or any other index). ``` close = pd.read_parquet('spx_close.parquet') close.head() def get_tick_df(x, close, split=pd.to_datetime('2017-01-01')): """ Identifies and splits data into 'train' and 'test' parts split - date to split into 'test' and 'train' parts """ tick_df = close.iloc[:,x:x+2] tick = tick_df.columns[1] tick_df.columns = ['date', tick] tick_df = tick_df.set_index('date').dropna() train = tick_df.loc[:split] test = tick_df.loc[split-pd.Timedelta(100, 'D'):] return train, test x = 0 train_df = {} test_df = {} split = pd.to_datetime('2019-03-01') for _ in tqdm_notebook(range(int(close.shape[1]/2))): tick = close.iloc[:,x+1].name if close[tick].dropna().shape[0] == 0: # some are empty, these are ignored x+=2 continue else: train_df[tick], test_df[tick] = get_tick_df(x, close=close, split=split) x+=2 @jit(nopython=True) def ema_rsi(series, a): """ numba-accelerated loop to calculate EMA/Wilder rsi series - np.array of values, not pd.Series!
a - alpha, or decay parameter of RSI calculation """ prev = series[0] ema_series = [prev] for v in series[1:]: new_val = a*v + (1-a)*prev ema_series.append( new_val ) prev = new_val return ema_series def rsi(frame, days=14, method='ws'): """ calculates rsi series based on provided parameters days - period to calculate the RSI method - 'ws' (Wilder'), 'sma' (Simple Moving Average), 'ema' (Exponential Moving Average) """ tick = frame.columns[0] frame['change'] = frame[tick] - frame[tick].shift(1) frame['up_move'] = np.where( frame['change'] > 0, frame['change'], 0 ) frame['down_move'] = np.abs(np.where( frame['change'] < 0, frame['change'], 0 )) if method == 'sma': frame['avg_up'] = frame['up_move'].rolling(days).mean() frame['avg_down'] = frame['down_move'].rolling(days).mean() elif method == 'ema': alpha = 2/(days+1) frame['avg_up'] = ema_rsi( frame['up_move'].values, alpha ) frame['avg_down'] = ema_rsi( frame['down_move'].values, alpha ) else: alpha = 1/days frame['avg_up'] = ema_rsi( frame['up_move'].values, alpha ) frame['avg_down'] = ema_rsi( frame['down_move'].values, alpha ) frame['rs'] = frame['avg_up']/frame['avg_down'] frame['rsi'] = 100 - 100/(1+frame['rs']) frame['rsi'].iloc[:days] = np.nan return frame['rsi'] class Stock_history(): def __init__(self, df, rsi_days=14, calc_type='ws'): """ Calculates rsi for this particular stock """ self.df = df self.tick = df.columns[0] self.df['rsi'] = rsi(self.df[[self.tick]], days=rsi_days, method=calc_type) def get_signals(self, **key_args): """ calculates signals for the stock with given keywords and then calculates geometric mean of trades' return fwd - holding period after purchasing a stock low - minimum period after previous low to find a new low in days high - maximum period after previous low to find a new low in days step - how often to look for lower low between 'low' and 'high' rsi1 - maximum RSI value during the first low rsi2 - maximum RSI value during the second low rsi_chng - minimum rsi increase from 
first to second low Attempts to find an increase in RSI while the price goes further down price_chng - maximum price change from first to second low Attempts to find a further decrease in price while the RSI goes up threshold - minimum number of signals required to consider this a true buy recommendation Maximum number of signals is int((high-low)/step), so there can be more than 1 signal in a single day. This threshold may decrease the algorithm's false positives. """ #### uses default kwargs if not all are provided kwargs = { 'fwd': 5, 'low': 5, 'high': 21, 'step': 5, 'rsi1': 30, 'rsi2': 35, 'rsi_chng': 0, 'price_chng': -0.01, 'threshold': 2 } for k, v in key_args.items(): if k in kwargs.keys(): kwargs[k] = v self.df['signal'] = 0 self.df['fwd'] = self.df[self.tick].shift(-kwargs['fwd'])/self.df[self.tick]-1 for x in range(kwargs['low'], kwargs['high'], kwargs['step']): self.df['d'+str(x)] = self.df[self.tick]/self.df[self.tick].shift(x)-1 self.df['r'+str(x)] = self.df['rsi']-self.df['rsi'].shift(x) self.df['r_'+str(x)] = self.df['rsi'].shift(x) self.df['signal'] = np.where( (self.df.rsi <= kwargs['rsi2']) & (self.df['r_'+str(x)] <= kwargs['rsi1']) & (self.df['r'+str(x)] > kwargs['rsi_chng']) & (self.df['d'+str(x)] < kwargs['price_chng']), self.df['signal'] + 1, self.df['signal']) self.geoslice = self.df[self.df.signal >= kwargs['threshold']][[self.tick, 'rsi', 'signal', 'fwd']] self.geoslice.columns = ['tick', 'rsi', 'signal', 'fwd'] self.geoslice['tick'] = self.tick self.geomean = np.prod(self.df[self.df.signal >= kwargs['threshold']]['fwd']+1)-1 self.count = self.df[self.df.signal >= kwargs['threshold']].shape[0] #### Checking if stuff works: ticker = 'aa' itick = Stock_history(train_df[ticker]) itick.get_signals() print(itick.geomean, itick.count) itick.geoslice ``` The following is a Bayesian optimization of the parameters listed in the 'get_signals' method of the Stock_history class. It attempts to find the best parameters within some bounds.
``` search_space = [ Integer(2, 15, name='fwd'), Integer(3, 12, name='low'), Integer(12, 50, name='high'), Integer(1, 4, name='step'), Integer(30, 40, name='rsi1'), Integer(20, 30, name='rsi2'), Real(0, 2, prior='uniform', name='rsi_chng'), Real(-0.05, 0, prior='uniform', name='price_chng'), Integer(1, 4, name='threshold'), ] default_params = { 'fwd': 5, 'low': 5, 'high': 21, 'step': 5, 'rsi1': 30, 'rsi2': 35, 'rsi_chng': 0, 'price_chng': -0.01, 'threshold': 2 } @use_named_args(search_space) def assess_kwargs(**kwargs): """ Assesses parameters and returns a value based on the geometric mean of returns divided by the standard deviation of returns to penalize for volatility. Low number of predictions (i.e. less than 500) can be additionally penalized. """ print(kwargs) stocks = {} slices = {} for k, v in tqdm_notebook(train_df.items()): stocks[k] = Stock_history(v) stocks[k].get_signals(**kwargs) slices[k] = stocks[k].geoslice geoslice = pd.concat(slices.values()).sort_index() geoslice['date_'] = geoslice.index geoslice = geoslice.sort_values(by=['date', 'signal']).drop_duplicates(subset=['date_']) #Full penalty for extremely low number of predictions if geoslice.shape[0] < 50: return 0 #Proportionally penalizes for low number of predictions elif geoslice.shape[0] < 500: return -geoslice['fwd'].mean()/geoslice['fwd'].std() * geoslice.shape[0]/500 else: return -geoslice['fwd'].mean()/geoslice['fwd'].std() """ Bayesian optimization process to find optimal parameters. Saves them to disk afterwards for future use. """ result = gp_minimize(assess_kwargs, search_space, random_state=17, n_calls=50, verbose=True, n_initial_points=20) opt_params = dict(zip(default_params.keys(), result.x)) with open('mid_rsi_params.dict', 'wb') as config_dictionary_file: pickle.dump(opt_params, config_dictionary_file) """ Testing parameters on test data. 
""" stocks = {} slices = {} for k, v in tqdm_notebook(test_df.items()): stocks[k] = Stock_history(v) stocks[k].get_signals(**opt_params) slices[k] = stocks[k].geoslice geoslice = pd.concat(slices.values()).sort_index() geoslice['date_'] = geoslice.index geoslice = geoslice.sort_values(by=['date', 'signal']).drop_duplicates(subset=['date_']) print(geoslice['fwd'].mean(), geoslice['fwd'].median(), geoslice['fwd'].std()) ```
``` ####### ---- This notebook is for playing around with and understanding the inner workings of what is going on in the code. It is not abstracted. ---- ####### import torch import torchvision from torchvision import transforms, datasets import torch.nn as nn import torch.nn.functional as F # set the device, uses GPU if it's available otherwise CPU. if torch.cuda.is_available(): dev = torch.device("cuda:0") else: dev = torch.device("cpu") print(f"Using {dev}") # ---- data loading and pre-processing ---- # import utils.polar_pla as pla import numpy as np import torch from torch.autograd import Variable torch.set_default_tensor_type('torch.cuda.FloatTensor') # read in time series into temporary list series = [] f = open('DataSets/CTtemp.csv', 'r') for line in f: series.append(float(line)) # median filter the time series filtered_series = pla.median_filter(series, 5) # run bottom up piecewise linear approximation on that list and store processed values data, max_len = pla.sliding_window_pla(filtered_series,6000) pla.display_trends(data, 112) # this will show a messy reconstruction of the data as if it were interpolated # set the sequence length (the number of trends we look at to predict the next) and the train to test ratio seq_length = 8 train_proportion = 0.7 # segment the data into input output pairs that we will use to train the model. The way we do this depends on the model type. 
def sliding_window_MLP(data): inputs = [] outputs = [] for i in range(0, len(data)-seq_length*2, 2): inputs.append(data[i:(i+seq_length*2)]) # the next n are the input outputs.append(data[i+seq_length*2:i+seq_length*2+1]) # and the one after that is the output return Variable(torch.cuda.FloatTensor(np.array(inputs)).to(dev)), Variable(torch.cuda.FloatTensor(np.array(outputs)).to(dev)) def sliding_window_CNN(data): inputs = [] outputs = [] for i in range(0, len(data)-seq_length*2, 2): temp = data[i:(i+seq_length*2)] new = [] for x in range(0,len(temp),2): new.append([temp[x],temp[x+1]]) inputs.append(new) outputs.append(data[i+seq_length*2:i+seq_length*2+1]) # and the one after that is the output return Variable(torch.cuda.FloatTensor(np.array(inputs)).to(dev)), Variable(torch.cuda.FloatTensor(np.array(outputs)).to(dev)) def sliding_window_RNN(data): inputs = [] outputs = [] for i in range(0, len(data)-seq_length*2, 2): inputs.append(np.array(data[i:(i+seq_length*2)]).reshape(int(seq_length*2/2),2)) outputs.append(np.array(data[i+seq_length*2+1:i+seq_length*2+2])) return Variable(torch.cuda.FloatTensor(inputs).to(dev)), Variable(torch.cuda.FloatTensor(outputs).to(dev)) # convert data to tensor, and apply dataloader total_data_input, total_data_output = sliding_window_MLP(data) train_size = int(len(total_data_input)*train_proportion) training_data_input = torch.narrow(total_data_input, 0, 0, train_size) training_data_output = torch.narrow(total_data_output, 0, 0, train_size) validation_index = int((len(total_data_input) - train_size)*0.5) #Calculates how many data points in the validation set testing_index = len(total_data_input) - train_size - validation_index print(testing_index) validation_data_input = torch.narrow(total_data_input, 0, train_size, validation_index).to(dev) validation_data_output = torch.narrow(total_data_output, 0, train_size, validation_index).to(dev) testing_data_input = torch.narrow(total_data_input, 0, train_size+validation_index, 
testing_index).to(dev) testing_data_output = torch.narrow(total_data_output, 0, train_size+validation_index, testing_index).to(dev) print(testing_data_output) train = torch.utils.data.TensorDataset(training_data_input, training_data_output) validate = torch.utils.data.TensorDataset(validation_data_input, validation_data_output) test = torch.utils.data.TensorDataset(testing_data_input, testing_data_output) trainset = torch.utils.data.DataLoader(train, batch_size=128, shuffle=False) validateset = torch.utils.data.DataLoader(validate, batch_size=128, shuffle=False) testset = torch.utils.data.DataLoader(test, batch_size=128, shuffle=False) from models import MLP, TCN, CNN, RNN, LSTM, BiLSTM # copy the model you'd like to use and paste it in the next block #model = MLP(seq_length*2, 128, 0.1).to(dev) #model = CNN(seq_length, 64, 1,0.3,2).to(dev) #model = TCN(seq_length,1, [64]*3,4,0.2).to(dev) #model = LSTM(1,2,64,1,0.5) #model = RNN(1,2,64,1,0.3) testing_file = open('test.txt', 'w') # output file import math import statistics res = [] total_dir_acc = 0 for i in range(10): model = MLP(seq_length*2, 128, 1, 0.1).to(dev) # paste model here epochs = 1000 learning_rate = 0.001 import torch.optim as optim train_loss = [] validation_loss = [] model.train() epoch_total_trainloss = 0 # the total loss for each epoch, used for plotting min_val_loss_epoch = 0 # the epoch with the lowest validation loss min_val_loss = 9999999 # the lowest validation loss optimizer = optim.Adam(model.parameters(), lr=learning_rate) validation_direction_accuracy = [] for epoch in range(epochs+1): epoch_total_trainloss = 0 # reset this for the validation epoch''' model.train() for data in trainset: # for each batch features, labels = data # split the batches up into their features and labels model.zero_grad() output = model(features) # get a prediction from the model #print(output.shape) loss = F.mse_loss(output, labels) # calculate the loss of our prediction loss.backward() # backpropogate the loss 
optimizer.step() # optimize weights epoch_total_trainloss += loss.item()/len(trainset) torch.cuda.synchronize() train_loss.append(epoch_total_trainloss) # add this epoch's loss in order to plot it later epoch_total_trainloss = 0 # reset this for the validation epoch # now we'll calculate the direction accuracy for the training and validation sets correct=0 total_points = 0 model.eval() for data in validateset: inputs, labels = data output = model(inputs) total_points += len(output) for i in range(len(output)): pred = output[i] actual = labels[i] #if pred < 0 and actual < 0 or pred > 0 and actual > 0: #or (pred-actual)<0.01: # correct += 1 #print(output[0],labels[0]) loss = F.mse_loss(output, labels) # calculate the loss of our prediction epoch_total_trainloss += loss.item()/len(validateset) torch.cuda.synchronize() if epoch_total_trainloss < min_val_loss: torch.save(model.state_dict(), 'temp.pt') min_val_loss = epoch_total_trainloss min_val_loss_epoch = epoch validation_direction_accuracy.append(correct/(total_points)) validation_loss.append(epoch_total_trainloss) # we'll need to plot validation loss too #import matplotlib.pyplot as plt #plt.plot(train_loss) #plt.plot(validation_loss) #plt.show() #plt.plot(validation_direction_accuracy) #plt.show() #print(f"Lowest validation loss: {min_val_loss} at epoch {min_val_loss_epoch}") model.load_state_dict(torch.load('temp.pt')) model.eval() correct=0 output_file = open("utils/angles.txt", "w") for data in trainset: inputs, labels = data output = model(inputs) for i in range(len(output)): pred = output[i] #output_file.write(str(pred.item()*90)+"\n") total_loss = 0 for data in validateset: inputs, labels = data output = model(inputs) #for i in range(len(output)): #pred = output[i] #actual = labels[i] #if pred < 0 and actual < 0 or pred > 0 and actual > 0: # correct += 1 model.zero_grad() total_loss += F.mse_loss(output, labels).item()/len(validateset) #print(f'Directional Accuracy: {correct*100/len(test)} MSE on validate 
set: {total_loss}') total_loss = 0 total_loss_slope = 0 total_loss_length = 0 for data in testset: inputs, labels = data output = model(inputs) ''' if the model is a dual prediction you'll need to use this code instead output_slopes = [] for out in labels: output_slopes.append(np.array([out[0]])) output_slopes = Variable(torch.FloatTensor(output_slopes)) output_lengths = [] for out in labels: output_lengths.append(np.array([out[1]])) output_lengths = Variable(torch.FloatTensor(output_lengths)) pred_slopes = [] for out in output: pred_slopes.append(np.array([out[0]])) pred_slopes = Variable(torch.FloatTensor(pred_slopes)) pred_lengths = [] for out in output: pred_lengths.append(np.array([out[1]])) pred_lengths = Variable(torch.FloatTensor(pred_lengths)) ''' for i in range(len(output)): pred = output[i]#[0] actual = labels[i]#[0] if pred < 0 and actual < 0 or pred > 0 and actual > 0: #print(str(pred)+" vs "+str(actual)) correct += 1 output_file.write(str(pred.item()*90)+"\n") model.zero_grad() total_loss += F.mse_loss(output, labels).item()/len(testset) #total_loss_slope += F.mse_loss(output_slopes,pred_slopes).item()/len(testset) #model.zero_grad() #total_loss_length += F.mse_loss(output_lengths, pred_lengths).item()/len(testset) total_dir_acc += correct*100/len(test) #print(f'Directional Accuracy: {correct*100/len(test)} MSE test: {total_loss}, RMSE test: {math.sqrt(total_loss)}\n') res.append(math.sqrt(total_loss)) #print(f'{math.sqrt(total_loss_slope)}, {math.sqrt(total_loss_length)}') #testing_file.write(f'{math.sqrt(total_loss_slope)},{math.sqrt(total_loss_length)}\n') print(f'{math.sqrt(total_loss)}') testing_file.write(f'{math.sqrt(total_loss)}\n') print(f'μ = {round(sum(res) / len(res ),3)} | σ = {round(statistics.pstdev(res),3)} | dir = {total_dir_acc/10}') testing_file.close() # test import math import matplotlib.pyplot as plt plt.plot(train_loss) plt.plot(validation_loss) plt.show() # test model.load_state_dict(torch.load('temp.pt')) model.eval() 
correct=0 output_file = open("utils/angles.txt", "w") for data in trainset: inputs, labels = data output = model(inputs) for i in range(len(output)): pred = output[i] output_file.write(str(pred.item()*90)+"\n") total_loss = 0 for data in validateset: inputs, labels = data output = model(inputs) for i in range(len(output)): pred = output[i] actual = labels[i] if pred < 0 and actual < 0 or pred > 0 and actual > 0: correct += 1 model.zero_grad() total_loss += F.mse_loss(output, labels).item()/len(validateset) print(f'Directional Accuracy: {correct*100/len(test)} MSE on validate set: {total_loss}') correct = 0 total_loss = 0 for data in testset: inputs, labels = data output = model(inputs) for i in range(len(output)): pred = output[i] actual = labels[i] if pred < 0 and actual < 0 or pred > 0 and actual > 0: #print(str(pred)+" vs "+str(actual)) correct += 1 output_file.write(str(pred.item()*90)+"\n") model.zero_grad() total_loss += F.mse_loss(output, labels).item()/len(testset) print(f'Directional Accuracy: {correct*100/len(test)} MSE test: {total_loss}, RMSE test: {math.sqrt(total_loss)}') ```
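The three `sliding_window_*` helpers above share one pattern: slide a window of `seq_length*2` values over the flattened trend sequence (stepping by 2, one (slope, duration) pair at a time) and pair each window with the value that follows it. A framework-free sketch of that pattern on toy data (plain lists, no PyTorch):

```python
def sliding_window(data, seq_length):
    """Pair each window of seq_length*2 values with the value right after it."""
    inputs, outputs = [], []
    # Step by 2 because the flat sequence alternates (slope, duration) pairs.
    for i in range(0, len(data) - seq_length * 2, 2):
        inputs.append(data[i:i + seq_length * 2])
        outputs.append(data[i + seq_length * 2])
    return inputs, outputs

data = list(range(10))     # stand-in for a flattened trend sequence
X, y = sliding_window(data, seq_length=2)
print(X[0], y[0])          # [0, 1, 2, 3] 4
print(len(X))              # 3 overlapping windows
```

The MLP variant keeps each window flat, the CNN variant reshapes it into (slope, duration) pairs, and the RNN variant does the same reshape as a time axis — but the indexing logic is identical to this sketch.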
### Python Training - Lesson 3 - loops, flow control and exceptions Now that we have seen some basics in action, let's summarize what we should already know by this point: - types and their methods - classes and objects - simple condition checks with "if" - using imported libraries ## Theory level 2 In this lesson, we will do some more elaborate exercises that need more than simple conditions and looping over a collection. We will use the following constructs: #### "for" loop #### "while" loop #### "break", "continue", "raise Exception", "return" keywords to navigate through an algorithm #### condition checking using "if" #### enumerate, zip #### unpacking #### opening files using "with" #### lambda functions #### map, filter, reduce ## Example interview task - "FizzBuzz" ### Task description Count from 0 to 100. Every three repetitions, print "Fizz". Every five repetitions, print "Buzz". When both of them should be printed, print "FizzBuzz". ### Breaking it down Every 3 loop passes - print "Fizz" Every 5 loop passes - print "Buzz" Every 15 loop passes - print "FizzBuzz" #### How to count from 0 to 100? We have two basic loops. ##### - For It will do exactly X repetitions, no more, no less; it will only do a different amount when an error occurs or the loop is exited. ``` for i in range(0,100): pass ``` ##### - While Will run as long as the condition is satisfied. Will stop on an exception, or when the loop is exited. ``` i = 0 while i < 100: i = i + 1 ``` #### How to do something "every N repetitions" We can do it the lame way, with a counter. And we can do it the smart way, with division and the "remainder of dividing" (modulo). ``` # The lame way. i = 0 counter = 0 while i < 10: counter += 1 if counter == 3: print("We did something every 3-rd time") counter = 0 i += 1 # The smart way. for i in range(0,10): if i % 3 == 0: print("We did something every 3-rd time") ``` It's not exactly right, is it? Why are there 4 repetitions? It's because we start from 0.
0 divided by any integer > 0 always gives a remainder of 0. ``` 0 % 3 # The fixed smart way. for i in range(1,11): if i % 3 == 0: print("We did something every 3-rd time") # The FizzBuzz for i in range(1,101): if i % 15 == 0: print("FizzBuzz") elif i % 5 == 0: print("Buzz") elif i % 3 == 0: print("Fizz") ``` ## Flow control Sometimes, we do not want to do all loop iterations. Sometimes, we want to: ### continue - Skip the rest of this loop iteration, from this moment, and go to the next loop iteration ``` for i in range(0,4): if i == 2: continue print(i) ``` ### break - Skip the rest of this loop iteration, from this moment, and do not do any more loop iterations ``` for i in range(0,100): if i == 2: break print(i) ``` ### return - skip the rest of this loop iteration, and exit this scope (for example, a method), not doing any more iterations ``` def print_a_lot_of_numbers_but_exit_on(number): N = 1000 for i in range(0,N): if i == number: return i print(i) # Notice how we return the last number. Otherwise, on 1000, it would return None automatically - a Python feature. return N print_a_lot_of_numbers_but_exit_on(1) print_a_lot_of_numbers_but_exit_on(1000) ``` ## Exceptions in flow control Before we go to a more general approach, I will show you what role exceptions play in flow control. ### raise Exception - skip the rest of this loop iteration, and exit this program! ``` for i in range(0,100): if i == 5: raise Exception("I just hate the number 5. I'm out of here.") ``` ### catch Exception - ignore it, and go on as if nothing happened ``` for i in range(0,10): try: if i == 5: raise Exception("I hate fives.") except Exception as error_message: print("Stop hate! Just go on. Details: " + str(error_message)) print(i) ``` ## Exceptions general purpose and definition An exception is an event, which occurs during the execution of a program, that disrupts the normal flow of the program's instructions.
Anomalous or exceptional conditions requiring special processing – often changing the normal flow of program execution. ### Exception handling - the process of responding to the occurrence of exceptions ### Handled exception - an exception that arose during the normal flow of the program, but was caught - the program went on ### Unhandled exception - an exception that arose during the normal flow of the program, but was uncaught - it crashed the whole process ### Types of exceptions Exceptions come in hundreds of flavours, and you can also write your own kinds. Exceptions' main role is to signal that a terrible or unexpected situation happened, and there is just no way of going on with the program flow. Most popular Python exception types: ``` # IndexError a = [1,2,3] print(a[4]) # KeyError a = {"something": 1} a["something_else"] # ModuleNotFoundError import whatever # NameError print(for_sure_I_dont_exist) # TypeError int([a,a,a]) # ValueError int("a") ``` In most cases, you will see exceptions that result from mistakes in code, or unexpected behavior of external files, services, and all kinds of funny situations. It is not a rule of thumb, though. ## Exception handling Why do we catch exceptions? So that the program can continue. We can catch many exception types in one try...except statement, to behave differently. Observe: ``` my_list = [1,2,"a",3,14,[1,3]] for item in my_list: try: converted = int(item) print(converted) print(my_list[converted]) except TypeError: print("Woops! Next time give my program the proper type!") except ValueError: print("Woops! Next time give me a proper value! I got: " + item) except Exception as e: print("Something else went wrong." "Luckily I am catching all possible exceptions with this clause."
"Here are the details of what actually happened: " + str(e) + " for item= " + str(item)) print("All those errors, but here we are, successfully ending our program as expected, in controlled fashion") ``` ## Example covering all those functionalities This example will show you how to control your program, that behaves accordingly to user input. You have no idea what the users will input, so you need to prepare for the worst. Simple idea is to print out characters from the ASCII table, corresponding to numbers - as much as user desires. Requirements: - skip words for inputs: 30, 60 - if the counter reaches 3000, stop printing new words - print every 30th word - skip every 150th letter - take iterations amount from keyboard user input - program raises Exception for values over 9000 ``` # Handle various user inputs. main_counter = 0 while True: iterations = input() try: iterations = int(iterations) break except ValueError: print("Please provide a number") iterations = 0 if iterations > 9000: raise Exception("This value if over 9000! This program cannot handle such input. Exiting") # Show the words. while main_counter < iterations: if main_counter == 3000: break if main_counter in [30,60]: # Why is this here when it is also at the end? main_counter += 1 continue if main_counter % 30 == 0: word = "" for small_counter in range(200, 200 + main_counter): if small_counter % 150 == 0: continue word += chr(small_counter) print(word) main_counter += 1 print(chr(17110)) ``` ## Example interview task - folder crawler Task is to create a program, that will print out contents of a folder, recursively down the folder structure. 
### Requirements - use the Python module "os" - go down N levels of folders - N is an input parameter - must accept absolute paths ``` import os def print_current_path(path, level): print("level: {0}, path: {1}".format(level, path)) def folder_crawler(starting_path, level_limit, current_level=0): if current_level > level_limit: return print_current_path(starting_path, current_level) try: contents = os.listdir(starting_path) for item in contents: item_path = os.path.join(starting_path, item) print_current_path(item_path, current_level) if os.path.isdir(item_path): folder_crawler(item_path, level_limit, current_level + 1) except PermissionError: print("Permission denied. Skipping") folder_crawler(r"C:\Users", 3) ``` ### Extend the example by yourself - if an item is a text file, print out the first line of this file to the screen. ## folder_crawler evolves into folder_spy! ``` a = ["a", "d", "b", "c"] for letter in a: if letter == "d": print("d was the number: " + str(a.index(letter))) enumerate(a) for index, letter in enumerate(a): if letter == "d": print(str(index)) a = ["a", "b", "c"] b = [3342,4554,334] dict(zip(a,b)) dict(enumerate(a)) def bla(): return [1,2,3,4] x, y, z, w = bla() x x, y = (1,3) print(y) a = 7 b = 4 c = a a = b b = c a, b = b, (a+12) range(0,88) a = [1,2,3,4,5,6,7,8] sum(a) def add_number(number): return number + 30 new_list = map(add_number, a) print(new_list) for i in new_list: print(i) def some_filter(x): if x > 4: return True return False new_list = filter(some_filter, a) print(new_list) for i in new_list: print(i) new_list = filter((lambda x: x > 4), a) another_list = map((lambda x: x + 30), a) from functools import reduce # in Python 3, reduce must be imported from functools reduced_list = reduce((lambda x,a: a + x), a) sum = 0 # note: this shadows the builtin sum() used above [x+1 for x in [1,2,1,1] if x < 2] sum ```
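A cleaned-up version of the `map`/`filter`/`reduce` experiments above, as a self-contained sketch; note that in Python 3 `reduce` lives in `functools` rather than being a builtin, and `map`/`filter` return lazy iterators:

```python
from functools import reduce  # in Python 3, reduce lives in functools

numbers = [1, 2, 3, 4, 5, 6, 7, 8]

# map/filter return lazy iterators in Python 3 - wrap them in list() to look inside.
shifted = list(map(lambda x: x + 30, numbers))
big_only = list(filter(lambda x: x > 4, numbers))
total = reduce(lambda acc, x: acc + x, numbers)

# The same transformations as comprehensions - often considered more Pythonic.
shifted_comp = [x + 30 for x in numbers]
big_comp = [x for x in numbers if x > 4]

print(shifted, big_only, total)
```

Comprehensions also avoid the iterator-exhaustion surprise shown earlier, where looping over a `map` object a second time yields nothing.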
# Overview This notebook introduces you to MONAI's transformation module for 3D images. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/3d_image_transforms.ipynb) ## Setup environment ``` %pip install -q "monai[nibabel]" %pip install -q matplotlib %matplotlib inline ``` ## Setup imports ``` # Copyright 2020 MONAI Consortium # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import glob import os import shutil import tempfile import matplotlib.pyplot as plt import numpy as np from monai.apps import download_and_extract from monai.config import print_config from monai.transforms import ( AddChanneld, LoadNifti, LoadNiftid, Orientationd, Rand3DElasticd, RandAffined, Spacingd, ) print_config() ``` ## Setup data directory You can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified, a temporary directory will be used. ``` directory = os.environ.get("MONAI_DATA_DIRECTORY") root_dir = tempfile.mkdtemp() if directory is None else directory print(f"root dir is: {root_dir}") ``` ## Download dataset Downloads and extracts the dataset. The dataset comes from http://medicaldecathlon.com/.
``` resource = "https://drive.google.com/uc?id=1jzeNU1EKnK81PyTsrx0ujfNl-t0Jo8uE" md5 = "410d4a301da4e5b2f6f86ec3ddba524e" compressed_file = os.path.join(root_dir, "Task09_Spleen.tar") data_dir = os.path.join(root_dir, "Task09_Spleen") if not os.path.exists(data_dir): download_and_extract(resource, compressed_file, root_dir, md5) ``` ## Set MSD Spleen dataset path The following groups images and labels from `Task09_Spleen/imagesTr` and `Task09_Spleen/labelsTr` into pairs. ``` train_images = sorted(glob.glob(os.path.join(data_dir, "imagesTr", "*.nii.gz"))) train_labels = sorted(glob.glob(os.path.join(data_dir, "labelsTr", "*.nii.gz"))) data_dicts = [ {"image": image_name, "label": label_name} for image_name, label_name in zip(train_images, train_labels) ] train_data_dicts, val_data_dicts = data_dicts[:-9], data_dicts[-9:] ``` The image file names are organised into a list of dictionaries. ``` train_data_dicts[0] ``` The list of data dictionaries, `train_data_dicts`, could be used by PyTorch's data loader. For example, ```python from torch.utils.data import DataLoader data_loader = DataLoader(train_data_dicts) for training_sample in data_loader: # run the deep learning training with training_sample ``` The rest of this tutorial presents a set of "transforms" converting `train_data_dict` into data arrays that will eventually be consumed by the deep learning models. ## Load the NIfTI files One design choice of MONAI is that it provides not only the high-level workflow components, but also relatively lower level APIs in their minimal functioning form. For example, a `LoadNifti` class is a simple callable wrapper of the underlying `Nibabel` image loader. After constructing the loader with a few necessary system parameters, calling the loader instance with a `NIfTI` filename will return the image data arrays, as well as the metadata -- such as affine information and voxel sizes. 
``` loader = LoadNifti(dtype=np.float32) image, metadata = loader(train_data_dicts[0]["image"]) print(f"input: {train_data_dicts[0]['image']}") print(f"image shape: {image.shape}") print(f"image affine:\n{metadata['affine']}") print(f"image pixdim:\n{metadata['pixdim']}") ``` Oftentimes, we want to load a group of inputs as a training sample. For example, training a supervised image segmentation network requires a pair of image and label as a training sample. To ensure that a group of inputs is being preprocessed consistently, MONAI also provides dictionary-based interfaces for the minimal functioning transforms. `LoadNiftid` is the corresponding dict-based version of `LoadNifti`: ``` loader = LoadNiftid(keys=("image", "label")) data_dict = loader(train_data_dicts[0]) print(f"input: {train_data_dicts[0]}") print(f"image shape: {data_dict['image'].shape}") print(f"label shape: {data_dict['label'].shape}") print(f"image pixdim:\n{data_dict['image_meta_dict']['pixdim']}") image, label = data_dict["image"], data_dict["label"] plt.figure("visualize", (8, 4)) plt.subplot(1, 2, 1) plt.title("image") plt.imshow(image[:, :, 30], cmap="gray") plt.subplot(1, 2, 2) plt.title("label") plt.imshow(label[:, :, 30]) plt.show() ``` ## Add the channel dimension Most of MONAI's image transformations assume that the input data has the shape: `[num_channels, spatial_dim_1, spatial_dim_2, ... ,spatial_dim_n]` so that they can be interpreted consistently (the "channel-first" convention commonly used in PyTorch). Here the input image has shape `(512, 512, 55)`, which is not an acceptable shape (the channel dimension is missing), so we create a transform that is called to update the shape: ``` add_channel = AddChanneld(keys=["image", "label"]) datac_dict = add_channel(data_dict) print(f"image shape: {datac_dict['image'].shape}") ``` Now we are ready to do some intensity and spatial transforms. ## Resample to a consistent voxel size The input volumes might have different voxel sizes.
The following transform is created to normalise the volumes to have a (1.5, 1.5, 5.) millimetre voxel size. The transform is set to read the original voxel size information from `data_dict['image.affine']`, which comes from the corresponding NIfTI file, loaded earlier by `LoadNiftid`. ``` spacing = Spacingd(keys=["image", "label"], pixdim=(1.5, 1.5, 5.0), mode=("bilinear", "nearest")) data_dict = spacing(datac_dict) print(f"image shape: {data_dict['image'].shape}") print(f"label shape: {data_dict['label'].shape}") print(f"image affine after Spacing:\n{data_dict['image_meta_dict']['affine']}") print(f"label affine after Spacing:\n{data_dict['label_meta_dict']['affine']}") ``` To track the spacing changes, the data_dict was updated by `Spacingd`: * An `image.original_affine` key is added to the `data_dict`, which logs the original affine. * The `image.affine` key is updated to hold the current affine. ``` image, label = data_dict["image"], data_dict["label"] plt.figure("visualise", (8, 4)) plt.subplot(1, 2, 1) plt.title("image") plt.imshow(image[0, :, :, 30], cmap="gray") plt.subplot(1, 2, 2) plt.title("label") plt.imshow(label[0, :, :, 30]) plt.show() ``` ## Reorientation to designated axes codes Sometimes it is nice to have all the input volumes in a consistent axes orientation. The default axis labels are Left (L), Right (R), Posterior (P), Anterior (A), Inferior (I), Superior (S).
The following transform is created to reorientate the volumes to have 'Posterior, Left, Inferior' (PLI) orientation: ``` orientation = Orientationd(keys=["image", "label"], axcodes="PLI") data_dict = orientation(data_dict) print(f"image shape: {data_dict['image'].shape}") print(f"label shape: {data_dict['label'].shape}") print(f"image affine after Spacing:\n{data_dict['image_meta_dict']['affine']}") print(f"label affine after Spacing:\n{data_dict['label_meta_dict']['affine']}") image, label = data_dict["image"], data_dict["label"] plt.figure("visualise", (8, 4)) plt.subplot(1, 2, 1) plt.title("image") plt.imshow(image[0, :, :, 30], cmap="gray") plt.subplot(1, 2, 2) plt.title("label") plt.imshow(label[0, :, :, 30]) plt.show() ``` ## Random affine transformation The following affine transformation is defined to output a (300, 300, 50) image patch. The patch location is randomly chosen in a range of (-40, 40), (-40, 40), (-2, 2) in x, y, and z axes respectively. The translation is relative to the image centre. The 3D rotation angle is randomly chosen from (-45, 45) degrees around the z axis, and 5 degrees around x and y axes. The random scaling factor is randomly chosen from (1.0 - 0.15, 1.0 + 0.15) along each axis. ``` rand_affine = RandAffined( keys=["image", "label"], mode=("bilinear", "nearest"), prob=1.0, spatial_size=(300, 300, 50), translate_range=(40, 40, 2), rotate_range=(np.pi / 36, np.pi / 36, np.pi / 4), scale_range=(0.15, 0.15, 0.15), padding_mode="border", ) ``` You can rerun this cell to generate a different randomised version of the original image. 
``` affined_data_dict = rand_affine(data_dict) print(f"image shape: {affined_data_dict['image'].shape}") image, label = affined_data_dict["image"][0], affined_data_dict["label"][0] plt.figure("visualise", (8, 4)) plt.subplot(1, 2, 1) plt.title("image") plt.imshow(image[:, :, 15], cmap="gray") plt.subplot(1, 2, 2) plt.title("label") plt.imshow(label[:, :, 15]) plt.show() ``` ## Random elastic deformation Similarly, the following elastic deformation is defined to output a (300, 300, 10) image patch. The image is resampled from a combination of affine transformations and elastic deformations. `sigma_range` controls the smoothness of the deformation (larger than 15 could be slow on CPU) `magnitude_range` controls the amplitude of the deformation (large than 500, the image becomes unrealistic). ``` rand_elastic = Rand3DElasticd( keys=["image", "label"], mode=("bilinear", "nearest"), prob=1.0, sigma_range=(5, 8), magnitude_range=(100, 200), spatial_size=(300, 300, 10), translate_range=(50, 50, 2), rotate_range=(np.pi / 36, np.pi / 36, np.pi), scale_range=(0.15, 0.15, 0.15), padding_mode="border", ) ``` You can rerun this cell to generate a different randomised version of the original image. ``` deformed_data_dict = rand_elastic(data_dict) print(f"image shape: {deformed_data_dict['image'].shape}") image, label = deformed_data_dict["image"][0], deformed_data_dict["label"][0] plt.figure("visualise", (8, 4)) plt.subplot(1, 2, 1) plt.title("image") plt.imshow(image[:, :, 5], cmap="gray") plt.subplot(1, 2, 2) plt.title("label") plt.imshow(label[:, :, 5]) plt.show() ``` ## Cleanup data directory Remove directory if a temporary was used. ``` if directory is None: shutil.rmtree(root_dir) ```
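The dictionary-based transforms used throughout this tutorial share one contract: each callable takes a data dictionary and returns an updated one, which is what lets MONAI chain them with `Compose`. A minimal pure-Python sketch of this chaining pattern, with hypothetical stand-in transforms (no MONAI dependency):

```python
# Hypothetical stand-ins for MONAI's dictionary transforms: each callable
# takes a data dict and returns an updated dict, so they chain naturally.
def add_channel(data):
    out = dict(data)
    out["image"] = [out["image"]]  # wrap the values in a single channel
    return out

def scale_intensity(data):
    out = dict(data)
    out["image"] = [[v / 255.0 for v in ch] for ch in out["image"]]
    return out

def compose(*transforms):
    """Apply the transforms left to right, like monai.transforms.Compose."""
    def pipeline(data):
        for t in transforms:
            data = t(data)
        return data
    return pipeline

pipeline = compose(add_channel, scale_intensity)
result = pipeline({"image": [0, 51, 255]})
print(result["image"])  # [[0.0, 0.2, 1.0]]
```

In the real tutorial the equivalent chain would be `Compose([LoadNiftid(...), AddChanneld(...), Spacingd(...), Orientationd(...), RandAffined(...)])`.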
``` import pandas as pd import logging import glob from sklearn.model_selection import train_test_split pd.set_option('display.max_colwidth', 500) logger = logging.getLogger() logger.setLevel(logging.WARNING) import tensorflow as tf from nltk.corpus import stopwords #provides list of english stopwords stop = stopwords.words('english') ``` # Process Data ``` train, test = train_test_split(pd.read_csv('ita.txt', sep='\t',header = None), test_size=.10) #print out stats about shape of data print(f'Train: {train.shape[0]:,} rows {train.shape[1]:,} columns') print(f'Test: {test.shape[0]:,} rows {test.shape[1]:,} columns') train.columns = ['english','italian'] # preview data train.head(3) train['english_lower'] = train['english'].str.lower() train['english_no_punctuation'] = train['english_lower'].str.replace('[^\w\s]','') #train['english_no_stopwords'] = train['english_no_punctuation'].apply(lambda x: ' '.join([word for word in x.split() if word not in (stop)])) #train["english_no_stopwords"] = train["english_no_stopwords"].fillna("fillna") #train["english_no_stopwords"] = train["english_no_stopwords"] train['italian_lower'] = train["italian"].str.lower() train['italian_no_punctuation'] = '_start_' + ' ' +train['italian_lower'].str.replace('[^\w\s]','')+' '+'_end_' #VERY IMPORTANT TRICK!! #NOTICE THAT WE ADD "_start_" and "_end_" EXACTLY AT THE BEGINNING AND THE END OF EACH SENTENCE TO HAVE SOME KIND OF 'DELIMITERS' #THAT WILL TELL OUR DECODER TO START AND FINISH. BECAUSE WE DON'T HAVE GENERAL SIGNALS OF START AND FINISH. 
max_features1 = 5000 maxlen1 = 35 max_features2 = 5000 maxlen2 = 35 tok1 = tf.keras.preprocessing.text.Tokenizer(num_words=max_features1) tok1.fit_on_texts(list(train['english_no_punctuation'])) #fit to cleaned text tf_train_english =tok1.texts_to_sequences(list(train['english_no_punctuation'])) tf_train_english =tf.keras.preprocessing.sequence.pad_sequences(tf_train_english, maxlen=maxlen1) #let's execute pad step #the processing has to be done for both #two different tokenizers tok2 = tf.keras.preprocessing.text.Tokenizer(num_words=max_features2, filters = '*') tok2.fit_on_texts(list(train['italian_no_punctuation'])) #fit to cleaned text tf_train_italian = tok2.texts_to_sequences(list(train['italian_no_punctuation'])) tf_train_italian = tf.keras.preprocessing.sequence.pad_sequences(tf_train_italian, maxlen=maxlen2, padding ='post') ``` # Define Model Architecture ``` vectorized_italian = tf_train_italian # For Decoder Input, you don't need the last word as that is only for prediction # when we are training using Teacher Forcing. 
decoder_input_data = vectorized_italian[:, :-1] # Decoder Target Data Is Ahead By 1 Time Step From Decoder Input Data (Teacher Forcing) decoder_target_data = vectorized_italian[:, 1:] print(f'Shape of decoder input: {decoder_input_data.shape}') print(f'Shape of decoder target: {decoder_target_data.shape}') vectorized_english = tf_train_english # Encoder input is simply the body of the issue text encoder_input_data = vectorized_english doc_length = encoder_input_data.shape[1] print(f'Shape of encoder input: {encoder_input_data.shape}') num_encoder_tokens = len(tok1.word_index) + 1 num_decoder_tokens = len(tok1.word_index) + 1 ``` ### Define Model Architecture ``` #arbitrarly set latent dimension for embedding and hidden units latent_dim = 40 ``` Encoder Model ``` encoder_inputs = tf.keras.Input(shape=(doc_length,), name='Encoder-Input') # Word embeding for encoder (English text) x = tf.keras.layers.Embedding(num_encoder_tokens, latent_dim, name='Body-Word-Embedding', mask_zero=False)(encoder_inputs) #Batch normalization is used so that the distribution of the inputs #to a specific layer doesn't change over time x = tf.keras.layers.BatchNormalization(name='Encoder-Batchnorm-1')(x) # We do not need the `encoder_output` just the hidden state. _, state_h = tf.keras.layers.GRU(latent_dim, return_state=True, name='Encoder-Last-GRU')(x) # Encapsulate the encoder as a separate entity so we can just # encode without decoding if we want to. 
encoder_model = tf.keras.Model(inputs=encoder_inputs, outputs=state_h, name='Encoder-Model') seq2seq_encoder_out = encoder_model(encoder_inputs) ######################## #### Decoder Model #### decoder_inputs = tf.keras.Input(shape=(None,), name='Decoder-Input') # for teacher forcing # Word Embedding For Decoder (Italian text) dec_emb = tf.keras.layers.Embedding(num_decoder_tokens, latent_dim, name='Decoder-Word-Embedding', mask_zero=False)(decoder_inputs) #again batch normalization dec_bn = tf.keras.layers.BatchNormalization(name='Decoder-Batchnorm-1')(dec_emb) # Set up the decoder, using `decoder_state_input` as initial state. decoder_gru = tf.keras.layers.GRU(latent_dim, return_state=True, return_sequences=True, name='Decoder-GRU') decoder_gru_output, _ = decoder_gru(dec_bn, initial_state=seq2seq_encoder_out) x = tf.keras.layers.BatchNormalization(name='Decoder-Batchnorm-2')(decoder_gru_output) # Dense layer for prediction decoder_dense = tf.keras.layers.Dense(num_decoder_tokens, activation='softmax', name='Final-Output-Dense') decoder_outputs = decoder_dense(x) ######################## #### Seq2Seq Model #### #seq2seq_decoder_out = decoder_model([decoder_inputs, seq2seq_encoder_out]) seq2seq_Model = tf.keras.Model([encoder_inputs, decoder_inputs], decoder_outputs) seq2seq_Model.compile(optimizer=tf.keras.optimizers.Nadam(lr=0.001), loss='sparse_categorical_crossentropy') ``` ** Examine Model Architecture Summary ** ``` #from seq2seq_utils import viz_model_architecture seq2seq_Model.summary() #viz_model_architecture(seq2seq_Model) ``` # Train Model ``` import numpy as np from keras.callbacks import CSVLogger, ModelCheckpoint script_name_base = 'tutorial_seq2seq' csv_logger = CSVLogger('{:}.log'.format(script_name_base)) model_checkpoint = ModelCheckpoint('{:}.epoch{{epoch:02d}}-val{{val_loss:.5f}}.hdf5'.format(script_name_base), save_best_only=True) batch_size = 1200 epochs = 3 history = seq2seq_Model.fit([encoder_input_data, decoder_input_data], 
np.expand_dims(decoder_target_data, -1), batch_size=batch_size, epochs=epochs, validation_split=0.12, callbacks=[csv_logger, model_checkpoint]) test_text = ['I am Tom'] ``` # See Results On Holdout Set ``` #max_len_title = 30 # get the encoder's features for the decoder tok1.fit_on_texts(test_text) raw_tokenized = tok1.texts_to_sequences(test_text) raw_tokenized = tf.keras.preprocessing.sequence.pad_sequences(raw_tokenized, maxlen=maxlen1) body_encoding = encoder_model.predict(raw_tokenized) latent_dim = seq2seq_Model.get_layer('Decoder-Word-Embedding').output_shape[-1] # Reconstruct the input into the decoder decoder_inputs = seq2seq_Model.get_layer('Decoder-Input').input dec_emb = seq2seq_Model.get_layer('Decoder-Word-Embedding')(decoder_inputs) dec_bn = seq2seq_Model.get_layer('Decoder-Batchnorm-1')(dec_emb) # Instead of setting the intial state from the encoder and forgetting about it, during inference # we are not doing teacher forcing, so we will have to have a feedback loop from predictions back into # the GRU, thus we define this input layer for the state so we can add this capability gru_inference_state_input = tf.keras.Input(shape=(latent_dim,), name='hidden_state_input') # we need to reuse the weights that is why we are getting this # If you inspect the decoder GRU that we created for training, it will take as input # 2 tensors -> (1) is the embedding layer output for the teacher forcing # (which will now be the last step's prediction, and will be _start_ on the first time step) # (2) is the state, which we will initialize with the encoder on the first time step, but then # grab the state after the first prediction and feed that back in again. 
gru_out, gru_state_out = seq2seq_Model.get_layer('Decoder-GRU')([dec_bn, gru_inference_state_input]) # Reconstruct dense layers dec_bn2 = seq2seq_Model.get_layer('Decoder-Batchnorm-2')(gru_out) dense_out = seq2seq_Model.get_layer('Final-Output-Dense')(dec_bn2) decoder_model = tf.keras.Model([decoder_inputs, gru_inference_state_input], [dense_out, gru_state_out]) # we want to save the encoder's embedding before its updated by decoder # because we can use that as an embedding for other tasks. original_body_encoding = body_encoding body_encoding.shape #tok2.word_index.update({'_start_': 0}) #tok2.word_index.update({'_end_':len(tok2.word_index)+1}) state_value = np.array(tok2.word_index['_start_']).reshape(1, 1) state_value decoded_sentence = [] stop_condition = False vocabulary_inv = dict((v, k) for k, v in tok2.word_index.items()) #vocabulary_inv[0] = "<PAD/>" #vocabulary_inv[1] = "unknown" while not stop_condition: #print(1) preds, st = decoder_model.predict([state_value, body_encoding]) #preds = preds[preds>0] # We are going to ignore indices 0 (padding) and indices 1 (unknown) # Argmax will return the integer index corresponding to the # prediction + 2 b/c we chopped off first two pred_idx = np.argmax(preds[:, :, 2:]) + 2 #print(np.argmax(preds[:, :, 2:])) # retrieve word from index prediction #pred_word_str = tok.id2token[pred_idx] pred_word_str = vocabulary_inv[pred_idx] #print(pred_idx) print(pred_word_str) if pred_word_str == '_end_' or len(decoded_sentence) >= maxlen2: stop_condition = True break decoded_sentence.append(pred_word_str) # update the decoder for the next word body_encoding = st state_value = np.array(pred_idx).reshape(1, 1) #print(state_value) ```
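The index arithmetic used in this notebook (shifting the target one step ahead of the decoder input for teacher forcing, and offsetting `argmax` by 2 to skip the padding and unknown indices) can be illustrated with a small numpy sketch on toy data, independent of the trained model:

```python
import numpy as np

# Toy tokenized sentence: 1 = _start_, 5 = _end_ (hypothetical token ids).
seq = np.array([[1, 7, 9, 4, 5]])

# Teacher forcing: the decoder input drops the last token, and the
# target is the same sequence shifted one step ahead.
decoder_input = seq[:, :-1]
decoder_target = seq[:, 1:]

# Greedy-decoding trick from the loop above: ignore indices 0 (padding)
# and 1 (unknown) by slicing them off before argmax, then add the offset back.
preds = np.array([[[0.90, 0.05, 0.01, 0.04]]])  # fake softmax over 4 tokens
pred_idx = np.argmax(preds[:, :, 2:]) + 2

print(decoder_input.tolist(), decoder_target.tolist(), pred_idx)
```

Without the `+ 2` correction, the predicted index would point at the wrong vocabulary entry after the slice.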
``` %pylab inline %precision 6 %load_ext autoreload %autoreload 1 %aimport common import pandas as pd import sklearn as skl import sklearn # download common from https://github.com/Apogentus/common and add its location to the PYTHONPATH system variable from common.serialization import pickle_load, pickle_save from common.classes.Struct import Struct from common.feature_transformations import get_one_hot_encoding from common.functions import all_nums from common.visualize.colors import COLORS from common.visualize.distributions import * import scipy pd.options.display.max_colwidth=100 np.set_printoptions(linewidth=140,edgeitems=10) pd.set_option('display.max_rows', 500) pd.set_option('display.max_columns', 500) pd.set_option('display.width', 1000) rcParams['figure.figsize'] = (8.0, 5.0) categories = ['alt.atheism', 'soc.religion.christian', 'comp.graphics', 'sci.med'] from sklearn.datasets import fetch_20newsgroups twenty_train = fetch_20newsgroups(subset='train', categories=categories, shuffle=True, random_state=42) twenty_train.target_names len(twenty_train.data), len(twenty_train.filenames) # strip headers up to 'Lines: xxx' to get pure text contents import copy, re texts = copy.deepcopy(twenty_train.data) for doc_num in range(len(texts)): m=re.search('Lines: \d+\s+', texts[doc_num]) if m: # if a match is found i = m.span(0)[1] texts[doc_num] = texts[doc_num][i:] del twenty_train ``` #### Word counts ``` from sklearn.feature_extraction.text import CountVectorizer count_vect = CountVectorizer(stop_words='english',min_df=5) # a word should appear at least 5 times in the training set X_train_counts = count_vect.fit_transform(texts) X_train_counts.shape ``` #### Word TF-IDFs ``` from sklearn.feature_extraction.text import TfidfTransformer tf_transformer = TfidfTransformer(use_idf=False).fit(X_train_counts) X = tf_transformer.transform(X_train_counts) X.shape words2nums = count_vect.vocabulary_ nums2words = dict((num,word) for word,num in words2nums.items()) words2nums,
nums2words U,S,VT = scipy.sparse.linalg.svds(X, k=50) X.shape # [documents x words] U.shape S.shape VT.shape ``` ### Extracting most close words ``` word_num = words2nums['doctor'] # try also: christian doctor treatment printer anxiety dists = sum( (VT-VT[:, word_num][:,newaxis])**2, 0) dists inds = argsort(dists) dists[inds] nearest_inds = inds[1:11] [nums2words[ind] for ind in nearest_inds] ``` ### Extracting most close documents ``` doc_num = 0 dists = sum( (U-U[doc_num,:][newaxis,:])**2, 1) inds = argsort(dists) nearest_inds = inds[1:6] print('ORIGINAL DOCUMENT:\n\n%s\n\n\n'%texts[doc_num][:600]) for num, ind in enumerate(nearest_inds,start=1): print('%s-TH MOST CLOSE DOCUMENT:\n\n%s\n\n\n'%(num, texts[ind][:600]) ) ```
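The nearest-word search above measures squared Euclidean distance between columns of `VT` in the truncated-SVD (LSA) space. The same idea works on any small matrix; a self-contained numpy sketch with toy counts rather than the newsgroups data:

```python
import numpy as np

# Toy document-word count matrix: 3 documents x 4 words.
# Words 0 and 1 co-occur in the first two documents; words 2 and 3 in the third.
X = np.array([
    [2.0, 1.0, 0.0, 0.0],
    [1.0, 2.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 2.0],
])

# Truncated SVD (the LSA step): keep the top k=2 components.
U, S, VT = np.linalg.svd(X, full_matrices=False)
VT_k = VT[:2, :]  # each column is a word's latent representation

# Squared Euclidean distances from word 0 to every word, as in the cell above.
dists = np.sum((VT_k - VT_k[:, 0][:, np.newaxis]) ** 2, axis=0)
nearest = np.argsort(dists)[1]  # the closest word other than word 0 itself
print(nearest)
```

Because words 0 and 1 have identical co-occurrence patterns, word 1 comes out as word 0's nearest neighbour in the latent space.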
# ReGraph tutorial (NetworkX backend) ## Part 2: Rewriting hierarchies of graphs ReGraph allows creating hierarchies of graphs related by means of _homomorphisms_ (or _typing_). In the context of a hierarchy, if there exists a homomorphism $G \rightarrow T$, we say that the graph $G$ is typed by the graph $T$. A graph hierarchy is a DAG whose nodes are graphs and whose edges are homomorphisms. A homomorphism maps every node of $G$ to some node in $T$ (its type) in such a way that: - edges are preserved - attributes of both nodes and edges are preserved ``` from regraph import NXGraph, NXHierarchy, Rule from regraph import plot_graph, plot_instance, plot_rule %matplotlib inline ``` ### 1. Creating and modifying a hierarchy object Consider the following example of a simple graph hierarchy. The two graphs $G$ and $T$ are created and added to the hierarchy. Afterwards, a typing homomorphism between $G$ and $T$ is added, so that every node of $G$ is typed by some node in $T$. ``` # Define graph G g = NXGraph() g.add_nodes_from(["protein", "binding", "region", "compound"]) g.add_edges_from([("region", "protein"), ("protein", "binding"), ("region", "binding"), ("compound", "binding")]) # Define graph T t = NXGraph() t.add_nodes_from(["action", "agent"]) t.add_edges_from([("agent", "agent"), ("agent", "action")]) # Create a hierarchy simple_hierarchy = NXHierarchy() simple_hierarchy.add_graph("G", g, {"name": "Simple protein interaction"}) simple_hierarchy.add_graph("T", t, {"name": "Agent interaction"}) simple_hierarchy.add_typing( "G", "T", {"protein": "agent", "region": "agent", "compound": "agent", "binding": "action", } ) print(simple_hierarchy) ``` The method `get_graph` returns the graph object corresponding to the provided graph id. ``` type(simple_hierarchy.get_graph("T")) ``` The method `get_typing` returns the dictionary object corresponding to the provided hierarchy edge and representing the associated graph homomorphism.
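The typing conditions described above (every node of $G$ gets a type, and every edge of $G$ maps onto an edge of $T$) are easy to check directly; a minimal pure-Python sketch, independent of ReGraph and omitting attribute preservation:

```python
# Check that `mapping` sends every node of G to T and preserves edges.
def is_homomorphism(g_nodes, g_edges, t_edges, mapping):
    if not all(n in mapping for n in g_nodes):
        return False
    # Every edge (u, v) of G must map onto an edge of T.
    return all((mapping[u], mapping[v]) in t_edges for (u, v) in g_edges)

g_nodes = ["protein", "binding", "region", "compound"]
g_edges = [("region", "protein"), ("protein", "binding"),
           ("region", "binding"), ("compound", "binding")]
t_edges = {("agent", "agent"), ("agent", "action")}
typing = {"protein": "agent", "region": "agent",
          "compound": "agent", "binding": "action"}

print(is_homomorphism(g_nodes, g_edges, t_edges, typing))  # True
```

Mapping `protein` to `action` instead would fail, since the edge `protein -> binding` would need an `action -> action` edge in $T$, which does not exist.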
``` simple_hierarchy.get_typing("G", "T") t_node_positions = plot_graph(simple_hierarchy.get_graph("T")) g_node_positions = plot_graph(simple_hierarchy.get_graph("G")) ``` ### 2. Rewriting of objects in a hierarchy ReGraph implements the rewriting technique called `sesqui-pushout rewriting`, which allows transforming graphs by applying rules through their instances (matchings). Rewriting an individual graph in a hierarchy may require an update to the other graphs and typings in this hierarchy; such updates are called _propagation_ and fall into two types: _backward_ and _forward_ propagation. __Backward propagation briefly__: - If some graph elements (nodes/edges or attributes) are removed from a graph in the hierarchy, then all the respective elements that are typed by them in the ancestor graphs **should** be removed. - If a graph node is cloned, then for every instance of this node (every node that is typed by the cloned node) in the ancestor graphs we either: (a) specify to which clone it corresponds or (b) clone it. __Forward propagation briefly__: - If some graph nodes are merged and these nodes are typed by different nodes in a descendant graph, the corresponding nodes in the descendant graph **should** be merged. - If a new graph element (node/edge or attribute) is added, then for all the descendant graphs in the hierarchy we either (a) select an existing element to type the added element or (b) add a new element to type the added element. For more details, please see [here](https://link.springer.com/chapter/10.1007/978-3-030-23611-3_9). ReGraph allows rewriting individual graphs situated in the hierarchy using the method `rewrite` of `NXHierarchy`. The rewriting can be done in two modes: 1. _Strict rewriting_, which does not allow propagation. 2. _Non-strict rewriting_, which allows propagation.
The `rewrite` method takes the following parameters: - `graph_id`, the ID of the graph in the hierarchy to rewrite, - `rule`, a rule object to apply, - `instance`, a dictionary containing an instance of the lhs of the rule in the graph subject to rewriting; by default, it tries to construct the identity morphism on the nodes of the pattern, - `p_typing`, a dictionary containing the typing of graphs in the hierarchy by the interface of the rule; keys are IDs of hierarchy graphs, values are dictionaries containing the mapping of nodes from the hierarchy graphs to the interface nodes (note that a node from a graph can be typed by a set of nodes in the interface of the rule, e.g. if we want to perform cloning of some types, etc.), - `rhs_typing`, a dictionary containing the typing of the rhs by graphs of the hierarchy; keys are IDs of hierarchy graphs, values are dictionaries containing the mapping of nodes from the rhs to the nodes of the typing graph given by the respective key of the value (note that a node from the rhs can be typed by a set of nodes of some graph, e.g. if we want to perform merging of some types, etc.), - `strict`, a flag indicating whether rewriting is strict; if set, no propagation is allowed. #### 2.1. Strict rewriting Let us create a `Rule` object containing a rule we would like to apply. ``` lhs = NXGraph() lhs.add_nodes_from([1, 2]) lhs.add_edges_from([(1, 2)]) p = NXGraph() p.add_nodes_from([1, 2]) p.add_edges_from([]) rhs = NXGraph() rhs.add_nodes_from([1, 2, 3]) rhs.add_edges_from([(3, 1), (3, 2)]) # By default, if `p_lhs` and `p_rhs` are not provided # to a rule, it tries to construct these homomorphisms # automatically by matching the names. In this case we # have defined lhs, p and rhs in such a way that # the names of the matching nodes correspond rule = Rule(p, lhs, rhs) plot_rule(rule) ``` The created rule removes the edge `1->2`, adds the new node `3` and two edges `3->1` and `3->2`. Let us find instances of the created rule in the graph `G`.
``` instances = simple_hierarchy.find_matching("G", rule.lhs) print("Instances: ", instances) for instance in instances: plot_instance( simple_hierarchy.get_graph("G"), rule.lhs, instance, parent_pos=g_node_positions) #filename=("instance_example_%d.png" % i)) ``` Let us fix the desired instance: we would like to remove the edge from `protein` to `binding` and add some new node connecting them. ``` instance = { 1: "protein", 2: "binding" } ``` Let us try to apply the rule to the selected instance as is, in the _strict_ rewriting mode. ``` try: rhs_instance = simple_hierarchy.rewrite("G", rule, instance, strict=True) except Exception as e: print("Error message: ", e) print("Type: ", type(e)) ``` We have failed to rewrite `G`, because we have not specified a typing for the newly added node `3`. Let us try again, but this time we will provide such typing. ``` rhs_typing = { "T": {3: "agent"} } rhs_instance = simple_hierarchy.rewrite( "G", rule, instance, rhs_typing=rhs_typing, strict=True) print("Instance of the RHS in G", rhs_instance) plot_instance( simple_hierarchy.get_graph("G"), rule.rhs, rhs_instance, parent_pos=g_node_positions) ``` We will now create a rule that applies to `T` and clones the node `agent` into two nodes. ``` lhs = NXGraph() lhs.add_nodes_from(["agent"]) rule = Rule.from_transform(lhs) _, rhs_clone = rule.inject_clone_node("agent") plot_rule(rule) instance = { "agent": "agent" } ``` We try to apply the created rule to the graph `T` in the strict mode. ``` try: rhs_instance = simple_hierarchy.rewrite("T", rule, instance, strict=True) except Exception as e: print("Error message: ", e) print("Type: ", type(e)) ``` We have failed to rewrite `T`, because we have not specified a typing for the instances of `agent` in $p$. Let us try again, but this time we will provide such typing.
```
p_typing = {
    "G": {
        'protein': 'agent',
        'region': 'agent',
        'compound': rhs_clone,
        3: 'agent'
    }
}
rhs_instance = simple_hierarchy.rewrite("T", rule, instance, p_typing=p_typing, strict=True)
print("Instance of the RHS in G", rhs_instance)
plot_instance(
    simple_hierarchy.get_graph("T"), rule.rhs, rhs_instance,
    parent_pos=t_node_positions)
```

Let us relabel nodes in `T`.

```
simple_hierarchy.relabel_graph_node('T', rhs_instance['agent'], 'organic_agent')
simple_hierarchy.relabel_graph_node('T', rhs_instance[rhs_clone], 'non_organic_agent')
plot_graph(simple_hierarchy.get_graph('T'))
print(simple_hierarchy.get_typing("G", "T"))
```

#### 2.2. Rewriting and propagation

To illustrate rewriting with propagation, let us consider a slightly more sophisticated hierarchy.

```
hierarchy = NXHierarchy()

colors = NXGraph()
colors.add_nodes_from(["green", "red"])
colors.add_edges_from([
    ("red", "green"),
    ("red", "red"),
    ("green", "green")
])
hierarchy.add_graph("colors", colors)

shapes = NXGraph()
shapes.add_nodes_from(["circle", "square"])
shapes.add_edges_from([
    ("circle", "square"),
    ("square", "circle"),
    ("circle", "circle")
])
hierarchy.add_graph("shapes", shapes)

quality = NXGraph()
quality.add_nodes_from(["good", "bad"])
quality.add_edges_from([
    ("bad", "bad"),
    ("bad", "good"),
    ("good", "good")
])
hierarchy.add_graph("quality", quality)

g1 = NXGraph()
g1.add_nodes_from([
    "red_circle",
    "red_square",
])
g1.add_edges_from([
    ("red_circle", "red_square"),
    ("red_circle", "red_circle"),
    ("red_square", "red_circle")
])
g1_colors = {
    "red_circle": "red",
    "red_square": "red",
}
g1_shapes = {
    "red_circle": "circle",
    "red_square": "square",
}
hierarchy.add_graph("g1", g1)
hierarchy.add_typing("g1", "colors", g1_colors)
hierarchy.add_typing("g1", "shapes", g1_shapes)

g2 = NXGraph()
g2.add_nodes_from([
    "good_circle",
    "good_square",
    "bad_circle",
])
g2.add_edges_from([
    ("good_circle", "good_square"),
    ("good_square", "good_circle"),
    ("bad_circle", "good_circle"),
    ("bad_circle", "bad_circle"),
])
g2_shapes = {
    "good_circle": "circle",
    "good_square": "square",
    "bad_circle": "circle"
}
g2_quality = {
    "good_circle": "good",
    "good_square": "good",
    "bad_circle": "bad",
}
hierarchy.add_graph("g2", g2)
hierarchy.add_typing("g2", "shapes", g2_shapes)
hierarchy.add_typing("g2", "quality", g2_quality)

g3 = NXGraph()
g3.add_nodes_from([
    "good_red_circle",
    "bad_red_circle",
    "good_red_square",
])
g3.add_edges_from([
    ("bad_red_circle", "good_red_circle"),
    ("good_red_square", "good_red_circle"),
    ("good_red_circle", "good_red_square")
])
g3_g1 = {
    "good_red_circle": "red_circle",
    "bad_red_circle": "red_circle",
    "good_red_square": "red_square"
}
g3_g2 = {
    "good_red_circle": "good_circle",
    "bad_red_circle": "bad_circle",
    "good_red_square": "good_square",
}
hierarchy.add_graph("g3", g3)
hierarchy.add_typing("g3", "g1", g3_g1)
hierarchy.add_typing("g3", "g2", g3_g2)

for graph in hierarchy.graphs():
    print("Graph ", graph)
    plot_graph(hierarchy.get_graph(graph))
print(hierarchy)
```

Some of the graphs in the hierarchy are now typed by multiple graphs, which is reflected in the types of nodes, as in the example below:

```
print("Node types in G3:\n")
for node in hierarchy.get_graph("g3").nodes():
    print(node, hierarchy.node_type("g3", node))
```

#### NB: Rules as nodes of a hierarchy

Having constructed a sophisticated rewriting rule typed by some nodes in the hierarchy, one may want to store this rule and to be able to propagate any changes that happen in the hierarchy to the rule as well. ReGraph's `NXHierarchy` allows graph rewriting rules to be added as nodes in the hierarchy. Rules in the hierarchy can be typed by graphs, but rule nodes are not allowed to have incoming edges, i.e. nothing can be typed by a rule.
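Returning to the typings defined above: a node's full set of types comes from composing the typing maps along paths of the hierarchy — `g3` is typed by `g1`, and `g1` by `colors`, so every `g3` node also receives a color type. A minimal sketch of this composition using plain dictionaries (the values are taken from the example hierarchy; this is not ReGraph's API):

```python
def compose_typing(f, g):
    """Compose two typing maps: each node goes through f, then g."""
    return {node: g[f[node]] for node in f}

# Typing maps from the example hierarchy above
g3_g1 = {"good_red_circle": "red_circle",
         "bad_red_circle": "red_circle",
         "good_red_square": "red_square"}
g1_colors = {"red_circle": "red", "red_square": "red"}

# Induced typing of g3 by colors
g3_colors = compose_typing(g3_g1, g1_colors)
```

Composing `g3 -> g1` with `g1 -> colors` types every node of `g3` as `red`, which matches the multiple types printed by `node_type` above.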
In the example below, a rule is added to the previously constructed hierarchy and typed by graphs `g1` and `g2`:

```
lhs = NXGraph()
lhs.add_nodes_from([1, 2])
lhs.add_edges_from([(1, 2)])

p = NXGraph()
p.add_nodes_from([1, 11, 2])
p.add_edges_from([(1, 2)])

rhs = NXGraph.copy(p)
rhs.add_nodes_from([3])

p_lhs = {1: 1, 11: 1, 2: 2}
p_rhs = {1: 1, 11: 11, 2: 2}
r1 = Rule(p, lhs, rhs, p_lhs, p_rhs)
hierarchy.add_rule("r1", r1, {"desc": "Rule 1: typed by two graphs"})

lhs_typing1 = {1: "red_circle", 2: "red_square"}
rhs_typing1 = {3: "red_circle"}
lhs_typing2 = {1: "good_circle", 2: "good_square"}
rhs_typing2 = {3: "bad_circle"}
hierarchy.add_rule_typing("r1", "g1", lhs_typing1, rhs_typing1)
hierarchy.add_rule_typing("r1", "g2", lhs_typing2, rhs_typing2)

plot_rule(hierarchy.get_rule('r1'))

g1_lhs_typing, g1_rhs_typing = hierarchy.get_typing('r1', 'g1')
g2_lhs_typing, g2_rhs_typing = hierarchy.get_typing('r1', 'g2')
print("Typing of R1 by G1: ")
print("\tLHS", g1_lhs_typing)
print("\tP (is implicit)")
print("\tRHS", g1_rhs_typing)
print("Typing of R1 by G2: ")
print("\tLHS", g2_lhs_typing)
print("\tP (is implicit)")
print("\tRHS", g2_rhs_typing)
```

#### 2.3. Rewriting and propagation

We now show how graph rewriting can be performed in such a hierarchy. In the previous example we performed strict rewriting in a hierarchy, where no propagation took place. The following example illustrates how ReGraph propagates the changes made by rewriting on any level to all the graphs (as well as the rules) typed by the target of rewriting.

```
lhs = NXGraph()
lhs.add_nodes_from(["a", "b"])
lhs.add_edges_from([
    ("a", "b"),
    ("b", "a")
])

p = NXGraph()
p.add_nodes_from(["a", "a1", "b"])
p.add_edges_from([
    ("a", "b"),
    ("a1", "b")
])

rhs = NXGraph.copy(p)

rule = Rule(
    p, lhs, rhs,
    {"a": "a", "a1": "a", "b": "b"},
    {"a": "a", "a1": "a1", "b": "b"},
)
plot_rule(rule)
```

We have created a rule that clones the node `a` and reconnects the edges between `a` and `b`.
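In a rule, cloning is encoded entirely by the `p_lhs` map: several nodes of `p` mapping to the same `lhs` node mean that node is cloned. A small sketch (plain dictionaries, not ReGraph's API) that recovers the clones from the mapping used above:

```python
from collections import defaultdict

def cloned_nodes(p_lhs):
    """Group p nodes by their lhs image; any image with more than
    one preimage corresponds to a cloned lhs node."""
    preimages = defaultdict(set)
    for p_node, lhs_node in p_lhs.items():
        preimages[lhs_node].add(p_node)
    return {lhs: ps for lhs, ps in preimages.items() if len(ps) > 1}

# p_lhs of the cloning rule defined above
clones = cloned_nodes({"a": "a", "a1": "a", "b": "b"})
```

Here both `a` and `a1` in `p` map to `a` in the lhs, so `a` is the cloned node, while `b` is preserved unchanged.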
```
instances = hierarchy.find_matching("shapes", lhs)
print("Instances:")
for instance in instances:
    print(instance)
    plot_instance(hierarchy.get_graph("shapes"), rule.lhs, instance)
```

We rewrite the graph `shapes` with the fixed instance (so, the node `circle` is cloned).

```
rhs_instances = hierarchy.rewrite("shapes", rule, {"a": "circle", "b": "square"})
```

Observe the following plots: the cloning of `circle` was propagated to all the ancestors of `shapes`, because we didn't specify how to retype instances of `circle` for these ancestors using the `p_typing` parameter. This is an example of the previously mentioned _backward propagation_.

```
for graph in hierarchy.graphs():
    print("Graph ", graph)
    plot_graph(hierarchy.get_graph(graph))
```

Even the rule `r1` was affected as a result of propagation: all its circle nodes were cloned.

```
plot_rule(hierarchy.get_rule('r1'))
```

Let us now consider a small example of _forward propagation_. We will create a rule that performs some additions and merges of nodes.

```
pattern = NXGraph()
pattern.add_nodes_from(["a", "b"])

rule = Rule.from_transform(pattern)
rhs_node = rule.inject_merge_nodes(["a", "b"])
rule.inject_add_node("c")
rule.inject_add_edge("c", rhs_node)

instance = {
    "a": "good_circle",
    "b": "bad_circle",
}
old_position = plot_instance(hierarchy.get_graph("g2"), rule.lhs, instance)
plot_rule(rule)
```

Application of this rule will merge the nodes `bad_circle` and `good_circle` in the graph `g2`. It will then add a new node and connect it with an edge to the merged node. Let us specify some typings of the new node in the RHS: we will set the new node to be typed as `circle` in the graph `shapes`.

```
rhs_typing = {
    "shapes": {
        "c": "circle"
    }
}
rhs_instance = hierarchy.rewrite("g2", rule, instance, rhs_typing=rhs_typing)
```

Observe the following graphs: as the result of forward propagation, the nodes `good` and `bad` were merged in the graph `quality`.
In addition, a new node typing the node `c` in the rule was added to the graph `quality`.

```
for graph in hierarchy.graphs():
    print("Graph ", graph)
    plot_graph(hierarchy.get_graph(graph))
```

### 3. Serializing hierarchy object

Because NetworkX graphs are in-memory objects, they are lost as soon as the Python application is no longer running. ReGraph provides some utils for serialization of `NXHierarchy` objects and implements the following methods for loading and exporting your hierarchy in JSON format:

- `NXHierarchy.to_json` creates a JSON representation of the hierarchy;
- `NXHierarchy.from_json` loads a hierarchy from a JSON representation (returns a new `Hierarchy` object);
- `NXHierarchy.export` exports the hierarchy to a file (JSON format);
- `NXHierarchy.load` loads a hierarchy from a .json file (returns a new object as well).

```
hierarchy_json = hierarchy.to_json()

import json
print(json.dumps(hierarchy_json, indent=" "))

new_hierarchy = NXHierarchy.from_json(hierarchy_json)
new_hierarchy == hierarchy
```
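The `to_json`/`from_json` pair relies on the JSON representation being lossless, as checked by the equality test above. The same round-trip idea can be illustrated with a plain node/edge dictionary (illustrative only — this is not ReGraph's actual JSON schema):

```python
import json

# A toy graph as a JSON-serializable dictionary
graph = {"nodes": ["red", "green"],
         "edges": [["red", "green"], ["red", "red"], ["green", "green"]]}

# Serialize to a JSON string, then load it back
payload = json.dumps(graph)
restored = json.loads(payload)
```

Because the round trip is lossless, `restored` compares equal to the original even though it is a freshly built object — the same property that makes `NXHierarchy.from_json(hierarchy.to_json())` reproduce the hierarchy.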
# Quality control of data analysis output

When dealing with proteomics data, it is recommended to check the results for inconsistencies and for the correct application of the data analysis parameters.

### Pre-requisite: mass spectrometry basics

In this workshop, we will focus on mass spectrometry (MS)-based proteomics, which is very frequently used. A mass spectrometer measures the mass-to-charge (m/z) ratio of ionized molecules (e.g., protein fragments/peptides obtained by trypsin digestion of cell extracts and ionized within the spectrometer), and outputs an m/z spectrum encompassing the m/z values of all peptides in a mixture (e.g., a cell extract). Then, this spectrum is compared with <b>predictions</b> of the m/z spectrum based on our <i>a priori</i> knowledge of the sample (species, genomic background, etc.), using bioinformatics tools, some of which will be used in this session. Therefore, the typical parameters involved in acquisition/analysis of MS data include:

1. The retention time (time spent by the solution between the injection in the spectrometer and the measurement)
2. The fragment tolerance: the expected accuracy of the ionization process (i.e., the error on charge measurement)
3. The precursor tolerance: the expected accuracy of mass measurements by the analyzer (i.e., the error on mass measurements).
4. Post-translational modifications: PTMs affect the mass-to-charge ratio and might explain unexpected peaks in the MS spectrum. Accounting for PTMs (or not!) in the analysis of an MS spectrum to identify peptides can be critical, and depends on the particular biological purpose of the experiment.

Importantly, by restricting the mass tolerance windows, the number of candidate peptides scored against an MS spectrum generally decreases. The purpose of this workshop is to explore how the choice of these analysis parameters affects the output of an MS-based proteomics experiment, using a well-controlled dataset.
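Mass tolerances are commonly expressed in parts per million (ppm) relative to the theoretical mass. A small illustrative check of whether a measured m/z falls within a given precursor tolerance (this sketch is not part of the workshop tools, which are R-based):

```python
def ppm_error(measured_mz, theoretical_mz):
    """Relative mass error in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

def within_tolerance(measured_mz, theoretical_mz, tol_ppm):
    """True if the measured value matches within +/- tol_ppm."""
    return abs(ppm_error(measured_mz, theoretical_mz)) <= tol_ppm

# A 0.005 Da error on an m/z of 1000 corresponds to 5 ppm:
err = ppm_error(1000.005, 1000.0)
```

With this definition, a 10 ppm precursor tolerance accepts the match above, while a stricter 2 ppm window rejects it — which is why narrowing the tolerance reduces the number of candidate peptides scored against each spectrum.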
### Description of the data

We will use a dataset made of mass spectrometry measurements of samples encompassing a mix of yeast cell extracts (i.e., only yeast peptides) with a library of purified human proteins (termed UPS from now on), the latter being mixed in at different concentrations prior to MS data acquisition (a "spike-in" experiment; details provided in https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4706616/). The UPS library consists of 48 human proteins.

The data has been fully re-analyzed with different parameter settings, where parameters for the fragment ion tolerance, the precursor tolerance or the searched modifications were changed. For each analysis parameter set, the data has been analysed with two major new tools that generate reports for quality control:

_[PTX-QC](https://cran.r-project.org/web/packages/PTXQC/index.html)_ This is an R package that allows multiple visualizations and metrics for data coming from e.g. MaxQuant or OpenMS workflows.

_[pmultiqc](https://pypi.org/project/pmultiqc/)_ This Python library is a proteomics plugin for the MultiQC framework that reads output from the [ProteomicsLFQ](https://github.com/nf-core/proteomicslfq) pipeline.

Files for protein and peptide quantifications and the quality controls are provided in the folder _QC_Workshop_. Each subfolder corresponds to the data from a different data analysis of the same raw data files. The files _multiqc_report.html_ and _results/ptxqc/report....html_ contain summarized information from the pmultiqc and PTX-QC analysis tools, respectively. In the following, some questions will require data generated by the pmultiqc analysis tool, and some others by the PTX-QC tool.

### Task I: Quality Control

Browse the different reports (html files) and compare the different output between files and across data analyses. In order to see the interactive plots, you need to press the button "trust HTML".
It might be easier to show the reports in different tabs of your browser (right-click on a file and select "Open in New Browser Tab").

#### Add your answers here (double-click here to edit the cell)

##### Question I: <ins>How does the Delta Mass distribution (experimental m/z - theoretical m/z) provided by the MultiQC package depend on the different mass tolerances? Explain particular trends you may observe when comparing distributions computed with different tolerances.</ins>

_Answer_

##### Question II: <ins>Which of the analyses provides the largest number of identified mass spectra/peptides/protein groups, and why? Same question for the lowest number of identified peptides.</ins>

_Answer_

##### Question III: <ins>What do the graphics representing peptide intensities indicate about the noise in peptide detection, or background signal? Would you pick an intensity threshold for peptide scoring, and if yes, which threshold would you take?</ins>

_Answer_

##### Question IV: <ins>What are miscleavages? How many are found in the data analysis? What does it mean when you find many miscleaved peptides?</ins>

_Answer_

##### Question V: <ins>What most likely happened with the experiment when there are no identifications over a range of the retention time?</ins>

_Answer_

##### Question VI: <ins>Look at the number of peptides per protein. There is a considerable amount of one-hit-wonders (only one peptide per protein). Which minimum number of peptides per protein would you accept? Discuss this in your group, considering different experimental contexts.</ins>

_Answer_

### Task II: Quantitative filtering

We will now check the data on our own and investigate how filtering affects the coverage and interpretation. For each data analysis, there is an _out_msstats.csv_ file in the respective _results/proteomics_lfq/_ folder. You can now select one of these files by specifying the folder and test how the output changes when changing the different parameters.
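The core of the peptide-count filter used in this task is simple: a protein is kept only if it is supported by at least `min_pep` distinct peptides. A Python sketch of that logic, on made-up toy data (the workshop itself performs this in R):

```python
from collections import defaultdict

def filter_proteins(peptide_to_protein, min_pep=2):
    """Keep proteins identified by at least min_pep distinct peptides."""
    peptides_per_protein = defaultdict(set)
    for peptide, protein in peptide_to_protein.items():
        peptides_per_protein[protein].add(peptide)
    return {prot for prot, peps in peptides_per_protein.items()
            if len(peps) >= min_pep}

# Hypothetical identifications: P1 has two peptides, P2 is a one-hit-wonder
peptides = {"PEPTIDEA": "P1", "PEPTIDEB": "P1", "PEPTIDEC": "P2"}
kept = filter_proteins(peptides, min_pep=2)
```

Raising `min_pep` removes one-hit-wonders such as `P2` above, at the cost of overall protein coverage — the trade-off you will quantify with the R cells that follow.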
#### Read output file and pre-process data

__The calculation in the following cell can take a little while!__ Click on the cell with the code below and press "shift-enter".

```
# Change the file path
quant_table <- read.csv("original/results/proteomics_lfq/out_msstats.csv")

# summarize peptides
tpep_table <- by(quant_table, paste0(quant_table$PeptideSequence, quant_table$Run),
                 function(x) c(x[1, c(1, 2, 7:9, 11)], Intensity=sum(x$Intensity, na.rm=T)))
pep_table <- do.call(rbind.data.frame, tpep_table)

# histogram of intensities
hist(log2(pep_table$Intensity), border=0, col="#333333", xlab="log2(Intensity)", 100, main="")
print("DONE")
```

#### Filtering and visualization (identification)

Here you can filter the data differently and check how the filtering affects the results. You can start by increasing the minimal number of peptides min_pep required to identify a protein, and check how the numbers decrease. Change the code below as described in the comments and run each of the cells with "shift-enter".

```
## Filter peptide number per protein
# Change this variable
min_pep <- 2

# filter for number of peptides
pep_table$comb <- paste0(pep_table$ProteinName, pep_table$Run)
filtered_peps <- pep_table[pep_table$comb %in%
  names(which(table(paste0(pep_table$ProteinName, pep_table$Run)) >= min_pep)),]

## Filter for PTMs
# Use either "none" (none), "Acetyl" (N-terminal acetylation) or "Oxidation" (Methionine oxidation)
remove_ptm <- "Oxidation"
filtered_peps <- filtered_peps[!grepl(remove_ptm, rownames(filtered_peps)),]

# summary protein groups
tprot_table <- by(filtered_peps, paste0(filtered_peps$ProteinName, filtered_peps$Run),
                  function(x) c(x[1, c(1, 3:6)], Intensity = sum(x$Intensity, na.rm=T),
                                Number=length(x$Intensity)))
prot_table <- do.call(rbind.data.frame, tprot_table)
```

##### Count number of identifications per sample, including all proteins (yeast + UPS)

```
## Plot peptides + proteins per sample
par(mar=c(12,5,2,1))
barplot(unlist(by(filtered_peps, filtered_peps$Reference, nrow)),
        border=0, col="#883333", las=2, ylab="Number of all identified peptides")
barplot(unlist(by(prot_table, prot_table$Reference, nrow)),
        border=0, col="#338833", las=2, ylab="Number of all identified protein groups")
```

#### Add your answers here (double-click here to edit the cell)

##### Question: <ins>How much do the numbers decrease when increasing the number of peptides needed to accept a protein or protein group?</ins>

_Answer_

##### Question: <ins>For which samples (i.e., UPS mix concentration) do we get more peptide/protein identifications? Comment on the sensitivity of the acquisition-analysis pipeline to yeast proteins in the presence of an external UPS pool.</ins>

_Answer_

Repeat the counting for UPS proteins only. The UPS proteins are spiked in at different concentrations. The sample names contain information about the concentrations.

```
par(mar=c(12,5,2,1))
ups_peptides <- filtered_peps[grepl("UPS", filtered_peps$ProteinName),]
ups_proteins <- prot_table[grepl("UPS", prot_table$ProteinName),]
barplot(unlist(by(ups_peptides, ups_peptides$Reference, nrow)),
        border=0, col="#883333", las=2, ylab="Number of identified UPS peptides")
barplot(unlist(by(ups_proteins, ups_proteins$Reference, nrow)),
        border=0, col="#338833", las=2, ylab="Number of identified UPS protein groups")
par(mar=c(5.1,4.1,4.1,2))
```

#### Add your answers here (double-click here to edit the cell)

##### Question: <ins>How does the detection of UPS proteins depend on the mix concentration? How would you define the detection limit (in amol)?</ins>

_Answer_

##### Question: <ins>How accurate would it be to use spectral counting for the quantification of all UPS proteins together?</ins>

_Answer_

#### Visualization (Quantification)

This cell creates matrices containing the quantifications from all samples. Proteins are quantified by taking the sum of the peptide intensities.
```
## you might need to install these libraries via "install.packages(c("gplots", "lattice", "matrixStats"))"
library(gplots)
library(lattice)
library(matrixStats)

# Populate matrices with columns as samples
quant_peps <- matrix(NA,
                     dimnames=list(rows=unique(filtered_peps$PeptideSequence),
                                   cols=unique(filtered_peps$Reference)),
                     nrow=length(unique(filtered_peps$PeptideSequence)),
                     ncol=length(unique(filtered_peps$Reference)))
for(i in colnames(quant_peps)) {
  tquant <- filtered_peps[filtered_peps$Reference == i, ]
  quant_peps[tquant$PeptideSequence, i] <- log2(tquant$Intensity)
}

quant_prots <- matrix(NA,
                      dimnames=list(rows=unique(prot_table$ProteinName),
                                    cols=unique(prot_table$Reference)),
                      nrow=length(unique(prot_table$ProteinName)),
                      ncol=length(unique(prot_table$Reference)))
for(i in colnames(quant_prots)) {
  tquant <- prot_table[prot_table$Reference == i, ]
  quant_prots[tquant$ProteinName, i] <- log2(tquant$Intensity)
}
```

##### Calculate the correlations between peptide abundances

```
## Correlations on peptide level (how reproducibly are yeast peptides quantified?)
cors <- cor(quant_peps, use="na.or.complete")
hist(cors, main="Correlations between samples on peptide level", 100, border=0, xlab="Correlation")
levelplot(cors, col.regions=colorpanel(100, "red","white","blue"),
          scales=list(x=list(rot=90)),
          main="Correlations between samples on peptide level")
```

#### Add your answers here (double-click here to edit the cell)

##### Question: <ins>Why do you see such a high similarity between all samples? Are there only biological reasons for that?</ins>

_Answer_

##### Question: <ins>Which factors contribute to the individual peptide intensities?</ins>

_Answer_

##### Determine quantitative changes of UPS proteins

```
# now only UPS proteins as they change between samples
ups_quant_proteins <- quant_prots[grep("UPS", rownames(quant_prots)),]
levelplot(t(ups_quant_proteins), col.regions=colorpanel(100, "red","white","blue"),
          scales=list(x=list(rot=90)),
          main="Measured UPS protein abundances")

# comparing to real concentrations
concentrations <- sub("[a-z].*","",
                      unlist(strsplit(colnames(ups_quant_proteins), "_"))
                      [seq(2, ncol(ups_quant_proteins)*3, 3)])
colors <- rainbow(50, alpha = 0.5)
plot(concentrations, ups_quant_proteins[1,], type="p",
     xlab="Real concentration", ylab="Measured abundance", log="x",
     ylim=range(ups_quant_proteins, na.rm=T), pch=15, col=colors[1])
for(i in 2:nrow(ups_quant_proteins))
  points(concentrations, ups_quant_proteins[i, ], pch=15, col=colors[i])
```

#### Add your answers here (double-click here to edit the cell)

##### Question: <ins>What would be your preferred setting for the filtering and why?</ins>

_Answer_

##### Question: <ins>Why do the different UPS proteins have different abundances despite having been loaded in equal amounts?</ins>

_Answer_
``` import numpy as np import matplotlib.pyplot as plt import pandas as pd import itertools import random # from hmm import unsupervised_HMM from scipy import stats ``` # Load Data ``` import pickle with open('minimap_group_to_state_data.pickle', 'rb') as handle: r1_group_to_state_id_data = pickle.load(handle) with open('minimap_group_to_state_data_w_pid.pickle', 'rb') as handle: r1_group_to_state_id_data_w_pid = pickle.load(handle) with open('minimap_group_to_binary_state_data.pickle', 'rb') as handle: r1_group_to_binary_state_data = pickle.load(handle) with open('minimap_group_to_state_mapping.pickle', 'rb') as handle: r1_group_to_state_mapping = pickle.load(handle) with open('minimap_r2_group_to_state_data.pickle', 'rb') as handle: r2_group_to_state_id_data = pickle.load(handle) with open('minimap_r2_group_to_state_data_w_pid.pickle', 'rb') as handle: r2_group_to_state_id_data_w_pid = pickle.load(handle) with open('minimap_r2_group_to_binary_state_data.pickle', 'rb') as handle: r2_group_to_binary_state_data = pickle.load(handle) with open('minimap_r2_group_to_state_mapping.pickle', 'rb') as handle: r2_group_to_state_mapping = pickle.load(handle) ``` # Load HMM ``` # minimap_group_no_to_hmm_2.pickle with open('minimap_group_no_to_hmm.pickle', 'rb') as handle: group_hmm_dict = pickle.load(handle) ``` # Get New Groups ``` df_r2_anx = pd.read_csv('round2_minimap_data.csv') df_r2_anx = df_r2_anx.replace(r'^s*$', float('NaN'), regex = True) df_r2_anx = df_r2_anx.replace(r' ', float('NaN'), regex = True) df_r2_anx = df_r2_anx[df_r2_anx['anger_pep1'].notna()] df_r2_anx = df_r2_anx[df_r2_anx['anxiety_pep1'].notna()] mean_anger = np.mean([float(elem) for elem in df_r2_anx['anger_pep1'].to_numpy()]) mean_anx = np.mean([float(elem) for elem in df_r2_anx['anxiety_pep1'].to_numpy()]) r2_uid_to_anger_anx = {} r2_uid_to_group = {} for index, row in df_r2_anx.iterrows(): uid = row['uid'] anger = float(row['anger_pep1']) anx = float(row['anxiety_pep1']) if anger >= mean_anger: if 
anx >= mean_anx: group = 1 else: group = 3 else: if anx >= mean_anx: group = 2 else: group = 4 r2_uid_to_anger_anx[uid] = {'anger': anger, 'anxiety': anx, 'group': group} r2_uid_to_group[uid] = group ``` # Run HMM on Group 1 ``` group_number = 1 hmm = group_hmm_dict[group_number]['hmm'] new_A = group_hmm_dict[group_number]['new_A'] new_O = group_hmm_dict[group_number]['new_O'] new_hidden_state_to_old = group_hmm_dict[group_number]['new_hidden_state_to_old'] old_hidden_state_to_new = group_hmm_dict[group_number]['old_hidden_state_to_new'] state_id_to_state = group_hmm_dict[group_number]['state_id_to_state'] state_to_state_id = group_hmm_dict[group_number]['state_to_state_id'] old_hidden_state_to_mean_value = group_hmm_dict[group_number]['old_hidden_state_to_mean_value'] hidden_seqs = group_hmm_dict[group_number]['hidden_seqs'] r2_group_data = r2_group_to_binary_state_data[group_number] hmm_inputs = {} for pid in hidden_seqs: if pid not in r2_group_data: print("skipping ", pid) continue state_data = r2_group_data[pid] hmm_inputs[pid] = [state_to_state_id[s] for s in state_data] r2_hidden_seqs = {} r2_team_num_to_seq_probs = {} for j in hmm_inputs: viterbi_output, all_sequences_and_probs = hmm.viterbi_all_probs(hmm_inputs[j]) r2_team_num_to_seq_probs[j] = all_sequences_and_probs r2_hidden_seqs[j] = [int(x) for x in viterbi_output] team_id_to_new_hidden = {} for team_id in r2_hidden_seqs: team_id_to_new_hidden[team_id] = [old_hidden_state_to_new[x] for x in r2_hidden_seqs[team_id]] # Plot group_num_to_title = { 1: "High Anger, High Anxiety Teams", 2: "Low Anger, High Anxiety Teams", 3: "High Anger, Low Anxiety Teams", 4: "Low Anger, Low Anxiety Teams", } # for team_id in hidden_seqs: # for i in range(len(index_to_team_map)): fig, ax = plt.subplots(1, 1, figsize=(10,5)) legend_labels = [] # print("index_to_team_map = ", index_to_team_map) # print("team_id_map_to_new_hidden = ", team_id_map_to_new_hidden) for team_id in r2_hidden_seqs: # team_id = index_to_team[i] 
legend_labels.append(team_id) # if team_id not in team_id_map_to_new_hidden: # continue plt.plot(range(len(team_id_to_new_hidden[team_id])), team_id_to_new_hidden[team_id]) plt.legend(legend_labels) ``` # Compute test loss ``` A = np.array(hmm.A) O = np.array(hmm.O) average_loss = [] for team_id in hmm_inputs: seq = hmm_inputs[team_id] seq = np.array(seq) # recommendations = [] for t in range(1, len(seq)): partial_seq = seq[:t] viterbi_output, all_sequences_and_probs = hmm.viterbi_all_probs(partial_seq) current_hidden = int(viterbi_output[-1]) curr_obs_state = state_id_to_state[seq[t-1]] normalized_hidden_probs = A[current_hidden, :]/sum(A[current_hidden, :]) next_hidden_predicted, next_hidden_prob = np.argmax(normalized_hidden_probs), max(normalized_hidden_probs) valid_obs = [] for j in range(O.shape[1]): obs = state_id_to_state[j] if obs[3:] == curr_obs_state[0:3]: valid_obs.append(O[current_hidden, j]) else: valid_obs.append(0) if sum(valid_obs)>0: valid_obs /= sum(valid_obs) # next_obs_predicted_idx, next_obs_prob = np.argmax(O[current_hidden, :]), max(O[current_hidden, :]) # print("valid_obs", valid_obs) next_obs_predicted_idx, next_obs_prob = np.argmax(valid_obs), max(valid_obs) next_obs_predicted_state = state_id_to_state[next_obs_predicted_idx] true_next_obs_state = state_id_to_state[seq[t]] loss = np.array(next_obs_predicted_state[0:3]) - np.array(true_next_obs_state[0:3]) loss = sum([abs(elem) for elem in loss]) average_loss.append(loss) # group_to_loss[group_no] = average_loss print("avg loss", np.mean(average_loss)) print("std loss", np.std(average_loss)) ``` ## See if switching groups works better ``` def compute_loss_w_hmm(group_hmm_dict, group_number, pid): hmm = group_hmm_dict[group_number]['hmm'] new_A = group_hmm_dict[group_number]['new_A'] new_O = group_hmm_dict[group_number]['new_O'] new_hidden_state_to_old = group_hmm_dict[group_number]['new_hidden_state_to_old'] old_hidden_state_to_new = group_hmm_dict[group_number]['old_hidden_state_to_new'] 
state_id_to_state = group_hmm_dict[group_number]['state_id_to_state'] state_to_state_id = group_hmm_dict[group_number]['state_to_state_id'] old_hidden_state_to_mean_value = group_hmm_dict[group_number]['old_hidden_state_to_mean_value'] hidden_seqs = group_hmm_dict[group_number]['hidden_seqs'] state_data = r2_group_data[pid] hmm_inputs_pid = [] for s in state_data: if s in state_to_state_id: hmm_inputs_pid.append(state_to_state_id[s]) viterbi_output, all_sequences_and_probs = hmm.viterbi_all_probs(hmm_inputs_pid) viterbi_output = [int(x) for x in viterbi_output] new_hidden_for_pid = [old_hidden_state_to_new[x] for x in viterbi_output] A = np.array(hmm.A) O = np.array(hmm.O) average_loss = [] team_id = pid seq = hmm_inputs_pid seq = np.array(seq) average_loss = [] # recommendations = [] for t in range(1, len(seq)): partial_seq = seq[:t] viterbi_output, all_sequences_and_probs = hmm.viterbi_all_probs(partial_seq) current_hidden = int(viterbi_output[-1]) curr_obs_state = state_id_to_state[seq[t-1]] normalized_hidden_probs = A[current_hidden, :]/sum(A[current_hidden, :]) next_hidden_predicted, next_hidden_prob = np.argmax(normalized_hidden_probs), max(normalized_hidden_probs) valid_obs = [] for j in range(O.shape[1]): obs = state_id_to_state[j] if obs[3:] == curr_obs_state[0:3]: valid_obs.append(O[current_hidden, j]) else: valid_obs.append(0) if sum(valid_obs)>0: valid_obs /= sum(valid_obs) # next_obs_predicted_idx, next_obs_prob = np.argmax(O[current_hidden, :]), max(O[current_hidden, :]) # print("valid_obs", valid_obs) next_obs_predicted_idx, next_obs_prob = np.argmax(valid_obs), max(valid_obs) next_obs_predicted_state = state_id_to_state[next_obs_predicted_idx] true_next_obs_state = state_id_to_state[seq[t]] loss = np.array(next_obs_predicted_state[0:3]) - np.array(true_next_obs_state[0:3]) loss = sum([abs(elem) for elem in loss]) average_loss.append(loss) return average_loss group_number_to_improvements = {} group_number = 1 for group_number in [1,2,3,4]: 
r2_group_data = r2_group_to_binary_state_data[group_number] improvement_in_loss = [] for pid in r2_group_data: new_group_number = r2_uid_to_group[pid] loss_w_original_hmm = compute_loss_w_hmm(group_hmm_dict, group_number, pid) loss_w_new_hmm = compute_loss_w_hmm(group_hmm_dict, new_group_number, pid) # group_to_loss[group_no] = average_loss if new_group_number != group_number: improvement_in_loss.append(np.mean(loss_w_original_hmm) - np.mean(loss_w_new_hmm)) # print("team = ", pid) # print("avg loss", np.mean(loss_w_original_hmm)) # print("std loss", np.std(loss_w_original_hmm)) # print("avg new loss", np.mean(loss_w_new_hmm)) # print("std new loss", np.std(loss_w_new_hmm)) # print() group_number_to_improvements[group_number] = improvement_in_loss np.mean(improvement_in_loss) group_no_to_title = { 1: 'High Anger, High Anxiety', 2: 'Low Anger, High Anxiety', 3: 'High Anger, Low Anxiety', 4: 'Low Anger, Low Anxiety', } # Create lists for the plot groups = [group_no_to_title[group] for group in group_number_to_improvements] x_pos = np.arange(len(groups)) means = [np.mean(group_number_to_improvements[group]) for group in group_number_to_improvements] std_devs = [stats.sem(group_number_to_improvements[group]) for group in group_number_to_improvements] # Build the plot fig, ax = plt.subplots(figsize=(15,5)) ax.bar(x_pos, means, yerr=std_devs, align='center', alpha=0.5, ecolor='black', capsize=10) ax.set_ylabel('Improvement in L1 Loss in Predicting Next State') ax.set_xticks(x_pos) ax.set_xticklabels(groups) ax.set_title('Improvement in L1 Loss in Predicting Next State') ax.yaxis.grid(True) # Save the figure and show plt.tight_layout() plt.savefig('minimap_improvements_r2_with_ste_error_bars.png') plt.show() ``` # Losses With Original HMMs ``` group_number_to_improvements = {} group_number = 1 group_to_loss_original = {1:[], 2:[], 3:[], 4:[]} group_to_loss_new = {1:[], 2:[], 3:[], 4:[]} for group_number in [1,2,3,4]: r2_group_data = 
r2_group_to_binary_state_data[group_number] improvement_in_loss = [] original_losses = [] new_losses = [] for pid in r2_group_data: new_group_number = r2_uid_to_group[pid] loss_w_original_hmm = compute_loss_w_hmm(group_hmm_dict, group_number, pid) loss_w_new_hmm = compute_loss_w_hmm(group_hmm_dict, new_group_number, pid) # group_to_loss[group_no] = average_loss if new_group_number != group_number: improvement_in_loss.append(np.mean(loss_w_original_hmm) - np.mean(loss_w_new_hmm)) original_losses.extend(loss_w_original_hmm) new_losses.extend(loss_w_new_hmm) group_to_loss_new[new_group_number].extend(new_losses) group_number_to_improvements[group_number] = improvement_in_loss group_to_loss_original[group_number] = original_losses group_to_loss_new.keys() group_no_to_title = { 1: 'High Anger, High Anxiety', 2: 'Low Anger, High Anxiety', 3: 'High Anger, Low Anxiety', 4: 'Low Anger, Low Anxiety', } # Create lists for the plot groups = [group_no_to_title[group] for group in group_to_loss_new] x_pos = np.arange(len(groups)) means = [np.mean(group_to_loss_new[group]) for group in group_to_loss_new] std_devs = [stats.sem(group_to_loss_new[group]) for group in group_to_loss_new] # Build the plot fig, ax = plt.subplots(figsize=(15,5)) ax.bar(x_pos, means, yerr=std_devs, align='center', alpha=0.5, ecolor='black', capsize=10) ax.set_ylabel('L1 Loss in Predicting Next State') ax.set_xticks(x_pos) ax.set_xticklabels(groups) ax.set_title('L1 Loss in Predicting Next State With New-Group-assignment HMM') ax.yaxis.grid(True) # Save the figure and show plt.tight_layout() plt.savefig('minimap_l1_newhmm_r2_with_ste_error_bars.png') plt.show() group_no_to_title = { 1: 'High Anger, High Anxiety', 2: 'Low Anger, High Anxiety', 3: 'High Anger, Low Anxiety', 4: 'Low Anger, Low Anxiety', } # Create lists for the plot groups = [group_no_to_title[group] for group in group_to_loss_original] x_pos = np.arange(len(groups)) means = [np.mean(group_to_loss_original[group]) for group in 
group_to_loss_original] std_devs = [stats.sem(group_to_loss_original[group]) for group in group_to_loss_original] # Build the plot fig, ax = plt.subplots(figsize=(15,5)) ax.bar(x_pos, means, yerr=std_devs, align='center', alpha=0.5, ecolor='black', capsize=10) ax.set_ylabel('L1 Loss in Predicting Next State') ax.set_xticks(x_pos) ax.set_xticklabels(groups) ax.set_title('L1 Loss in Predicting Next State With Old-Group-assignment HMM') ax.yaxis.grid(True) # Save the figure and show plt.tight_layout() plt.savefig('minimap_l1_oldhmm_r2_with_ste_error_bars.png') plt.show() ```
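The `compute_loss_w_hmm` helper used throughout this section is defined elsewhere in the notebook. As a hedged, standalone sketch of the quantity being averaged (the function name, the HMM details, and the exact loss definition here are assumptions, not the notebook's code), an L1 loss between predicted next-state probabilities and observed binary states could look like:

```python
import numpy as np

def l1_next_state_loss(pred_probs, observed_states):
    # Per-step L1 loss: |predicted probability - observed binary state|.
    # A perfect prediction contributes 0; a maximally wrong one contributes 1.
    pred_probs = np.asarray(pred_probs, dtype=float)
    observed_states = np.asarray(observed_states, dtype=float)
    return np.abs(pred_probs - observed_states)

# Toy example with three prediction steps.
losses = l1_next_state_loss([0.9, 0.1, 0.5], [1, 0, 1])
mean_loss = np.mean(losses)  # the notebook averages per-participant losses like this
```

An "improvement" as computed above is then just the difference of two such means (loss with the original-group HMM minus loss with the new-group HMM).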
``` import os import pickle import datetime import numpy as np import pandas as pd from datetime import date, timedelta import matplotlib.pyplot as plt pd.set_option('display.float_format', lambda x: '%.4f' % x) ``` ## MAP@K Function ``` # https://www.kaggle.com/c/h-and-m-personalized-fashion-recommendations/discussion/306007 # https://github.com/benhamner/Metrics/blob/master/Python/ml_metrics/average_precision.py def apk(actual, predicted, k=10): """ Computes the average precision at k. This function computes the average precision at k between two lists of items. Parameters ---------- actual : list A list of elements that are to be predicted (order doesn't matter) predicted : list A list of predicted elements (order does matter) k : int, optional The maximum number of predicted elements Returns ------- score : double The average precision at k over the input lists """ if len(predicted)>k: predicted = predicted[:k] score = 0.0 num_hits = 0.0 for i,p in enumerate(predicted): if p in actual and p not in predicted[:i]: num_hits += 1.0 score += num_hits / (i+1.0) # an empty `actual` list would divide by zero, so return 0 in that case try: return score / min(len(actual), k) except ZeroDivisionError: return 0 def mapk(actual, predicted, k=10): """ Computes the mean average precision at k. This function computes the mean average precision at k between two lists of lists of items. 
Parameters ---------- actual : list A list of lists of elements that are to be predicted (order doesn't matter in the lists) predicted : list A list of lists of predicted elements (order matters in the lists) k : int, optional The maximum number of predicted elements Returns ------- score : double The mean average precision at k over the input lists """ # print([apk(a,p,k) for a,p in zip(actual, predicted)]) return np.mean([apk(a,p,k) for a,p in zip(actual, predicted)]) ``` ## Load Data ``` path = '../data/processed' bird = pd.read_csv(os.path.join(path,'submission_bird.csv'))\ .rename(columns={'last_article':'recent_purchase','recomends':'prediction'})\ .drop(columns=['last_date','recent_purchase']) bird['prediction'] = bird['prediction'].apply(lambda x: ' '.join(['0'+i for i in x.split()])) bird.head() ``` ### Submission files ``` path = '../data/processed' # submit = pd.read_csv('submissions.csv',dtype=str) mew = pd.read_csv(os.path.join(path,'submissions (2).csv')) mew_v3 = pd.read_csv(os.path.join(path,'submissions_v3.csv')) # {'factors': 500, 'iterations': 3, 'regularization': 0.01} mew_v4 = pd.read_csv(os.path.join(path,'submissions_v4.csv')) # {'factors': 500, 'iterations': 3, 'regularization': 0.01} & filter_already_liked_items # mew_v5 = pd.read_csv(os.path.join(path,'submissions_v5.csv')) # past 1 year {'factors': 50, 'iterations': 15, 'regularization': 0.01} # mew_v6 = pd.read_csv(os.path.join(path,'submissions_v6.csv')) # all-time data {'factors': 50, 'iterations': 15, 'regularization': 0.01} top_l1m = pd.read_csv(os.path.join(path,'top_l1m.csv')) got = pd.read_csv(os.path.join(path,'recom_data_got2.csv')) non = pd.read_csv(os.path.join(path,'submission_full_v1_NON.csv')) bird = pd.read_csv(os.path.join(path,'submission_bird.csv'))\ .rename(columns={'last_article':'recent_purchase','recomends':'prediction'})\ .drop(columns=['last_date','recent_purchase']) bird['prediction'] = bird['prediction'].apply(lambda x: ' '.join(['0'+i for i in x.split()])) 
bird.head() ``` ### Index to Customer_id ``` # mapping index path = '../data/processed' infile = open(os.path.join(path,'index_to_cusId.pkl'),'rb') index_to_id_dict = pickle.load(infile) infile.close() ``` ### Transaction file ``` path = '../data/processed' trans = pd.read_pickle(os.path.join(path,'transactions.pkl')) trans["customer_id"] = trans["customer_id"].map(index_to_id_dict) trans.head() ``` ## 7-day target ``` start_dt = datetime.datetime(2020,9,15) end_dt = start_dt + timedelta(7) trans = trans[(trans.t_dat > start_dt) & (trans.t_dat <= end_dt)] print('Min date: ', trans.t_dat.min()) print('Max date: ', trans.t_dat.max()) print(f'Total Customers: {trans.customer_id.nunique()}') target = pd.DataFrame(trans.groupby(['customer_id'])['article_id'].apply(lambda x: list(set(x))))\ .reset_index()\ .rename(columns={'article_id':'actual'}) target['actual'] = target['actual'].apply(lambda x: ' '.join(x)) # weekly_purchased['weekly_purchased_products'] = weekly_purchased['weekly_purchased_products'].apply(lambda x: list(set(x))) target.head() ``` ## Evaluation ### Map Target ``` join_ = 'left' new_top_l1m = top_l1m.merge(target, on = 'customer_id',how=join_).fillna('') new_mew = mew.merge(target, on = 'customer_id',how=join_).fillna('') new_mew_v3 = mew_v3.merge(target, on = 'customer_id',how=join_).fillna('') new_mew_v4 = mew_v4.merge(target, on = 'customer_id',how=join_).fillna('') # new_mew_v5 = mew_v5.merge(target, on = 'customer_id',how=join_).fillna('') # new_mew_v6 = mew_v6.merge(target, on = 'customer_id',how=join_).fillna('') new_got = got.merge(target, on = 'customer_id',how=join_).fillna('') new_non = non.merge(target, on = 'customer_id',how=join_).fillna('') new_bird = bird.merge(target, on = 'customer_id',how=join_).fillna('') new_top_l1m['actual'] = new_top_l1m['actual'].apply(lambda x: x.split()) new_mew['actual'] = new_mew['actual'].apply(lambda x: x.split()) new_mew_v3['actual'] = new_mew_v3['actual'].apply(lambda x: x.split()) new_mew_v4['actual'] 
= new_mew_v4['actual'].apply(lambda x: x.split()) # new_mew_v5['actual'] = new_mew_v5['actual'].apply(lambda x: x.split()) # new_mew_v6['actual'] = new_mew_v6['actual'].apply(lambda x: x.split()) new_got['actual'] = new_got['actual'].apply(lambda x: x.split()) new_non['actual'] = new_non['actual'].apply(lambda x: x.split()) new_bird['actual'] = new_bird['actual'].apply(lambda x: x.split()) new_top_l1m['prediction'] = new_top_l1m['prediction'].apply(lambda x: x.split()) new_mew['prediction'] = new_mew['prediction'].apply(lambda x: x.split()) new_mew_v3['prediction'] = new_mew_v3['prediction'].apply(lambda x: x.split()) new_mew_v4['prediction'] = new_mew_v4['prediction'].apply(lambda x: x.split()) # new_mew_v5['prediction'] = new_mew_v5['prediction'].apply(lambda x: x.split()) # new_mew_v6['prediction'] = new_mew_v6['prediction'].apply(lambda x: x.split()) new_got['prediction'] = new_got['prediction'].apply(lambda x: x.split()) new_non['prediction'] = new_non['prediction'].apply(lambda x: x.split()) new_bird['prediction'] = new_bird['prediction'].apply(lambda x: x.split()) new_top_l1m.head() ``` ### MAP@12 ``` top_l1m_result = mapk(new_top_l1m['actual'],new_top_l1m['prediction'], k=12) mew_result = mapk(new_mew['actual'],new_mew['prediction'], k=12) mew_v3_result = mapk(new_mew_v3['actual'],new_mew_v3['prediction'], k=12) mew_v4_result = mapk(new_mew_v4['actual'],new_mew_v4['prediction'], k=12) # mew_v5_result = mapk(new_mew_v5['actual'],new_mew_v5['prediction'], k=12) # mew_v6_result = mapk(new_mew_v6['actual'],new_mew_v6['prediction'], k=12) got_result = mapk(new_got['actual'],new_got['prediction'], k=12) non_result = mapk(new_non['actual'],new_non['prediction'], k=12) bird_result = mapk(new_bird['actual'],new_bird['prediction'], k=12) print('mAP@12') print('top_l1m_result: {:.4%}'.format(top_l1m_result)) print('mew_v2_result: {:.4%}'.format(mew_result)) print('mew_v3_result: {:.4%}'.format(mew_v3_result)) print('mew_v4_result: {:.4%}'.format(mew_v4_result)) # 
print('mew_v5_result: {:.4%}'.format(mew_v5_result)) # print('mew_v6_result: {:.4%}'.format(mew_v6_result)) print('got_result: {:.4%}'.format(got_result)) print('non_result: {:.4%}'.format(non_result)) print('bird_result: {:.4%}'.format(bird_result)) numbers = "{:,}".format(len(new_top_l1m)*12) print(f'Total recommended items: {numbers}') print('Approx. purchased items:') print('\t - top_l1m: {:,}'.format(round(len(new_top_l1m)*12*top_l1m_result))) print('\t - mew_v2_result: {:,}'.format(round(len(new_top_l1m)*12*mew_result))) print('\t - mew_v3_result: {:,}'.format(round(len(new_top_l1m)*12*mew_v3_result))) print('\t - mew_v4_result: {:,}'.format(round(len(new_top_l1m)*12*mew_v4_result))) # print('\t - mew_v5_result: {:,}'.format(round(len(new_top_l1m)*12*mew_v5_result))) # print('\t - mew_v6_result: {:,}'.format(round(len(new_top_l1m)*12*mew_v6_result))) print('\t - got_result: {:,}'.format(round(len(new_top_l1m)*12*got_result))) print('\t - non_result: {:,}'.format(round(len(new_top_l1m)*12*non_result))) print('\t - bird_result: {:,}'.format(round(len(new_top_l1m)*12*bird_result))) ```
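As a standalone sanity check of the MAP@K metric defined at the top of this notebook (the two functions are restated here, without the debug comments, so the snippet runs on its own):

```python
import numpy as np

def apk(actual, predicted, k=10):
    # Average precision at k, following the notebook's apk().
    if len(predicted) > k:
        predicted = predicted[:k]
    score, num_hits = 0.0, 0.0
    for i, p in enumerate(predicted):
        if p in actual and p not in predicted[:i]:
            num_hits += 1.0
            score += num_hits / (i + 1.0)
    if not actual:
        return 0.0
    return score / min(len(actual), k)

def mapk(actual, predicted, k=10):
    # Mean of apk over a list of (actual, predicted) pairs.
    return np.mean([apk(a, p, k) for a, p in zip(actual, predicted)])

# Ranking the one relevant item first scores 1.0; ranking it second scores 0.5.
ap_first = apk(['a'], ['a', 'x'], k=12)   # 1.0
ap_second = apk(['a'], ['x', 'a'], k=12)  # 0.5
```

So MAP@12 rewards not just retrieving purchased articles but also placing them early in the 12-item recommendation list.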
``` # A body at a temperature of 50° F is placed outdoors where the temperature is 100° F. If after 5 minutes # the temperature of the body is 60° F, find (a) how long it will take the body to reach a temperature of # 75° F and (b) the temperature of the body after 20 minutes. # Differential form of the problem ; dT/dt + k*T = 100 * k # We will look only at numerical solutions. # The general solution of the problem ; T = -50 * exp(-k*t) + 100 import numpy as np import pandas as pd from scipy.integrate import odeint import matplotlib.pyplot as plt def temperature(T,t): k = 0.045 dTdt = k * (100 - T) return dTdt T0 = 50 t = np.linspace(0,25) T = odeint(temperature,T0,t) plt.title("Problem 7.9 ") plt.plot(t,T) plt.xlabel('Time') plt.ylabel('T(t)') plt.show() K = np.squeeze(np.asarray(T)) M = np.squeeze(np.asarray(t)) print(K) print(M) # We use the " np.squeeze(np.asarray(.)) " function to transform the results of the problem from matrices to numpy arrays # To reach specific temperature values, we have to adjust the range of the "np.linspace()" function import numpy as np import pandas as pd from scipy.integrate import odeint import matplotlib.pyplot as plt def temperature(T,t): k = 0.045 dTdt = k * (100 - T) return dTdt T0 = 72.47528715 t = np.linspace(13.26530612,15.4,) T = odeint(temperature,T0,t) plt.title("Problem 7.9 ") plt.plot(t,T) plt.xlabel('Time') plt.ylabel('T(t)') plt.show() Y = np.squeeze(np.asarray(T)) A = np.squeeze(np.asarray(t)) print(Y) print(A) # a) We can clearly see that after 15.4 minutes the temperature will reach 75 F import numpy as np import pandas as pd from scipy.integrate import odeint import matplotlib.pyplot as plt def temperature(T,t): k = 0.045 dTdt = k * (100 - T) return dTdt T0 = 79.57795551 t = np.linspace(19.89795918,20,) T = odeint(temperature,T0,t) plt.title("Problem 7.9 ") plt.plot(t,T) plt.xlabel('Time') plt.ylabel('T(t)') plt.show() Y = np.squeeze(np.asarray(T)) A = np.squeeze(np.asarray(t)) print(Y) print(A) # b) We can see that after 20 minutes the body temperature will reach 79.6715154 F # Bonus differentiation plot f=plt.figure(figsize=(12,4)) axes1=f.add_axes([0.1,0.1,0.9,0.9]) axes2=f.add_axes([0.6,0.3,0.3,0.3]) axes1.plot(M,K,color="blue") axes2.plot(A,Y,color="red") axes1.set_xlabel("Time") axes2.set_xlabel("Time") axes1.set_ylabel("T(t)") axes2.set_ylabel("T(t)") axes1.set_title("Wide Range Differentiation") axes2.set_title("Low Range Differentiation") plt.show() ```
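The numerical answers above can be cross-checked against the closed-form solution noted in the first cell, T(t) = 100 - 50*exp(-k*t). Setting T = 75 gives t = ln(2)/k, and T(20) follows by direct substitution:

```python
import numpy as np

k = 0.045  # cooling constant used throughout the notebook

# (a) time to reach 75 F: 75 = 100 - 50*exp(-k*t)  =>  exp(-k*t) = 1/2  =>  t = ln(2)/k
t_75 = np.log(2) / k   # about 15.40 minutes

# (b) temperature after 20 minutes
T_20 = 100 - 50 * np.exp(-k * 20)   # about 79.67 F
```

Both values agree with the odeint results above (15.4 minutes and 79.6715154 °F).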
<a href="https://colab.research.google.com/github/SaashaJoshi/quantum-computing/blob/master/google-cirq/Intro_to_Cirq.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ![Cirq](https://cirq.readthedocs.io/en/stable/_images/Cirq_logo_color.png) [Cirq](https://github.com/quantumlib/cirq) is a framework for writing quantum algorithms for noisy intermediate-scale quantum (NISQ) devices. Roughly speaking, NISQ devices are those with O(100) qubits that can enact O(1000) gates. Because the resources for NISQ devices are so constrained, we believe that a framework for writing programs on these devices needs to be aware of all of the architectural properties of the device on which the algorithm is written. This is in contrast to other frameworks where there is a clean separation between the abstract model being used and the details of the device. In this tutorial we will teach you the basics of writing quantum algorithms in Cirq. --- >>[Installing Cirq](#scrollTo=rPgPbry6-mF3) >>[Qubits, Moments, Operations, and Circuits](#scrollTo=8A7a3jcql1l5) >>>[Create a Circuit](#scrollTo=VFwmWPf7D057) >>>[Building Circuits](#scrollTo=uaDb6B_jPgrb) >>>[Exercise: Create a circuit](#scrollTo=y9conKPAPn26) >>>>[Solution](#scrollTo=KnA4uBkwEw5-) >>[Simulations of a Circuit](#scrollTo=X15yPl_KQ20Z) >>>[Repetitions](#scrollTo=YLpiz0aN1Jd6) --- ## Installing Cirq To use Cirq one first needs to install Cirq. For the purpose of using this notebook, you can run pip install to install the latest release of Cirq. Different notebook execution systems exist, but most have a "run" button on a cell, which you can push, or shift+enter as a shortcut to run the cell. Doing so in the following cell should install Cirq. ``` # Install Cirq !pip install cirq==0.5 --quiet ``` (Note: you may see an error about `albumentations` requiring an old `imgaug`. You can ignore this error.) 
Let's check that Cirq has been successfully installed by importing Cirq and printing out a diagram of Google's Bristlecone device. ![Google's Bristlecone chip](https://4.bp.blogspot.com/-b9akad6ismU/WpmyaJo-cYI/AAAAAAAACa8/mCqPBJxv5oUivy6Jq42FSOQYkeRlTmkiwCLcBGAs/s1600/image1.png) ``` import cirq import numpy as np import matplotlib print(cirq.google.Bristlecone) ``` The import ran without raising an error, and the output is in fact the grid of qubits for the Bristlecone device. Looks like the install worked! Be aware that Cirq is still alpha software, meaning **we are still making breaking changes all the time**. If you don't want your project to suddenly go from working to not working when we release a new version, you should depend on a *specific version* of Cirq and periodically bump that version to the latest one. For the purposes of this tutorial, we will use version `0.5` (i.e. `cirq==0.5` in pip's version notation). --- ## Qubits, Moments, Operations, and Circuits In Cirq, circuits are represented either by a `Circuit` object or a `Schedule` object. `Schedule`s offer more control over quantum gates and circuits at the timing level. Conceptually: a `Circuit` is a collection of `Moment`s. A `Moment` is a collection of `Operation`s that all act during the same abstract time slice. An `Operation` is an effect that operates on a specific subset of qubits. The most common type of `Operation` is a `Gate` applied to several qubits (a "`GateOperation`"). The following diagram should help illustrate these concepts. ![Circuits, Moments, and Operations.](https://cirq.readthedocs.io/en/latest/_images/CircuitMomentOperation.png) ### Create a Circuit Let's create a `Circuit`. Note that in the previous cell we imported cirq, so we will assume that cirq has been imported throughout the rest of this notebook. 
``` a = cirq.NamedQubit("a") b = cirq.NamedQubit("b") c = cirq.NamedQubit("c") ops = [cirq.H(a), cirq.H(b), cirq.CNOT(b, c), cirq.H(b)] circuit = cirq.Circuit.from_ops(ops) print(circuit) ``` We can unpack this a bit and see all of the components for the circuit. The first thing we do is pick some qubits to use. There are many different types of qubits in Cirq, and you can define your own by inheriting from the `cirq.Qid` class. There's nothing inherently special or magical about these quantum id types such as `cirq.NamedQubit`. They simply identify what you wish to operate on, which is relevant when you are targeting a specific device. For example, if we were creating a circuit for the Bristlecone device we would use `cirq.GridQubit(5, 0)` to refer to the qubit in the leftmost position of the device. To keep these simple for now, we'll start with abstract qubits simply identified by a name such as "a". ``` a = cirq.NamedQubit("a") ``` Next we encounter the object `cirq.H`, which is a Hadamard gate. `cirq.H` is an instance of the `cirq.HGate` class, which itself is a subclass of `Gate` (along with other classes). $$H = {1 \over \sqrt{2}} \left[ \begin{array}{cc} 1 & 1 \\ 1 & -1 \end{array}\right]$$ We can use cirq to see this unitary matrix: ``` cirq.unitary(cirq.H) ``` `Gate` objects have the ability to be applied "on" to one or more qubits. There are two ways to do this for gates, either using the `on` method or by directly calling the gate on the qubits as if the gate were a function and the qubits were arguments. For example, to apply `H` to qubit `a` we can say ``` cirq.H.on(a) ``` or ``` cirq.H(a) ``` The result of those expressions is a `GateOperation` object, which is a type of `Operation`. In Cirq we make a strong distinction between `Operation`s and `Gate`s. An `Operation` is associated with specific qubits and can be put in `Circuit`s. A `Gate` has unspecified qubits, and will produce an operation when given qubits. 
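The Hadamard matrix shown by `cirq.unitary(cirq.H)` can also be written down directly with NumPy and checked to be unitary; since H is real and symmetric, being its own inverse is equivalent to unitarity:

```python
import numpy as np

# Hadamard gate, matching the matrix printed by cirq.unitary(cirq.H)
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

# H is its own inverse, so applying it twice gives the identity.
identity = H @ H
```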
Once you have a collection of operations, you can construct a `Circuit` using the class method `Circuit.from_ops` (more on that in a minute): ``` circuit = cirq.Circuit.from_ops(ops) ``` The last thing we did in the example code was use the (surprisingly useful) ability to print the circuit as a text diagram. The diagram is visually helpful, but it doesn't really get into the internal details of how the `Circuit` is represented. A `Circuit` is made up of a sequence of `Moment` objects. And each `Moment` object is a list of non-overlapping `Operation`s. To see this internal structure, we can iterate over the `Moment`s in the `Circuit` while printing them out. ``` for i, moment in enumerate(circuit): print('Moment {}: {}'.format(i, moment)) ``` We can also just print the circuit's `repr`, which returns a somewhat more detailed (if less readable) expression. ``` print(repr(circuit)) ``` The usefulness of printing the `repr` is that it includes *all* the gory details. These details can be useful when debugging. The `repr` is also a valid python expression that evaluates to the circuit. For example, if we notice that a circuit generated in some complicated way triggers a bug in a simulator, copy-pasting the generated circuit's `repr` into a test, and then working from there, is a simple way to decouple the reproduction of the bug from the circuit generation code. ### Building Circuits Above we created the `Circuit` using `from_ops`. But there are many ways to construct and modify circuits, and each of these is useful in different contexts. Here are a few examples: 1. `from_ops`: This is the simplest way to make a circuit. Give this method some operations, and out pops a circuit. 2. `append`: `Circuit`s are mutable. You can start with an empty `c = cirq.Circuit()` and simply `c.append(operations)` to add on more and more operations 3. `insert`: Instead of appending, you can insert before a particular moment location (labeled by an integer index) 4. 
By using `Circuit`'s constructor, which takes a list of `Moment`s. Each `Moment` must be explicitly constructed with its own list of `Operation`s. This is tedious, but gives complete control over how the operations are laid out. One interesting, and extremely convenient, fact about `from_ops`, `append`, and `insert` is that they "auto flatten" whatever you give them. You *can* give them a list of operations, but you can also give them a list *of lists* of operations. Or a generator function that sometimes yields tuples of operations and other times yields individual operations. Or just a single operation (without a list around it). If it can be recursively iterated into individual operations, `from_ops` and `append` and `insert` will take it. The main place where auto-flattening is useful is when you are generating a circuit's operations using generator functions. This is jumping a bit ahead of what we've explained, but basically auto-flattening means that generators producing operations for a circuit can simply `yield` sub-generators (instead of iterating over them and yielding their items): ``` def xor_swap(a, b): yield cirq.CNOT(a, b) yield cirq.CNOT(b, a) yield cirq.CNOT(a, b) def left_rotate(qubits): for i in range(len(qubits) - 1): a, b = qubits[i:i+2] yield xor_swap(a, b) line = cirq.LineQubit.range(5) print(cirq.Circuit.from_ops(left_rotate(line))) ``` You may have noticed that there is a hole in what we've explained so far. `from_ops` effectively takes a 1-dimensional sequence of operations, but the output is a 2-dimensional circuit (a list-of-lists-of-operations). There is a degree of freedom that hasn't been accounted for. Specifically: how does cirq choose the moment that each operation will be placed within? The answer is: it depends on the `InsertStrategy` you choose. There are currently four insertion strategies in Cirq: 1. `InsertStrategy.EARLIEST` (currently the default) 2. `InsertStrategy.NEW` 3. `InsertStrategy.INLINE` 4. 
`InsertStrategy.NEW_THEN_INLINE` `InsertStrategy.EARLIEST` is defined as > `InsertStrategy.EARLIEST`: Scans backward from the insert > location until a moment with operations touching qubits affected by the > operation to insert is found. The operation is added into the moment just > after that location. For example, if we first create an `Operation` in a single moment, and then use `InsertStrategy.EARLIEST`, the `Operation` can slide back to this first `Moment` if there is space. An `InsertStrategy` defines how ``Operations`` are placed in a `Circuit` when requested to be inserted at a given location. Here a `location` is identified by the index of the `Moment` in the `Circuit` that operations should be placed before (in the case of `Circuit.append` this means inserting at the index `len(circuit)`, which is one more than the largest moment index and so represents the end of the circuit). ``` circuit = cirq.Circuit() circuit.append([cirq.CZ(a, b)]) circuit.append([cirq.H(a), cirq.H(b), cirq.H(c)]) print(circuit) ``` After creating the first moment with a `CZ` gate, the second append uses the `InsertStrategy.EARLIEST` strategy. The `H` on ``a`` and ``b`` cannot slide back, while the `H` on ``c`` can and so ends up in the first `Moment`. `InsertStrategy.EARLIEST` is the default strategy; the second most important strategy is `InsertStrategy.NEW_THEN_INLINE`: > `InsertStrategy.NEW_THEN_INLINE`: For the first operation, add it to a new > `Moment` at the insertion point. Attempt to add each operation after the first > into the moment just before the desired insert location. > But, if there's already an existing operation affecting any of the qubits > touched by the operation to insert, a new moment is created instead and this > `Moment` is the one that is subsequently used for insertions. 
As an example of this examine this code ``` circuit = cirq.Circuit() circuit.append([cirq.CZ(a, b)]) circuit.append([cirq.H(c), cirq.H(b), cirq.H(b), cirq.H(a)], ) print(circuit) ``` ### Exercise: Create a circuit Now that you've learned about `InsertStrategy`, here is an exercise to validate your understanding. Create, using the least number of appends the following circuit (note that printing a circuit in Cirq does not always print a moment by moment structure e.g. to avoid overlapping operations in the diagram, but here imagine that you want exactly the moments indicated by the spacing of the circuit.) ``` a: ───@───H───────────H───H─── │ b: ───@───────H───@───H─────── │ c: ───H───────────@─────────── ``` #### Solution ``` #@title a = cirq.NamedQubit('a') b = cirq.NamedQubit('b') c = cirq.NamedQubit('c') circuit = cirq.Circuit() circuit.append([cirq.CZ(a, b), cirq.H(c), cirq.H(a)] ) circuit.append([cirq.H(b), cirq.CZ(b, c), cirq.H(b), cirq.H(a), cirq.H(a)], strategy=cirq.InsertStrategy.NEW_THEN_INLINE) print(circuit) ``` ## Simulations of a Circuit Now that you know how to construct a `Circuit` in Cirq, let's use Cirq to simulate the circuit. Here is a simple circuit ``` def basic_circuit(measure=True): sqrt_x = cirq.X**0.5 cz = cirq.CZ yield sqrt_x(a), sqrt_x(b) yield cz(a, b) yield sqrt_x(a), sqrt_x(b) if measure: yield cirq.measure(a,b) circuit = cirq.Circuit.from_ops(basic_circuit()) print(circuit) ``` There are a few things to note here. One is that we have used a Python *generator*. Recall that in Python functions that have a `yield` are *generators*. Generators are functions that act as *iterators*. Above we see that we can iterate over ``basic_circuit()``. We see that when we do this each of the `yields` produces what was yielded, and here these are `Operations`, or lists of ``Operations``. But when we pass this iterator to the append method, something magical happens. 
`Circuit` is able to flatten all of these and pass them as one giant list to `Circuit.append` (this also works for `Circuit.insert`). > The above idea uses a concept we call an ``OP_TREE``. An ``OP_TREE`` is > not a class, but a contract. The basic idea is that, if the input can be > iteratively flattened into a list of operations, then the input is an > ``OP_TREE``. A very nice pattern emerges from this structure: define *generators* for sub-circuits, which can vary by size or `Operation` parameters. Now we can simulate this circuit. ``` simulator = cirq.Simulator() circuit = cirq.Circuit.from_ops(basic_circuit()) result = simulator.run(circuit) print('Measurement results') print(result) ``` Running this multiple times should result in different measurement results, since the above circuit produces a superposition over all computational basis states. Above we used the `run` method on the simulator. These methods mimic the actual hardware in that they don't give one access to unphysical objects like the wavefunction. If one wants to get the wave function, then the `simulate` methods can do this: ``` circuit = cirq.Circuit() circuit.append(basic_circuit(measure=False)) result = simulator.simulate(circuit, qubit_order=[a, b]) print('Wavefunction:') print(np.around(result.final_state, 3)) print('Dirac notation:') print(result.dirac_notation()) ``` Notice that we passed a `qubit_order` into the `simulate` method. This order helps define the order of the Kronecker product used in the resulting `final_state` vector. The `qubit_order` argument is optional. When it is omitted, qubits are sorted ascending according to the ordering methods defined by their python class (for example `cirq.NamedQubit` sorts lexicographically by name). If there are multiple types of qubits in one circuit, the name of the type is used as a tie breaker. The simplest `qubit_order` value you can provide is a list of the qubits in the desired order. 
Any qubits from the circuit that are not in the list will be ordered using the default `__str__` ordering, but come after qubits that are in the list. Be aware that all qubits in the list are included in the simulation, even if they are not operated on by the circuit. The mapping from the order of the qubits to the order of the amplitudes in the wave function can be tricky to understand. Basically, it is the same as the ordering used by `numpy.kron`. > If wave function is array >>(0.1, 0.2, 0.3, 0.4) > then this is >> 0.1|00⟩ + 0.2|01⟩ + 0.3|10⟩ + 0.4|11⟩ >in Dirac notation. If the >> qubit order = [a, b] >then |00> means qubit a is in 0 and qubit b is in 0, |01> means > qubit a is 0 and qubit b is 1, etc. Another way to think about the qubit-to-amplitude ordering is as "for loop ordering": ``` for a in [0, 1]: for b in [0, 1]: print(a, b) ``` The first index (the outermost loop) is the slowest to vary. ### Repetitions The simulator `run` methods also take an option for repeating the circuit. If the measurements in the circuit are terminal, and all other operations are unitary, this simulator is optimized to not recompute the wavefunction before sampling from the circuit. So for example this code doesn't recompute the wave function but knows to sample from the final measurements ``` circuit = cirq.Circuit.from_ops(basic_circuit()) result = simulator.run(circuit, repetitions=1000) print(result.histogram(key='a,b')) ``` Here we have also demonstrated the use of the `histogram` method on the `result` which sums over all the different results for all of the different repetitions. The `histogram` method can also be given a `fold_func` argument, in order to group measurement results under some key before counting them up. For example, we can group by whether or not the two measurement results agreed: ``` print(result.histogram(key='a,b', fold_func=lambda e: 'agree' if e[0] == e[1] else 'disagree')) ```
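The "for loop ordering" described above is exactly the ordering of `numpy.kron`, which can be checked without a simulator (the single-qubit amplitudes below are made up for illustration):

```python
import numpy as np

# Single-qubit states for qubits a and b as (amplitude of |0>, amplitude of |1>).
state_a = np.array([0.6, 0.8])
state_b = np.array([1.0, 0.0])

# With qubit_order = [a, b] the two-qubit amplitudes come out in the order
# |00>, |01>, |10>, |11> -- the first qubit in the order varies slowest.
combined = np.kron(state_a, state_b)  # [0.6, 0.0, 0.8, 0.0]
```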
# Clustering the Beaded Helix Transtion from R- to L-Helix ## Import Libraries and Define CV Subroutines ``` import sys import numpy as np import matplotlib.pyplot as plt %matplotlib inline import pyemma from shapeGMM import gmm_shapes import MDAnalysis as md import time def weighted_cross_validate_cluster_scan(traj_data, n_train_frames, cluster_array = np.arange(2,9,1).astype(int), n_training_sets=10, n_attempts = 5): """ perform cross validation weighted shape-GMM for range of cluster sizes Inputs: traj_data (required) : float64 array with dimensions (n_frames, n_atoms,3) of molecular configurations n_train_frames (required) : int scalar dictating number of frames to use as training (rest is used for CV) cluster_array (default: [2..8]) : int array of cluster sizes - can be of any number but must be ints. Default is [2, 3, 4, 5, 6, 7, 8] n_training_sets (default: 10) : int scalar dictating how many training sets to choose. Default is 10 n_attempts (default: 5) : int scalar dictating how many attempts to perform shape-GMM on same set. 
Default is 5 Returns: weighted_train_log_lik : float64 array with dimensions (n_clusters, n_training_sets) containing log likelihoods for each training set weighted_predict_log_lik : float64 array with dimensions (n_clusters, n_training_sets) containing log likelihoods on each CV set """ # meta data from input array n_frames = traj_data.shape[0] # set parameters n_predict_frames = n_frames - n_train_frames print("Number of frames to train each model:", n_train_frames) print("Number of frames to predict each model:", n_predict_frames) print("Number of training sets:", n_training_sets) print("Number of clusters:", cluster_array.size) print("Number of attempts per set/cluster:", n_attempts) sys.stdout.flush() # open data files weighted_train_log_lik = np.empty((cluster_array.size,n_training_sets),dtype=np.float64) weighted_predict_log_lik = np.empty((cluster_array.size,n_training_sets),dtype=np.float64) # print log info print("%15s %15s %15s %19s %15s" % ("Training Set", "N Clusters", "Attempt", "Log Like per Frame","CPU Time (s)")) print("%84s" % ("------------------------------------------------------------------------------------")) # loop over training sets for training_set in range(n_training_sets): # shuffle trajectory data np.random.shuffle(traj_data) # create training and predict data train_data = traj_data[:n_train_frames] predict_data = traj_data[n_train_frames:] # loop over all number of clusters for cluster_index, cluster_size in enumerate(cluster_array): w_log_lik = [] w_objs = [] # for each n_clusters and training set, perform shape-GMM n_attempts times and take object with largest log likelihood for attempt in range(n_attempts): start_time = time.process_time() wsgmm = gmm_shapes.ShapeGMM(cluster_size,kabsch_thresh=1e-1,init_cluster_method='random',init_iter=5) wsgmm.fit_weighted(train_data) w_log_lik.append(wsgmm.log_likelihood) w_objs.append(wsgmm) elapsed_time = time.process_time()-start_time print("%15d %15d %15d %19.3f %15.3f" % (training_set+1, 
cluster_size, attempt+1, np.round(wsgmm.log_likelihood/wsgmm.n_frames,3), np.round(elapsed_time,3))) # determine maximum w_arg = np.argmax(w_log_lik) # save training log likes weighted_train_log_lik[cluster_index,training_set] = w_log_lik[w_arg] # save prediction log likes weighted_predict_log_lik[cluster_index,training_set] = w_objs[w_arg].predict_weighted(predict_data)[2] # convert to log likelihood per frame weighted_train_log_lik /= n_train_frames weighted_predict_log_lik /= n_predict_frames #return return weighted_train_log_lik, weighted_predict_log_lik # reorder cluster numbers based on populations in descending order def reorder_gmm_cluster_obj(sgmm_obj): # determine metadata based on clusters n_frames = sgmm_obj.n_frames cluster_ids, cluster_populations = np.unique(sgmm_obj.clusters,return_counts=True) n_clusters = cluster_ids.size print("Number of clusters:", n_clusters) print("Populations prior to reorder:", cluster_populations/n_frames) # determine sort key sort_key = np.argsort(cluster_populations)[::-1] sorted_cluster_ids = cluster_ids[sort_key] new_clusters = np.empty(n_frames,dtype=int) for frame in range(n_frames): new_clusters[frame] = np.argwhere(sorted_cluster_ids == sgmm_obj.clusters[frame]) cluster_ids, cluster_populations = np.unique(new_clusters,return_counts=True) print("Populations after reorder:", cluster_populations/n_frames) # repopulate object sgmm_obj.precisions = sgmm_obj.precisions[sort_key] sgmm_obj.lpdets = sgmm_obj.lpdets[sort_key] sgmm_obj.centers = sgmm_obj.centers[sort_key] sgmm_obj.weights = sgmm_obj.weights[sort_key] sgmm_obj.ln_weights = sgmm_obj.ln_weights[sort_key] sgmm_obj.clusters = new_clusters ``` ## Read trajectory ``` prmtopFileName = "helix_template.pdb" trajFileName = "helix_folding_eps6.0.dcd" coord = md.Universe(prmtopFileName,trajFileName) print("Number of atoms in trajectory:", coord.atoms.n_atoms) print("Number of frames in trajectory:", coord.trajectory.n_frames) # make atom selection atomSel = 
coord.select_atoms('all') print("Number of atoms in selection:", atomSel.n_atoms) # create traj data of selection trajData = np.empty((coord.trajectory.n_frames,atomSel.n_atoms,3),dtype=float) #loop traj for ts in coord.trajectory: trajData[ts.frame,:] = atomSel.positions ``` ## Perform Cross Validation Cluster Scan (can take a while) In this scan, it is possible for divide-by-zero errors to arise. This occurs when there are very few (perhaps only one) frames in a cluster. We have only observed this for very simple systems such as this beaded helix example. ``` # define cluster array cluster_array = np.arange(2,8,1).astype(int) # run cluster CV scan weighted_train_log_lik, weighted_predict_log_lik = weighted_cross_validate_cluster_scan(trajData,2000,cluster_array = cluster_array, n_training_sets=5, n_attempts=10) # define output file names weighted_train_filename = "weighted_train_2_7.dat" weighted_predict_filename = "weighted_predict_2_7.dat" # write to data files np.savetxt(weighted_train_filename,np.column_stack((cluster_array,weighted_train_log_lik))) np.savetxt(weighted_predict_filename,np.column_stack((cluster_array,weighted_predict_log_lik))) # load data from txt file if you don't want to run weighted_train_log_lik = np.loadtxt("weighted_train_2_7.dat")[:,1:] weighted_predict_log_lik = np.loadtxt("weighted_predict_2_7.dat")[:,1:] ``` ## Make Log Likelihood vs Number of Clusters Plots ``` cluster_array = np.arange(2,8,1).astype(int) # create figure plt.figure(figsize=(10,10), dpi= 120, facecolor='w', edgecolor='k') # weighted SGMM weighted_train_mean = np.mean(weighted_train_log_lik,axis=1) weighted_train_std = np.std(weighted_train_log_lik,axis=1) plt.errorbar(cluster_array,weighted_train_mean,weighted_train_std,fmt='-o',lw=3,capsize=3,label="W-SGMM Training") lower, upper = pyemma.util.statistics.confidence_interval((weighted_train_log_lik).T.tolist(), conf=0.9) plt.fill_between(cluster_array, lower, upper, alpha=0.3) weighted_predict_mean = np.mean(weighted_predict_log_lik,axis=1) weighted_predict_std = np.std(weighted_predict_log_lik,axis=1)
plt.errorbar(cluster_array,weighted_predict_mean,weighted_predict_std,fmt='--x',lw=3,capsize=3,label="W-SGMM Cross Validation") lower, upper = pyemma.util.statistics.confidence_interval((weighted_predict_log_lik).T.tolist(), conf=0.9) plt.fill_between(cluster_array, lower, upper, alpha=0.3) plt.grid(b=True, which='major', axis='both', color='#808080', linestyle='--') plt.ylabel("Log Likelihood per Frame",fontsize=16) plt.xlabel("Number of Clusters",fontsize=16) plt.tick_params(axis='both',labelsize=16) plt.legend(fontsize=14) plt.tight_layout() #plt.savefig("beaded_helix_log_likelihood_cv.png",dpi=300,transparent=True) ``` ## Run WSGMM for nClusters=3 ``` n_clusters = 3 delta = 1 n_attempts = 5 objs = [] log_likes = [] for i in range(n_attempts): wsgmm = gmm_shapes.ShapeGMM(n_clusters,kabsch_thresh=1e-1,init_cluster_method='random',init_iter=5) wsgmm.fit_weighted(trajData[1::delta]) print(i+1, wsgmm.log_likelihood/wsgmm.n_frames) objs.append(wsgmm) log_likes.append(wsgmm.log_likelihood) # select obj with max log likelihood per frame wsgmm = objs[np.argmax(log_likes)] # reorder object reorder_gmm_cluster_obj(wsgmm) #predict if you didn't train on entire data set entire_traj_clusters, entire_traj_traj, entire_traj_log_lik = wsgmm.predict_weighted(trajData[1:]) ``` ## Make 2D FE Plot with clusterings ``` from shapeGMM._traj_tools import weight_kabsch_dist_align mahaClusterCenters = np.empty((trajData[1::delta].shape[0],2),dtype=np.float32) for frame in range(trajData[1::delta].shape[0]): mahaClusterCenters[frame,0] = np.sqrt(weight_kabsch_dist_align(trajData[1::delta][frame],wsgmm.centers[0],wsgmm.precisions[0])) mahaClusterCenters[frame,1] = np.sqrt(weight_kabsch_dist_align(trajData[1::delta][frame],wsgmm.centers[1],wsgmm.precisions[1])) plt.figure(figsize=(10,10), dpi= 120, facecolor='w', edgecolor='k') x = mahaClusterCenters[:,0] y = mahaClusterCenters[:,1] H, xedges, yedges = np.histogram2d(x,y,bins=40,density=True) xcenters = (xedges[:-1] + xedges[1:]) / 2 
ycenters = (yedges[:-1] + yedges[1:]) / 2 H = -np.log(H.T) xx, yy = np.meshgrid(xcenters, ycenters) plt.contour(xx,yy,H,cmap='binary') plt.ylabel("Mahalanobis Distance to Cluster 2",fontsize=16) plt.xlabel("Mahalanobis Distance to Cluster 1",fontsize=16) plt.tick_params(axis='both',labelsize=16) plt.grid(b=True, which='major', axis='both', color='#808080', linestyle='--') plt.scatter(mahaClusterCenters[:,0],mahaClusterCenters[:,1],c=entire_traj_clusters)#,alpha=0.2) plt.tight_layout() plt.xlim(0,38) plt.ylim(0,38) plt.gca().set_aspect('equal') #plt.savefig("beaded_helix_2D_FE_w_clusters.eps",dpi=300) ```
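The axes of the plot above are Mahalanobis distances to the first two cluster centers, which `weight_kabsch_dist_align` computes after optimal alignment. Stripping away the alignment step, the Mahalanobis part reduces to a quadratic form in the cluster's precision (inverse covariance) matrix. A toy NumPy sketch with made-up numbers, not part of the shapeGMM API:

```python
import numpy as np

# For a cluster with mean `center` and precision matrix `precision`
# (inverse covariance), the squared Mahalanobis distance of a point x is
# (x - center)^T @ precision @ (x - center).
center = np.zeros(3)
cov = np.diag([1.0, 4.0, 9.0])      # illustrative covariance
precision = np.linalg.inv(cov)

x = np.array([1.0, 2.0, 3.0])       # each coordinate is 1 std dev out
diff = x - center
maha_sq = diff @ precision @ diff
print(np.sqrt(maha_sq))             # distance in units of cluster std devs
```

Here each coordinate of `x` lies exactly one standard deviation from the center, so the distance is $\sqrt{3}$.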
# Regression with TensorFlow ``` import tensorflow as tf print('TensorFlow:{}'.format(tf.__version__)) tf.set_random_seed(123) import numpy as np print('NumPy:{}'.format(np.__version__)) np.random.seed(123) import matplotlib.pyplot as plt import sklearn as sk print('Scikit Learn:{}'.format(sk.__version__)) from sklearn import model_selection as skms from sklearn import datasets as skds from sklearn import preprocessing as skpp ``` # Generated Datasets ``` X, y = skds.make_regression( n_samples=200, n_features=1, n_informative=1, n_targets=1, noise=20.0) if (y.ndim == 1): y = y.reshape(-1, 1) plt.figure(figsize=(14,8))
plt.plot(X,y,'b.') plt.title('Original Dataset') plt.show() X_train, X_test, y_train, y_test = skms.train_test_split( X, y, test_size=.4, random_state=123) num_outputs = y_train.shape[1] num_inputs = X_train.shape[1] x_tensor = tf.placeholder(dtype=tf.float32, shape=[None, num_inputs], name='x') y_tensor = tf.placeholder( dtype=tf.float32, shape=[None, num_outputs], name='y') w = tf.Variable( tf.zeros([num_inputs, num_outputs]), dtype=tf.float32, name='w') b = tf.Variable(tf.zeros([num_outputs]), dtype=tf.float32, name='b') model = tf.matmul(x_tensor, w) + b loss = tf.reduce_mean(tf.square(model - y_tensor)) mse = tf.reduce_mean(tf.square(model - y_tensor)) y_mean = tf.reduce_mean(y_tensor) total_error = tf.reduce_sum(tf.square(y_tensor - y_mean)) unexplained_error = tf.reduce_sum(tf.square(y_tensor - model)) rs = 1 - tf.div(unexplained_error, total_error) learning_rate = 0.001 optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) num_epochs = 1500 w_hat = 0 b_hat = 0 loss_epochs = np.empty(shape=[num_epochs], dtype=np.float32) mse_epochs = np.empty(shape=[num_epochs], dtype=np.float32) rs_epochs = np.empty(shape=[num_epochs], dtype=np.float32) mse_score = 0 rs_score = 0 with tf.Session() as tfs: tfs.run(tf.global_variables_initializer()) for epoch in range(num_epochs): feed_dict = {x_tensor: X_train, y_tensor: y_train} loss_val, _ = tfs.run([loss, optimizer], feed_dict=feed_dict) loss_epochs[epoch] = loss_val feed_dict = {x_tensor: X_test, y_tensor: y_test} mse_score, rs_score = tfs.run([mse, rs], feed_dict=feed_dict) mse_epochs[epoch] = mse_score rs_epochs[epoch] = rs_score w_hat, b_hat = tfs.run([w, b]) w_hat = w_hat.reshape(1) print('model : Y = {0:.8f} X + {1:.8f}'.format(w_hat[0], b_hat[0])) print('For test data : MSE = {0:.8f}, R2 = {1:.8f} '.format( mse_score, rs_score)) plt.figure(figsize=(14, 8)) plt.title('Original Data and Trained Model') x_plot = [np.min(X) - 1, np.max(X) + 1] y_plot = w_hat * x_plot + b_hat plt.axis([x_plot[0], 
x_plot[1], y_plot[0], y_plot[1]]) plt.plot(X, y, 'b.', label='Original Data') plt.plot(x_plot, y_plot, 'r-', label='Trained Model') plt.legend() plt.show() plt.figure(figsize=(14, 8)) plt.axis([0, num_epochs, 0, np.max(loss_epochs)]) plt.plot(loss_epochs, label='Loss on X_train') plt.title('Loss in Iterations') plt.xlabel('# Epoch') plt.ylabel('MSE') plt.axis([0, num_epochs, 0, np.max(mse_epochs)]) plt.plot(mse_epochs, label='MSE on X_test') plt.xlabel('# Epoch') plt.ylabel('MSE') plt.legend() plt.show() plt.figure(figsize=(14, 8)) plt.axis([0, num_epochs, np.min(rs_epochs), np.max(rs_epochs)]) plt.title('R-squared in Iterations') plt.plot(rs_epochs, label='R2 on X_test') plt.xlabel('# Epoch') plt.ylabel('R2') plt.legend() plt.show() ``` # Boston Dataset ``` boston=skds.load_boston() print(boston.DESCR) X=boston.data.astype(np.float32) y=boston.target.astype(np.float32) if (y.ndim == 1): y = y.reshape(-1,1) X = skpp.StandardScaler().fit_transform(X) X_train, X_test, y_train, y_test = skms.train_test_split( X, y, test_size=.4, random_state=123) print(X_train.shape) ``` # Simple Multi Regression ``` num_outputs = y_train.shape[1] num_inputs = X_train.shape[1] x_tensor = tf.placeholder(dtype=tf.float32, shape=[None, num_inputs], name='x') y_tensor = tf.placeholder( dtype=tf.float32, shape=[None, num_outputs], name='y') w = tf.Variable( tf.zeros([num_inputs, num_outputs]), dtype=tf.float32, name='w') b = tf.Variable(tf.zeros([num_outputs]), dtype=tf.float32, name='b') model = tf.matmul(x_tensor, w) + b loss = tf.reduce_mean(tf.square(model - y_tensor)) mse = tf.reduce_mean(tf.square(model - y_tensor)) y_mean = tf.reduce_mean(y_tensor) total_error = tf.reduce_sum(tf.square(y_tensor - y_mean)) unexplained_error = tf.reduce_sum(tf.square(y_tensor - model)) rs = 1 - tf.div(unexplained_error, total_error) learning_rate = 0.001 optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) num_epochs = 1500 loss_epochs = np.empty(shape=[num_epochs], 
dtype=np.float32) mse_epochs = np.empty(shape=[num_epochs], dtype=np.float32) rs_epochs = np.empty(shape=[num_epochs], dtype=np.float32) mse_score = 0.0 rs_score = 0.0 with tf.Session() as tfs: tfs.run(tf.global_variables_initializer()) for epoch in range(num_epochs): feed_dict = {x_tensor: X_train, y_tensor: y_train} loss_val, _ = tfs.run([loss, optimizer], feed_dict) loss_epochs[epoch] = loss_val feed_dict = {x_tensor: X_test, y_tensor: y_test} mse_score, rs_score = tfs.run([mse, rs], feed_dict) mse_epochs[epoch] = mse_score rs_epochs[epoch] = rs_score print('For test data : MSE = {0:.8f}, R2 = {1:.8f} '.format( mse_score, rs_score)) plt.figure(figsize=(14, 8)) plt.axis([0, num_epochs, 0, np.max(loss_epochs)]) plt.plot(loss_epochs, label='Loss on X_train') plt.title('Loss in Iterations') plt.xlabel('# Epoch') plt.ylabel('MSE') plt.axis([0, num_epochs, 0, np.max(mse_epochs)]) plt.plot(mse_epochs, label='MSE on X_test') plt.xlabel('# Epoch') plt.ylabel('MSE') plt.legend() plt.show() plt.figure(figsize=(14, 8)) plt.axis([0, num_epochs, np.min(rs_epochs), np.max(rs_epochs)]) plt.title('R-squared in Iterations') plt.plot(rs_epochs, label='R2 on X_test') plt.xlabel('# Epoch') plt.ylabel('R2') plt.legend() plt.show() ``` # Regularization ## Lasso Regularization ``` num_outputs = y_train.shape[1] num_inputs = X_train.shape[1] x_tensor = tf.placeholder(dtype=tf.float32, shape=[None, num_inputs], name='x') y_tensor = tf.placeholder(dtype=tf.float32, shape=[None, num_outputs], name='y') w = tf.Variable(tf.zeros([num_inputs, num_outputs]), dtype=tf.float32, name='w') b = tf.Variable(tf.zeros([num_outputs]), dtype=tf.float32, name='b') model = tf.matmul(x_tensor, w) + b lasso_param = tf.Variable(0.8, dtype=tf.float32) lasso_loss = tf.reduce_mean(tf.abs(w)) * lasso_param loss = tf.reduce_mean(tf.square(model - y_tensor)) + lasso_loss learning_rate = 0.001 optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) mse = tf.reduce_mean(tf.square(model -
y_tensor)) y_mean = tf.reduce_mean(y_tensor) total_error = tf.reduce_sum(tf.square(y_tensor - y_mean)) unexplained_error = tf.reduce_sum(tf.square(y_tensor - model)) rs = 1 - tf.div(unexplained_error, total_error) num_epochs = 1500 loss_epochs = np.empty(shape=[num_epochs], dtype=np.float32) mse_epochs = np.empty(shape=[num_epochs], dtype=np.float32) rs_epochs = np.empty(shape=[num_epochs], dtype=np.float32) mse_score = 0.0 rs_score = 0.0 with tf.Session() as tfs: tfs.run(tf.global_variables_initializer()) for epoch in range(num_epochs): feed_dict = {x_tensor: X_train, y_tensor: y_train} loss_val,_ = tfs.run([loss,optimizer], feed_dict) loss_epochs[epoch] = loss_val feed_dict = {x_tensor: X_test, y_tensor: y_test} mse_score,rs_score = tfs.run([mse,rs], feed_dict) mse_epochs[epoch] = mse_score rs_epochs[epoch] = rs_score print('For test data : MSE = {0:.8f}, R2 = {1:.8f} '.format( mse_score, rs_score)) plt.figure(figsize=(14, 8)) plt.axis([0, num_epochs, 0, np.max([loss_epochs, mse_epochs])]) plt.plot(loss_epochs, label='Loss on X_train') plt.plot(mse_epochs, label='MSE on X_test') plt.title('Loss in Iterations') plt.xlabel('# Epoch') plt.ylabel('Loss or MSE') plt.legend() plt.show() plt.figure(figsize=(14, 8)) plt.axis([0, num_epochs, np.min(rs_epochs), np.max(rs_epochs)]) plt.title('R-squared in Iterations') plt.plot(rs_epochs, label='R2 on X_test') plt.xlabel('# Epoch') plt.ylabel('R2') plt.legend() plt.show() ``` ## Ridge Regularization ``` num_outputs = y_train.shape[1] num_inputs = X_train.shape[1] x_tensor = tf.placeholder(dtype=tf.float32, shape=[None, num_inputs], name='x') y_tensor = tf.placeholder(dtype=tf.float32, shape=[None, num_outputs], name='y') w = tf.Variable(tf.zeros([num_inputs, num_outputs]), dtype=tf.float32, name='w') b = tf.Variable(tf.zeros([num_outputs]), dtype=tf.float32, name='b') model = tf.matmul(x_tensor, w) + b ridge_param = tf.Variable(0.8, dtype=tf.float32) ridge_loss = tf.reduce_mean(tf.square(w)) * ridge_param loss = 
tf.reduce_mean(tf.square(model - y_tensor)) + ridge_loss learning_rate = 0.001 optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) mse = tf.reduce_mean(tf.square(model - y_tensor)) y_mean = tf.reduce_mean(y_tensor) total_error = tf.reduce_sum(tf.square(y_tensor - y_mean)) unexplained_error = tf.reduce_sum(tf.square(y_tensor - model)) rs = 1 - tf.div(unexplained_error, total_error) num_epochs = 1500 loss_epochs = np.empty(shape=[num_epochs], dtype=np.float32) mse_epochs = np.empty(shape=[num_epochs], dtype=np.float32) rs_epochs = np.empty(shape=[num_epochs], dtype=np.float32) mse_score = 0.0 rs_score = 0.0 with tf.Session() as tfs: tfs.run(tf.global_variables_initializer()) for epoch in range(num_epochs): feed_dict = {x_tensor: X_train, y_tensor: y_train} loss_val, _ = tfs.run([loss, optimizer], feed_dict=feed_dict) loss_epochs[epoch] = loss_val feed_dict = {x_tensor: X_test, y_tensor: y_test} mse_score, rs_score = tfs.run([mse, rs], feed_dict=feed_dict) mse_epochs[epoch] = mse_score rs_epochs[epoch] = rs_score print('For test data : MSE = {0:.8f}, R2 = {1:.8f} '.format( mse_score, rs_score)) plt.figure(figsize=(14, 8)) plt.axis([0, num_epochs, 0, np.max([loss_epochs, mse_epochs])]) plt.plot(loss_epochs, label='Loss on X_train') plt.plot(mse_epochs, label='MSE on X_test') plt.title('Loss in Iterations') plt.xlabel('# Epoch') plt.ylabel('Loss or MSE') plt.legend() plt.show() plt.figure(figsize=(14, 8)) plt.axis([0, num_epochs, np.min(rs_epochs), np.max(rs_epochs)]) plt.title('R-squared in Iterations') plt.plot(rs_epochs, label='R2 on X_test') plt.xlabel('# Epoch') plt.ylabel('R2') plt.legend() plt.show() ``` ## ElasticNet Regularization ``` num_outputs = y_train.shape[1] num_inputs = X_train.shape[1] x_tensor = tf.placeholder(dtype=tf.float32, shape=[None, num_inputs], name='x') y_tensor = tf.placeholder(dtype=tf.float32, shape=[None, num_outputs], name='y') w = tf.Variable(tf.zeros([num_inputs, num_outputs]), dtype=tf.float32, name='w') b = 
tf.Variable(tf.zeros([num_outputs]), dtype=tf.float32, name='b') model = tf.matmul(x_tensor, w) + b ridge_param = tf.Variable(0.8, dtype=tf.float32) ridge_loss = tf.reduce_mean(tf.square(w)) * ridge_param lasso_param = tf.Variable(0.8, dtype=tf.float32) lasso_loss = tf.reduce_mean(tf.abs(w)) * lasso_param loss = tf.reduce_mean(tf.square(model - y_tensor)) + \ ridge_loss + lasso_loss learning_rate = 0.001 optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) # mse and R2 functions mse = tf.reduce_mean(tf.square(model - y_tensor)) y_mean = tf.reduce_mean(y_tensor) total_error = tf.reduce_sum(tf.square(y_tensor - y_mean)) unexplained_error = tf.reduce_sum(tf.square(y_tensor - model)) rs = 1 - tf.div(unexplained_error, total_error) num_epochs = 1500 loss_epochs = np.empty(shape=[num_epochs], dtype=np.float32) mse_epochs = np.empty(shape=[num_epochs], dtype=np.float32) rs_epochs = np.empty(shape=[num_epochs], dtype=np.float32) mse_score = 0.0 rs_score = 0.0 with tf.Session() as tfs: tfs.run(tf.global_variables_initializer()) for epoch in range(num_epochs): feed_dict = {x_tensor: X_train, y_tensor: y_train} loss_val, _ = tfs.run([loss, optimizer], feed_dict=feed_dict) loss_epochs[epoch] = loss_val feed_dict = {x_tensor: X_test, y_tensor: y_test} mse_score, rs_score = tfs.run([mse, rs], feed_dict=feed_dict) mse_epochs[epoch] = mse_score rs_epochs[epoch] = rs_score print('For test data : MSE = {0:.8f}, R2 = {1:.8f} '.format( mse_score, rs_score)) plt.figure(figsize=(14, 8)) plt.axis([0, num_epochs, 0, np.max([loss_epochs, mse_epochs])]) plt.plot(loss_epochs, label='Loss on X_train') plt.plot(mse_epochs, label='MSE on X_test') plt.title('Loss in Iterations') plt.xlabel('# Epoch') plt.ylabel('Loss or MSE') plt.legend() plt.show() plt.figure(figsize=(14, 8)) plt.axis([0, num_epochs, np.min(rs_epochs), np.max(rs_epochs)]) plt.title('R-squared in Iterations') plt.plot(rs_epochs, label='R2 on X_test') plt.xlabel('# Epoch') plt.ylabel('R2') plt.legend() 
plt.show() ```
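The three regularized losses above share one pattern: the MSE term plus a penalty built from the weights. The penalties themselves are simple enough to state in framework-free NumPy; the weight vector and penalty strengths here are made up for illustration:

```python
import numpy as np

w = np.array([0.5, -1.5, 0.0, 2.0])   # illustrative weight vector
lasso_param, ridge_param = 0.8, 0.8

lasso_penalty = lasso_param * np.mean(np.abs(w))     # L1: pushes weights to exactly zero
ridge_penalty = ridge_param * np.mean(np.square(w))  # L2: shrinks weights smoothly
elastic_penalty = lasso_penalty + ridge_penalty      # ElasticNet: sum of both

print(lasso_penalty, ridge_penalty, elastic_penalty)
```

This mirrors the `lasso_loss`, `ridge_loss`, and combined ElasticNet terms added to the MSE in the TensorFlow graphs above.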
(fourier_transforms)= # Fourier transforms ```{index} Fourier transforms ``` Fourier transforms are mathematical operations which, when applied to a function, decompose it into its constituent frequencies. In the Fourier Series course, we have shown that a periodic function can be expressed as an infinite sum of sine and cosine functions. We have also shown that, through scaling laws, we can extend the period of the function to an arbitrary length. If the highest frequency in the Fourier Series is kept the same and we keep extending the period of the function, the sum will become longer and longer. In the limit where the period is expanded to infinity, the sum will become an integral, resulting in the definition of the *Fourier Transform*. The *Fourier Transform* of a function $f(t)$ to a new function $F(\omega)$ is defined as $$F(\omega) = \int_{-\infty}^{\infty}f(t)e^{-i{\omega}t}dt.$$ Using this definition, $f(t)$ is given by the Inverse Fourier Transform $$f(t) = \frac{1}{2{\pi}} \int_{-\infty}^{\infty}F(\omega)e^{i{\omega}t}d{\omega}.$$ Using these two expressions we can write $$f(t) = \frac{1}{2{\pi}} \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty}f(t)e^{-i{\omega}t}dt \right]e^{i{\omega}t}d{\omega}.$$ This is known as *Fourier's Integral Theorem*. This proves that *any* function can be represented as an infinite sum (integral) of sine and cosine functions, linking back to the Fourier Series. Note that this definition of the Fourier Transform is not unique. There are many different conventions for the Fourier Transform, but we will stick with this one for this course.
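As a quick sanity check of the definition, we can approximate the transform integral by a sum on a fine grid. The sketch below (added for illustration) uses the Gaussian $f(t) = e^{-t^2/2}$, whose transform under this convention is known in closed form to be $\sqrt{2\pi}\,e^{-\omega^2/2}$:

```python
import numpy as np

# Approximate F(omega) = integral of f(t) e^{-i omega t} dt on a fine grid.
dt = 0.001
t = np.arange(-20.0, 20.0, dt)
f = np.exp(-t**2 / 2)

def fourier(omega):
    # Riemann sum approximation of the Fourier integral
    return np.sum(f * np.exp(-1j * omega * t)) * dt

for omega in [0.0, 1.0, 2.0]:
    exact = np.sqrt(2 * np.pi) * np.exp(-omega**2 / 2)
    print(omega, fourier(omega).real, exact)  # numeric and exact agree
```

Because the Gaussian decays so quickly, truncating the integral at $\pm 20$ costs essentially nothing, and the numeric and exact values agree to many decimal places.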
```{admonition} Notation We will represent the Fourier Transform operator using the calligraphic symbol $\mathcal{F}[f(t)]$, such that $$ \mathcal{F}[f(t)] = F(\omega).$$ Using this notation, we will represent the *Inverse* Fourier transform operator as $\mathcal{F}^{-1}[F(\omega)]$ such that $$ \mathcal{F}^{-1}[F(\omega)] = f(t).$$ ``` Let's look at an example: the top hat function, defined as $$ {\Pi}_a(t) = \left\{ \begin{array}\\ 1/a & \ -a/2 < t < a/2, \\ 0 & \mbox{otherwise.}\\ \end{array} \right. $$ Its Fourier Transform is $$F(\omega) = \operatorname{sinc}({\omega}a/2),$$ where $\operatorname{sinc}(x) = \sin(x)/x$ is the unnormalised sinc function, so $F(\omega) = \operatorname{sinc}({\omega}/2)$ for $a = 1$. Let's plot the function and its Fourier Transform. ``` import numpy as np import matplotlib.pyplot as plt # Define the top hat function for a = 1 def top_hat(t): if -0.5 < t < 0.5: z = 1 else: z = 0 return z t = np.linspace(-1, 1, 200) f_t = [] for i in t: y = top_hat(i) f_t.append(y) omega = np.fft.fftfreq(len(t),d=t[1] - t[0]) F_omega = np.fft.fft(f_t) plt.subplot(1,2,1) plt.plot(t, f_t) plt.title("Top hat function") plt.subplot(1,2,2) plt.title("Fourier Transform (via FFT)") plt.plot(omega, np.real(F_omega), '-') plt.tight_layout() plt.show() import numpy as np import matplotlib.pyplot as plt # Define the top hat function for a = 1 def top_hat(t): if -0.5 < t < 0.5: z = 1 else: z = 0 return z t = np.linspace(-1, 1, 1000) omega = np.linspace(-50, 50, 1000) f_t = [] for i in t: y = top_hat(i) f_t.append(y) F_omega = [] for i in omega: z = np.sinc(i/(2 * np.pi)) F_omega.append(z) plt.subplot(1,2,1) plt.plot(t, f_t) plt.title("Top hat function") plt.subplot(1,2,2) plt.title("Fourier Transform (analytic)") plt.plot(omega, F_omega) plt.tight_layout() plt.show() ``` ## Special functions and their Fourier Transforms This section will focus on useful functions and their Fourier Transforms. We will look at the Delta and Gaussian functions. ### Delta function ```{index} Delta function ``` The delta function, $\delta(t)$, is defined as $$ {\delta}(t) = \left\{ \begin{array}\\ 0 & \ t \neq 0, \\ \infty & t = 0.\\ \end{array} \right.
$$ Carrying out the transform, we see that the Fourier Transform of the delta function is actually a constant, such that $$ \mathcal{F}[{\delta}(t)] = 1. $$ Let's plot the function and its transform. ``` from scipy import signal imp = signal.unit_impulse(100, 'mid') # creates the delta function t = np.linspace(-5, 5, 100) plt.subplot(1,2,1) plt.plot(t, imp) plt.title('Delta Function') FT_omega = np.fft.fftfreq(100, t[1] - t[0]) FT = np.fft.fft(imp) plt.subplot(1,2,2) plt.plot(FT_omega, abs(FT)) plt.title('Fourier Transform') plt.ylim([0,1.2]) plt.tight_layout() plt.show() ``` ### Gaussian function ```{index} Gaussian function ``` A Gaussian function is defined as $$ f(t) = \exp\left(- \frac{t^2}{2{\sigma^2}}\right), $$ where $\sigma$ is the standard deviation of the Gaussian. The Fourier Transform of a Gaussian is another Gaussian, such that $$ \mathcal{F}[f(t)] = \sigma\sqrt{2\pi}\, \exp\left(- \frac{\omega^2\sigma^2}{2}\right). $$ Thus, we can see that as the Gaussian function gets broader, its Fourier Transform gets narrower. To illustrate this, let's plot the Gaussian and its transform for two different standard deviations.
``` g1 = signal.windows.gaussian(100, std = 10) # signal.gaussian in older SciPy versions t = np.linspace(-10, 10, len(g1)) plt.subplot(1,2,1) plt.plot(t, g1) plt.title('Gaussian Function, std = 10') FT_omega = np.fft.fftfreq(len(g1), t[1] - t[0]) FT = np.fft.fft(g1) FT_omega = np.fft.fftshift(FT_omega) FT = np.fft.fftshift(FT) plt.subplot(1,2,2) plt.plot(FT_omega, abs(FT)) plt.title('Fourier Transform') plt.tight_layout() plt.show() g2 = signal.windows.gaussian(100, std = 1) t = np.linspace(-10,10, len(g2)) plt.subplot(1,2,1) plt.plot(t, g2) plt.title('Gaussian Function, std = 1') FT_omega = np.fft.fftfreq(len(g2), t[1] - t[0]) FT = np.fft.fft(g2) FT_omega = np.fft.fftshift(FT_omega) FT = np.fft.fftshift(FT) plt.subplot(1,2,2) plt.plot(FT_omega, abs(FT)) plt.title('Fourier Transform') plt.tight_layout() plt.show() ``` ## Properties of Fourier Transforms When determining the Fourier Transform of a function, there are a number of properties that can make our calculations easier or even allow us to identify the transform of the function in question as an already known transform. Thus, being familiar with the properties of the Fourier Transforms can be of great use when considering the transforms of specific functions. ### Even and odd functions In general, the Fourier Transform, $F(\omega)$, of a function $f(t)$ will be complex and thus can be written as $$F(\omega) = R(\omega) + iI(\omega).$$ We can show that the real part of the transform, $R(\omega)$, is related to the even part of the function and that the imaginary part of the transform, $iI(\omega)$, is related to the odd part of the function. Let's start with an even function, $f(t)$, and determine its Fourier Transform.
$$F(\omega) = \int_{-\infty}^{\infty}f(t)e^{-i{\omega}t}dt,$$ $$F(\omega) = \int_{-\infty}^{0}f(t)e^{-i{\omega}t}dt + \int_{0}^{\infty}f(t)e^{-i{\omega}t}dt,$$ $$F(\omega) = - \int_{0}^{-\infty}f(t)e^{-i{\omega}t}dt + \int_{0}^{\infty}f(t)e^{-i{\omega}t}dt,$$ $$F(\omega) = - \int_{0}^{\infty}f(-t)e^{i{\omega}t}d(-t) + \int_{0}^{\infty}f(t)e^{-i{\omega}t}dt.$$ Since $f(t)$ is even, we can use $f(t) = f(-t)$, resulting in $$F(\omega) = \int_{0}^{\infty}f(t)e^{i{\omega}t}dt + \int_{0}^{\infty}f(t)e^{-i{\omega}t}dt,$$ $$F(\omega) = \int_{0}^{\infty}f(t)\left[ e^{i{\omega}t} + e^{-i{\omega}t} \right]dt,$$ $$F(\omega) = 2\int_{0}^{\infty}f(t) \cos({\omega}t)dt.$$ Because $f(t)$ and $\cos({\omega}t)$ are both even functions, we can write $$F(\omega) = \int_{- \infty}^{\infty}f(t) \cos({\omega}t)dt.$$ Using a similar procedure, we can derive that for an odd function $f(t)$ the Fourier Transform becomes $$F(\omega) = -i \int_{- \infty}^{\infty}f(t) \sin({\omega}t)dt.$$ Thus, this proves that the Fourier Transform of an even function, $e(t)$, is real, while the Fourier Transform of an odd function, $o(t)$, is imaginary. We can take this further by considering each function, $f(t)$, as a sum of an even and an odd function, such that $f(t) = e(t) + o(t)$. As we stated earlier, the Fourier Transform of a function can be written as $F(\omega) = R(\omega) + iI(\omega)$. Using these results we can show that $$R(\omega) = \mathcal{F}[e(t)] = \int_{- \infty}^{\infty}f(t) \cos({\omega}t)dt,$$ $$iI(\omega) = \mathcal{F}[o(t)] = -i \int_{- \infty}^{\infty}f(t) \sin({\omega}t)dt.$$ ### Linearity and superposition A Fourier Transform is linear, meaning that for a function $f(t) = af_1(t) + bf_2(t)$ the Fourier Transform becomes $$ \mathcal{F}\left[f(t)\right] = \mathcal{F}\left[af_1(t) + bf_2(t)\right] = a\mathcal{F}\left[f_1(t)\right] + b\mathcal{F}\left[f_2(t)\right]. $$ This can be easily proven by considering the definition of the Fourier Transform.
Again, consider the Fourier Transform of $f(t) = af_1(t) + bf_2(t)$: $$\mathcal{F}[af_1(t) + bf_2(t)] = \int_{-\infty}^{\infty}[af_1(t) + bf_2(t)]e^{-i{\omega}t}dt,$$ $$\mathcal{F}[af_1(t) + bf_2(t)] = a\int_{-\infty}^{\infty}f_1(t)e^{-i{\omega}t}dt + b\int_{-\infty}^{\infty}f_2(t)e^{-i{\omega}t}dt,$$ $$\mathcal{F}[af_1(t) + bf_2(t)] = aF_1(\omega) + bF_2(\omega).$$ ### Reciprocal broadening/scaling Stretching a function by a factor $\alpha$ results in its Fourier Transform being compressed by the same factor. Consider the transform of the function $f({\alpha}t)$: $$\mathcal{F}[f({\alpha}t)] = \int_{-\infty}^{\infty}f({\alpha}t)e^{-i{\omega}t}dt,$$ $$\mathcal{F}[f({\alpha}t)] = \frac{1}{|\alpha|}\int_{-\infty}^{\infty}f({\alpha}t)e^{-i\frac{\omega}{\alpha}({\alpha}t)}d({\alpha}t),$$ $$\mathcal{F}[f({\alpha}t)] = \frac{1}{|\alpha|} F\left(\frac{\omega}{\alpha}\right).$$ This shows that as the function gets broader, its transform not only becomes narrower but, due to the $1\,/\,|\alpha|$ factor, it also increases in amplitude.
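This reciprocal-scaling property can be verified numerically by approximating the transform integral on a grid. The sketch below (added for illustration) checks $\mathcal{F}[f(\alpha t)] = \frac{1}{|\alpha|}F(\omega/\alpha)$ for a Gaussian:

```python
import numpy as np

# Numerical check of F[f(alpha t)] = (1/|alpha|) F(omega/alpha).
dt = 0.001
t = np.arange(-20.0, 20.0, dt)

def fourier(f_t, omega):
    # Riemann sum approximation of the Fourier integral on the grid t
    return np.sum(f_t * np.exp(-1j * omega * t)) * dt

f = lambda x: np.exp(-x**2 / 2)     # Gaussian with sigma = 1
alpha, omega = 2.0, 1.0

lhs = fourier(f(alpha * t), omega)                 # transform of the compressed function
rhs = fourier(f(t), omega / alpha) / abs(alpha)    # rescaled, stretched transform
print(abs(lhs - rhs))  # ~ 0
```

With $\alpha = 2$ the function is compressed, so its transform is twice as broad and half as tall, exactly as the $1/|\alpha|$ factor predicts.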
### Translation Shifting a function by a certain amount results in a phase shift on the Fourier Transform: $$\mathcal{F}[f(t - t_0)] = e^{-i{\omega}t_0}F(\omega).$$ Similarly, modulating a function by a complex exponential shifts its transform: $$\mathcal{F}[f(t)e^{i{\omega}_0t}] = F({\omega} - {\omega}_0).$$ ### Derivatives and integrals Finding the transform of a derivative simply translates to a multiplication of the original transform, such that $$\mathcal{F}[f'(t)] = i{\omega}F(\omega).$$ To prove this we must differentiate the function, $f(t)$, and take its transform $$f(t) = \frac{1}{2{\pi}} \int_{-\infty}^{\infty}F(\omega)e^{i{\omega}t}d{\omega},$$ $$\mbox{differentiating, we get} \,\,\,\,\, f'(t) = \frac{1}{2{\pi}}\int_{-\infty}^{\infty}i{\omega}F(\omega)e^{i{\omega}t}d{\omega}.$$ This can be identified as the inverse Fourier Transform, such that $$ f'(t) = \mathcal{F}^{-1}[i{\omega}F(\omega)].$$ Thus, taking the Fourier Transform of $f'(t)$ proves the above statement $$ \mathcal{F}[f'(t)] = \mathcal{F}[\mathcal{F}^{-1}[i{\omega}F(\omega)]],$$ $$\mathcal{F}[f'(t)] = i{\omega}F(\omega).$$ We can generalise this result to higher order derivatives. For the $n$th derivative $$ \mathcal{F}[f^{(n)}(t)] = (i{\omega})^{n}F(\omega).$$ This shows that differentiation magnifies *high frequencies* and shifts the phase of the transform by ${\pi}\,/\,{2}.$ Using this result we can show how integration affects the transform of a function: $$\mathcal{F}\left[\int f(t)dt\right] = F(\omega) \, / \, i{\omega} \,\, + \text{constant}.$$ ## Convolution ```{index} Convolution ``` When discussing the nature of Fourier Series and Transforms, one needs to discuss *convolutions*. Simply stated, a convolution of two (or more) functions is defined as the integral over *all space* of the product of the *two* desired functions after one has been *reversed and shifted*. The convolution of two functions, $a(t)$ and $b(t)$, is denoted by $a(t) * b(t)$ and defined as $$a(t) * b(t) = \int_{-\infty}^{\infty} a(u) b(t-u)du,$$ where $u$ is a dummy variable that disappears when integrating.
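This integral can be approximated directly on a grid: `np.convolve` computes the discrete sum $\sum_m a_m b_{k-m}$, so multiplying by the grid spacing $du$ approximates the convolution integral. For instance, convolving a unit top hat with itself should give a triangle of height one (a sketch added for illustration):

```python
import numpy as np

du = 0.01
u = np.arange(-5, 5, du)
a = np.where(np.abs(u) < 0.5, 1.0, 0.0)   # top hat of width 1
b = np.where(np.abs(u) < 0.5, 1.0, 0.0)   # convolved with itself

# Discrete convolution sum, scaled by du to approximate the integral
conv = np.convolve(a, b, mode='same') * du

print(conv.max())  # ~ 1.0: a triangle of height 1 and base width 2
```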
It should be noted that convolution is commutative, meaning that the ordering of the functions is not important. Convolution is perhaps one of the most important tools for a scientist of any discipline. This can be illustrated via a simple example. Imagine you have just made measurements of the magnitude of a magnetic field in a particular direction using a magnetometer. Those measurements have an inherent error due to the precision of the instrument you used. This error will lead to the *smearing* of your outcome distribution, or in other words the true distribution has been *convolved* with the error function. Therefore, in order to recover the original (true) distribution of your measurements you need to use the *Convolution Theorem*, detailed below. ## The convolution theorem Let us examine what happens when we apply a Fourier Transform to a convolution of two functions. $$ \mathcal{F}[f_1(t) * f_2(t)] = \int_{-\infty}^{\infty} \left(\int_{-\infty}^{\infty} f_1(u) f_2(t-u)du\right) e^{-i{\omega}t}dt,$$ $$ \mathcal{F}[f_1(t) * f_2(t)] = \int_{-\infty}^{\infty} f_1(u)e^{-i{\omega}u}\left(\int_{-\infty}^{\infty} f_2(t-u)e^{-i{\omega}(t-u)}dt\right)du,$$ $$ \mathcal{F}[f_1(t) * f_2(t)] = \left(\int_{-\infty}^{\infty} f_1(u)e^{-i{\omega}u}du\right) \left(\int_{-\infty}^{\infty} f_2(s)e^{-i{\omega}s}ds\right),$$ $$ \mathcal{F}[f_1(t) * f_2(t)] = F_1(\omega) F_2(\omega), $$ where the splitting of the integrals comes from making the substitution $s = t - u$ and then noting that $u$ no longer appears in the inner integral. The final expression deduced above is known as *The Convolution Theorem*. It is one of the most important properties of Fourier Transforms and lies at the essence of Fourier analysis. The Convolution Theorem states that the Fourier Transform of the convolution of two functions is equal to the product of the Fourier Transforms of each function.
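The theorem is easy to verify numerically. On a periodic grid the FFT plays the role of the Fourier Transform, and the discrete analogue of the statement is that circular convolution becomes pointwise multiplication of the FFTs (a sketch added for illustration):

```python
import numpy as np

# Verify F[a * b] = F[a] F[b] on a periodic grid using the FFT.
n = 256
rng = np.random.default_rng(1)
a = rng.standard_normal(n)
b = rng.standard_normal(n)

# Circular convolution computed directly from the definition...
conv = np.array([np.sum(a * np.roll(b[::-1], k + 1)) for k in range(n)])
# ...and via the Convolution Theorem: multiply the transforms, then invert.
conv_ft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

print(np.allclose(conv, conv_ft))  # True
```

The two results agree to floating-point precision, which is also why FFT-based convolution is the standard fast way to convolve long signals.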
Looking at it in the opposite direction, the Fourier Transform of the product of two functions is given by the convolution of the Fourier Transforms of those functions individually:

$$ \mathcal{F}[f_1(t) f_2(t)] = \frac{1}{2\pi}F_1(\omega) * F_2(\omega). $$

Going back to the problem we discussed at the beginning of the notebook, we can now utilise the Convolution Theorem to understand how to retrieve the true distribution from a set of data that has been convolved with an error function. Simply apply a Fourier Transform to the resulting convolved distribution and divide it by the Fourier Transform of the known error function. The result will be the Fourier Transform of your true distribution.

## References

- Material used in this notebook was based on the "Fourier Transforms" course by Professor Carlo Contaldi provided by the Physics Department.
# Exercise 3 - Quantum error correction

## Historical background

Shor's algorithm gave quantum computers a worthwhile use case, but the inherent noisiness of quantum mechanics meant that building hardware capable of running such an algorithm would be a huge struggle. In 1995, Shor released another landmark paper: a scheme that shared quantum information over multiple qubits in order to reduce errors.[1]

A great deal of progress has been made over the decades since. New forms of error correcting codes have been discovered, and a large theoretical framework has been built around them. The surface codes proposed by Kitaev in 1997 have emerged as the leading candidate, and many variations on the original design have emerged since then. But there is still a lot of progress to make in tailoring codes to the specific details of quantum hardware.[2]

In this exercise we'll consider a case in which artificial 'errors' are inserted into a circuit. Your task is to design the circuit such that these additional gates can be identified. You'll then need to think about how to implement your circuit on a real device. This means you'll need to tailor your solution to the layout of the qubits. Your solution will be scored on how few entangling gates (the noisiest type of gate) you use.

### References

1. Shor, Peter W. "Scheme for reducing decoherence in quantum computer memory." Physical Review A 52.4 (1995): R2493.
2. Dennis, Eric, et al. "Topological quantum memory." Journal of Mathematical Physics 43.9 (2002): 4452-4505.

## The problem of errors

Errors occur when some spurious operation acts on our qubits. Their effects cause things to go wrong in our circuits. The strange results you may have seen when running on real devices are all due to these errors. There are many spurious operations that can occur, but it turns out that we can pretend that there are only two types of error: bit flips and phase flips. Bit flips have the same effect as the `x` gate.
They flip the $|0\rangle$ state of a single qubit to $|1\rangle$ and vice-versa. Phase flips have the same effect as the `z` gate, introducing a phase of $-1$ into superpositions. Put simply, they flip the $|+\rangle$ state of a single qubit to $|-\rangle$ and vice-versa.

The reason we can think of any error in terms of just these two is because any error can be represented by some matrix, and any matrix can be written in terms of the matrices $X$ and $Z$. Specifically, for any single qubit matrix $M$,

$$ M = \alpha I + \beta X + \gamma XZ + \delta Z, $$

for some suitably chosen values $\alpha$, $\beta$, $\gamma$ and $\delta$. So whenever we apply this matrix to some single qubit state $|\psi\rangle$ we get

$$ M |\psi\rangle = \alpha |\psi\rangle + \beta X |\psi\rangle + \gamma XZ |\psi\rangle + \delta Z |\psi\rangle. $$

The resulting superposition is composed of the original state, the state we'd have if the error was just a bit flip, the state for just a phase flip and the state for both. If we had some way to measure whether a bit or phase flip happened, the state would then collapse to just one possibility. And our complex error would become just a simple bit or phase flip.

So how do we detect whether we have a bit flip or a phase flip (or both)? And what do we do about it once we know? Answering these questions is what quantum error correction is all about.

## An overly simple example

One of the first quantum circuits that most people ever write is to create a pair of entangled qubits. In this journey into quantum error correction, we'll start the same way.
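Before that, a quick numerical aside: the decomposition $M = \alpha I + \beta X + \gamma XZ + \delta Z$ given above can be checked directly. The sketch below (plain NumPy, not part of the exercise) recovers the coefficients of an arbitrary single-qubit matrix using the trace inner product $c_A = \mathrm{tr}(A^\dagger M)/2$, which works because the four basis matrices are mutually orthogonal under this inner product.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
XZ = X @ Z  # equals -iY

basis = [I, X, XZ, Z]

# An arbitrary single-qubit matrix
rng = np.random.default_rng(0)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

# Each basis matrix A satisfies tr(A^dag A) = 2 and is orthogonal
# to the others, so the coefficients are simply:
coeffs = [np.trace(A.conj().T @ M) / 2 for A in basis]

# Reconstruct M as alpha*I + beta*X + gamma*XZ + delta*Z
M_rec = sum(c * A for c, A in zip(coeffs, basis))
print(np.allclose(M, M_rec))  # → True
```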
```
from qiskit import QuantumCircuit, Aer

# Make an entangled pair
qc_init = QuantumCircuit(2)
qc_init.h(0)
qc_init.cx(0,1)

# Draw the circuit
display(qc_init.draw('mpl'))

# Get an output
qc = qc_init.copy()
qc.measure_all()
job = Aer.get_backend('qasm_simulator').run(qc)
job.result().get_counts()
```

Here we see the expected result when we run the circuit: the results `00` and `11` occurring with equal probability. But what happens when we have the same circuit, but with a bit flip 'error' inserted manually?

```
# Make bit flip error
qc_insert = QuantumCircuit(2)
qc_insert.x(0)

# Add it to our original circuit
qc = qc_init.copy()
qc = qc.compose(qc_insert)

# Draw the circuit
display(qc.draw('mpl'))

# Get an output
qc.measure_all()
job = Aer.get_backend('qasm_simulator').run(qc)
job.result().get_counts()
```

Now the results are different: `01` and `10`. The two bit values have gone from always agreeing to always disagreeing. In this way, we detect the effect of the error. Another way we can detect it is to undo the entanglement with a few more gates. If there are no errors, we return to the initial $|00\rangle$ state.

```
# Undo entanglement
qc_syn = QuantumCircuit(2)
qc_syn.cx(0,1)
qc_syn.h(0)

# Add this after the error
qc = qc_init.copy()
qc = qc.compose(qc_syn)

# Draw the circuit
display(qc.draw('mpl'))

# Get an output
qc.measure_all()
job = Aer.get_backend('qasm_simulator').run(qc)
job.result().get_counts()
```

But what happens if there are errors on one of the qubits? Try inserting different errors to find out. Here's a circuit with all the components we've introduced so far: the initialization `qc_init`, the inserted error in `qc_insert` and the final `qc_syn` which ensures that the final measurement gives a nice definite answer.
```
# Define an error
qc_insert = QuantumCircuit(2)
qc_insert.x(0)

# Undo entanglement
qc_syn = QuantumCircuit(2)
qc_syn.cx(0,1)
qc_syn.h(0)

# Add this after the error
qc = qc_init.copy()
qc = qc.compose(qc_insert)
qc = qc.compose(qc_syn)

# Draw the circuit
display(qc.draw('mpl'))

# Get an output
qc.measure_all()
job = Aer.get_backend('qasm_simulator').run(qc)
job.result().get_counts()
```

You'll find that the output tells us exactly what is going on with the errors. Both the bit and phase flips can be detected. The bit value on the left is `1` only if there is a bit flip (and so if we have inserted an `x(0)` or `x(1)`). The bit on the right similarly tells us there is a phase flip (an inserted `z(0)` or `z(1)`).

This ability to detect and distinguish bit and phase flips is very useful. But it is not quite useful enough. We can only tell *what type* of errors are happening, but not *where*. Without more detail, it is not possible to figure out how to remove the effects of these operations from our computations. For quantum error correction we therefore need something bigger and better.

It's your task to do just that! Here's a list of what you need to submit. Everything here is then explained by the example that follows.

<div class="alert alert-block alert-success">

<b>Goal</b>

Create circuits which can detect `x` and `z` errors on two qubits. You can come up with a solution of your own. Or just tweak the almost valid solution given below.

</div>

<div class="alert alert-block alert-danger">

<b>What to submit</b>

* You need to supply two circuits:
    * `qc_init`: Prepares the qubits (of which there are at least two) in a desired initial state;
    * `qc_syn`: Measures a subset of the qubits.
* The artificial errors to be inserted are `x` and `z` gates on two particular qubits. You need to pick the two qubits to be used for this (supplied as the list `error_qubits`).
* There are 16 possible sets of errors to be inserted (including the trivial case of no errors).
The measurement result of `qc_syn` should output a unique bit string for each. The grader will return the error message *'Please make sure the circuit is created to the initial layout.'* if this is not satisfied.
* The grader will compile the complete circuit for the backend `ibmq_tokyo` (a retired device). To show that your solution is tailor-made for the device, this transpilation should not change the number of `cx` gates. If it does, you will get the error message *'Please make sure the circuit is created to the initial layout.'*
* To guide the transpilation, you'll need to tell the transpiler which qubits on the device should be used as which qubits in your circuit. This is done with an `initial_layout` list.
* You may start with the example given below, which can become a valid answer with a few tweaks.

</div>

## A better example: the surface code

```
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, Aer, transpile
import qiskit.tools.jupyter
from qiskit.test.mock import FakeTokyo
```

In this example we'll use 5 qubits that we'll call code qubits. To keep track of them, we'll define a special quantum register.

```
code = QuantumRegister(5,'code')
```

We'll also have an additional four qubits we'll call syndrome qubits.

```
syn = QuantumRegister(4,'syn')
```

Similarly we define a register for the four output bits, used when measuring the syndrome qubits.

```
out = ClassicalRegister(4,'output')
```

We consider the qubits to be laid out as follows, with the code qubits forming the corners of four triangles, and the syndrome qubits living inside each triangle.

```
c0----------c1
|  \  s0  /  |
|    \   /   |
| s1   c2  s2|
|    /   \   |
|  /  s3  \  |
c3----------c4
```

For each triangle we associate a stabilizer operation on its three qubits. For the qubits on the sides, the stabilizers are ZZZ. For the top and bottom ones, they are XXX. The syndrome measurement circuit corresponds to a measurement of these observables.
This is done in a similar way to surface code stabilizers (in fact, this code is a small version of a surface code).

<div class="alert alert-block alert-danger">

<b>Warning</b>

You should remove the barriers before submitting the code as they might interfere with transpilation. They are given here for visualization only.

</div>

```
qc_syn = QuantumCircuit(code,syn,out)

# Left ZZZ
qc_syn.cx(code[0],syn[1])
qc_syn.cx(code[2],syn[1])
qc_syn.cx(code[3],syn[1])
#qc_syn.barrier()

# Right ZZZ
#qc_syn.cx(code[1],syn[2])
#qc_syn.cx(code[2],syn[2])
#qc_syn.cx(code[4],syn[2])
qc_syn.swap(code[2],syn[3])
qc_syn.cx(code[1],syn[2])
qc_syn.cx(syn[3],syn[2])
qc_syn.cx(code[4],syn[2])
qc_syn.swap(code[2],syn[3])
#qc_syn.barrier()

# Top XXX
qc_syn.h(syn[0])
qc_syn.cx(syn[0],code[0])
qc_syn.cx(syn[0],code[1])
qc_syn.cx(syn[0],code[2])
qc_syn.h(syn[0])
#qc_syn.barrier()

# Bottom XXX
qc_syn.h(syn[3])
qc_syn.cx(syn[3],code[2])
qc_syn.cx(syn[3],code[3])
qc_syn.cx(syn[3],code[4])
qc_syn.h(syn[3])
#qc_syn.barrier()

# Measure the auxiliary qubits
qc_syn.measure(syn,out)

qc_syn.draw('mpl')
```

The initialization circuit prepares an eigenstate of these observables, such that the output of the syndrome measurement will be `0000` with certainty.

```
qc_init = QuantumCircuit(code,syn,out)

qc_init.h(syn[0])
qc_init.cx(syn[0],code[0])
qc_init.cx(syn[0],code[1])
qc_init.cx(syn[0],code[2])
qc_init.cx(code[2],syn[0])

qc_init.h(syn[3])
qc_init.cx(syn[3],code[2])
qc_init.cx(syn[3],code[3])
qc_init.cx(syn[3],code[4])
qc_init.cx(code[4],syn[3])
#qc_init.barrier()

qc_init.draw('mpl')
```

Let's check that this is true.

```
qc = qc_init.compose(qc_syn)
display(qc.draw('mpl'))

job = Aer.get_backend('qasm_simulator').run(qc)
job.result().get_counts()
```

Now let's make a circuit with which we can insert `x` and `z` gates on our two code qubits. For this we'll need to choose which of the 5 code qubits we have will correspond to the two required for the validity condition.
For this code we need to choose opposite corners.

```
error_qubits = [0,4]
```

Here 0 and 4 refer to the positions of the qubits in the following list, and hence are qubits `code[0]` and `code[4]`.

```
qc.qubits
```

To check that the code does as we require, we can use the following function to create circuits for inserting artificial errors. Here the errors we want to add are listed in `errors` as a simple text string, such as `x0` for an `x` on `error_qubits[0]`.

```
def insert(errors,error_qubits,code,syn,out):
    qc_insert = QuantumCircuit(code,syn,out)
    if 'x0' in errors:
        qc_insert.x(error_qubits[0])
    if 'x1' in errors:
        qc_insert.x(error_qubits[1])
    if 'z0' in errors:
        qc_insert.z(error_qubits[0])
    if 'z1' in errors:
        qc_insert.z(error_qubits[1])
    return qc_insert
```

Rather than all 16 possibilities, let's just look at the four cases where a single error is inserted.

```
for error in ['x0','x1','z0','z1']:

    qc = qc_init.compose(insert([error],error_qubits,code,syn,out)).compose(qc_syn)
    job = Aer.get_backend('qasm_simulator').run(qc)

    print('\nFor error '+error+':')
    counts = job.result().get_counts()
    for output in counts:
        print('Output was',output,'for',counts[output],'shots.')
```

Here we see that each bit in the output is `1` when a particular error occurs: the leftmost detects `z` on `error_qubits[1]`, then the next detects `x` on `error_qubits[1]`, and so on.

<div class="alert alert-block alert-danger">

<b>Attention</b>

The correct ordering of the output is important for this exercise. Please follow the order as given below:

1. The leftmost output represents `z` on `code[1]`.
2. The second output from left represents `x` on `code[1]`.
3. The third output from left represents `x` on `code[0]`.
4. The rightmost output represents `z` on `code[0]`.

</div>

When more errors affect the circuit, it becomes hard to unambiguously tell which errors occurred.
However, by continuously repeating the syndrome readout to get more results and analysing the data through the process of decoding, it is still possible to determine enough about the errors to correct their effects.

These kinds of considerations are beyond what we will look at in this challenge. Instead we'll focus on something simpler, but just as important: the fewer errors you have, and the simpler they are, the better your error correction will be. To ensure this, your error correction procedure should be tailor-made to the device you are using. In this challenge we'll be considering the device `ibmq_tokyo`. Though the real version of this was retired some time ago, it still lives on as one of the mock backends.

```
# Please use the backend given here
backend = FakeTokyo()
backend
```

As a simple idea of how our original circuit is laid out, let's see how many two-qubit gates it contains.

```
qc = qc_init.compose(qc_syn)
qc = transpile(qc, basis_gates=['u','cx'])
qc.num_nonlocal_gates()
```

If we were to transpile it to the `ibmq_tokyo` backend, remapping would need to occur at the cost of adding more two-qubit gates.

```
qc1 = transpile(qc,backend,basis_gates=['u','cx'], optimization_level=3)
qc1.num_nonlocal_gates()
```

We can control this to an extent by looking at which qubits on the device would be best to use as the qubits in the code. If we look at what qubits in the code need to be connected by two-qubit gates in `qc_syn`, we find the following required connectivity graph.

```
c0....s0....c1
:     :     :
:     :     :
s1....c2....s2
:     :     :
:     :     :
c3....s3....c4
```

No set of qubits on `ibmq_tokyo` can provide this, but certain sets like 0,1,2,5,6,7,10,11,12 come close. So we can set an `initial_layout` to tell the transpiler to use these.

```
initial_layout = [0,7,6,10,16,1,5,12,11]
```

These tell the transpiler which qubits on the device to use for the qubits in the circuit (for the order they are listed in `qc.qubits`).
So the first five entries in this list tell the circuit which qubits to use as the code qubits, and the next four entries are similarly for the syndrome qubits. So we use qubit 0 on the device as `code[0]`, qubit 7 as `code[1]`, and so on. Now let's use this for the transpilation.

```
qc2 = transpile(qc,backend,initial_layout=initial_layout, basis_gates=['u','cx'], optimization_level=3)
qc2.num_nonlocal_gates()
```

Transpilation is a random process, but you should typically find that this uses fewer two-qubit gates than when no initial layout is provided (you might need to re-run both transpilations multiple times to see it). Nevertheless, a properly designed error correction scheme should not need any remapping at all. It should be written for the exact device used, and the number of two-qubit gates should remain constant with certainty. This is a condition for a solution to be valid. So you'll not just need to provide an `initial_layout`, but also design your circuits specifically for that layout. But that part we leave up to you!

```
# Check your answer using following code
from qc_grader import grade_ex3
grade_ex3(qc_init,qc_syn,error_qubits,initial_layout)

# Submit your answer. You can re-submit at any time.
from qc_grader import submit_ex3
submit_ex3(qc_init,qc_syn,error_qubits,initial_layout)
```

## Additional information

**Created by:** James Wootton, Rahul Pratap Singh

**Version:** 1.0.0
# Applying CuDF: Workbook

Welcome to the fourth cuDF tutorial notebook! This is a practical example that utilizes cuDF and cuPy, geared primarily for new users. The purpose of this tutorial is to introduce new users to a data science processing pipeline using RAPIDS on real life datasets.

We will be working on a data science problem: US Accidents Prediction. This is a countrywide car accident dataset, which covers 49 states of the USA. The accident data are collected from February 2016 to June 2020, using two APIs that provide streaming traffic incident (or event) data. These APIs broadcast traffic data captured by a variety of entities, such as the US and state departments of transportation, law enforcement agencies, traffic cameras, and traffic sensors within the road-networks. Currently, there are about 3.5 million accident records in this dataset.

## What should I do?

Given below is a complete data science preprocessing pipeline for the dataset using the Pandas and Numpy libraries. Using the methods and techniques from the previous notebooks, you have to convert this pipeline to a RAPIDS implementation, using CuDF and CuPy. Don't forget to time your code cells and compare the performance with this original code, to understand why we are using RAPIDS. If you get stuck in the middle, feel free to refer to this sample solution.

## Here is the list of exercises in the lab where you need to modify code:

- <a href='#ex1'>Exercise 1</a><br> Loading the dataset from a csv file and storing it in a CuDF dataframe.
- <a href='#ex2'>Exercise 2</a><br> Creating kernel functions to run the given function optimally on a GPU.

The first step is downloading the dataset and putting it in the data directory, for use in this tutorial. Download the dataset here, and place it in the (host/data) folder. Now we will import the necessary libraries.

```
import os
import cudf
import numpy as np
import cupy as cp
import math
np.random.seed(12)
```

<a id='ex1'></a>

First we need to load the dataset from the csv into CuDF dataframes, for the preprocessing steps. If you need help, refer to the Getting Data In and Out module from this [notebook](01-Intro_to_cuDF.ipynb/).

```
#Modify the code in this cell

# Use cudf to read csv
%time df =

print(df)
```

First we will analyse the data and observe patterns that can help us process the data better for feeding to the machine learning algorithms in the future. Using `describe`, we will generate the descriptive statistics for all the columns. Descriptive statistics include those that summarize the central tendency, dispersion and shape of a dataset's distribution, excluding NaN values.

```
df.describe()
```

We will check the size of the dataset that is to be processed using the `len` function.

```
len(df)
```

You will notice that the dataset has 3513616 rows and takes quite a lot of time to read from the file. As we go ahead with the preprocessing, computations will require more time to execute, and that's where RAPIDS comes to the rescue! Now we use the `info` function to check the datatype of all the columns in the dataset.

```
df.info()
```

We will also check the number of missing values in the dataset, so that we can drop or fill in the missing values.

```
df.isna().sum()
```

There are many columns with null values, and we will fill them with random values or the mean from the column. We will drop some text columns, as we are not doing any natural language processing right now, but feel free to explore them on your own.
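The fill-with-mean pattern used below can be illustrated with a tiny self-contained pandas example (cuDF mirrors the pandas `fillna` API; the toy column here is hypothetical, not from the accidents dataset):

```python
import pandas as pd

# A column with two missing values
df = pd.DataFrame({"TMC": [200.0, None, 300.0, None]})

# Replace each NaN with the mean of the non-missing values (250.0)
df["TMC"] = df["TMC"].fillna(df["TMC"].mean())

print(df["TMC"].tolist())  # → [200.0, 250.0, 300.0, 250.0]
```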
We will also drop the columns with too many NaNs, as filling them will throw off our accuracy.

```
df = df.drop(columns = ['ID','Start_Time','End_Time','Street','Side','Description','Number','City','Country','Zipcode','Timezone','Airport_Code','Weather_Timestamp','Wind_Chill(F)','Wind_Direction','Wind_Speed(mph)','Precipitation(in)'])

#Here we are filling the TMC with mean.
df['TMC'] = df['TMC'].fillna(df['TMC'].mean())
df['End_Lat'] = df['End_Lat'].fillna(df['End_Lat'].mean())
df['End_Lng'] = df['End_Lng'].fillna(df['End_Lng'].mean())
df['Temperature(F)'] = df['Temperature(F)'].fillna(df['Temperature(F)'].mean())
df['Humidity(%)'] = df['Humidity(%)'].fillna(df['Humidity(%)'].mean())
df['Pressure(in)'] = df['Pressure(in)'].fillna(df['Pressure(in)'].mean())
df['Visibility(mi)'] = df['Visibility(mi)'].fillna(df['Visibility(mi)'].mean())
df['Weather_Condition'] = df['Weather_Condition'].fillna('Fair')
df['Sunrise_Sunset'] = df['Sunrise_Sunset'].fillna('Day')
df['Civil_Twilight'] = df['Civil_Twilight'].fillna('Day')
df['Nautical_Twilight'] = df['Nautical_Twilight'].fillna('Day')
df['Astronomical_Twilight'] = df['Astronomical_Twilight'].fillna('Day')
```

Now all the columns contain no NaN values and we can go ahead with the preprocessing.

<a id='ex2'></a>

As you have observed, the dataset has the start and end coordinates, so let us apply the Haversine distance formula to get the accident coverage distance. Take note of how these functions use the row-wise operations, something that we have learnt before. If you need help while creating the user defined functions, refer to this [notebook](02-Intro_to_cuDF_UDFs.ipynb).
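Before writing the GPU kernel, it may help to see the Haversine computation itself in plain Python. This is only an illustrative sketch, not the required cuDF kernel; the 6371 km Earth radius and the helper's name are assumptions for the example:

```python
from math import radians, sin, cos, asin, sqrt

def haversine(lat1, lng1, lat2, lng2, radius_km=6371.0):
    """Great-circle distance between two (lat, lng) points in kilometres."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lng2 - lng1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * radius_km * asin(sqrt(a))

# One degree of latitude is roughly 111 km
print(round(haversine(0.0, 0.0, 1.0, 0.0), 1))  # → 111.2
```

Inside the cuDF kernel, the same arithmetic would be applied element-wise to the `Start_Lat`, `Start_Lng`, `End_Lat` and `End_Lng` columns.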
```
from math import cos, sin, asin, sqrt, pi, atan2
from numba import cuda

#Modify the code in this cell
def haversine_distance_kernel(Start_Lat, Start_Lng, End_Lat, End_Lng, out):
    for i, (x_1, y_1, x_2, y_2) in enumerate(zip(Start_Lat, Start_Lng, End_Lat, End_Lng)):
        #Perform the computations here and store the final value in out[i]
        out[i] =

#Modify the code in this cell
%%time
#Add the arguments to the apply_rows function for the haversine distance kernel
df = df.apply_rows()
```

Wow! The code segment that previously took 7 minutes to compute now gets executed in less than a second!

```
#Modify the code in this cell
def haversine_distance_kernel(Start_Lat, Start_Lng, End_Lat, End_Lng, out):
    for i, (x_1, y_1, x_2, y_2) in enumerate(zip(Start_Lat, Start_Lng, End_Lat, End_Lng)):
        #Perform the computations here and store the final value in out[i]
        out[i] =

#Modify the code in this cell
%%time
#Add the arguments to the apply_chunks function for the haversine distance kernel
outdf = df.apply_chunks()
```

Save the dataframe in a csv for future use, and make sure you refer to our sample solution and compare your code's performance with it.

```
df.head()

df = df.dropna()

df.to_csv("../../data/data_proc.csv")
```

# Conclusion

Thus we have successfully used CuDF and CuPy to process the accidents dataset, and converted the data to a form more suitable for applying machine learning algorithms. In the extra labs for future labs in CuML we will be using this processed dataset. You must have observed the parallels between the RAPIDS pipeline and the traditional pipeline while writing your code. Try to experiment with the processing and make your code as efficient as possible.

# References

- Moosavi, Sobhan, Mohammad Hossein Samavatian, Srinivasan Parthasarathy, and Rajiv Ramnath. "A Countrywide Traffic Accident Dataset.", 2019.
- Moosavi, Sobhan, Mohammad Hossein Samavatian, Srinivasan Parthasarathy, Radu Teodorescu, and Rajiv Ramnath.
"Accident Risk Prediction based on Heterogeneous Sparse Data: New Dataset and Insights." In proceedings of the 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, ACM, 2019.
- If you need to refer to the dataset, you can download it [here](https://www.kaggle.com/sobhanmoosavi/us-accidents).

<center><a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a></center><br />

- This dataset is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.

## Licensing

This material is released by NVIDIA Corporation under the Creative Commons Attribution 4.0 International (CC BY 4.0).
![](../graphics/solutions-microsoft-logo-small.png)

# R for Data Professionals

## 01 Overview and Setup

In this course you'll cover the basics of the R language and environment from a Data Professional's perspective. While you will learn the basics of R itself, you'll quickly cover topics that have a lot more depth available. In each section you'll get more references to go deeper, which you should follow up on. Also watch for links within the text - click on each one to explore that topic.

The code sections of this course are as much a part of your learning as these overview files. You'll get not only assignments but explanations in the R code in those exercises. Make sure you check out the **00 Pre-Requisites** page before you start. You'll need all of the items loaded there before you can proceed with the course.

You'll cover these topics in the course:

<p style="border-bottom: 1px solid lightgrey;"></p>

<dl>
  <dt>Course Outline</dt>
  <dt>1 - Overview and Course Setup <i>(This section)</i></dt>
  <dt>2 - Programming Basics</dt>
  <dt>3 - Working with Data</dt>
  <dt>4 - Deployment and Environments</dt>
</dl>

<p style="border-bottom: 1px solid lightgrey;"></p>

<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/cortanalogo.png"> Overview</h2>

There are many "distributions" of R. The most common installation is from CRAN - the Comprehensive R Archive Network. The distribution you will use in this course, installed when you install SQL Server 2016 or higher with ML Services (R Services in the earlier versions), is called Microsoft R Open (MRO), and its base code is from the CRAN distribution. MRO replaces a couple of libraries (more about those later) and adds a few to increase the speed, capabilities and features of standard CRAN R.

You have a few ways of working with R:

- The Interactive Interpreter (Type `R` if it is in your path)
- Writing code and running it in some graphical environment (Such as VSCode, Visual Studio, RGUI, R-Studio, etc.)
- Calling an `.R` script file from the `R` command

When you're in command-mode, you'll see that the code works more like a scripting language. Programming-mode looks like a standard programming language environment - you'll normally use that within an Integrated Development Environment (IDE). In any case, R is an "interpreted" language, meaning you write code that R then runs through a series of steps before it returns a result.

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/aml-logo.png"><b>Activity: Verify Your Installation and Configure R</b></p>

Open the **01_OverviewAndCourseSetup.R** file and run the code you see there. The exercises will be marked out using comments: `<TODO> - 01`

```
# 01_OverviewAndCourseSetup.R
# Purpose: Initial Course Setup and displaying versions
# Author: Buck Woody
# Credits and Sources: Inline
# Last Updated: 27 June 2018

# Check the R Version and Information
# <TODO> - Fix this code so that it runs
print(version)

# EOF: 01_OverviewAndCourseSetup.R
```

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/thinking.jpg"><b>For Further Study</b></p>

- The Official R Documentation: https://mran.microsoft.com/rro
- The R tutorial (current as of the publication of this course) is in your ./assets folder as a file called `R-intro.pdf`.

Next, continue to *02 Programming Basics*
```
import pandas as pd
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "4"
import torch
import numpy as np
import pickle as pk
from tqdm import tqdm_notebook
from sklearn.metrics import cohen_kappa_score

from fastai.vision import *
from torch.nn import functional as F
from utils import *

current_time = get_BJ_time()
print(current_time)

import random

def seed_everything(seed):
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.backends.cudnn.deterministic = True

SEED = 2019
seed_everything(SEED)

deployment_dir = "../output/inference"

def qk(y_pred, y):
    k = torch.tensor(cohen_kappa_score(torch.round(y_pred), y, weights='quadratic'), device='cuda:0')
    k[k != k] = 0
    k[torch.isinf(k)] = 0
    return k

df_2019_cv = pd.read_csv('../input/aptos-data-split/df_2019_cv.csv')
df_2019_cv.head()

test_df = pd.read_csv('../input/aptos2019-blindness-detection/sample_submission.csv')
```

# Feature Extraction

## Train logits

### b3

```
b3_models = ["efficientnet-b3_0901_16-45-51_stage2_f1",
             "efficientnet-b3_0901_16-45-51_stage2_f2",
             "efficientnet-b3_0901_16-45-51_stage2_f3",
             "efficientnet-b3_0901_16-45-51_stage2_f4",
             "efficientnet-b3_0901_16-45-51_stage2_f5"]

b3_train_logits_list = []
for i, m in enumerate(b3_models):
    fold = i + 1
    learn = load_learner(deployment_dir, "{}.pkl".format(m))
    val_df = df_2019_cv[df_2019_cv["is_valid{}".format(fold)]]
    learn.data.add_test(ImageList.from_df(val_df,
                                          '../input/aptos2019-blindness-detection',
                                          cols="id_code",
                                          folder='train_images_ben_preprocessing_sigmaX10',
                                          suffix='.png'))
    logits,_ = learn.get_preds(DatasetType.Test)
    logits = logits.numpy()
    b3_train_logits_list.append(logits)
    np.save("../output/stacking/{}_logits.npy".format(m), logits)
    print(logits.shape)
```

### b4

```
b4_models = ["efficientnet-b4_0820_01-09-57_stage2_f1",
             "efficientnet-b4_0820_01-09-57_stage2_f2",
             "efficientnet-b4_0820_01-09-57_stage2_f3",
             "efficientnet-b4_0820_01-09-57_stage2_f4",
"efficientnet-b4_0821_00-02-25_stage2_f5"] b4_train_logits_list = [] for i, m in enumerate(b4_models): fold = i + 1 learn = load_learner(deployment_dir, "{}.pkl".format(m)) val_df = df_2019_cv[df_2019_cv["is_valid{}".format(fold)]] learn.data.add_test(ImageList.from_df(val_df, '../input/aptos2019-blindness-detection', cols="id_code", folder='train_images_ben_preprocessing_sigmaX10', suffix='.png')) logits,_ = learn.get_preds(DatasetType.Test) logits = logits.numpy() b4_train_logits_list.append(logits) np.save("../output/stacking/{}_logits.npy".format(m), logits) print(logits.shape) ``` ### b5 ``` b5_models = ["efficientnet-b5_0820_01-32-30_stage2_f1", "efficientnet-b5_0903_01-03-41_stage2_f2", "efficientnet-b5_0820_22-13-07_stage2_f3", "efficientnet-b5_0821_01-30-37_stage2_f4", "efficientnet-b5_0821_00-26-51_stage2_f5"] b5_train_logits_list = [] for i, m in enumerate(b5_models): fold = i + 1 learn = load_learner(deployment_dir, "{}.pkl".format(m)) val_df = df_2019_cv[df_2019_cv["is_valid{}".format(fold)]] learn.data.add_test(ImageList.from_df(val_df, '../input/aptos2019-blindness-detection', cols="id_code", folder='train_images_ben_preprocessing_sigmaX10', suffix='.png')) logits,_ = learn.get_preds(DatasetType.Test) logits = logits.numpy() b5_train_logits_list.append(logits) np.save("../output/stacking/{}_logits.npy".format(m), logits) print(logits.shape) ``` ## Test Feature ### Average #### b3 ``` b3_test_logits_list = [] for m in b3_models: learn = load_learner(deployment_dir, "{}.pkl".format(m)) learn.data.add_test(ImageList.from_df(test_df, '../input/aptos2019-blindness-detection', folder='test_images_ben_preprocessing_sigmaX10', suffix='.png')) logits,_ = learn.get_preds(DatasetType.Test) logits = logits.numpy() b3_test_logits_list.append(logits) np.save("../output/stacking/{}_logits_test.npy".format(m), logits) print(logits.shape) ``` #### b4 ``` b4_test_logits_list = [] for m in b4_models: learn = load_learner(deployment_dir, "{}.pkl".format(m)) 
learn.data.add_test(ImageList.from_df(test_df, '../input/aptos2019-blindness-detection', folder='test_images_ben_preprocessing_sigmaX10', suffix='.png')) logits,_ = learn.get_preds(DatasetType.Test) logits = logits.numpy() b4_test_logits_list.append(logits) np.save("../output/stacking/{}_logits_test.npy".format(m), logits) print(logits.shape) ``` #### b5 ``` b5_test_logits_list = [] for m in b5_models: learn = load_learner(deployment_dir, "{}.pkl".format(m)) learn.data.add_test(ImageList.from_df(test_df, '../input/aptos2019-blindness-detection', folder='test_images_ben_preprocessing_sigmaX10', suffix='.png')) logits,_ = learn.get_preds(DatasetType.Test) logits = logits.numpy() b5_test_logits_list.append(logits) np.save("../output/stacking/{}_logits_test.npy".format(m), logits) print(logits.shape) ``` # Train Stage 2 model on OOF ``` from sklearn.model_selection import GridSearchCV from sklearn.metrics import make_scorer def qk_np(y, y_pred): k = cohen_kappa_score(np.round(y_pred), y, weights='quadratic') return k score = make_scorer(qk_np, greater_is_better=True) b3_train_logits_list = [] for m in b3_models: logits = np.load("../output/stacking/{}_logits.npy".format(m)) b3_train_logits_list.append(logits) print(logits.shape) b4_train_logits_list = [] for m in b4_models: logits = np.load("../output/stacking/{}_logits.npy".format(m)) b4_train_logits_list.append(logits) print(logits.shape) b5_train_logits_list = [] for m in b5_models: logits = np.load("../output/stacking/{}_logits.npy".format(m)) b5_train_logits_list.append(logits) print(logits.shape) X_train = np.concatenate([np.concatenate(b3_train_logits_list, axis=0), np.concatenate(b4_train_logits_list, axis=0), np.concatenate(b5_train_logits_list, axis=0)], axis=1) y_train = [] n_fold = 5 for i in range(1, n_fold+1): label_t = df_2019_cv[df_2019_cv["is_valid{}".format(i)]]["diagnosis"].tolist() y_train += label_t print(X_train.shape) ``` ## LightGBM ``` import lightgbm as lgb estimator = 
lgb.LGBMRegressor(random_state=SEED) param_grid = { 'max_depth': [3, 5], # 'max_depth': [5], # 'learning_rate': [0.05], 'learning_rate': [0.01, 0.05, 0.1], 'feature_fraction': [0.6, 0.7, 0.8, 0.9, 0.95], # 'feature_fraction': [0.7], 'bagging_fraction': [0.6, 0.7, 0.8, 0.9, 0.95], # 'bagging_fraction': [0.7], # 'bagging_freq': [8], 'bagging_freq': [5, 6, 8], 'lambda_l1': [0, 0.1, 0.4], # 'lambda_l1': [0], # 'lambda_l2': [15], 'lambda_l2': [0, 10, 15, 20], # 'cat_smooth': [1], 'cat_smooth': [1, 10, 15], } gbm = GridSearchCV(estimator, param_grid, cv=5, n_jobs=-1, scoring=score, verbose=1) gbm.fit(X_train, y_train) print('Best parameters found by grid search are:', gbm.best_params_) gbm.cv_results_ print(gbm.best_score_, qk_np(y_train, gbm.predict(X_train))) model_save_name = "lightgbm-{}".format(current_time) with open(os.path.join(deployment_dir, model_save_name+".pkl"), "wb") as f: pk.dump(gbm.best_estimator_, f) print(model_save_name) ``` ## XGBoost ``` import xgboost as xgb estimator_xgb = xgb.XGBRegressor(n_jobs=8, random_state=SEED) parameters = { 'max_depth': [3], 'learning_rate': [0.1], # 'learning_rate': [0.01, 0.02, 0.05, 0.1, 0.15], 'min_child_weight': [20], # 'min_child_weight': [0, 2, 5, 10, 20], 'max_delta_step': [2], # 'max_delta_step': [0, 0.2, 0.6, 1, 2], 'subsample': [0.8], # 'subsample': [0.6, 0.7, 0.8, 0.85, 0.95], 'colsample_bytree': [0.7], # 'colsample_bytree': [0.5, 0.6, 0.7, 0.8, 0.9], 'reg_alpha': [0], # 'reg_alpha': [0, 0.25, 0.5, 0.75, 1], 'reg_lambda': [0.6], # 'reg_lambda': [0.2, 0.4, 0.6, 0.8, 1], 'scale_pos_weight': [0.8] # 'scale_pos_weight': [0.2, 0.4, 0.6, 0.8, 1] } xlf = GridSearchCV(estimator_xgb, parameters, cv=5, n_jobs=16, scoring=score, verbose=1) xlf.fit(X_train, y_train) print('Best parameters found by grid search are:', xlf.best_params_) xlf.cv_results_ print(xlf.best_score_, qk_np(y_train, xlf.predict(X_train))) model_save_name = "xgboost-{}".format(current_time) with open(os.path.join(deployment_dir,
model_save_name+".pkl"), "wb") as f: pk.dump(xlf.best_estimator_, f) print(model_save_name) ``` ## SVR ``` from sklearn.svm import SVR # svr = SVR(gamma=0.0001, C=100) estimator_svr = SVR() tuned_parameters = [{'kernel': ['rbf'], 'gamma': [1e-3, 1e-4], 'C': [1, 10, 100, 1000]}, {'kernel': ['linear'], 'C': [1, 10, 100, 1000]}] svr = GridSearchCV(estimator_svr, tuned_parameters, cv=5, n_jobs=16, scoring=score, verbose=1) svr.fit(X_train, y_train) print('Best parameters found by grid search are:', svr.best_params_) svr.cv_results_ print(svr.best_score_, qk_np(y_train, svr.predict(X_train))) model_save_name = "svr-{}".format(current_time) with open(os.path.join(deployment_dir, model_save_name+".pkl"), "wb") as f: # with open(os.path.join(deployment_dir, "svr-0903_05-26-03.pkl"), "wb") as f: pk.dump(svr.best_estimator_, f) # pk.dump(svr, f) print(model_save_name) ``` ## CatBoost ``` from catboost import CatBoostRegressor estimator_cb = CatBoostRegressor(random_seed=SEED) params = { 'depth':[3,1,2,6,4,5], # 'iterations':[500], 'iterations':[250,500,750,1000], # 'learning_rate':[0.2], 'learning_rate':[0.01,0.1,0.2,0.3], 'l2_leaf_reg':[3,1,5,10], 'border_count':[100,128, 200, 254, 300] } cb = GridSearchCV(estimator_cb, params, cv=5, n_jobs=16, scoring=score, verbose=1) cb.fit(X_train, y_train) print('Best parameters found by grid search are:', cb.best_params_) cb.cv_results_ print(cb.best_score_, qk_np(y_train, cb.predict(X_train))) model_save_name = "cb-{}".format(current_time) with open(os.path.join(deployment_dir, model_save_name+".pkl"), "wb") as f: pk.dump(cb.best_estimator_, f) print(model_save_name) ``` # Test ``` b3_test_logits_list = [] for m in b3_models: logits = np.load("../output/stacking/{}_logits_test.npy".format(m)) b3_test_logits_list.append(logits) print(logits.shape) b4_test_logits_list = [] for m in b4_models: logits = np.load("../output/stacking/{}_logits_test.npy".format(m)) b4_test_logits_list.append(logits) print(logits.shape) b5_test_logits_list = 
[] for m in b5_models: logits = np.load("../output/stacking/{}_logits_test.npy".format(m)) b5_test_logits_list.append(logits) print(logits.shape) ``` ## LightGBM ``` model_save_name = "lightgbm-0903_05-26-03" b3_test_avg_feats = np.average(b3_test_logits_list, axis=0) b4_test_avg_feats = np.average(b4_test_logits_list, axis=0) b5_test_avg_feats = np.average(b5_test_logits_list, axis=0) X_test = np.concatenate([b3_test_avg_feats, b4_test_avg_feats, b5_test_avg_feats], axis=1) y_pred = gbm.predict(X_test) y_pred = np.round(y_pred) test_df.diagnosis = y_pred.astype(int) test_df.hist() plt.show() submition_filename = "../output/submission/{}-5-fold_avg_logits_test.csv".format(model_save_name) test_df.to_csv(submition_filename, index=False) print(submition_filename) # 5 test feature then avg results = [] for b3, b4, b5 in zip(b3_test_logits_list, b4_test_logits_list, b5_test_logits_list): X_test = np.concatenate([b3, b4, b5], axis=1) res = gbm.predict(X_test) results.append(res) avg_res_gbm = np.average(results, axis=0) np.save("../output/submission/{}-5-fold_logits_avg_test_logits.npy".format(model_save_name), avg_res_gbm) y_pred = np.round(avg_res_gbm) test_df.diagnosis = y_pred.astype(int) test_df.hist() plt.show() submition_filename = "../output/submission/{}-5-fold_logits_avg_test.csv".format(model_save_name) test_df.to_csv(submition_filename, index=False) print(submition_filename) ``` ## XGBoost ``` model_save_name = "xgboost-0903_05-26-03" b3_test_avg_feats = np.average(b3_test_logits_list, axis=0) b4_test_avg_feats = np.average(b4_test_logits_list, axis=0) b5_test_avg_feats = np.average(b5_test_logits_list, axis=0) X_test = np.concatenate([b3_test_avg_feats, b4_test_avg_feats, b5_test_avg_feats], axis=1) y_pred = xlf.predict(X_test) y_pred = np.round(y_pred) test_df.diagnosis = y_pred.astype(int) test_df.hist() plt.show() submition_filename = "../output/submission/{}-5-fold_avg_logits_test.csv".format(model_save_name) test_df.to_csv(submition_filename,
index=False) print(submition_filename) # 5 test feature then avg results = [] for b3, b4, b5 in zip(b3_test_logits_list, b4_test_logits_list, b5_test_logits_list): X_test = np.concatenate([b3, b4, b5], axis=1) res = xlf.predict(X_test) results.append(res) avg_res_xlf = np.average(results, axis=0) np.save("../output/submission/{}-5-fold_logits_avg_test_logits.npy".format(model_save_name), avg_res_xlf) y_pred = np.round(avg_res_xlf) test_df.diagnosis = y_pred.astype(int) test_df.hist() plt.show() submition_filename = "../output/submission/{}-5-fold_logits_avg_test.csv".format(model_save_name) test_df.to_csv(submition_filename, index=False) print(submition_filename) ``` ## SVR ``` model_save_name = "svr-0903_05-26-03" b3_test_avg_feats = np.average(b3_test_logits_list, axis=0) b4_test_avg_feats = np.average(b4_test_logits_list, axis=0) b5_test_avg_feats = np.average(b5_test_logits_list, axis=0) X_test = np.concatenate([b3_test_avg_feats, b4_test_avg_feats, b5_test_avg_feats], axis=1) y_pred = svr.predict(X_test) y_pred = np.round(y_pred) test_df.diagnosis = y_pred.astype(int) test_df.hist() plt.show() submition_filename = "../output/submission/{}-5-fold_avg_logits_test.csv".format(model_save_name) test_df.to_csv(submition_filename, index=False) print(submition_filename) # 5 test feature then avg results = [] for b3, b4, b5 in zip(b3_test_logits_list, b4_test_logits_list, b5_test_logits_list): X_test = np.concatenate([b3, b4, b5], axis=1) res = svr.predict(X_test) results.append(res) avg_res_svr = np.average(results, axis=0) np.save("../output/submission/{}-5-fold_logits_avg_test_logits.npy".format(model_save_name), avg_res_svr) y_pred = np.round(avg_res_svr) test_df.diagnosis = y_pred.astype(int) test_df.hist() plt.show() submition_filename = "../output/submission/{}-5-fold_logits_avg_test.csv".format(model_save_name) test_df.to_csv(submition_filename, index=False) print(submition_filename) ``` ## CatBoost ``` b3_test_avg_feats = np.average(b3_test_logits_list, 
axis=0) b4_test_avg_feats = np.average(b4_test_logits_list, axis=0) b5_test_avg_feats = np.average(b5_test_logits_list, axis=0) X_test = np.concatenate([b3_test_avg_feats, b4_test_avg_feats, b5_test_avg_feats], axis=1) y_pred = cb.predict(X_test) y_pred = np.round(y_pred) test_df.diagnosis = y_pred.astype(int) test_df.hist() plt.show() submition_filename = "../output/submission/{}-5-fold_avg_logits_test.csv".format(model_save_name) test_df.to_csv(submition_filename, index=False) print(submition_filename) # 5 test feature then avg results = [] for b3, b4, b5 in zip(b3_test_logits_list, b4_test_logits_list, b5_test_logits_list): X_test = np.concatenate([b3, b4, b5], axis=1) res = cb.predict(X_test) results.append(res) avg_res_cb = np.average(results, axis=0) np.save("../output/submission/{}-5-fold_logits_avg_test_logits.npy".format(model_save_name), avg_res_cb) y_pred = np.round(avg_res_cb) test_df.diagnosis = y_pred.astype(int) test_df.hist() plt.show() submition_filename = "../output/submission/{}-5-fold_logits_avg_test.csv".format(model_save_name) test_df.to_csv(submition_filename, index=False) print(submition_filename) ``` # Correlation Analysis ``` np.corrcoef([avg_res_gbm, avg_res_xlf, avg_res_svr, avg_res_cb]) ```
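All of the grid searches above score candidates with quadratic weighted kappa through `qk_np`. As a sanity check on that metric, here is a from-scratch version (a sketch with made-up labels; the helper name and toy data are illustrative, not from the original notebook) compared against scikit-learn's implementation:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def quadratic_weighted_kappa(y_true, y_pred, n_classes=5):
    """Quadratic weighted kappa computed from the confusion matrix."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # Observed agreement matrix (rows = true class, cols = predicted class)
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # Quadratic disagreement weights: (i - j)^2, normalized
    idx = np.arange(n_classes)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    # Expected matrix under independence (outer product of the marginals)
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1 - (W * O).sum() / (W * E).sum()

# Toy diabetic-retinopathy-style grades in 0..4
y_true = [0, 1, 2, 3, 4, 4, 2, 1]
y_pred = [0, 2, 2, 3, 4, 3, 2, 0]

ours = quadratic_weighted_kappa(y_true, y_pred)
ref = cohen_kappa_score(y_true, y_pred, weights='quadratic')
assert np.isclose(ours, ref)
```

The normalization of `W` cancels in the ratio; it is kept only to mirror the usual textbook definition.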
## enron emails to data.world Convert the enron email data into something easier to use in data.world. Data cleansing based *roughly* on: https://www.kaggle.com/zichen/d/wcukierski/enron-email-dataset/explore-enron/code ### Labels The records with `labeled` set were labelled by [CMU students](https://www.cs.cmu.edu/~./enron/). There are up to 12 categories per email: * Cat_[1-12]_level_1 = top-level category * Cat_[1-12]_level_2 = second-level category * Cat_[1-12]_weight = frequency with which this category was assigned to this message Here are the categories: * 1 Coarse genre * 1.1 Company Business, Strategy, etc. (elaborate in Section 3 [Topics]) * 1.2 Purely Personal * 1.3 Personal but in professional context (e.g., it was good working with you) * 1.4 Logistic Arrangements (meeting scheduling, technical support, etc) * 1.5 Employment arrangements (job seeking, hiring, recommendations, etc) * 1.6 Document editing/checking (collaboration) * 1.7 Empty message (due to missing attachment) * 1.8 Empty message * 2 Included/forwarded information * 2.1 Includes new text in addition to forwarded material * 2.2 Forwarded email(s) including replies * 2.3 Business letter(s) / document(s) * 2.4 News article(s) * 2.5 Government / academic report(s) * 2.6 Government action(s) (such as results of a hearing, etc) * 2.7 Press release(s) * 2.8 Legal documents (complaints, lawsuits, advice) * 2.9 Pointers to url(s) * 2.10 Newsletters * 2.11 Jokes, humor (related to business) * 2.12 Jokes, humor (unrelated to business) * 2.13 Attachment(s) (assumed missing) * 3 Primary topics (if coarse genre 1.1 is selected) * 3.1 regulations and regulators (includes price caps) * 3.2 internal projects -- progress and strategy * 3.3 company image -- current * 3.4 company image -- changing / influencing * 3.5 political influence / contributions / contacts * 3.6 california energy crisis / california politics * 3.7 internal company policy * 3.8 internal company operations * 3.9 alliances /
partnerships * 3.10 legal advice * 3.11 talking points * 3.12 meeting minutes * 3.13 trip reports * 4 Emotional tone (if not neutral) * 4.1 jubilation * 4.2 hope / anticipation * 4.3 humor * 4.4 camaraderie * 4.5 admiration * 4.6 gratitude * 4.7 friendship / affection * 4.8 sympathy / support * 4.9 sarcasm * 4.10 secrecy / confidentiality * 4.11 worry / anxiety * 4.12 concern * 4.13 competitiveness / aggressiveness * 4.14 triumph / gloating * 4.15 pride * 4.16 anger / agitation * 4.17 sadness / despair * 4.18 shame * 4.19 dislike / scorn ``` import os, sys, email import numpy as np import pandas as pd from boto.s3.key import Key import boto import zipfile from subprocess import check_output print(check_output(["ls", "../input"]).decode("utf8")) # Read the data into a DataFrame emails_df = pd.read_csv('../input/emails.csv') print(emails_df.shape) emails_df.head() # A single message looks like this print(emails_df['message'][0]) ## Helper functions def get_text_from_email(msg, max_word_len=30): '''To get the content from email objects''' parts = [] for part in msg.walk(): if part.get_content_type() == 'text/plain': payload = part.get_payload() payload = ' '.join(filter(lambda x: len(x) < max_word_len, payload.split())) parts.append( payload ) return ''.join(parts) def split_email_addresses(line): '''To separate multiple email addresses''' if line: addrs = line.split(',') addrs = frozenset(map(lambda x: x.strip(), addrs)) else: addrs = None return addrs # Parse the emails into a list of email objects messages = list(map(email.message_from_string, emails_df['message'])) # Get fields from parsed email objects keys = messages[0].keys() for key in keys: emails_df[key] = [doc[key] for doc in messages] # Parse content from emails emails_df['content'] = list(map(get_text_from_email, messages)) # Split multiple email addresses emails_df['From'] = emails_df['From'].map(split_email_addresses) emails_df['To'] = emails_df['To'].map(split_email_addresses) # Extract the root of 'file'
as 'user' emails_df['user'] = emails_df['file'].map(lambda x:x.split('/')[0]) # cleanup del messages emails_df.drop('message', axis=1, inplace=True) emails_df.head() print('shape of the dataframe:', emails_df.shape) # Find number of unique values in each column for col in emails_df.columns: print(col, emails_df[col].nunique()) print("content length: {}".format(emails_df["content"].map(len).max())) # Set index and drop columns with too few values emails_df = emails_df.set_index('Message-ID')\ .drop(['file', 'Mime-Version', 'Content-Type', 'Content-Transfer-Encoding'], axis=1) # Parse datetime emails_df['Date'] = pd.to_datetime(emails_df['Date'], infer_datetime_format=True) emails_df.dtypes def save_to_s3(file_name): s3 = boto.connect_s3() b = s3.get_bucket('brianray') k = Key(b) k.key = file_name k.set_contents_from_filename(file_name) k.set_acl('public-read') return k.generate_url(expires_in=0, query_auth=False) def zipit(file_name): zip_file_name = "{}.zip".format(file_name) zf = zipfile.ZipFile(zip_file_name, 'w', zipfile.ZIP_DEFLATED) try: zf.write(file_name) finally: zf.close() return zip_file_name import glob list_found = {} cats = [] for path in glob.glob("enron_with_categories/*/*.txt"): batch, filename = path.split("/")[1:] contents = open(path, "r").read() try: email_parsed = email.message_from_string(contents) list_found[email_parsed['Message-ID']] = [x.split(',') for x in open(path.replace(".txt", ".cats")).read().split()] except Exception as e: print("error: {}".format(e)) for x in range(12): x += 1 emails_df['Cat_{}_level_1'.format(x)] = None emails_df['Cat_{}_level_2'.format(x)] = None emails_df['Cat_{}_weight'.format(x)] = None emails_df.columns emails_df['labeled'] = False for item, val in list_found.items(): emails_df.loc[item, 'labeled'] = True i = 0 for lev1, lev2, weight in val: i += 1 emails_df.loc[item, 'Cat_{}_level_1'.format(i)] = lev1 emails_df.loc[item, 'Cat_{}_level_2'.format(i)] = lev2 emails_df.loc[item,
'Cat_{}_weight'.format(i)] = weight emails_df.columns emails_df[emails_df['labeled'] == True] len(emails_df.columns) emails_df.reset_index(level=0, inplace=True) emails_df.head() filename = "enron_05_17_2015_with_labels_v2.csv" emails_df.to_csv(filename) save_to_s3(zipit(filename)) chunks = emails_df.groupby(np.arange(len(emails_df)) // 100000) for i, chunk in chunks: name = "enron_05_17_2015_with_labels_v2_100K_chunk_{}_of_{}.csv".format(i+1, len(chunks)) chunk.to_csv(name) print(save_to_s3(zipit(name))) ```
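To make the `.cats` handling in the glob loop above concrete, here is the same parsing expression applied to a small made-up annotation file (the file contents below are hypothetical, just shaped like the real `level_1,level_2,weight` triples):

```python
# Hypothetical contents of one ".cats" annotation file: one
# "level_1,level_2,weight" triple per line.
cats_text = "1,1,1\n3,6,2\n4,11,1"

# Same parsing expression used in the notebook cell above
parsed = [x.split(',') for x in cats_text.split()]
assert parsed == [['1', '1', '1'], ['3', '6', '2'], ['4', '11', '1']]

# Each triple then fills the Cat_{i}_level_1 / Cat_{i}_level_2 /
# Cat_{i}_weight columns for the matching Message-ID.
for i, (lev1, lev2, weight) in enumerate(parsed, start=1):
    print("Cat_{0}_level_1={1}, Cat_{0}_level_2={2}, Cat_{0}_weight={3}"
          .format(i, lev1, lev2, weight))
```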
# State preparation with circuit optimization We want to create a circuit that produces the Bell state $\vert\Phi^+\rangle = \dfrac{\vert00\rangle + \vert11\rangle}{\sqrt 2}$. We already know that this state can be produced by a circuit containing a Hadamard gate on the first qubit followed by a CNOT gate [[1](https://en.wikipedia.org/wiki/Bell_state#Creating_Bell_states)], but we would like to replace the Hadamard gate with rotation gates. In the following we will use [PennyLane](https://pennylane.readthedocs.io/), a cross-platform Python library for quantum machine learning, because it abstracts away several implementation details (especially when it comes to using different quantum simulators and frameworks such as pyQuil/Forest). PennyLane ships with its own version of NumPy, enriched to include [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) - which lets the gradients on the circuits be computed in a more efficient way using clever transformations. In the following, I will also highlight my contributions to the library: the `SquaredErrorLoss` class and the ongoing work on a Hamiltonian decomposition method. Please check the README file for instructions on how to prepare the environment and execute this notebook. ### Importing the library ``` import pennylane as qml from pennylane import numpy as np from pennylane.qnn.cost import SquaredErrorLoss ``` ### Circuit definition First of all we create the "ansatz" circuit, a base structure consisting of a list of gates applied to specific wires. By doing this we can reuse the same configuration in different circuits, for instance if all we want to change is the measurement. Since we want to tackle the general case, we put both kinds of the allowed rotation gates on both qubits; then, we also add a CNOT gate. We will therefore have 4 different parameters, each corresponding to a different angle as a parameter for each rotation gate. 
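As a quick aside before building the rotation-based ansatz, the Hadamard-plus-CNOT claim above can be verified with plain NumPy (a standalone sketch, independent of PennyLane):

```python
import numpy as np

# Gate matrices in the computational basis |00>, |01>, |10>, |11>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])  # control on qubit 0

ket00 = np.array([1, 0, 0, 0])

# Hadamard on qubit 0, identity on qubit 1, then CNOT
state = CNOT @ np.kron(H, I) @ ket00

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
assert np.allclose(state, phi_plus)
```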
``` def circuit(angles, **kwargs): qml.RX(angles[0], wires=0) qml.RY(angles[1], wires=0) qml.RX(angles[2], wires=1) qml.RY(angles[3], wires=1) qml.CNOT(wires=[0, 1]) ``` In order to create a circuit, we need to include a _device_ and a _measurement_. For starting out we can use PennyLane's "default qubit" device, which can simulate a real quantum device with any number of qubits; we also set `analytic=False` and `shots=1000` to simulate a more realistic device. _Other devices can be used with the `pennylane-forest` plugin, which can be installed as explained in the README file. Once the plugin is installed and the QVM is running, any `forest.qvm` device can be uncommented and used in place of `default.qubit`. The `forest.qpu` device instead requires access to a real QPU._ ``` dev = qml.device('default.qubit', wires=2, analytic=False, shots=1000) # For a more realistic QVM simulation (needs `pennylane-forest`, `qvm` and `quilc`, see README) # dev = qml.device('forest.qvm', device='2q-qvm', shots=10, noisy=True) # For Aspen-8 simulation (needs `pennylane-forest`, `qvm` and `quilc`, see README) # dev = qml.device('forest.qvm', device='Aspen-8', shots=10, noisy=True) # For a real QPU # dev = qml.device('forest.qpu', device='Aspen-8', shots=100) # For a list of all the devices supported by the Forest SDK: # # from pyquil import list_quantum_computers # sorted(list_quantum_computers()) # To check the capabilities of a device: # # dev.capabilities() ``` We then create a list with one `PauliZ` _observable_ that we will use for the measurement. ``` observables = [qml.PauliZ(0)] ``` Now we can initialize some parameters (the gates' angles) and "execute" the circuit, using the expectation value of the observables as a measurement. The `measure` argument default for `qml.map` is already `'expval'`, so here it is added just for clarity. 
``` params = [0.2, 0.8, 0.4, 0.1] qnode = qml.map(circuit, observables, dev, measure='expval') print(qnode(params)) ``` The problem of measuring only one qubit is that, in our case where we have an entangled state, such measurement would affect the other qubit as well by "assigning" it the same value. For this reason, we need to measure the 2 qubits together instead. We can use different combinations of observables: - combined observables: `[qml.PauliZ(0), qml.PauliZ(1)]` (returns two values); - tensor product observables: `[qml.PauliZ(0) @ qml.PauliZ(1)]` (returns one value); - generic Hermitian observables: `[qml.Hermitian(observable, wires=[0, 1])]` (returns one value). The latter is the most flexible, and since it is supported not only by [PennyLane](https://pennylane.readthedocs.io/en/stable/code/api/pennylane.Hermitian.html) but also by other frameworks (e.g. [Forest](http://docs.rigetti.com/en/stable/apidocs/pauli.html#pyquil.paulis.PauliSum) supports sums of Pauli operators, and [Qiskit](https://qiskit.org/textbook/ch-gates/fun-matrices.html#Pauli-decomposition) explains how to do it), we can make use of it. We start by defining our target state $\vert\psi\rangle = \dfrac{1}{\sqrt 2} \left[\begin{matrix}1\\0\\0\\1\end{matrix}\right]$: ``` psi = 1. / np.sqrt(2) * np.array([[1, 0, 0, 1]]).T ``` Now we construct the observable as the outer product $O = \vert\psi\rangle \langle\psi\vert$ of the state with itself: ``` obs = psi @ psi.conj().T print(obs) ``` Why do we "like" this matrix? Because, when used for measurements, it will "boost" the states that have close values for their first and last element. This matrix is actually the projector onto the $\psi$ state with eigenvalues 0 and 1.
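The projector claim can also be double-checked directly, without diagonalizing (a small standalone NumPy check, separate from the notebook's own cells):

```python
import numpy as np

psi = np.array([[1, 0, 0, 1]]).T / np.sqrt(2)
obs = psi @ psi.conj().T

# A projector is idempotent: applying it twice is the same as applying it once
assert np.allclose(obs @ obs, obs)
# It leaves |psi> untouched (eigenvalue 1)...
assert np.allclose(obs @ psi, psi)
# ...and annihilates anything orthogonal to |psi> (eigenvalue 0)
perp = np.array([[0, 1, 0, 0]]).T
assert np.allclose(obs @ perp, 0)
```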
``` eigenvalues, eigenvectors = np.linalg.eig(obs) print(eigenvalues) ``` The eigenstate corresponding to the eigenvalue 1 is our desired state: ``` eig_1 = np.array([eigenvectors[:, 0]]).T print(eig_1) print(np.allclose(eig_1, psi)) ``` Therefore, we need to optimize its expectation value to be 1. In fact, we have that obviously $\langle\psi\vert O \vert\psi\rangle = 1$: ``` print(psi.T @ obs @ psi) ``` and a vector $\tilde\psi$ close to $\psi$ will have a measurement close to 1: ``` psi_tilde = psi - 0.001 # For normalization psi_tilde[1:3] = np.sqrt(0.5 - psi_tilde[0] ** 2) assert np.allclose(np.linalg.norm(psi_tilde), 1.0) print(psi_tilde.T @ obs @ psi_tilde) ``` Let's also verify whether the observable matrix is Hermitian by checking that it is equal to its own adjoint, i.e. $M = M^\dagger$: ``` print(np.allclose(obs, obs.conj().T)) ``` We can also check whether the matrix can be decomposed as a linear combination of Pauli operator tensor products: $O = \dfrac{1}{4}((I \otimes I) + (X \otimes X) - (Y \otimes Y) + (Z \otimes Z))$ This step has to be performed manually, but my contribution for a decomposition method is [in progress](https://github.com/XanaduAI/pennylane/pull/671). ``` # Manually-defined Pauli operators. They can also be derived from PennyLane observables, # e.g. PauliX = qml.PauliX(0).matrix identity = np.eye(2) pauliX = np.array([[0, 1], [1, 0]]) pauliY = np.array([[0, -1j], [1j, 0]]) pauliZ = np.array([[1, 0], [0, -1]]) decomp = 0.25 * np.sum([ np.kron(identity, identity), np.kron(pauliX, pauliX), -np.kron(pauliY, pauliY), np.kron(pauliZ, pauliZ) ], axis=0) # Is the decomposed matrix the same as the observable matrix we created before?
print(np.allclose(obs, decomp)) ``` Now we are ready to run the circuit again using the new observable: ``` observables = [qml.Hermitian(obs, wires=[0, 1])] qnode = qml.map(circuit, observables, dev, measure='expval') print(qnode(params)) ``` ### Optimization Since the circuit optimization has to be performed via gradient descent, we need first of all a good choice for the initial parameters. We can try a few different ones, including: - all parameters equal to 0; - all parameters randomly chosen; - parameters initialized to some chosen "sensible" defaults. ``` params_init_method = 'chosen' if params_init_method == 'zero': params = np.array([0.] * 4) elif params_init_method == 'random': np.random.seed(0) params = np.random.normal(0., np.pi, 4) elif params_init_method == 'chosen': params = np.array([np.pi / 4] * 4) else: raise ValueError('{} initialization method does not exist'.format(params_init_method)) ``` Then, we need to decide how many optimization steps we will run: ``` steps = 1000 ``` We also create an additional array that will collect the learned parameters per round: ``` params_history = np.zeros((4, steps)) ``` We define the cost function as the square of the difference between the value of the observable (which can be between 0 and 1) and 1; this means that the closer the observable gets to one, the smaller the cost becomes. My contribution is the `SquaredErrorLoss` class, which gives an easy way to calculate the loss given a target. ``` loss = SquaredErrorLoss(circuit, observables, dev) def cost(params): return loss(params, target=[1]) ``` Finally, we define the optimizer that we will use in the optimization process. We can start with the simplest `GradientDescentOptimizer`, and optionally try a more advanced optimizer such as the `AdamOptimizer`: ``` # opt = qml.GradientDescentOptimizer(stepsize=0.1) opt = qml.AdamOptimizer(stepsize=0.1) ``` We are ready to kick off the experiments!
It might take a while with the default parameters :) ``` %%time for i in range(steps): params = opt.step(cost, params) if i == 0: print(f'\tCost after step {i:4d}: {cost(params)[0]: .7f} ({params})') elif (i + 1) % 50 == 0: print(f'\tCost after step {i+1:4d}: {cost(params)[0]: .7f} ({params})') params_history[:, i] = params result = qml.map(circuit, observables, dev)(params) print('Optimized parameters: {}'.format(params)) print('Result: {}'.format(result)) # The state cannot be seen when using a real device try: print('Output state: {}'.format(np.round(dev.state, decimals=3))) except NotImplementedError: print('Cannot see state when using device ' + dev.name) ``` ### Evaluation Let's check the results of the last step of the last optimization: ``` print(params_history[:, -1]) ``` We can see that each row shows values around $\left[\begin{matrix}0&\dfrac{\pi}{2}&0&0\end{matrix}\right]$, corresponding to the following rotations: - $R_x(\phi) = \left[\begin{matrix}cos(\phi/2)&-i sin(\phi/2)\\-i sin(\phi/2)&cos(\phi/2)\end{matrix}\right] \implies R_x(0) = \left[\begin{matrix}1&0\\0&1\end{matrix}\right]$ - $R_y(\phi) = \left[\begin{matrix}cos(\phi/2)&-sin(\phi/2)\\sin(\phi/2)&cos(\phi/2)\end{matrix}\right] \implies R_y(0) = \left[\begin{matrix}1&0\\0&1\end{matrix}\right], R_y(\pi/2) = \dfrac{1}{\sqrt{2}}\left[\begin{matrix}1&-1\\1&1\end{matrix}\right]$ This means that only a $\dfrac{\pi}{2}$ Y-rotation should be applied on the first qubit, while the second qubit should be left unchanged. ### Visualization In this section we will see the calculated values across runs for every parameter. We will plot the points using `matplotlib`. 
``` import matplotlib.pyplot as plt %matplotlib inline # TODO: can be simplified fig, axs = plt.subplots(2, 2, figsize=(15,15)) axs[0, 0].plot(params_history[0, :]) axs[0, 0].set_title('RX(0)') axs[0, 1].plot(params_history[1, :]) axs[0, 1].set_title('RY(0)') axs[1, 0].plot(params_history[2, :]) axs[1, 0].set_title('RX(1)') axs[1, 1].plot(params_history[3, :]) axs[1, 1].set_title('RY(1)') ```
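As a closing check, the solution the optimizer converged to in the evaluation section (a $\pi/2$ Y-rotation on the first qubit, identity on the second, then CNOT) can be plugged into plain NumPy to confirm it really prepares $\vert\Phi^+\rangle$ (a sketch outside PennyLane, using the same $R_y$ convention quoted above):

```python
import numpy as np

def ry(phi):
    # RY rotation matrix, matching the convention in the evaluation section
    return np.array([[np.cos(phi / 2), -np.sin(phi / 2)],
                     [np.sin(phi / 2),  np.cos(phi / 2)]])

I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])  # control on qubit 0

ket00 = np.array([1, 0, 0, 0])

# Learned solution: RY(pi/2) on qubit 0, identity on qubit 1, then CNOT
state = CNOT @ np.kron(ry(np.pi / 2), I) @ ket00

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
assert np.allclose(state, phi_plus)
```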
**Initialization** *Setting up Fast.ai Environment* ``` !curl -s https://course.fast.ai/setup/colab | bash %reload_ext autoreload %autoreload 2 %matplotlib inline ``` **Downloading the Dependencies** ``` from fastai.basics import * from fastai.tabular import * ``` **Data Preparation** ``` path = Config.data_path() path.mkdir(parents=True, exist_ok=True) path.ls() cd data train_df = pd.read_pickle(path/"train_clean") train_df.head().T n = len(train_df) n ``` **Experimenting with Sample** ``` idx = np.random.permutation(range(n))[:2000] idx.sort() small_train_df = train_df.iloc[idx[:1000]] small_test_df = train_df.iloc[idx[1000:]] small_cont_vars = ["CompetitionDistance", "Mean_Humidity"] small_cat_vars = ["Store", "DayOfWeek", "PromoInterval"] small_train_df = small_train_df[small_cont_vars + small_cat_vars + ["Sales"]] small_test_df = small_test_df[small_cont_vars + small_cat_vars + ["Sales"]] small_train_df.head() small_test_df.head() categorify = Categorify(small_cat_vars, small_cont_vars) categorify(small_train_df) categorify(small_test_df, test=True) small_test_df.head(10) small_train_df.PromoInterval.cat.categories small_train_df["PromoInterval"].cat.codes[:10] fill_missing = FillMissing(small_cat_vars, small_cont_vars) fill_missing(small_train_df) fill_missing(small_test_df, test=True) small_train_df[small_train_df["CompetitionDistance_na"] == True] ``` ### **Preparing Full-Dataset** ``` train_df = pd.read_pickle(path/"train_clean") test_df = pd.read_pickle(path/"test_clean") len(train_df), len(test_df) procs = [FillMissing, Categorify, Normalize] cat_vars = ['Store', 'DayOfWeek', 'Year', 'Month', 'Day', 'StateHoliday', 'CompetitionMonthsOpen', 'Promo2Weeks', 'StoreType', 'Assortment', 'PromoInterval', 'CompetitionOpenSinceYear', 'Promo2SinceYear', 'State', 'Week', 'Events', 'Promo_fw', 'Promo_bw', 'StateHoliday_fw', 'StateHoliday_bw', 'SchoolHoliday_fw', 'SchoolHoliday_bw'] cont_vars = ['CompetitionDistance', 'Max_TemperatureC', 'Mean_TemperatureC', 
'Min_TemperatureC', 'Max_Humidity', 'Mean_Humidity', 'Min_Humidity', 'Max_Wind_SpeedKm_h', 'Mean_Wind_SpeedKm_h', 'CloudCover', 'trend', 'trend_DE', 'AfterStateHoliday', 'BeforeStateHoliday', 'Promo', 'SchoolHoliday'] dep_var = "Sales" df = train_df[cat_vars + cont_vars + [dep_var, "Date"]].copy() test_df["Date"].min(), test_df["Date"].max() cut = train_df["Date"][(train_df["Date"] == train_df["Date"][len(test_df)])].index.max() cut valid_idx = range(cut) df[dep_var].head() data = (TabularList.from_df(df, path=path, cat_names=cat_vars, cont_names=cont_vars, procs=procs) .split_by_idx(valid_idx) .label_from_df(cols=dep_var, label_cls=FloatList, log=True) .add_test(TabularList.from_df(test_df, path=path, cat_names=cat_vars, cont_names=cont_vars)) .databunch()) ``` ### **Model** ``` max_log_y = np.log(np.max(train_df["Sales"])*1.2) y_range = torch.tensor([0, max_log_y], device=defaults.device) learn = tabular_learner(data, layers=[1000, 500], ps=[0.001, 0.01], emb_drop=0.04, y_range=y_range, metrics=exp_rmspe) learn.model len(data.train_ds.cont_names) learn.lr_find() learn.recorder.plot() learn.fit_one_cycle(5, 1e-02, wd=0.2) learn.save("model-1") learn.recorder.plot_losses(skip_start=10000) learn.recorder.plot_lr() ``` **Submission** ``` test_preds = learn.get_preds(DatasetType.Test) test_df["Sales"] = np.exp(test_preds[0].data).numpy().T[0] test_df[["Id", "Sales"]] = test_df[["Id", "Sales"]].astype("int") test_df[["Id", "Sales"]].to_csv("rossmann_submission.csv", index=False) ```
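The learner above is fit on `log(Sales)` (via `label_from_df(..., log=True)`) and predictions are mapped back to the original scale with `np.exp`. The metric behind `exp_rmspe`, root mean squared percentage error, can be sketched as follows (a standalone illustration with made-up numbers, not fastai's implementation):

```python
import numpy as np

def rmspe(pred, targ):
    """Root mean squared percentage error, evaluated on the original (un-logged) scale."""
    pct_err = (targ - pred) / targ
    return np.sqrt(np.mean(pct_err ** 2))

targ = np.array([100.0, 200.0, 400.0])

# Training on log targets round-trips through np.exp with no practical loss of accuracy.
pred = np.exp(np.log(targ))
print(rmspe(pred, targ))         # ≈ 0.0

# A model that over-predicts every store's sales by 10% scores an RMSPE of about 0.10.
print(rmspe(targ * 1.10, targ))  # ≈ 0.10
```

Because the error is a *percentage*, a fixed absolute error hurts more on low-sales stores, which is one reason the log-transformed target works well here.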
``` %load_ext autoreload %autoreload 2 %matplotlib inline import sys import pathlib try: import matplotlib_views as views except ModuleNotFoundError: cwd = pathlib.Path().resolve().parent sys.path.append(str(cwd)) import matplotlib_views as views from matplotlib_views import histograms import numpy as np import matplotlib.pyplot as plt ``` ## Create some fake data ``` def gauss_2d(mu, sigma): x = np.random.normal(mu, sigma) y = np.random.normal(mu, sigma) return (x, y) def generate_random_points(mu, sigma, n_points=1_000): """ Return array of X and Y values around mu and sigma """ values = [gauss_2d(mu, sigma) for _ in range(n_points)] values = np.asarray(values) values = values.T return values values1 = generate_random_points(1, 20) + 10 values2 = generate_random_points(3, 50) + 40 values3 = generate_random_points(3, 10, n_points=500) + 100 xvalues = list(values1[0]) + list(values2[0]) + list(values3[0]) yvalues = list(values1[1]) + list(values2[1]) + list(values3[1]) ``` ## Create figure ``` views.set_global_style() fig, ax = views.get_plot() histograms.two_dimensional_hex(ax, xvalues, yvalues) views.fix_borders(ax) ax.set_ylabel("Desc [unit]") ax.set_xlabel("Desc [unit]") pass ``` ## Create histogram figure, but with KDE ``` from scipy.stats import gaussian_kde from matplotlib.ticker import NullFormatter # Define margins and relative dimensions left, width = 0.1, 0.65 bottom, height = 0.1, 0.65 bottom_h = left_h = left + width + 0.02 # Define layout rect_scatter = [left, bottom, width, height] rect_histx = [left, bottom_h, width, 0.1] rect_histy = [left_h, bottom, 0.1, height] # Define axis and figures fig = plt.figure(figsize=(12, 12)) ax_scatter = fig.add_axes(rect_scatter) ax_histx = fig.add_axes(rect_histx) ax_histy = fig.add_axes(rect_histy) # Fill in scatterplot / histogram histograms.two_dimensional_hex(ax_scatter, xvalues, yvalues) # Fill in KDE min_x, max_x, min_y, max_y = views.get_tick_limits(ax_scatter) bins = np.linspace(min_x, max_x, 300) 
gaussian_kernel = gaussian_kde(xvalues) values = gaussian_kernel(bins) ax_histx.plot(bins, values, "k", linewidth=1.0) bins = np.linspace(min_y, max_y, 300) gaussian_kernel = gaussian_kde(yvalues) values = gaussian_kernel(bins) ax_histy.plot(values, bins, "k", linewidth=1.0) # Fix borders nullfmt = NullFormatter() ax_histx.xaxis.set_major_formatter(nullfmt) ax_histy.yaxis.set_major_formatter(nullfmt) views.fix_borders(ax_histx, visibles=[False, False, False, False]) views.fix_borders(ax_histy, visibles=[False, False, False, False]) ax_histx.set_xticks([]) ax_histx.set_yticks([]) ax_histy.set_xticks([]) ax_histy.set_yticks([]) # Fix border of scatterplot views.fix_borders(ax_scatter) # Set labels ax_scatter.set_xlabel("Desc [unit]") ax_scatter.set_ylabel("Desc [unit]") pass ```
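For reference, the same scatter-plus-marginal-KDE layout can be reproduced without the custom `matplotlib_views` helpers (a minimal, self-contained sketch using only matplotlib and scipy; the data, figure size, and output filename are made up):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
xvals = rng.normal(10, 20, 1000)
yvals = rng.normal(40, 50, 1000)

# Same layout arithmetic as above: a main axes plus two thin marginal axes.
left, width = 0.1, 0.65
bottom, height = 0.1, 0.65
offset = left + width + 0.02

fig = plt.figure(figsize=(6, 6))
ax_scatter = fig.add_axes([left, bottom, width, height])
ax_histx = fig.add_axes([left, offset, width, 0.1])
ax_histy = fig.add_axes([offset, bottom, 0.1, height])

ax_scatter.hexbin(xvals, yvals, gridsize=30)

# 1-D KDEs along each margin; note the swapped (values, bins) order on the y-margin
# so the curve runs vertically.
bins = np.linspace(xvals.min(), xvals.max(), 300)
ax_histx.plot(bins, gaussian_kde(xvals)(bins), "k", linewidth=1.0)
bins = np.linspace(yvals.min(), yvals.max(), 300)
ax_histy.plot(gaussian_kde(yvals)(bins), bins, "k", linewidth=1.0)

fig.savefig("hexbin_kde.png", dpi=100)
```

The `matplotlib_views` helpers mainly add consistent styling and border handling on top of this same structure.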
# Skip-gram word2vec In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation. ## Readings Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material. * A really good [conceptual overview](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/) of word2vec from Chris McCormick * [First word2vec paper](https://arxiv.org/pdf/1301.3781.pdf) from Mikolov et al. * [NIPS paper](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) with improvements for word2vec also from Mikolov et al. * An [implementation of word2vec](http://www.thushv.com/natural_language_processing/word2vec-part-1-nlp-with-deep-learning-with-tensorflow-skip-gram/) from Thushan Ganegedara * TensorFlow [word2vec tutorial](https://www.tensorflow.org/tutorials/word2vec) ## Word embeddings When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation. ![one-hot encodings](assets/one_hot_encoding.png) To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. 
We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit. ![lookup](assets/lookup_matrix.png) Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers; for example, "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an **embedding lookup** and the number of hidden units is the **embedding dimension**. <img src='assets/tokenize_lookup.png' width=500> There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well. Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called **Word2Vec** uses the embedding layer to find vector representations of words that contain semantic meaning. ## Word2Vec The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red", will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram. <img src="assets/word2vec_architectures.png" width="500"> In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts. First up, importing packages. 
``` import time import numpy as np import tensorflow as tf import utils ``` Load the [text8 dataset](http://mattmahoney.net/dc/textdata.html), a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the `data` folder. Then you can extract it and delete the archive file to save storage space. ``` from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import zipfile dataset_folder_path = 'data' dataset_filename = 'text8.zip' dataset_name = 'Text8 Dataset' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(dataset_filename): with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar: urlretrieve( 'http://mattmahoney.net/dc/text8.zip', dataset_filename, pbar.hook) if not isdir(dataset_folder_path): with zipfile.ZipFile(dataset_filename) as zip_ref: zip_ref.extractall(dataset_folder_path) with open('data/text8') as f: text = f.read() ``` ## Preprocessing Here I'm fixing up the text to make training easier. This comes from the `utils` module I wrote. The `preprocess` function converts any punctuation into tokens, so a period is changed to ` <PERIOD> `. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it. ``` words = utils.preprocess(text) print(words[:30]) print("Total words: {}".format(len(words))) print("Unique words: {}".format(len(set(words)))) ``` And here I'm creating dictionaries to convert words to integers and back, from integers to words. 
The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list `int_words`. ``` vocab_to_int, int_to_vocab = utils.create_lookup_tables(words) int_words = [vocab_to_int[word] for word in words] ``` ## Subsampling Words that show up often, such as "the", "of", and "for", don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by $$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$ where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset. I'm going to leave this up to you as an exercise. Check out my solution to see how I did it. > **Exercise:** Implement subsampling for the words in `int_words`. That is, go through `int_words` and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to `train_words`. ``` from collections import Counter import random threshold = 1e-5 word_counts = Counter(int_words) total_count = len(int_words) freqs = {word: count/total_count for word, count in word_counts.items()} p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts} train_words = [word for word in int_words if random.random() < (1 - p_drop[word])] ``` ## Making batches Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. 
From [Mikolov et al.](https://arxiv.org/pdf/1301.3781.pdf): "Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels." > **Exercise:** Implement a function `get_target` that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window. ``` def get_target(words, idx, window_size=5): ''' Get a list of words in a window around an index. ''' R = np.random.randint(1, window_size+1) start = idx - R if (idx - R) > 0 else 0 stop = idx + R target_words = set(words[start:idx] + words[idx+1:stop+1]) return list(target_words) ``` Here's a function that returns batches for our network. The idea is that it grabs `batch_size` words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function, by the way, which helps save memory. 
``` def get_batches(words, batch_size, window_size=5): ''' Create a generator of word batches as a tuple (inputs, targets) ''' n_batches = len(words)//batch_size # only full batches words = words[:n_batches*batch_size] for idx in range(0, len(words), batch_size): x, y = [], [] batch = words[idx:idx+batch_size] for ii in range(len(batch)): batch_x = batch[ii] batch_y = get_target(batch, ii, window_size) y.extend(batch_y) x.extend([batch_x]*len(batch_y)) yield x, y ``` ## Building the graph From [Chris McCormick's blog](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/), we can see the general structure of our network. ![embedding_network](./assets/skip_gram_net_arch.png) The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal. The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset. I'm going to have you build the graph in stages now. First off, creating the `inputs` and `labels` placeholders like normal. > **Exercise:** Assign `inputs` and `labels` using `tf.placeholder`. We're going to be passing in integers, so set the data types to `tf.int32`. The batches we're passing in will have varying sizes, so set the batch sizes to [`None`]. To make things work later, you'll need to set the second dimension of `labels` to `None` or `1`. ``` train_graph = tf.Graph() with train_graph.as_default(): inputs = tf.placeholder(tf.int32, [None], name='inputs') labels = tf.placeholder(tf.int32, [None, None], name='labels') ``` ## Embedding The embedding matrix has a size of the number of words by the number of units in the hidden layer. 
So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary. > **Exercise:** Tensorflow provides a convenient function [`tf.nn.embedding_lookup`](https://www.tensorflow.org/api_docs/python/tf/nn/embedding_lookup) that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use `tf.nn.embedding_lookup` to get the embedding tensors. For the embedding matrix, I suggest you initialize it with uniform random numbers between -1 and 1 using [tf.random_uniform](https://www.tensorflow.org/api_docs/python/tf/random_uniform). ``` n_vocab = len(int_to_vocab) n_embedding = 200 # Number of embedding features with train_graph.as_default(): embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1)) embed = tf.nn.embedding_lookup(embedding, inputs) ``` ## Negative sampling For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only for a small number of incorrect labels. This is called ["negative sampling"](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf). Tensorflow has a convenient function to do this, [`tf.nn.sampled_softmax_loss`](https://www.tensorflow.org/api_docs/python/tf/nn/sampled_softmax_loss). 
> **Exercise:** Below, create weights and biases for the softmax layer. Then, use [`tf.nn.sampled_softmax_loss`](https://www.tensorflow.org/api_docs/python/tf/nn/sampled_softmax_loss) to calculate the loss. Be sure to read the documentation to figure out how it works. ``` # Number of negative labels to sample n_sampled = 100 with train_graph.as_default(): softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1)) softmax_b = tf.Variable(tf.zeros(n_vocab)) # Calculate the loss using negative sampling loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab) cost = tf.reduce_mean(loss) optimizer = tf.train.AdamOptimizer().minimize(cost) ``` ## Validation This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings. ``` with train_graph.as_default(): ## From Thushan Ganegedara's implementation valid_size = 16 # Random set of words to evaluate similarity on. valid_window = 100 # pick 8 samples from (0,100) and (1000,1100) each ranges. 
lower id implies more frequent valid_examples = np.array(random.sample(range(valid_window), valid_size//2)) valid_examples = np.append(valid_examples, random.sample(range(1000,1000+valid_window), valid_size//2)) valid_dataset = tf.constant(valid_examples, dtype=tf.int32) # We use the cosine distance: norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True)) normalized_embedding = embedding / norm valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset) similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding)) # If the checkpoints directory doesn't exist: !mkdir checkpoints epochs = 10 batch_size = 1000 window_size = 10 with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: iteration = 1 loss = 0 sess.run(tf.global_variables_initializer()) for e in range(1, epochs+1): batches = get_batches(train_words, batch_size, window_size) start = time.time() for x, y in batches: feed = {inputs: x, labels: np.array(y)[:, None]} train_loss, _ = sess.run([cost, optimizer], feed_dict=feed) loss += train_loss if iteration % 100 == 0: end = time.time() print("Epoch {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Avg. 
Training loss: {:.4f}".format(loss/100), "{:.4f} sec/batch".format((end-start)/100)) loss = 0 start = time.time() if iteration % 1000 == 0: # note that this is expensive (~20% slowdown if computed every 500 steps) sim = similarity.eval() for i in range(valid_size): valid_word = int_to_vocab[valid_examples[i]] top_k = 8 # number of nearest neighbors nearest = (-sim[i, :]).argsort()[1:top_k+1] log = 'Nearest to %s:' % valid_word for k in range(top_k): close_word = int_to_vocab[nearest[k]] log = '%s %s,' % (log, close_word) print(log) iteration += 1 save_path = saver.save(sess, "checkpoints/text8.ckpt") embed_mat = sess.run(normalized_embedding) ``` Restore the trained network if you need to: ``` with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) embed_mat = sess.run(embedding) ``` ## Visualizing the word vectors Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out [this post from Christopher Olah](http://colah.github.io/posts/2014-10-Visualizing-MNIST/) to learn more about T-SNE and other ways to visualize high-dimensional data. ``` %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt from sklearn.manifold import TSNE viz_words = 500 tsne = TSNE() embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :]) fig, ax = plt.subplots(figsize=(14, 14)) for idx in range(viz_words): plt.scatter(*embed_tsne[idx, :], color='steelblue') plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7) ```
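As a closing sanity check, the batching helpers `get_target` and `get_batches` from earlier are easy to exercise in isolation (a standalone sketch that repeats the two functions so the cell runs on its own; the seed just makes the random window reproducible):

```python
import numpy as np

def get_target(words, idx, window_size=5):
    ''' Get a list of words in a window around an index. '''
    R = np.random.randint(1, window_size + 1)
    start = idx - R if (idx - R) > 0 else 0
    stop = idx + R
    return list(set(words[start:idx] + words[idx + 1:stop + 1]))

def get_batches(words, batch_size, window_size=5):
    ''' Create a generator of word batches as a tuple (inputs, targets) '''
    n_batches = len(words) // batch_size  # only full batches
    words = words[:n_batches * batch_size]
    for idx in range(0, len(words), batch_size):
        x, y = [], []
        batch = words[idx:idx + batch_size]
        for ii in range(len(batch)):
            batch_y = get_target(batch, ii, window_size)
            y.extend(batch_y)
            x.extend([batch[ii]] * len(batch_y))
        yield x, y

np.random.seed(0)
x, y = next(get_batches(list(range(6)), batch_size=6, window_size=2))

# Every input is repeated once per target, a target never equals its input,
# and each target lies within `window_size` positions of its input.
assert len(x) == len(y)
assert all(xi != yi for xi, yi in zip(x, y))
assert all(abs(xi - yi) <= 2 for xi, yi in zip(x, y))
print(list(zip(x, y))[:8])
```

This makes the "one row per input-target pair" design concrete: the input word is duplicated once for each word sampled from its window.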
## Figure S16-S17. Pre-/co-eruptive deformation of the 2011 & 2017 Shinmoe-dake eruption ``` %matplotlib inline import os from cartopy import crs as ccrs from matplotlib import pyplot as plt from mintpy import view plt.rcParams.update({'font.size': 12}) work_dir = os.path.expanduser('~/Papers/2021_Kirishima/figs_src/obs') os.chdir(work_dir) print('Go to directory:', work_dir) def plot_asc_desc_maps(fnames, titles, dem_opt, out_fig, figsize=[9, 4]): # options for view.py opt = dem_opt opt += ' -c jet -v -5 5 -u cm --noverbose ' opt += ' --notitle --fontsize 12 --ref-size 3 --nocbar --alpha 0.9 ' opt += ' --lalo-step 0.2 ' #opt += ' --scalebar 0.2 0.13 0.04 --scalebar-pad 0.05 --noverbose ' # plot fig, axs = plt.subplots(nrows=1, ncols=2, figsize=figsize, subplot_kw=dict(projection=ccrs.PlateCarree())) axs = axs.flatten() # plot asc track ax = axs[0] cmd = 'view.py {f} phase {o} --lalo-loc 1 0 0 1 '.format(f=fnames[0], o=opt) data, atr, inps = view.prep_slice(cmd) ax, inps, im, cbar = view.plot_slice(ax, data, atr, inps) ax.set_title('asc track\n'+titles[0]) # plot desc track ax = axs[1] cmd = 'view.py {f} phase {o} --lalo-loc 0 0 0 1 '.format(f=fnames[1], o=opt) data, atr, inps = view.prep_slice(cmd) ax, inps, im, cbar = view.plot_slice(ax, data, atr, inps) ax.set_title('desc track\n'+titles[1]) # axis format fig.tight_layout() #plt.annotate('ALOS-1\nasc 424', xy=(0.9, 0.70), xycoords='figure fraction') #plt.annotate('ALOS-1\ndesc 73', xy=(0.9, 0.20), xycoords='figure fraction') # colorbar cax = fig.add_axes([1.0, 0.3, 0.015, 0.4]) cbar = plt.colorbar(im, cax=cax, orientation='vertical', ticks=[-5, 0, 5]) cbar.ax.tick_params(labelsize=12) cbar.set_label('Line-of-Sight\ndisplacement [cm]', fontsize=12) # output if out_fig: plt.savefig(out_fig, bbox_inches='tight', transparent=True, dpi=600) print('save figure to file', out_fig) plt.show() ``` ### The 2017 Shinmoe-dake eruption: ash/tephra deposit ``` # input files data_dir = 
os.path.expanduser('~/Papers/2021_Kirishima/figs_src/data/2017ShinmoeEruption') dem_file = os.path.join(data_dir, 'gsi10m.dem.wgs84') dem_opt = ' --dem {} --dem-noshade --contour-step 100 --contour-smooth 0.0 --shade-az 45 '.format(dem_file) fname1 = os.path.join(data_dir, 'Alos2AT131_20161206_20171219.unw') fname2 = os.path.join(data_dir, 'Alos2DT23_20161031_20171211.unw') titles = ['6 Dec 2016 - 19 Dec 2017', '31 Oct 2016 - 11 Dec 2017'] out_fig = os.path.join(work_dir, 'Shinmoe2017Co.png') plot_asc_desc_maps(fnames=[fname1, fname2], titles=titles, dem_opt=dem_opt, out_fig=out_fig, figsize=[6, 3.5]) ``` ### The 2011 Shinmoe-dake eruption: pre-eruptive inflation ``` # input files data_dir = os.path.expanduser('~/Papers/2021_Kirishima/figs_src/data/2011ShinmoeEruption') dem_file = os.path.join(data_dir, 'gsi30m.dem.wgs84') dem_opt = ' --dem {} --dem-noshade --contour-step 200 --contour-smooth 1.0 '.format(dem_file) fname1 = os.path.join(data_dir, 'AlosAT424_20100102_20101120.unw') fname2 = os.path.join(data_dir, 'AlosDT73_20091130_20110118.unw') titles = ['2 Jan - 20 Nov 2010', '30 Nov 2009 - 18 Jan 2011'] out_fig = os.path.join(work_dir, 'Shinmoe2011Pre.png') plot_asc_desc_maps(fnames=[fname1, fname2], titles=titles, dem_opt=dem_opt, out_fig=out_fig) ``` ### The 2011 Shinmoe-dake eruption: co-eruptive deflation + ash/tephra deposit ``` # input files data_dir = os.path.expanduser('~/Papers/2021_Kirishima/figs_src/data/2011ShinmoeEruption') dem_file = os.path.join(data_dir, 'gsi30m.dem.wgs84') dem_opt = ' --dem {} --dem-noshade --contour-step 200 --contour-smooth 1.0 '.format(dem_file) fname1 = os.path.join(data_dir, 'AlosAT424_20101120_20110220.unw') fname2 = os.path.join(data_dir, 'AlosDT73_20110118_20110305.unw') titles = ['20 Nov 2010 - 20 Feb 2011', '18 Jan - 5 Mar 2011'] out_fig = os.path.join(work_dir, 'Shinmoe2011Co.png') plot_asc_desc_maps(fnames=[fname1, fname2], titles=titles, dem_opt=dem_opt, out_fig=out_fig) ```
# Numbers ``` %matplotlib inline import matplotlib.pyplot as plt import numpy as np ``` ## The `ndarray`: Vectors, matrices and tensors dtype, shape, strides ### Vector ``` x = np.array([1,2,3]) x type(x) x.dtype x.shape x.strides ``` ### Matrix ``` x = np.array([[1,2,3], [4,5,6]], dtype=np.int32) x x.dtype x.shape x.strides ``` ### Tensor ``` x = np.arange(24).reshape((2,3,4)) x ``` ## Creating `ndarray`s ### From a file ``` %%file numbers.txt a,b,c # can also skip headers 1,2,3 4,5,6 np.loadtxt('numbers.txt', dtype='int', delimiter=',', skiprows=1, comments='#') ``` ### From Python lists or tuples ``` np.array([ [1,2,3], [4,5,6] ]) ``` ### From ranges arange, linspace, logspace ``` np.arange(1, 7).reshape((2,3)) np.linspace(1, 10, 4) np.logspace(0, 4, 5, dtype='int') ``` ### From a function `fromfunction` ``` np.fromfunction(lambda i, j: i*3 + j + 1, (2,3)) np.fromfunction(lambda i, j: (i-2)**2 + (j-2)**2, (5,5), dtype='int') ``` #### How to visualize `fromfunction` ``` j = np.repeat([np.arange(5)], 5, axis=0) i = j.T i j (i-2)**2 + (j-2)**2 ``` #### Using element-wise functions in `fromfunction` ``` np.fromfunction(lambda i, j: np.where(i==j,0, -1), (5,5)) np.fromfunction(lambda i, j: np.where(i<j, 1, np.where(i==j,0, -1)), (5,5)) np.fromfunction(lambda i, j: np.minimum(i,j), (5,5), dtype='int') np.fromfunction(lambda i, j: np.maximum(i,j), (5,5), dtype='int') ``` ### From special constructors zeros, ones, eye, diag ``` np.zeros((2,3)) np.ones((2,3)) np.eye(3) np.eye(3, 4) np.eye(4, k=-1) np.diag([1,2,3,4]) np.diag([1,2,3,4], k=1) ``` ### From random variables #### Convenience functions rand, randn ``` np.random.rand(2,3) np.random.randn(2,3) ``` #### Distributions uniform, normal, randint, poisson, multinomial, multivariate_normal ``` np.random.uniform(0, 1, (2,3)) np.random.normal(0, 1, (2,3)) np.random.randint(0, 10, (4,5)) np.random.poisson(10, (4,5)) np.random.multinomial(n=5, pvals=np.ones(5)/5, size=8) np.random.multivariate_normal(mean=[10,20,30], 
cov=np.eye(3), size=4) ``` ## Indexing ``` x = np.arange(20).reshape((4,5)) x ``` ### Extracting a scalar ``` x[1,1] ``` ### Extracting a vector ``` x[1] ``` ### Using slices ``` x[1,:] x[:,1] x[1:3,1:3] ``` ### Using slices with strides ``` x[::2,::2] ``` ### Extracting blocks with arbitrary row and column lists (fancy indexing) `np.ix_` ``` x[:, [0,3]] ``` Warning: Fancy indexing can only be used for 1 dimension at a time. In the example below, `numpy` treats the arguments as *paired* coordinates, and returns the values at (0,0) and (2,3). ``` x[[0,2],[0,3]] ``` Use the helper `np.ix_` to extract arbitrary blocks. ``` x[np.ix_([0,2], [0,3])] ``` ### A slice is a view, not a copy ``` x y = x[1:-1, 1:-1] y y *= 10 y x ``` Use the copy method to convert a view to a copy ``` z = x[1:-1, 1:-1].copy() z z[:] = 0 z x ``` ### Boolean indexing ``` x[x % 2 == 0] x[x > 3] ``` ### Functions that return indexes ``` idx = np.nonzero(x) idx x[idx] idx = np.where(x > 3) idx x[idx] ``` ## Margins and the `axis` argument ``` x ``` The 0th axis has 4 items, the 1st axis has 5 items. ``` x.shape x.mean() ``` ### Marginalizing out the 0th axis = column summaries ``` x.mean(axis=0) ``` ### Marginalizing out the 1st axis = row summaries ``` x.mean(axis=1) ``` Note that marginalizing out the last axis is a common default. ``` x.mean(axis=-1) ``` ### Marginalization works for higher dimensions in the same way ``` x = np.random.random((2,3,4)) x x.shape x.mean(axis=0).shape x.mean(axis=1).shape x.mean(axis=2).shape x.mean(axis=(0,1)).shape x.mean(axis=(0,2)).shape x.mean(axis=(1,2)).shape ``` ## Broadcasting Broadcasting is what happens when `numpy` tries to perform binary operations on two arrays with different shapes. 
In general, shapes are *promoted* to make the arrays compatible using the following rule - For each axis from highest to lowest - If both dimensions are the same, do nothing - If one of the dimensions is 1 or None and the other is $k$, promote to $k$ - Otherwise, raise an error ``` x = np.zeros((3,2)) x.shape x ``` Shapes are compatible ``` y = np.ones(2) y.shape x + y ``` Shapes are compatible ``` y = np.ones((1,2)) y.shape x + y ``` Shapes are incompatible but can be made compatible by adding an empty dimension ``` y = np.ones(3) y.shape try: x + y except ValueError as e: print(e) y[:, None].shape x + y[:, None] ``` Shapes are incompatible ``` y = np.ones((2,2)) y.shape try: x + y except ValueError as e: print(e) ``` ### More examples of broadcasting ``` x1 = np.arange(12) x1 x1 * 10 x2 = np.random.randint(0,10,(3,4)) x2 x2 * 10 x2.shape ``` ### Column-wise broadcasting ``` mu = np.mean(x2, axis=0) mu.shape x2 - mu (x2 - mu).mean(axis=0) ``` ### Row-wise broadcasting ``` mu = np.mean(x2, axis=1) mu.shape try: x2 - mu except ValueError as e: print(e) ``` ### We can add a "dummy" axis using None or `np.newaxis` ``` mu[:, None].shape x2 - mu[:, None] x2 - mu[:, np.newaxis] np.mean(x2 - mu[:, None], axis=1) ``` #### Reshaping works too ``` x2 - mu.reshape((-1,1)) ``` #### Exercise in broadcasting Creating a 12 by 12 multiplication table ``` x = np.arange(1, 13) x[:,None] * x[None,:] ``` Scaling to have zero mean and unit standard deviation for each feature. 
``` x = np.random.normal(10, 5,(3,4)) x ``` Scaling column-wise ``` (x - x.mean(axis=0))/x.std(axis=0) ``` Scaling row-wise ``` (x - x.mean(axis=1)[:, None])/x.std(axis=1)[:, None] ``` ## Combining `ndarray`s ``` x1 = np.zeros((3,4)) x2 = np.ones((3,5)) x3 = np.eye(4) x1 x2 x3 ``` ### Binding rows when number of columns is the same ``` np.r_[x1, x3] ``` ### Binding columns when number of rows is the same ``` np.c_[x1, x2] ``` ### You can combine more than 2 at a time ``` np.c_[x1, x2, x1] ``` ### Stacking ``` np.vstack([x1, x3]) np.hstack([x1, x2]) np.dstack([x2, 2*x2, 3*x2]) ``` ### Generic stack with axis argument ``` np.stack([x2, 2*x2, 3*x2], axis=0) np.stack([x2, 2*x2, 3*x2], axis=1) np.stack([x2, 2*x2, 3*x2], axis=2) ``` ### Repetition and tiling #### For a vector ``` x = np.array([1,2,3]) np.repeat(x, 3) np.tile(x, 3) ``` #### For a matrix ``` x = np.arange(6).reshape((2,3)) x np.repeat(x, 3) np.repeat(x, 3, axis=0) np.repeat(x, 3, axis=1) np.tile(x, (3,2)) ``` ## Splitting `ndarray`s ``` x = np.arange(32).reshape((4,8)) x np.split(x, 4) np.split(x, 4, axis=1) ``` ## Vectorization ### Example 1 The operators and functions (ufuncs) in Python are vectorized, and will work element-wise over all entries in an `ndarray`. ``` xs = np.zeros(10, dtype='int') for i in range(10): xs[i] = i**2 xs xs = np.arange(10)**2 xs ``` Using ufuncs ``` np.sqrt(xs) np.log1p(xs) ``` ### Example 2 Scalar product. 
``` n = 10 xs = np.random.rand(n) ys = np.random.rand(n) s = 0 for i in range(n): s += xs[i] * ys[i] s np.dot(xs, ys) xs @ ys ``` ### Example 3 \begin{align} y_0 &= \alpha + \beta_1 x_{01} + \beta_2 x_{02} \\ y_1 &= \alpha + \beta_1 x_{11} + \beta_2 x_{12} \\ y_2 &= \alpha + \beta_1 x_{21} + \beta_2 x_{22} \end{align} ``` m = 3 n = 2 alpha = np.random.rand(1) betas = np.random.rand(n,1) xs = np.random.rand(m,n) alpha betas xs ``` ### Using loops ``` ys = np.zeros((m,1)) for i in range(m): ys[i] = alpha for j in range(n): ys[i] += betas[j] * xs[i,j] ys ``` ### Removing inner loop ``` ys = np.zeros((m,1)) for i in range(m): ys[i] = alpha + xs[i,:].T @ betas ys ``` ### Removing all loops ``` ys = alpha + xs @ betas ys ``` ### Alternative approach The calculation with explicit intercepts and coefficients is common in deep learning, where $\alpha$ is called the bias ($b$) and $\beta$ are called the weights ($w$), and each equation is $y_i = b + w \cdot x_i$. It is common in statistics to use an augmented matrix in which the first column is all ones, so that all that is needed is a single matrix multiplication. ``` X = np.c_[np.ones(m), xs] X alpha betas betas_ = np.concatenate([[alpha], betas]) betas_ ys = X @ betas_ ys ``` ### Simulating diffusion ``` w = 100 h = 100 x = np.zeros((w+2,h+2), dtype='float') x[(w//2-1):(w//2+2), (h//2-1):(h//2+2)] = 1 wts = np.ones(5)/5 for i in range(41): if i % 10 == 0: plt.figure() plt.imshow(x[1:-1, 1:-1], interpolation='nearest') center = x[1:-1, 1:-1] left = x[:-2, 1:-1] right = x[2:, 1:-1] bottom = x[1:-1, :-2] top = x[1:-1, 2:] nbrs = np.dstack([center, left, right, bottom, top]) x = np.sum(wts * nbrs, axis=-1) ```
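The three versions of Example 3 should agree exactly; this is a small self-contained check (fixed seed, variable names illustrative) comparing the looped, vectorized, and augmented-matrix forms against each other:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 2
alpha = rng.random()        # bias / intercept
betas = rng.random((n, 1))  # weights / coefficients
xs = rng.random((m, n))     # design matrix without intercept column

# Looped version
ys_loop = np.zeros((m, 1))
for i in range(m):
    ys_loop[i] = alpha
    for j in range(n):
        ys_loop[i] += betas[j] * xs[i, j]

# Vectorized version
ys_vec = alpha + xs @ betas

# Augmented-matrix version: prepend a column of ones
X = np.c_[np.ones(m), xs]
betas_ = np.vstack([[alpha], betas])
ys_aug = X @ betas_

assert np.allclose(ys_loop, ys_vec)
assert np.allclose(ys_vec, ys_aug)
```

All three produce the same `(m, 1)` vector; the augmented form just folds the bias into one matrix multiplication.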
``` import pandas import json, os, sys, csv import datetime import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt sns.set(style="dark", rc={"axes.facecolor": (0, 0, 0, 0)}) ROOT = "/Users/kdobolyi/Documents/GitHub/hindsight2020/hindsight/paper_results/" def format_date(date): if '/' not in str(date): return 0 pieces = date.split("/") if len(pieces) != 3 or int(pieces[2]) < 20: return 0 weeks = datetime.date(int("20" + pieces[2]), int(pieces[0]), int(pieces[1])).isocalendar()[1] if pieces[2] == "20": return weeks else: return weeks + 52 def cleanQ(string, questions): for q in questions: if q in string: return q def makeChart(df_mini): # Initialize the FacetGrid object pal = sns.cubehelix_palette(10, rot=-.25, light=.7) g = sns.FacetGrid(df_mini, row="question_short", hue="question_short", aspect=15, height=.5, palette=pal) # Draw the densities in a few steps g.map(sns.kdeplot, "week", clip_on=False, alpha=1, linewidth=1.5) g.map(sns.kdeplot, "week", clip_on=False, color="b", lw=2) g.map(plt.axhline, y=0, lw=2, clip_on=False) # Define and use a simple function to label the plot in axes coordinates def label(x, color, label): ax = plt.gca() ax.text(0, .2, label, fontweight="bold", color=color, ha="left", va="center", transform=ax.transAxes) g.map(label, "week") # Set the subplots to overlap g.fig.subplots_adjust(hspace=-0.15) # Remove axes details that don't play well with overlap g.set_titles("") g.set(yticks=[]) g.despine(bottom=True, left=True) questions = ['1. Infectious Dose', '2. Transmissibility', '3. Host Range', '4. Incubation Period', '5. Clinical Presentation', '6. Protective Immunity', '7. Clinical Diagnosis', '8. Medical Treatments', '9. Vaccines', '10. Non-pharmaceutical Interventions (NPIs)', '11. Environmental Stability', '12. Decontamination', '13. PPE', '14. Forensics', '15. Genomics', '16. 
Forecasting'] df = pandas.read_csv(ROOT + "hindsight_results_combined_citations_DHSdates.csv",sep=',') df['week'] = df['matched_date'].apply(lambda x: format_date(x)) df['question_short'] = df['question'].apply(lambda x: cleanQ(x, questions)) # show all the data in the dataframe, including nonsense matches makeChart(df) # show the sentences that were related to ground_truth df_012 = df[df['score'].isin([1,0,-1,-2])] makeChart(df_012) # show the just sentences that were strong matches to ground_truth df_01 = df[df['score'].isin([1,0,-1])] makeChart(df_01) # show just the related sentences that represent novel research (not cited by the CORD19 papers) df_novel_research = df[df['is_citation'] == 0.0] df_novel_research = df_novel_research[df_novel_research['score'].isin([1,0,-1, -2])] makeChart(df_novel_research) # show just the close matches sentences that represent novel research (not cited by the CORD19 papers) df_novel_research = df[df['is_citation'] == 0.0] df_novel_research = df_novel_research[df_novel_research['score'].isin([1,0,-1])] makeChart(df_novel_research) # what are the journals of the matched papers? # TODO: add in original journals -- are they the same, or different? df_novel_research['matched_journal'].value_counts() # what percent of the ground truth papers (which had a known CORD19 entry) did we find the original # paper with STS matching? 
ground_truth_IDs = {} def mapGroundTruth(ground_truth_paperID, matched_claim_paperID, ground_truth_IDs): if ground_truth_paperID not in ground_truth_IDs.keys(): ground_truth_IDs[ground_truth_paperID] = False if str(ground_truth_paperID) != 'nan' and matched_claim_paperID.strip() in ground_truth_paperID: ground_truth_IDs[ground_truth_paperID] = True df[['ground_truth_paperID', 'matched_claim_paperID']].apply(lambda x: mapGroundTruth(*x, ground_truth_IDs), axis=1) df_non_none = df[df['ground_truth_paperID'] != 'None'] citations = len(ground_truth_IDs.keys()) count = 0 #print(ground_truth_IDs) for g in ground_truth_IDs.keys(): if ground_truth_IDs[g] == True: count += 1 print(count * 100.0 / citations) # what percent of the ground truth papers (which had a known CORD19 entry) were we unable to match anything at # all, presumably because the claim was paraphrased to the point of unrecognizability? largestMatches = {} def largestMatch(score, ground_truth_paperID, largestMatches): if ground_truth_paperID not in largestMatches.keys(): largestMatches[ground_truth_paperID] = -3 if score > -3: largestMatches[ground_truth_paperID] = 0 df[['score','ground_truth_paperID']].apply(lambda x: largestMatch(*x, largestMatches), axis=1) count = 0 for k in largestMatches.keys(): if largestMatches[k] == -3: count += 1 print(count * 100.0 / len(largestMatches.keys())) # what do the papers DHS cited, that are in the CORD19 dataset, look like? 
dhs_citations = pandas.read_csv(ROOT + "citations_df.csv",sep=',') dhs_citations['date'] = dhs_citations['date'].apply(lambda x: str(x)) dhs_citations['week'] = dhs_citations['date'].apply(lambda x: format_date(x)) dhs_citations['question_short'] = dhs_citations['question'].apply(lambda x: cleanQ(x, questions)) makeChart(dhs_citations) # number of sentences that had a CORD19 paper in our dataset df['binary_has_ground_truth_paper'] = df['ground_truth_paperID'].apply(lambda x: 0 if len(str(x)) <10 else 1) df_sentences_with_ground_truth_papers = df[df['binary_has_ground_truth_paper'] == 1] df_sentences_with_ground_truth_papers = df_sentences_with_ground_truth_papers[['question','question_short','ground_truth', 'ground_truth_paper_date']] df_sentences_with_ground_truth_papers = df_sentences_with_ground_truth_papers.drop_duplicates() df_sentences_with_ground_truth_papers['ground_truth_paper_date'] = df_sentences_with_ground_truth_papers['ground_truth_paper_date'].apply(lambda x: str(x)) df_sentences_with_ground_truth_papers['week'] = df_sentences_with_ground_truth_papers['ground_truth_paper_date'].apply(lambda x: format_date(x)) df_sentences_with_ground_truth_papers['df_sentences_with_ground_truth_papers'] = df_sentences_with_ground_truth_papers['question'].apply(lambda x: cleanQ(x, questions)) print(len(df_sentences_with_ground_truth_papers)) makeChart(df_sentences_with_ground_truth_papers) # show the sentence pairs where a contradiction was seen (only labelled in csv for scores >= -2) df_contradictions = df[df['real_contradiction'] == 1] df_contradictions = df_contradictions[df_contradictions['score'].isin([1,0,-1,-2])] makeChart(df_contradictions) # TODO: visualize the hedging/uncertainty over time with lineplot import numpy as np import pandas as pd import random def processRow(question, uncertainty_old, uncertainty_new, matched_date, original_date, all_dates, rows): q1 = np.nan q2 = np.nan q3 = np.nan q4 = np.nan q5 = np.nan q6 = np.nan q7 = np.nan q8 = np.nan q9 = 
np.nan q10 = np.nan q11 = np.nan q12 = np.nan q13 = np.nan q14 = np.nan q15 = np.nan q16 = np.nan if uncertainty_old == 'C' and uncertainty_new == 'C': uncertainty = 4 elif uncertainty_old == 'C' and uncertainty_new == 'U': uncertainty = 3 elif uncertainty_old == 'U' and uncertainty_new == 'C': uncertainty = 2 elif uncertainty_old == 'U' and uncertainty_new == 'U': uncertainty = 1 if "10." in question: q10 = uncertainty elif "11." in question: q11 = uncertainty elif "12." in question: q12 = uncertainty elif "13." in question: q13 = uncertainty elif "14." in question: q14 = uncertainty elif "15." in question: q15 = uncertainty elif "16." in question: q16 = uncertainty elif "1." in question: q1 = uncertainty elif "2." in question: q2 = uncertainty elif "3." in question: q3 = uncertainty elif "4." in question: q4 = uncertainty elif "5." in question: q5 = uncertainty elif "6." in question: q6 = uncertainty elif "7." in question: q7 = uncertainty elif "8." in question: q8 = uncertainty elif "9." in question: q9 = uncertainty if len(str(matched_date)) < 4: return matched_date = format_date(matched_date) original_date = format_date(original_date) # if we couldn't find an original_date, the function above set it to zero # get rid of any sentence pairs where DHS claim comes after CORD19 claim if matched_date > original_date: return while matched_date in all_dates and matched_date != 0: matched_date = matched_date * (1 + random.uniform(-0.99, 1.0)) all_dates.append(matched_date) rows.append([matched_date, q1, q2, q3, q4, q5, q6, q7, q8, q9, q10, q11, q12, q13, q14, q15, q16]) all_dates = [] rows = [] df_certain_measure = df[df['score'].isin([1,0,-1,-2])] df_certain_measure_matched = df_certain_measure[['question', 'matched_claim', 'matched_date', 'uncertainty_DHS', 'uncertainty_matched_claim', 'ground_truth_paper_date']] df_certain_measure_matched.drop_duplicates(inplace=True) df_certain_measure_matched = df_certain_measure_matched ctr = len(df_certain_measure_matched) for row 
in df_certain_measure_matched[['question', 'uncertainty_DHS', 'uncertainty_matched_claim', 'matched_date', 'ground_truth_paper_date']].iterrows(): processRow(row[1]['question'], row[1]['uncertainty_DHS'], row[1]['uncertainty_matched_claim'], row[1]['matched_date'], row[1]['ground_truth_paper_date'], all_dates, rows) data = pd.DataFrame(rows, columns=['date', 'q1', 'q2', 'q3', 'q4', 'q5', 'q6', 'q7', 'q8', 'q9', 'q10', 'q11', 'q12', 'q13', 'q14', 'q15', 'q16']) #data = data[['date', 'q3']] #data = data.rolling(7).mean() key = {'q1':'1. Infectious Dose', 'q2':'2. Transmissibility', 'q3':'3. Host Range', 'q4':'4. Incubation Period', 'q5':'5. Clinical Presentation', 'q6':'6. Protective Immunity', 'q7':'7. Clinical Diagnosis', 'q8':'8. Medical Treatments', 'q9':'9. Vaccines', 'q10':'10. Non-pharmaceutical Interventions (NPIs)', 'q11':'11. Environmental Stability', 'q12':'12. Decontamination', 'q13':'13. PPE', 'q14':'14. Forensics', 'q15':'15. Genomics', 'q16':'16. Forecasting'} #print(data) for q in ['q1', 'q2', 'q3', 'q4', 'q5', 'q6', 'q7', 'q8', 'q9','q10', 'q11', 'q12', 'q13', 'q14', 'q15', 'q16']: print(key[q]) sns.scatterplot(data=data, x='date', y=q) plt.xlabel("CORD19 week") plt.ylabel("Certainty for " + key[q]) plt.yticks(range(5), ['','U-U','U-C', 'C-U', 'C-C'], rotation='vertical') plt.savefig(ROOT + "certainty_" + q + ".png") plt.show() ```
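The `format_date` week-index trick used throughout this analysis (ISO week number for 2020, plus a 52-week offset for 2021 dates) can be checked against the standard library in isolation; a minimal sketch, independent of the dataframes:

```python
import datetime

def week_index(date_str):
    """Map an M/D/YY date string (year >= 20) to a running week index:
    ISO week number for 2020, ISO week + 52 for 2021. Returns 0 for
    anything unparseable, mirroring format_date above."""
    if '/' not in str(date_str):
        return 0
    pieces = date_str.split('/')
    if len(pieces) != 3 or int(pieces[2]) < 20:
        return 0
    week = datetime.date(int('20' + pieces[2]),
                         int(pieces[0]), int(pieces[1])).isocalendar()[1]
    return week if pieces[2] == '20' else week + 52

# 2020 dates keep their ISO week; 2021 dates continue the count past 52
assert week_index('3/15/20') == datetime.date(2020, 3, 15).isocalendar()[1]
assert week_index('1/10/21') == datetime.date(2021, 1, 10).isocalendar()[1] + 52
assert week_index('not a date') == 0
```

Note that `isocalendar()` follows ISO 8601, so dates near January 1 can fall in week 52/53 of the previous ISO year; the offset scheme above inherits that behavior.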
``` import numpy as np import matplotlib.pyplot as plt prof=[0.57, 0.45, 0.43, 0.4, 0.4, 0.45, 0.71, 1.2, 1.44, 1.29, 1.28, 1.31, 1.3, 1.32, 1.35, 1.44, 1.51, 1.41, 1.14, 0.99, 0.86, 0.85, 0.8, 0.7] plt.plot(prof) pop_day=500. pop_night=100. pop_avg=(pop_day+pop_night)/2 print(pop_avg) plt.plot([x*pop_avg for x in prof]) prof_norm=[x/(sum(prof)/len(prof)) for x in prof] print(sum(prof_norm)/len(prof_norm)) plt.plot([x*pop_avg for x in prof_norm]) prof_day=[] prof_night=[] night=[19,20,21,22,23,0,1,2,3,4,5,6,] day=[7,8,9,10,11,12,13,14,15,16,17,18] for i in day: prof_day.append(prof[i]) for i in night: prof_night.append(prof[i]) prof_day_norm=[x/(sum(prof_day)/len(prof_day)) for x in prof_day] print([x for x in prof_day_norm]) print(sum(prof_day_norm)/len(prof_day_norm)) prof_night_norm=[x/(sum(prof_night)/len(prof_night)) for x in prof_night] print([x for x in prof_night_norm]) print(sum(prof_night_norm)/len(prof_night_norm)) prof_all_norm=prof_night_norm+prof_day_norm all_day=night+day plt.plot(all_day,prof_all_norm) prof_all_norm [x/(sum(prof_all_norm)/len(prof_all_norm)) for x in prof_all_norm] prof_all_norm_rotate=prof_all_norm[5:]+prof_all_norm[0:5] plt.plot(prof_all_norm_rotate) prof_all_norm_rotate pop_day=100. pop_night=700. pop_final=prof_all_norm_rotate.copy() for i in range(0,24): if 8<=i<=19: pop_final[i]=pop_day*prof_all_norm_rotate[i] else: pop_final[i]=pop_night*prof_all_norm_rotate[i] prof_2=prof.copy() for i in range(0,24): if 5<=i<=8: prof_2[i]=prof[i]/(sum(prof[5:9])/(len(prof[5:9]))) elif 8<i<=17: prof_2[i]=prof[i]/(sum(prof[9:18])/(len(prof[9:18]))) elif 17<i<=20: prof_2[i]=prof[i]/(sum(prof[18:21])/(len(prof[18:21]))) else: prof_2[i]=prof[i]/((sum(prof)-sum(prof[5:21]))/(8)) pop_day=100. pop_night=500. 
pop_final=prof.copy() for i in range(0,24): if 5<=i<=8: pop_final[i]=(pop_night+pop_day)*0.5*prof_2[i] elif 8<i<=17: pop_final[i]=pop_day*prof_2[i] elif 17<i<=20: pop_final[i]=(pop_night+pop_day)*0.5*prof_2[i] else: pop_final[i]=pop_night*prof_2[i] plt.plot(pop_final) print((sum(pop_final[18:21]))/(len(pop_final[18:21]))) print((sum(pop_final[5:9]))/(len(pop_final[5:9]))) print((sum(pop_final)-sum(pop_final[5:21]))/(8)) print((sum(pop_final[9:18]))/(len(pop_final[9:18]))) prof_2 plt.plot(prof_2) time_zone=0 a=[x+time_zone for x in range(0,24)] rotated=[] for i in a: if i>=24: rotated.append(i-24) else: rotated.append(i) b=[x for x in range(0,24)] new_prof=np.interp(rotated,b,prof_2) pop_day=100. pop_night=1000. pop_final=prof.copy() for i in range(0,24): j=i+time_zone if j>=24: j=j-24 if 5<=j<=8: pop_final[i]=(pop_night+pop_day)*0.5*new_prof[i] elif 8<j<=17: pop_final[i]=pop_day*new_prof[i] elif 17<j<=20: pop_final[i]=(pop_night+pop_day)*0.5*new_prof[i] else: pop_final[i]=pop_night*new_prof[i] plt.plot(pop_final) new_prof plt.plot(new_prof) ```
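The recurring step above — dividing a profile by its own mean so that it averages to 1 before scaling by a population — can be factored into a small helper; a sketch with illustrative names:

```python
import numpy as np

def normalize_to_unit_mean(profile):
    """Scale a load profile so its mean is exactly 1, preserving its shape."""
    profile = np.asarray(profile, dtype=float)
    return profile / profile.mean()

prof = [0.57, 0.45, 0.43, 0.4, 0.4, 0.45, 0.71, 1.2, 1.44, 1.29, 1.28, 1.31,
        1.3, 1.32, 1.35, 1.44, 1.51, 1.41, 1.14, 0.99, 0.86, 0.85, 0.8, 0.7]
prof_norm = normalize_to_unit_mean(prof)

assert np.isclose(prof_norm.mean(), 1.0)  # mean forced to 1
# Multiplying by an average population then yields a curve whose
# mean equals that population:
assert np.isclose((300.0 * prof_norm).mean(), 300.0)
```

Because the normalized profile has unit mean, any population multiplier becomes the time-averaged head count directly, which is what the day/night splits above rely on.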
Deep Learning Models -- A collection of various deep learning architectures, models, and tips for TensorFlow and PyTorch in Jupyter Notebooks. - Author: Sebastian Raschka - GitHub Repository: https://github.com/rasbt/deeplearning-models ``` %load_ext watermark %watermark -a 'Sebastian Raschka' -v -p torch ``` - Runs on CPU or GPU (if available) # Model Zoo -- Wasserstein Generative Adversarial Networks (GAN) Implementation of a very simple/rudimentary Wasserstein GAN using just fully connected layers. The Wasserstein GAN is based on the paper - Arjovsky, M., Chintala, S., & Bottou, L. (2017). Wasserstein GAN. arXiv preprint arXiv:1701.07875. (https://arxiv.org/abs/1701.07875) The main differences from a regular GAN are annotated in the code. In short, the main differences are 1. Not using a sigmoid activation function and just using a linear output layer for the critic (i.e., discriminator). 2. Using label -1 instead of 1 for the real images; using label 1 instead of 0 for fake images. 3. Using Wasserstein distance (loss) for training both the critic and the generator. 4. After each weight update, clip the weights to be in range [-0.01, 0.01]. 5. Train the critic 5 times for each generator training update. 
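The label convention in points 2–3 can be sanity-checked with a NumPy stand-in for the critic loss (the same `mean(y_true * y_pred)` form as the `wasserstein_loss` defined in the model section): labelling real images -1 means that raising the critic's score on real images lowers the loss, while labelling fakes +1 means that raising scores on fakes raises it. The numbers below are arbitrary illustrative critic outputs.

```python
import numpy as np

def wasserstein_loss(y_true, y_pred):
    # Same form as the PyTorch version used below: mean(y_true * y_pred)
    return np.mean(y_true * y_pred)

scores_real = np.array([2.0, 3.0])   # critic outputs on real images
scores_fake = np.array([-1.0, 0.0])  # critic outputs on generated images

valid = -np.ones(2)  # label -1 for real images (WGAN convention here)
fake = np.ones(2)    # label +1 for fake images

critic_loss = (wasserstein_loss(valid, scores_real)
               + wasserstein_loss(fake, scores_fake))

# Minimizing this loss pushes critic scores on real images up
# and scores on fakes down, i.e. it maximizes the score gap:
assert np.isclose(critic_loss, -(scores_real.mean() - scores_fake.mean()))
```

So with these sign-carrying labels, simply *adding* the real and fake loss terms (as the training loop below does) already yields the negated Wasserstein estimate that the critic minimizes.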
## Imports ``` import time import numpy as np import torch import torch.nn.functional as F from torchvision import datasets from torchvision import transforms import torch.nn as nn from torch.utils.data import DataLoader if torch.cuda.is_available(): torch.backends.cudnn.deterministic = True ``` ## Settings and Dataset ``` ########################## ### SETTINGS ########################## # Device device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # Hyperparameters random_seed = 0 generator_learning_rate = 0.0005 discriminator_learning_rate = 0.0005 NUM_EPOCHS = 100 BATCH_SIZE = 128 LATENT_DIM = 50 IMG_SHAPE = (1, 28, 28) IMG_SIZE = 1 for x in IMG_SHAPE: IMG_SIZE *= x ## WGAN-specific settings num_iter_critic = 5 weight_clip_value = 0.01 ########################## ### MNIST DATASET ########################## # Note transforms.ToTensor() scales input images # to 0-1 range train_dataset = datasets.MNIST(root='data', train=True, transform=transforms.ToTensor(), download=True) test_dataset = datasets.MNIST(root='data', train=False, transform=transforms.ToTensor()) train_loader = DataLoader(dataset=train_dataset, batch_size=BATCH_SIZE, shuffle=True) test_loader = DataLoader(dataset=test_dataset, batch_size=BATCH_SIZE, shuffle=False) # Checking the dataset for images, labels in train_loader: print('Image batch dimensions:', images.shape) print('Image label dimensions:', labels.shape) break ``` ## Model ``` ########################## ### MODEL ########################## def wasserstein_loss(y_true, y_pred): return torch.mean(y_true * y_pred) class GAN(torch.nn.Module): def __init__(self): super(GAN, self).__init__() self.generator = nn.Sequential( nn.Linear(LATENT_DIM, 128), nn.LeakyReLU(inplace=True), #nn.Dropout(p=0.5), nn.Linear(128, IMG_SIZE), nn.Tanh() ) self.discriminator = nn.Sequential( nn.Linear(IMG_SIZE, 128), nn.LeakyReLU(inplace=True), #nn.Dropout(p=0.5), nn.Linear(128, 1), #nn.Sigmoid() # WGAN should have linear activation ) def 
generator_forward(self, z): img = self.generator(z) return img def discriminator_forward(self, img): pred = model.discriminator(img) return pred.view(-1) torch.manual_seed(random_seed) model = GAN() model = model.to(device) optim_gener = torch.optim.Adam(model.generator.parameters(), lr=generator_learning_rate) optim_discr = torch.optim.Adam(model.discriminator.parameters(), lr=discriminator_learning_rate) ``` ## Training ``` start_time = time.time() discr_costs = [] gener_costs = [] for epoch in range(NUM_EPOCHS): model = model.train() for batch_idx, (features, targets) in enumerate(train_loader): features = (features - 0.5)*2. features = features.view(-1, IMG_SIZE).to(device) targets = targets.to(device) # Regular GAN: # valid = torch.ones(targets.size(0)).float().to(device) # fake = torch.zeros(targets.size(0)).float().to(device) # WGAN: valid = -(torch.ones(targets.size(0)).float()).to(device) fake = torch.ones(targets.size(0)).float().to(device) ### FORWARD AND BACK PROP # -------------------------- # Train Generator # -------------------------- # Make new images z = torch.zeros((targets.size(0), LATENT_DIM)).uniform_(-1.0, 1.0).to(device) generated_features = model.generator_forward(z) # Loss for fooling the discriminator discr_pred = model.discriminator_forward(generated_features) # Regular GAN: # gener_loss = F.binary_cross_entropy_with_logits(discr_pred, valid) # WGAN: gener_loss = wasserstein_loss(valid, discr_pred) optim_gener.zero_grad() gener_loss.backward() optim_gener.step() # -------------------------- # Train Discriminator # -------------------------- # WGAN: 5 loops for discriminator for _ in range(num_iter_critic): discr_pred_real = model.discriminator_forward(features.view(-1, IMG_SIZE)) # Regular GAN: # real_loss = F.binary_cross_entropy_with_logits(discr_pred_real, valid) # WGAN: real_loss = wasserstein_loss(valid, discr_pred_real) discr_pred_fake = model.discriminator_forward(generated_features.detach()) # Regular GAN: # fake_loss = 
F.binary_cross_entropy_with_logits(discr_pred_fake, fake) # WGAN: fake_loss = wasserstein_loss(fake, discr_pred_fake) # Regular GAN: discr_loss = (real_loss + fake_loss) # WGAN: #discr_loss = -(real_loss - fake_loss) optim_discr.zero_grad() discr_loss.backward() optim_discr.step() # WGAN: for p in model.discriminator.parameters(): p.data.clamp_(-weight_clip_value, weight_clip_value) discr_costs.append(discr_loss.item()) gener_costs.append(gener_loss.item()) ### LOGGING if not batch_idx % 100: print ('Epoch: %03d/%03d | Batch %03d/%03d | Gen/Dis Loss: %.4f/%.4f' %(epoch+1, NUM_EPOCHS, batch_idx, len(train_loader), gener_loss, discr_loss)) print('Time elapsed: %.2f min' % ((time.time() - start_time)/60)) print('Total Training Time: %.2f min' % ((time.time() - start_time)/60)) ``` ## Evaluation ``` %matplotlib inline import matplotlib.pyplot as plt ax1 = plt.subplot(1, 1, 1) ax1.plot(range(len(gener_costs)), gener_costs, label='Generator loss') ax1.plot(range(len(discr_costs)), discr_costs, label='Discriminator loss') ax1.set_xlabel('Iterations') ax1.set_ylabel('Loss') ax1.legend() ################### # Set second x-axis ax2 = ax1.twiny() newlabel = list(range(NUM_EPOCHS+1)) iter_per_epoch = len(train_loader) newpos = [e*iter_per_epoch for e in newlabel] ax2.set_xticklabels(newlabel[::10]) ax2.set_xticks(newpos[::10]) ax2.xaxis.set_ticks_position('bottom') ax2.xaxis.set_label_position('bottom') ax2.spines['bottom'].set_position(('outward', 45)) ax2.set_xlabel('Epochs') ax2.set_xlim(ax1.get_xlim()) ################### plt.show() ########################## ### VISUALIZATION ########################## model.eval() # Make new images z = torch.zeros((5, LATENT_DIM)).uniform_(-1.0, 1.0).to(device) generated_features = model.generator_forward(z) imgs = generated_features.view(-1, 28, 28) fig, axes = plt.subplots(nrows=1, ncols=5, figsize=(20, 2.5)) for i, ax in enumerate(axes): axes[i].imshow(imgs[i].to(torch.device('cpu')).detach(), cmap='binary') ```
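Point 4 of the WGAN recipe — clipping every critic weight into [-c, c] after each update, done above with `p.data.clamp_` — has a direct NumPy analogue; a small sketch with illustrative weight values:

```python
import numpy as np

weight_clip_value = 0.01  # same constant as in the settings above

# Pretend these are one critic layer's weights after a gradient step
weights = np.array([[-0.5, 0.003],
                    [0.02, -0.008]])
clipped = np.clip(weights, -weight_clip_value, weight_clip_value)

# Every entry now lies inside [-c, c] ...
assert clipped.max() <= weight_clip_value
assert clipped.min() >= -weight_clip_value
# ... and values that were already inside the interval are untouched.
assert clipped[0, 1] == weights[0, 1]
assert clipped[1, 1] == weights[1, 1]
```

This hard clipping is what enforces the (crude) Lipschitz constraint in the original WGAN; later variants such as WGAN-GP replace it with a gradient penalty.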