# Homework 6

## Due: Tuesday, October 10 at 11:59 PM

# Problem 1: Bank Account Revisited

We are going to rewrite the bank account closure problem from a few assignments ago, this time developing formal classes for a bank user and a bank account to use in our closure (recall that previously we just had a nonlocal variable `amount` that we changed).

### Some Preliminaries:

First we are going to define two types of bank accounts. Use the code below to do this:

```
from enum import Enum

class AccountType(Enum):
    SAVINGS = 1
    CHECKING = 2
```

An Enum stands for an enumeration; it's a convenient way for you to define a fixed list of related constants. Typing:

```
AccountType.SAVINGS
```

returns a Python representation of an enumeration member. You can compare these account types:

```
AccountType.SAVINGS == AccountType.SAVINGS
AccountType.SAVINGS == AccountType.CHECKING
```

To get a string representation of an Enum, you can use:

```
AccountType.SAVINGS.name
```

### Part 1: Create a BankAccount class with the following specification:

Constructor is `BankAccount(self, owner, accountType)` where `owner` is a string representing the name of the account owner and `accountType` is one of the AccountType enums.

Methods `withdraw(self, amount)` and `deposit(self, amount)` to modify the account balance of the account.

Override method `__str__` to write an informative string of the account owner and the type of account, and `__len__` to return the balance of the account.

```
class BankAccount():
    "Bank account with an owner and an account type"

    def __init__(self, owner, accountType):
        self.owner = owner
        self.balance = 0
        self.accountType = accountType

    def withdraw(self, amount):
        if amount <= 0:
            raise ValueError("Cannot withdraw a non-positive amount")
        if amount > self.balance:
            raise ValueError("Insufficient funds")
        self.balance -= amount
        print("Withdrawal of", amount, "from account", self.accountType.name,
              "- remaining balance:", self.balance)

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("Cannot deposit a non-positive amount")
        self.balance += amount
        print("Deposit of", amount, "and new balance is", self.balance)

    def __str__(self):
        return ("Owner of account is " + self.owner +
                " and account type is " + self.accountType.name)

    def __len__(self):
        # Return the balance of the account
        return self.balance

Bankacc1 = BankAccount("Filip M", AccountType.SAVINGS)
print(Bankacc1)
Bankacc1.deposit(100)
print(len(Bankacc1))
```

### Part 2: Write a class BankUser with the following specification:

Constructor `BankUser(self, owner)` where `owner` is the name of the user.

Method `addAccount(self, accountType)` — to start, a user will have no accounts when the BankUser object is created. `addAccount` will add a new account to the user of the `accountType` specified. **Only one savings/checking account per user; raise an appropriate error otherwise.**

Methods `getBalance(self, accountType)`, `deposit(self, accountType, amount)`, and `withdraw(self, accountType, amount)` for a specific AccountType.

Override `__str__` to give an informative summary of the user's accounts.
```
class BankUser():

    def __init__(self, owner):
        self.owner = owner
        self.mySaccount = None
        self.myCaccount = None

    def addAccount(self, accountType):
        # Check the account type, then whether that account already exists
        if accountType == AccountType.SAVINGS:
            if self.mySaccount is None:
                self.mySaccount = BankAccount(self.owner, accountType)
            else:
                raise ValueError("You already have a savings account")
        elif accountType == AccountType.CHECKING:
            if self.myCaccount is None:
                self.myCaccount = BankAccount(self.owner, accountType)
            else:
                raise ValueError("You already have a checking account")
        else:
            raise ValueError("Account type is not valid")

    def _getAccount(self, accountType):
        # Helper: return the matching account, or raise if it does not exist
        if accountType == AccountType.SAVINGS:
            if self.mySaccount is None:
                raise ValueError("No savings account")
            return self.mySaccount
        elif accountType == AccountType.CHECKING:
            if self.myCaccount is None:
                raise ValueError("No checking account")
            return self.myCaccount
        raise ValueError("Invalid account type")

    def getBalance(self, accountType):
        return self._getAccount(accountType).balance

    def deposit(self, accountType, amount):
        return self._getAccount(accountType).deposit(amount)

    def withdraw(self, accountType, amount):
        return self._getAccount(accountType).withdraw(amount)

user1 = BankUser("Filip")
user1.addAccount(AccountType.CHECKING)
user1.getBalance(AccountType.CHECKING)
user1.deposit(AccountType.CHECKING, 7500)
user1.withdraw(AccountType.CHECKING, 2500)
```

Write some simple tests to make sure this is working. Think of edge scenarios a user might try:

```
def test_withdraw_negative():
    try:
        user1 = BankUser("Filip")
        user1.addAccount(AccountType.CHECKING)
        user1.withdraw(AccountType.CHECKING, -2500)
    except ValueError as err:
        assert type(err) == ValueError
    else:
        raise AssertionError("Expected a ValueError for a negative withdrawal")

test_withdraw_negative()
```

### Part 3: ATM Closure

Finally, we are going to rewrite a closure to use our bank account. We will make use of the [input function](http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/io.html), which takes user input to decide what actions to take.

Write a closure called ATMSession(bankUser) which takes in a BankUser object. Return a method called Interface that, when called, provides the following interface.

The first screen for the user will look like:

**Enter Option:**

**1)Exit**

**2)Create Account**

**3)Check Balance**

**4)Deposit**

**5)Withdraw**

Pressing 1 will exit; any other option will show the options:

**Enter Option:**

**1)Checking**

**2)Savings**

If a deposit or withdrawal was chosen, there must be a third screen:

**Enter Integer Amount, Cannot Be Negative:**

This is to keep the code relatively simple. If you'd like, you can also curate the options depending on the BankUser object (for example, if the user has no accounts, only show the Create Account option), but this is up to you. In any case, you must handle any input from the user in a reasonable way that an actual bank would be okay with, and give the user a proper response to the action specified. Upon finishing a transaction or viewing a balance, it should return to the original screen.

```
def ATMSession(bankUser):
    def interface():
        while True:
```

### Part 4: Put everything in a module Bank.py

We will be grading this problem with a test suite. Put the enum, classes, and closure in a single file named Bank.py.
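Going back to the Part 3 closure, here is one hedged sketch of the interface loop. The prompt strings and dispatch below are illustrative, not the graded specification, and the `input_fn` parameter is our own addition so the closure can be exercised without a live terminal:

```python
from enum import Enum


class AccountType(Enum):  # repeated here so the sketch is self-contained
    SAVINGS = 1
    CHECKING = 2


def ATMSession(bankUser, input_fn=input):
    """Return an interface() closure driving bankUser through text menus."""
    def interface():
        while True:
            option = input_fn("Enter Option:\n1)Exit\n2)Create Account\n"
                              "3)Check Balance\n4)Deposit\n5)Withdraw\n")
            if option == "1":
                return
            acct = input_fn("Enter Option:\n1)Checking\n2)Savings\n")
            accountType = AccountType.CHECKING if acct == "1" else AccountType.SAVINGS
            try:
                if option == "2":
                    bankUser.addAccount(accountType)
                elif option == "3":
                    print(bankUser.getBalance(accountType))
                elif option in ("4", "5"):
                    amount = int(input_fn("Enter Integer Amount, Cannot Be Negative:\n"))
                    if option == "4":
                        bankUser.deposit(accountType, amount)
                    else:
                        bankUser.withdraw(accountType, amount)
            except ValueError as err:
                # Report the problem, then fall back to the main menu
                print(err)
    return interface
```

Passing `input_fn` explicitly is just a testability convenience; with the default, `ATMSession(user1)()` reads from the keyboard as the assignment intends.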
It is very important that the class and method specifications we provided are used (with the same capitalization); otherwise you will receive no credit.

---

## Problem 2: Linear Regression Class

Let's say you want to create Python classes for three related types of linear regression: Ordinary Least Squares Linear Regression, Ridge Regression, and Lasso Regression.

Consider the multivariate linear model:

$$y = X\beta + \epsilon$$

where $y$ is a length $n$ vector, $X$ is an $n \times p$ matrix, and $\beta$ is a length $p$ vector of coefficients.

#### Ordinary Least Squares Linear Regression

[OLS Regression](https://en.wikipedia.org/wiki/Ordinary_least_squares) seeks to minimize the following cost function:

$$\|y - X\beta\|^{2}$$

The best-fit coefficients can be obtained by:

$$\hat{\beta} = (X^T X)^{-1}X^Ty$$

where $X^T$ is the transpose of the matrix $X$ and $(X^T X)^{-1}$ is the inverse of the matrix $X^T X$.

#### Ridge Regression

[Ridge Regression](https://en.wikipedia.org/wiki/Tikhonov_regularization) adds an L2 regularization term to the cost function:

$$\|y - X\beta\|^{2}+\|\Gamma \beta \|^{2}$$

where $\Gamma = \alpha I$ for some constant $\alpha$ and the identity matrix $I$. The best-fit coefficients can be obtained by:

$$\hat{\beta} = (X^T X+\Gamma^T\Gamma)^{-1}X^Ty$$

#### Lasso Regression

[Lasso Regression](https://en.wikipedia.org/wiki/Lasso_%28statistics%29) introduces an L1 regularization term and restricts the total number of predictor variables in the model. The following cost function:

$$\min _{\beta _{0},\beta }\left\{{\frac {1}{n}}\left\|y-\beta _{0}-X\beta \right\|_{2}^{2}\right\}{\text{ subject to }}\|\beta \|_{1}\leq \alpha$$

does not have a nice closed-form solution. For the sake of this exercise, you may use the [sklearn.linear_model.Lasso](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html) class, which uses a coordinate descent algorithm to find the best fit.
You should only use that class in the `fit()` method of this exercise (i.e., do not reuse sklearn for the other methods in your class).

#### $R^2$ score

The $R^2$ score is defined as:

$$R^{2} = 1-{SS_E \over SS_T}$$

where:

$$SS_T=\sum_i (y_i-\bar{y})^2, \quad SS_R=\sum_i (\hat{y_i}-\bar{y})^2, \quad SS_E=\sum_i (y_i - \hat{y_i})^2$$

Here $y_i$ are the original data values, $\hat{y_i}$ are the predicted values, and $\bar{y}$ is the mean of the original data values.

### Part 1: Base Class

Write a class called `Regression` with the following methods:

`fit(X, y)`: Fits the linear model to $X$ and $y$.

`get_params()`: Returns $\hat{\beta}$ for the fitted model. The parameters should be stored in a dictionary.

`predict(X)`: Predicts new values with the fitted model given $X$.

`score(X, y)`: Returns the $R^2$ value of the fitted model.

`set_params()`: Manually sets the parameters of the linear model.

This parent class should throw a `NotImplementedError` for methods that are intended to be implemented by subclasses.
```
import numpy as np
import statsmodels.api as sm


class Regression:

    def __init__(self):
        self.X = None
        self.y = None
        self.betas = None
        self.alpha = 0.1

    def fit(self, X, y):
        raise NotImplementedError("Subclasses should implement this!")

    def get_params(self):
        # Return the fitted coefficients as an {index: value} dictionary
        return {index: value for index, value in enumerate(self.betas)}

    def predict(self, X):
        # Prepend a column of ones so betas[0] acts as the intercept
        return np.dot(self.betas, sm.add_constant(X).T)

    def score(self, X, y):
        ypred = self.predict(X).reshape(-1, 1)
        y = y.reshape(-1, 1)
        y_mean = np.mean(y)
        # R^2 = 1 - SS_E / SS_T
        return 1 - np.sum((y - ypred) ** 2) / np.sum((y - y_mean) ** 2)

    def set_params(self):
        beta0 = float(input("Please input zero coefficient"))
        beta1 = float(input("Please input first coefficient"))
        self.betas = np.array([beta0, beta1])


x_train = np.array([1, 2, 3, 4])
y_train = np.array([2, 2, 4, 5])
test_regr = Regression()  # the constructor takes no data arguments
```

### Part 2: OLS Linear Regression

Write a class called `OLSRegression` that implements the OLS Regression model described above and inherits the `Regression` class.
```
class OLSRegression(Regression):

    def fit(self, X, y):
        self.X = X
        self.y = y
        # Ensure X is a 2-D (n, p) matrix even when a 1-D array is passed
        X = np.asarray(X)
        if X.ndim == 1:
            X = X.reshape(-1, 1)
        # Build the design matrix by prepending a column of ones
        ones_col = np.ones((X.shape[0], 1))
        Xd = np.concatenate((ones_col, X), axis=1)
        # Normal equations: beta = (X^T X)^{-1} X^T y
        LHS = np.dot(Xd.T, Xd)
        RHS = np.dot(Xd.T, y)
        self.betas = np.dot(np.linalg.inv(LHS), RHS)
        return self


x_train = np.array([1, 2, 3, 4])
y_train = np.array([2, 2, 4, 5])

test_regr1 = OLSRegression()
test_regr1.fit(x_train, y_train)
test_regr1.predict(x_train)
test_regr1.score(x_train, y_train)
```

### Part 3: Ridge Regression

Write a class called `RidgeRegression` that implements Ridge Regression and inherits the `OLSRegression` class.
$$\hat{\beta} = (X^T X+\Gamma^T\Gamma)^{-1}X^Ty$$

```
class RidgeRegression(OLSRegression):

    def fit(self, X, y, alpha=None):
        # Allow an explicit alpha, otherwise fall back to self.alpha,
        # so the uniform model.fit(X, y) call in Part 5 also works
        if alpha is not None:
            self.alpha = alpha
        self.X = X
        self.y = y
        X = np.asarray(X)
        if X.ndim == 1:
            X = X.reshape(-1, 1)
        ones_col = np.ones((X.shape[0], 1))
        Xd = np.concatenate((ones_col, X), axis=1)
        LHS = np.dot(Xd.T, Xd)
        # Gamma = alpha * I, so Gamma^T Gamma = alpha^2 * I
        gamma = self.alpha * np.eye(LHS.shape[0])
        gammas = np.dot(gamma.T, gamma)
        RHS = np.dot(Xd.T, y)
        # Regularized normal equations; LHS + Gamma^T Gamma is invertible
        self.betas = np.dot(np.linalg.inv(LHS + gammas), RHS)
        return self


test_ridge = RidgeRegression()
test_ridge.fit(x_train, y_train, 0.5)
test_ridge.predict(x_train)
test_ridge.get_params()
test_ridge.score(x_train, y_train)
```

### Part 4: Lasso Regression

Write a class called `LassoRegression` that implements Lasso Regression and inherits the `OLSRegression` class. You should only use Lasso(), Lasso.fit(), Lasso.coef_, and Lasso.intercept_ from the `sklearn.linear_model.Lasso` class.
```
from sklearn.linear_model import Lasso


class LassoRegression(OLSRegression):

    def fit(self, X, y):
        self.X = X
        self.y = y
        X = np.asarray(X)
        if X.ndim == 1:
            X = X.reshape(-1, 1)
        model = Lasso(alpha=self.alpha, fit_intercept=True)
        fitted_model = model.fit(X, y)
        # Put the intercept in front so betas matches the other classes
        coeffs = np.array(fitted_model.coef_)
        self.betas = np.insert(coeffs, 0, fitted_model.intercept_)
        return self


lasso_test = LassoRegression()
lasso_test.fit(x_train, y_train)
lasso_test.score(x_train, y_train)
```

### Part 5: Model Scoring

You will use the [Boston dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html) for this part. Instantiate each of the three models above. Using a for loop, fit (on the training data) and score (on the testing data) each model on the Boston dataset. Print out the $R^2$ value for each model and the parameters for the best model using the `get_params()` method. Use an $\alpha$ value of 0.1.

**Hint:** You can consider using the `sklearn.model_selection.train_test_split` method to create the training and test datasets.

```
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split

boston = load_boston()
x_train, x_test, y_train, y_test = train_test_split(boston.data, boston.target,
                                                    test_size=0.3)
print(x_train.shape)

modelOLS = OLSRegression()
modelRidge = RidgeRegression()
modelLasso = LassoRegression()

models = [modelOLS, modelRidge, modelLasso]
for model in models:
    model.fit(x_train, y_train)
    print(type(model).__name__, model.score(x_test, y_test))
```

### Part 6: Visualize Model Performance

We can evaluate how the models perform for various values of $\alpha$. Calculate the $R^2$ scores for each model for $\alpha \in [0.05, 1]$ and plot the three lines on the same graph. To change the parameters, use the `set_params()` method.
Be sure to label each line and add axis labels.
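The sweep itself is just a loop over alpha values, re-fitting and re-scoring each time. As a hedged, self-contained sketch, the following re-derives the ridge coefficients in closed form on synthetic data rather than importing the classes above (the helper names `ridge_betas` and `r2_score` are ours); with your classes you would instead set `model.alpha`, call `model.fit`, and collect `model.score` per alpha:

```python
import numpy as np

def ridge_betas(X, y, alpha):
    """Closed-form ridge fit with Gamma = alpha * I (intercept column included)."""
    Xd = np.concatenate((np.ones((X.shape[0], 1)), X), axis=1)
    penalty = (alpha ** 2) * np.eye(Xd.shape[1])  # Gamma^T Gamma = alpha^2 I
    return np.linalg.inv(Xd.T @ Xd + penalty) @ (Xd.T @ y)

def r2_score(X, y, betas):
    """R^2 = 1 - SS_E / SS_T for the given coefficients."""
    ypred = np.concatenate((np.ones((X.shape[0], 1)), X), axis=1) @ betas
    return 1 - np.sum((y - ypred) ** 2) / np.sum((y - y.mean()) ** 2)

# Synthetic, exactly linear data so the effect of alpha is easy to see
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 3.0

alphas = np.linspace(0.05, 1, 20)
scores = [r2_score(X, y, ridge_betas(X, y, a)) for a in alphas]

# With the classes above, plot one curve per model, e.g.:
#   plt.plot(alphas, scores, label="Ridge")
#   plt.xlabel("alpha"); plt.ylabel("$R^2$"); plt.legend()
```

On training data the ridge $R^2$ can only decrease as $\alpha$ grows, since the penalty pulls the coefficients away from the least-squares optimum.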
```
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
```

# Using a pre-trained model

[`torchvision.models`](https://pytorch.org/vision/stable/models.html) offers a collection of famous models from the deep learning literature.

By default a model is loaded with random weights. If we pass `pretrained=True`, a trained model is downloaded instead.

Models are available for classification, localization, and segmentation.

## Model for classifying images

torchvision has a vast collection of classification models, including several versions of VGG, ResNet, AlexNet, GoogLeNet, and DenseNet, among others.

We will load a [resnet18](https://arxiv.org/pdf/1512.03385.pdf) model [pre-trained](https://pytorch.org/docs/stable/torchvision/models.html#torchvision.models.resnet18) on [ImageNet](http://image-net.org/):

```
from torchvision import models

model = models.resnet18(pretrained=True, progress=True)
model.eval()
```

The pre-trained models expect images with

- three channels (RGB)
- at least 224x224 pixels
- pixel values between 0 and 1 (float)
- colors normalized with

    normalize = torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

```
from PIL import Image
import torch
from torchvision import transforms

img = Image.open("img/dog.jpg")
my_transform = transforms.Compose([transforms.Resize(256),
                                   transforms.CenterCrop(224),
                                   transforms.ToTensor(),
                                   transforms.Normalize(mean=(0.485, 0.456, 0.406),
                                                        std=(0.229, 0.224, 0.225))])

# The classes with the highest probability are
probs = torch.nn.Softmax(dim=1)(model.forward(my_transform(img).unsqueeze(0)))
best = probs.argsort(descending=True)
display(best[0, :10], probs[0, best[0, :10]])
```

What do these classes correspond to?
ImageNet classes: https://gist.github.com/ageitgey/4e1342c10a71981d0b491e1b8227328b

## Model for detecting entities in images

In addition to the classification models, torchvision also has models to

- Detect entities in an image: Faster RCNN
- Perform instance segmentation: Mask RCNN
- Perform semantic segmentation: FCN, DeepLab
- Classify video

Next we will try [Faster RCNN](https://arxiv.org/abs/1506.01497) for detection. This model was pre-trained on the [COCO](https://cocodataset.org/) dataset.

The model returns a dictionary with

- 'boxes': the bounding boxes of the entities
- 'labels': the most probable class label of each entity
- 'scores': the probability of each label

```
model = models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

transform = transforms.ToTensor()
img = Image.open("img/pelea.jpg")
# No color normalization is required
img_tensor = transform(img)
result = model(img_tensor.unsqueeze(0))[0]

def filter_results(result, threshold=0.9):
    mask = result['scores'] > threshold
    bbox = result['boxes'][mask].detach().cpu().numpy()
    lbls = result['labels'][mask].detach().cpu().numpy()
    return bbox, lbls

from PIL import ImageFont, ImageDraw
#fnt = ImageFont.truetype("arial.ttf", 20)

label2name = {1: 'person', 2: 'bicycle', 3: 'car', 4: 'motorcycle',
              8: 'truck', 18: 'dog'}

def draw_rectangles(img, bbox, lbls):
    draw = ImageDraw.Draw(img)
    for k in range(len(bbox)):
        if lbls[k] in label2name.keys():
            draw.rectangle(bbox[k], fill=None, outline='white', width=2)
            draw.text([int(d) for d in bbox[k][:2]], label2name[lbls[k]], fill='white')

bbox, lbls = filter_results(result)
img = Image.open("img/pelea.jpg")
draw_rectangles(img, bbox, lbls)
display(img)
```

# Transfer Learning

Next we will use transfer learning to train an image classifier on a fragment of the Food 5K dataset. The goal is to classify whether or not an image contains food.

We store the images with the following folder structure:

```
!ls img/food5k/
!ls img/food5k/train
!ls img/food5k/valid
```

With this we can use `torchvision.datasets.ImageFolder` to create the datasets very easily.

Since we will use a pre-trained model, we must deliver the images at size 224x224 and with normalized color. We will also use data augmentation on the training set.

```
from torchvision import datasets

train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       transforms.Normalize([0.485, 0.456, 0.406],
                                                            [0.229, 0.224, 0.225])])

valid_transforms = transforms.Compose([transforms.Resize(255),
                                       transforms.CenterCrop(224),
                                       transforms.ToTensor(),
                                       transforms.Normalize([0.485, 0.456, 0.406],
                                                            [0.229, 0.224, 0.225])])

train_dataset = datasets.ImageFolder('img/food5k/train', transform=train_transforms)
valid_dataset = datasets.ImageFolder('img/food5k/valid', transform=valid_transforms)

train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=32, shuffle=True)
valid_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=256, shuffle=False)

for image, label in train_loader:
    break

fig, ax = plt.subplots(1, 6, figsize=(9, 2), tight_layout=True)
for i in range(6):
    ax[i].imshow(image[i].permute(1,2,0).numpy())
    ax[i].axis('off')
    ax[i].set_title(label[i].numpy())
```

We will use the ResNet18 model:

```
model = models.resnet18(pretrained=True, progress=True)
# model = models.squeezenet1_1(pretrained=True, progress=True)
display(model)
```

In this case we will retrain only the last layer, `fc`, and freeze all the others. To freeze a layer we simply set `requires_grad=False` on its parameters; when we call `backward`, no gradient will be computed for these layers.

```
# Freeze all the parameters
for param in model.parameters():
    param.requires_grad = False

# Replace the output layer with a new one
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # For ResNet
#model.classifier = torch.nn.Sequential(torch.nn.Dropout(p=0.5, inplace=False),
#                                       torch.nn.Conv2d(512, 2, kernel_size=(1, 1), stride=(1, 1)),
#                                       torch.nn.ReLU(inplace=True),
#                                       torch.nn.AdaptiveAvgPool2d(output_size=(1, 1)))  # For SqueezeNet

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    for x, y in train_loader:
        optimizer.zero_grad()
        yhat = model.forward(x)
        loss = criterion(yhat, y)
        loss.backward()
        optimizer.step()
    epoch_loss = 0.0
    for x, y in valid_loader:
        yhat = model.forward(x)
        loss = criterion(yhat, y)
        epoch_loss += loss.item()
    # Note: the accuracy term only reflects the last validation batch
    print(f"{epoch}, {epoch_loss:0.4f}, {torch.sum(yhat.argmax(dim=1) == y).item()/100}")

targets, predictions = [], []
for mbdata, label in valid_loader:
    logits = model.forward(mbdata)
    predictions.append(logits.argmax(dim=1).detach().numpy())
    targets.append(label.numpy())
predictions = np.concatenate(predictions)
targets = np.concatenate(targets)

from sklearn.metrics import confusion_matrix, classification_report
cm = confusion_matrix(targets, predictions)
display(cm)
print(classification_report(targets, predictions))
```

How does the above compare to training a convolutional architecture from scratch? As an example, the LeNet5 architecture is adapted below to accept 224x224 color images. How much performance do we obtain training for the same number of epochs?
```
import torch.nn as nn

class Lenet5(nn.Module):

    def __init__(self):
        super(type(self), self).__init__()
        # The input consists of 3x224x224 images
        self.features = nn.Sequential(nn.Conv2d(3, 6, 5), nn.ReLU(), nn.MaxPool2d(3),
                                      nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(3),
                                      nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(3))
        self.classifier = nn.Sequential(nn.Linear(32*6*6, 120), nn.ReLU(),
                                        nn.Linear(120, 84), nn.ReLU(),
                                        nn.Linear(84, 2))

    def forward(self, x):
        z = self.features(x)  # This has shape Mx32x6x6
        z = z.view(-1, 32*6*6)  # This has shape Mx1152
        return self.classifier(z)

model = Lenet5()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    for x, y in train_loader:
        optimizer.zero_grad()
        yhat = model.forward(x)
        loss = criterion(yhat, y)
        loss.backward()
        optimizer.step()
    epoch_loss = 0.0
    for x, y in valid_loader:
        yhat = model.forward(x)
        loss = criterion(yhat, y)
        epoch_loss += loss.item()
    # Note: the accuracy term only reflects the last validation batch
    print(f"{epoch}, {epoch_loss:0.4f}, {torch.sum(yhat.argmax(dim=1) == y).item()/100}")

targets, predictions = [], []
for mbdata, label in valid_loader:
    logits = model.forward(mbdata)
    predictions.append(logits.argmax(dim=1).detach().numpy())
    targets.append(label.numpy())
predictions = np.concatenate(predictions)
targets = np.concatenate(targets)

from sklearn.metrics import confusion_matrix, classification_report
cm = confusion_matrix(targets, predictions)
display(cm)
print(classification_report(targets, predictions))
```

# Summary

Aspects to consider when training neural networks:

- Architectures: number and arrangement of layers, activation functions
- Cost functions, optimizers, and their parameters (learning rate, momentum)
- Checking convergence and overfitting:
    - Checkpointing: save the latest model and the one with the lowest validation cost
    - Early stopping: stop training if the validation error does not decrease for a certain number of epochs
- Initialization of the parameters: try several training runs from different random initializations
- If the model overfits early:
    - Reduce its complexity
    - Add regularization: data augmentation, weight decay, Dropout
- If you want to take advantage of a pre-trained model:
    - Transfer learning
    - [Model zoo](https://modelzoo.co/)
    - [Papers with code](https://paperswithcode.com/)

Agile strategy:

> Develop fast and iterate: start simple. Propose a solution, implement it, train, and evaluate. Analyze the failures, modify, and try again.

Best of luck with your future developments!
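The checkpointing and early-stopping ideas from the summary can be sketched framework-agnostically. The class and parameter names below (`EarlyStopper`, `patience`, `best_state`) are ours, not from any library; in PyTorch the `model_state` argument would typically be `model.state_dict()`:

```python
import copy

class EarlyStopper:
    """Track validation loss; keep the best model state and signal when to stop."""

    def __init__(self, patience=3):
        self.patience = patience       # epochs to tolerate without improvement
        self.best_loss = float("inf")
        self.best_state = None         # checkpoint of the best model so far
        self.bad_epochs = 0

    def step(self, valid_loss, model_state):
        if valid_loss < self.best_loss:
            self.best_loss = valid_loss
            self.best_state = copy.deepcopy(model_state)  # checkpoint
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True -> stop training

# Toy usage with a fake validation-loss curve that bottoms out and then rises
stopper = EarlyStopper(patience=2)
for epoch, loss in enumerate([1.0, 0.8, 0.7, 0.75, 0.9, 0.95]):
    if stopper.step(loss, {"epoch": epoch}):
        break  # stops after two epochs without improvement
```

After the loop, `stopper.best_state` holds the checkpoint from the best epoch, which is the model you would restore for evaluation.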
<a href="https://colab.research.google.com/github/saketkc/pyFLGLM/blob/master/Chapters/01_Chapter01.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

## Chapter 1 - Introduction to Linear and Generalized Linear Models

```
import warnings

import pandas as pd
import proplot as plot
import seaborn as sns
import statsmodels.api as sm
import statsmodels.formula.api as smf
from patsy import dmatrices
from scipy import stats

warnings.filterwarnings("ignore")
%pylab inline
plt.rcParams["axes.labelweight"] = "bold"
plt.rcParams["font.weight"] = "bold"

crabs_df = pd.read_csv("../data/Crabs.tsv.gz", sep="\t")
crabs_df.head()
```

This data comes from a study of female horseshoe crabs (citation unknown). During the spawning season, the females migrate to the shore to breed. The males then attach themselves to the females' posterior spine while the females burrow into the sand and lay clusters of eggs. The fertilization of the eggs happens externally in the sand beneath the pair. During this spawning, multiple males may cluster around the pair and may also fertilize the eggs. These males are called satellites.

**crab**: observation index

**y**: number of satellites attached

**weight**: weight of the female crab

**color**: color of the female

**spine**: condition of the female's spine

```
print((crabs_df["y"].mean(), crabs_df["y"].var()))
sns.distplot(crabs_df["y"], kde=False, color="slateblue")
pd.crosstab(index=crabs_df["y"], columns="count")

formula = """y ~ 1"""
response, predictors = dmatrices(formula, crabs_df, return_type="dataframe")
fit_pois = sm.GLM(
    response, predictors, family=sm.families.Poisson(link=sm.families.links.identity())
).fit()
print(fit_pois.summary())
```

Fitting a Poisson distribution with a GLM containing only an intercept and using the identity link function gives an estimate of the intercept which is essentially the mean of `y`. But a Poisson distribution has variance equal to its mean.
The sample variance of 9.92 suggests that a Poisson fit is not appropriate here.

### Linear Model Using Weight to Predict Satellite Counts

```
print((crabs_df["weight"].mean(), crabs_df["weight"].var()))
print(crabs_df["weight"].quantile(q=[0, 0.25, 0.5, 0.75, 1]))

fig, ax = plt.subplots(figsize=(5, 5))
ax.scatter(crabs_df["weight"], crabs_df["y"])
ax.set_xlabel("weight")
ax.set_ylabel("y")
```

The plot shows that there is no clear trend in the relation between y (number of satellites) and weight.

### Fit an LM vs. a GLM (Gaussian)

```
formula = """y ~ weight"""
fit_weight = smf.ols(formula=formula, data=crabs_df).fit()
print(fit_weight.summary())

response, predictors = dmatrices(formula, crabs_df, return_type="dataframe")
fit_weight2 = sm.GLM(response, predictors, family=sm.families.Gaussian()).fit()
print(fit_weight2.summary())
```

Thus OLS and a GLM using a Gaussian family and identity link are one and the same.

### Plotting the linear fit

```
fig, ax = plt.subplots()
ax.scatter(crabs_df["weight"], crabs_df["y"])
line = fit_weight2.params[0] + fit_weight2.params[1] * crabs_df["weight"]
ax.plot(crabs_df["weight"], line, color="#f03b20")
```

### Comparing Mean Numbers of Satellites by Crab Color

```
crabs_df["color"].value_counts()
```

color: 1 = medium light, 2 = medium, 3 = medium dark, 4 = dark

```
crabs_df.groupby("color").agg(["mean", "var"])[["y"]]
```

The majority of the crabs are of medium color, and the mean response decreases as the color gets darker.
If we fit a linear model between $y$ and $color$ using `smf.ols`, color is treated as a quantitative variable:

```
mod = smf.ols(formula="y ~ color", data=crabs_df)
res = mod.fit()
print(res.summary())
```

**Let's treat color as a qualitative variable:**

```
mod = smf.ols(formula="y ~ C(color)", data=crabs_df)
res = mod.fit()
print(res.summary())
```

This is equivalent to doing a GLM fit with a Gaussian family and identity link:

```
formula = """y ~ C(color)"""
response, predictors = dmatrices(formula, crabs_df, return_type="dataframe")
fit_color = sm.GLM(response, predictors, family=sm.families.Gaussian()).fit()
print(fit_color.summary())
```

If we instead do a Poisson fit:

```
formula = """y ~ C(color)"""
response, predictors = dmatrices(formula, crabs_df, return_type="dataframe")
fit_color2 = sm.GLM(
    response, predictors, family=sm.families.Poisson(link=sm.families.links.identity())
).fit()
print(fit_color2.summary())
```

And we get the same estimates as when using the Gaussian family with the identity link! This is because the ML estimate for the Poisson distribution is also the sample mean when there is a single predictor. But the standard errors are much smaller, because the errors here are heteroskedastic while the Gaussian version assumes homoskedasticity.

### Using both qualitative and quantitative variables

```
formula = """y ~ weight + C(color)"""
response, predictors = dmatrices(formula, crabs_df, return_type="dataframe")
fit_weight_color = sm.GLM(response, predictors, family=sm.families.Gaussian()).fit()
print(fit_weight_color.summary())

formula = """y ~ weight + C(color)"""
response, predictors = dmatrices(formula, crabs_df, return_type="dataframe")
fit_weight_color2 = sm.GLM(
    response, predictors, family=sm.families.Poisson(link=sm.families.links.identity())
).fit()
print(fit_weight_color2.summary())
```
# Tutorial about loading localization data from file

```
from pathlib import Path

import locan as lc

lc.show_versions(system=False, dependencies=False, verbose=False)
```

Localization data is typically provided as a text or binary file, with different formats depending on the fitting software. Locan provides functions for loading various localization files. All available functions can be looked up in the [API documentation](https://locan.readthedocs.io/en/latest/source/generated/locan.locan_io.locdata.html#module-locan.locan_io.locdata).

In locan there are functions available to deal with file types according to the constant enum `FileType`:

```
list(lc.FileType._member_names_)
```

Currently the following io functions are available:

```
[name for name in dir(lc.locan_io) if not name.startswith("__")]
```

Throughout this manual it might be helpful to use pathlib to provide path information. In all cases a string path is also usable.

## Load rapidSTORM data file

Here we identify some data in the test_data directory and provide a path using pathlib (a pathlib object is returned by `lc.ROOT_DIR`):

```
path = lc.ROOT_DIR / 'tests/test_data/rapidSTORM_dstorm_data.txt'
print(path, '\n')
```

The data is then loaded from a rapidSTORM localization file. The file header is read to provide correct property names. The number of localizations to be read can be limited by *nrows*:

```
dat = lc.load_rapidSTORM_file(path=path, nrows=1000)
```

Print information about the data:

```
print('Data head:')
print(dat.data.head(), '\n')
print('Summary:')
dat.print_summary()
print('Properties:')
print(dat.properties)
```

Column names are exchanged with standard locan property names according to the following mapping. If no mapping is defined, a warning is issued and the original column name is kept.

```
lc.RAPIDSTORM_KEYS
```

## Load Zeiss Elyra data file

The Elyra super-resolution microscopy system from Zeiss uses a slightly different file format.
Elyra column names are exchanged with locan property names upon loading the data.

```
path_Elyra = lc.ROOT_DIR / 'tests/test_data/Elyra_dstorm_data.txt'
print(path_Elyra, '\n')

dat_Elyra = lc.load_Elyra_file(path=path_Elyra, nrows=1000)

print('Data head:')
print(dat_Elyra.data.head(), '\n')
print('Summary:')
dat_Elyra.print_summary()
print('Properties:')
print(dat_Elyra.properties)
```

## Localization data from a custom text file

Other custom text files can be read with a function that wraps the pandas.read_table() method.

```
path_csv = lc.ROOT_DIR / 'tests/test_data/five_blobs.txt'
print(path_csv, '\n')
```

Here data is loaded from a comma-separated-value file. Column names are read from the first line, and a warning is given if the naming does not comply with locan conventions. Column names can also be provided as *columns*. The separator, e.g. a tab '\t', can be provided as *sep*.

```
dat_csv = lc.load_txt_file(path=path_csv, sep=',', columns=None, nrows=100)

print('Data head:')
print(dat_csv.data.head(), '\n')
print('Summary:')
dat_csv.print_summary()
print('Properties:')
print(dat_csv.properties)
```

## Load localization data file

A general function for loading localization data is provided. Targeting specific localization file formats is done through the `file_type` parameter.

```
path = lc.ROOT_DIR / 'tests/test_data/rapidSTORM_dstorm_data.txt'
print(path, '\n')

dat = lc.load_locdata(path=path, file_type=lc.FileType.RAPIDSTORM, nrows=1000)

dat.print_summary()
```

The file type can be specified by using the enum class `FileType`; use tab completion to make a choice.

```
lc.FileType.__members__
lc.FileType.RAPIDSTORM
```
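The `FileType` pattern above can be mimicked with Python's standard `enum` module: an enum member selects the appropriate loader. A minimal sketch of that dispatch idea — the names below are illustrative, not locan's actual internals:

```python
from enum import Enum
from pathlib import Path

class FileType(Enum):
    """Illustrative file-type enum, mirroring the style of lc.FileType."""
    CUSTOM = 1
    RAPIDSTORM = 2
    ELYRA = 3

def pick_loader_name(file_type: FileType) -> str:
    # Dispatch on the enum member; a real implementation would return
    # the loader function itself rather than its name.
    loaders = {
        FileType.RAPIDSTORM: "load_rapidSTORM_file",
        FileType.ELYRA: "load_Elyra_file",
        FileType.CUSTOM: "load_txt_file",
    }
    return loaders[file_type]

# pathlib composes paths with '/' and works wherever a string path does
path = Path("tests/test_data") / "rapidSTORM_dstorm_data.txt"
print(path.suffix)                            # .txt
print(pick_loader_name(FileType.RAPIDSTORM))  # load_rapidSTORM_file
print(list(FileType.__members__))             # ['CUSTOM', 'RAPIDSTORM', 'ELYRA']
```

Using an enum instead of bare strings gives tab completion on the members and a `KeyError`/`AttributeError` at lookup time instead of a silent typo.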
``` # Copyright 2021 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Vertex Pipelines: AutoML tabular regression pipelines using google-cloud-pipeline-components <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/notebooks/official/pipelines/google_cloud_pipeline_components_automl_tabular.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/notebooks/official/pipelines/google_cloud_pipeline_components_automl_tabular.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> <td> <a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/notebooks/official/pipelines/google_cloud_pipeline_components_automl_tabular.ipynb"> Open in Google Cloud Notebooks </a> </td> </table> <br/><br/><br/> ## Overview This notebook shows how to use the components defined in [`google_cloud_pipeline_components`](https://github.com/kubeflow/pipelines/tree/master/components/google-cloud) to build an AutoML tabular regression workflow on [Vertex Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines). 
### Dataset

The dataset used for this tutorial is the [California Housing dataset from the 1990 Census](https://developers.google.com/machine-learning/crash-course/california-housing-data-description). The objective is to predict the median house price.

### Objective

In this tutorial, you create an AutoML tabular regression model using a pipeline with components from `google_cloud_pipeline_components`.

The steps performed include:

- Create a `Dataset` resource.
- Train an AutoML `Model` resource.
- Create an `Endpoint` resource.
- Deploy the `Model` resource to the `Endpoint` resource.

The components are [documented here](https://google-cloud-pipeline-components.readthedocs.io/en/latest/google_cloud_pipeline_components.aiplatform.html#module-google_cloud_pipeline_components.aiplatform).

### Costs

This tutorial uses billable components of Google Cloud:

* Vertex AI
* Cloud Storage

Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage.

### Set up your local development environment

If you are using Colab or Google Cloud Notebook, your environment already meets all the requirements to run this notebook. You can skip this step.

Otherwise, make sure your environment meets this notebook's requirements. You need the following:

- The Cloud Storage SDK
- Git
- Python 3
- virtualenv
- Jupyter notebook running in a virtual environment with Python 3

The Cloud Storage guide to [Setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:

1. [Install and initialize the SDK](https://cloud.google.com/sdk/docs/).
2.
[Install Python 3](https://cloud.google.com/python/setup#installing_python).
3. [Install virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv) and create a virtual environment that uses Python 3.
4. Activate that environment and run `pip3 install Jupyter` in a terminal shell to install Jupyter.
5. Run `jupyter notebook` on the command line in a terminal shell to launch Jupyter.
6. Open this notebook in the Jupyter Notebook Dashboard.

## Installation

Install the latest version of Vertex SDK for Python.

```
import os

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""

! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
```

Install the latest GA version of the *google-cloud-storage* library as well.

```
! pip3 install -U google-cloud-storage $USER_FLAG
```

Install the latest GA version of the *google-cloud-pipeline-components* library as well.

```
! pip3 install $USER_FLAG kfp google-cloud-pipeline-components --upgrade
```

### Restart the kernel

Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.

```
import os

if not os.getenv("IS_TESTING"):
    # Automatically restart kernel after installs
    import IPython

    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
```

Check the versions of the packages you installed. The KFP SDK version should be >=1.6.

```
! python3 -c "import kfp; print('KFP SDK version: {}'.format(kfp.__version__))"
! python3 -c "import google_cloud_pipeline_components; print('google_cloud_pipeline_components version: {}'.format(google_cloud_pipeline_components.__version__))"
```

## Before you begin

### GPU runtime

This tutorial does not require a GPU runtime.

### Set up your Google Cloud project

**The following steps are required, regardless of your notebook environment.**

1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager).
When you first create an account, you get a $300 free credit towards your compute/storage costs. 2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project) 3. [Enable the Vertex AI APIs, Compute Engine APIs, and Cloud Storage.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component,storage-component.googleapis.com) 4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook. 5. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. **Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$`. ``` PROJECT_ID = "[your-project-id]" # @param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) ! gcloud config set project $PROJECT_ID ``` #### Region You can also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you. - Americas: `us-central1` - Europe: `europe-west4` - Asia Pacific: `asia-east1` You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services. Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations) ``` REGION = "us-central1" # @param {type: "string"} ``` #### Timestamp If you are in a live tutorial session, you might be using a shared test account or project. 
To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.

```
from datetime import datetime

TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```

### Authenticate your Google Cloud account

**If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step.

**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via OAuth.

**Otherwise**, follow these steps:

1. In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.
2. Click **Create service account**.
3. In the **Service account name** field, enter a name, and click **Create**.
4. In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.
5. Click **Create**. A JSON file that contains your key downloads to your local environment.
6. Enter the path to your service account key as the `GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.

``` # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. import os import sys # If on Google Cloud Notebook, then don't execute this code if not os.path.exists("/opt/deeplearning/metadata/env_version"): if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account.
elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS ''
```

### Create a Cloud Storage bucket

**The following steps are required, regardless of your notebook environment.**

When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.

Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.

```
BUCKET_NAME = "gs://[your-bucket-name]"  # @param {type:"string"}

if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
```

**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.

```
! gsutil mb -l $REGION $BUCKET_NAME
```

Finally, validate access to your Cloud Storage bucket by examining its contents:

```
! gsutil ls -al $BUCKET_NAME
```

#### Service Account

**If you don't know your service account**, try to get your service account using the `gcloud` command by executing the cell below.

```
SERVICE_ACCOUNT = "[your-service-account]"  # @param {type:"string"}

if (
    SERVICE_ACCOUNT == ""
    or SERVICE_ACCOUNT is None
    or SERVICE_ACCOUNT == "[your-service-account]"
):
    # Get your service account from gcloud
    shell_output = !gcloud auth list 2>/dev/null
    SERVICE_ACCOUNT = shell_output[2].strip()

print("Service Account:", SERVICE_ACCOUNT)
```

#### Set service account access for Vertex Pipelines

Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.

``` ! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_NAME !
gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_NAME
```

### Set up variables

Next, set up some variables used throughout the tutorial.

### Import libraries and define constants

```
import google.cloud.aiplatform as aip
```

#### Vertex AI constants

Set up the following constants for Vertex AI:

- `API_ENDPOINT`: The Vertex AI API service endpoint for `Dataset`, `Model`, `Job`, `Pipeline` and `Endpoint` services.

```
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
```

#### Vertex Pipelines constants

Set up the following constants for Vertex Pipelines:

```
PIPELINE_ROOT = "{}/pipeline_root/cal_housing".format(BUCKET_NAME)
```

Additional imports.

```
import kfp
from google_cloud_pipeline_components import aiplatform as gcc_aip
```

## Initialize Vertex SDK for Python

Initialize the Vertex SDK for Python for your project and corresponding bucket.

```
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
```

## Define AutoML tabular regression model pipeline that uses components from `google_cloud_pipeline_components`

Next, you define the pipeline. Create and deploy an AutoML tabular regression `Model` resource using a `Dataset` resource.

``` TRAIN_FILE_NAME = "california_housing_train.csv" !
gsutil cp gs://aju-dev-demos-codelabs/sample_data/california_housing_train.csv {PIPELINE_ROOT}/data/ gcs_csv_path = f"{PIPELINE_ROOT}/data/{TRAIN_FILE_NAME}" @kfp.dsl.pipeline(name="automl-tab-training-v2") def pipeline(project: str = PROJECT_ID): dataset_create_op = gcc_aip.TabularDatasetCreateOp( project=project, display_name="housing", gcs_source=gcs_csv_path ) training_op = gcc_aip.AutoMLTabularTrainingJobRunOp( project=project, display_name="train-automl-cal_housing", optimization_prediction_type="regression", optimization_objective="minimize-rmse", column_transformations=[ {"numeric": {"column_name": "longitude"}}, {"numeric": {"column_name": "latitude"}}, {"numeric": {"column_name": "housing_median_age"}}, {"numeric": {"column_name": "total_rooms"}}, {"numeric": {"column_name": "total_bedrooms"}}, {"numeric": {"column_name": "population"}}, {"numeric": {"column_name": "households"}}, {"numeric": {"column_name": "median_income"}}, {"numeric": {"column_name": "median_house_value"}}, ], dataset=dataset_create_op.outputs["dataset"], target_column="median_house_value", ) deploy_op = gcc_aip.ModelDeployOp( # noqa: F841 model=training_op.outputs["model"], project=project, machine_type="n1-standard-4", ) ``` ## Compile the pipeline Next, compile the pipeline. ``` from kfp.v2 import compiler # noqa: F811 compiler.Compiler().compile( pipeline_func=pipeline, package_path="tabular regression_pipeline.json".replace(" ", "_"), ) ``` ## Run the pipeline Next, run the pipeline. ``` DISPLAY_NAME = "cal_housing_" + TIMESTAMP job = aip.PipelineJob( display_name=DISPLAY_NAME, template_path="tabular regression_pipeline.json".replace(" ", "_"), pipeline_root=PIPELINE_ROOT, ) job.run() ``` Click on the generated link to see your run in the Cloud Console. 
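The `DISPLAY_NAME` above reuses the `TIMESTAMP` suffix introduced earlier so repeated runs create distinct, sortable resource names. A minimal sketch of that naming pattern (illustrative helper, not part of the Vertex SDK):

```python
from datetime import datetime

def unique_name(prefix: str, when: datetime) -> str:
    """Append a sortable timestamp suffix so repeated runs don't collide."""
    return prefix + when.strftime("%Y%m%d%H%M%S")

# A fixed datetime keeps the example deterministic; in the notebook,
# datetime.now() is used instead.
stamp = datetime(2021, 7, 1, 12, 30, 5)
name = unique_name("cal_housing_", stamp)
print(name)  # cal_housing_20210701123005
```

Because the suffix is second-resolution and zero-padded, lexicographic order of the names matches creation order, which makes `order_by="create_time"` listings easy to eyeball.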
<!-- It should look something like this as it is running: <a href="https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png" target="_blank"><img src="https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png" width="40%"/></a> --> In the UI, many of the pipeline DAG nodes will expand or collapse when you click on them. Here is a partially-expanded view of the DAG (click image to see larger version). <a href="https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png" target="_blank"><img src="https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png" width="40%"/></a> # Cleaning up To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial -- *Note:* this is auto-generated and not all resources may be applicable for this tutorial: - Dataset - Pipeline - Model - Endpoint - Batch Job - Custom Job - Hyperparameter Tuning Job - Cloud Storage Bucket ``` delete_dataset = True delete_pipeline = True delete_model = True delete_endpoint = True delete_batchjob = True delete_customjob = True delete_hptjob = True delete_bucket = True try: if delete_model and "DISPLAY_NAME" in globals(): models = aip.Model.list( filter=f"display_name={DISPLAY_NAME}", order_by="create_time" ) model = models[0] aip.Model.delete(model) print("Deleted model:", model) except Exception as e: print(e) try: if delete_endpoint and "DISPLAY_NAME" in globals(): endpoints = aip.Endpoint.list( filter=f"display_name={DISPLAY_NAME}_endpoint", order_by="create_time" ) endpoint = endpoints[0] endpoint.undeploy_all() aip.Endpoint.delete(endpoint.resource_name) print("Deleted endpoint:", endpoint) except Exception as e: print(e) if delete_dataset and "DISPLAY_NAME" in globals(): if "tabular" == 
"tabular": try: datasets = aip.TabularDataset.list( filter=f"display_name={DISPLAY_NAME}", order_by="create_time" ) dataset = datasets[0] aip.TabularDataset.delete(dataset.resource_name) print("Deleted dataset:", dataset) except Exception as e: print(e) if "tabular" == "image": try: datasets = aip.ImageDataset.list( filter=f"display_name={DISPLAY_NAME}", order_by="create_time" ) dataset = datasets[0] aip.ImageDataset.delete(dataset.resource_name) print("Deleted dataset:", dataset) except Exception as e: print(e) if "tabular" == "text": try: datasets = aip.TextDataset.list( filter=f"display_name={DISPLAY_NAME}", order_by="create_time" ) dataset = datasets[0] aip.TextDataset.delete(dataset.resource_name) print("Deleted dataset:", dataset) except Exception as e: print(e) if "tabular" == "video": try: datasets = aip.VideoDataset.list( filter=f"display_name={DISPLAY_NAME}", order_by="create_time" ) dataset = datasets[0] aip.VideoDataset.delete(dataset.resource_name) print("Deleted dataset:", dataset) except Exception as e: print(e) try: if delete_pipeline and "DISPLAY_NAME" in globals(): pipelines = aip.PipelineJob.list( filter=f"display_name={DISPLAY_NAME}", order_by="create_time" ) pipeline = pipelines[0] aip.PipelineJob.delete(pipeline.resource_name) print("Deleted pipeline:", pipeline) except Exception as e: print(e) if delete_bucket and "BUCKET_NAME" in globals(): ! gsutil rm -r $BUCKET_NAME ```
# Pull census data for the neighborhoods in Seattle

Use this link to find tables: https://api.census.gov/data/2018/acs/acs5/variables.html

```
import pandas as pd
import censusdata
import csv
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import scipy
from scipy import stats

sample = censusdata.search('acs5', 2018, 'concept', 'household income')
print(sample[0])

sample = censusdata.search('acs5', 2018, 'concept', 'population')
print(sample[:5])

states = censusdata.geographies(censusdata.censusgeo([('state', '*')]), 'acs5', 2018)
print(states['Washington'])

counties = censusdata.geographies(censusdata.censusgeo([('state', '53'), ('county', '*')]), 'acs5', 2018)
print(counties['King County, Washington'])
```

## Collect Population Data for King County

```
data = censusdata.download('acs5', 2018, censusdata.censusgeo([('state', '53'), ('county', '033'), ('tract', '*')]), ['B01003_001E'])
df = data
df = df.reset_index()
df = df.rename(columns={"index": "tract", "B01003_001E": "pop"})
df.head()

#convert object type to string
df['tract'] = df['tract'].astype(str)
#Split out unneeded info
df[['tract']] = df['tract'].str.split(',').str[0] #Get the first value
df[['tract']] = df['tract'].str.split(' ').str[2] #Remove the words so only tract number remains
#convert object type to float
df['tract'] = df['tract'].astype(float)

#There may be missing values listed as -666666. Delete those.
df = df[df['pop'] >= 0]  #the column was renamed from B01003_001E to pop above
df.head()

df = df.sort_values(by=['tract'])
df.head()

df.to_csv('pop-by-tract.csv', mode='w', index=False)
```

## Collect income data for King County

```
data = censusdata.download('acs5', 2018, censusdata.censusgeo([('state', '53'), ('county', '033'), ('tract', '*')]), ['B19013_001E'])
data.head()

data['tract'] = data.index
df = data

#convert object type to string
df['tract'] = df['tract'].astype(str)
#Split out unneeded info
df[['tract']] = df['tract'].str.split(',').str[0] #Get the first value
df[['tract']] = df['tract'].str.split(' ').str[2] #Remove the words so only tract number remains
#convert object type to float
df['tract'] = df['tract'].astype(float)

#There may be missing values listed as -666666. Delete those.
df = df[df['B19013_001E'] >= 0]
df.head()

df.to_csv('seattle-census-tract-acs5-2018.csv', mode='w', index=False)

#Open the full tract file if needed
df = pd.read_csv('seattle-census-tract-acs5-2018.csv', encoding='utf-8')
```

### Regression for income and LFLs per population for all of Seattle

This merges population and lfl number data for tracts with the income data.

```
from sklearn.linear_model import LinearRegression

#Open the file if needed
dfinc = pd.read_csv('seattle-census-tract-acs5-2018.csv', encoding='utf-8')
lflvtract = pd.read_csv('census-tracts-lfl-counts.csv', encoding='utf-8')
pop = pd.read_csv('pop-by-tract.csv', encoding='utf-8')

#Merge with the population dataframe
dfregr = pd.merge(dfinc, pop, on='tract', how='inner')
dfregr.head()

#Merge with the lfl number dataset
dfregr = pd.merge(dfregr, lflvtract, on='tract', how='inner')
dfregr.head()

dfregr['lflperpop'] = dfregr['numlfls']/dfregr['pop']

#In case there are any negative values
dfregr = dfregr[dfregr['household_income'] >= 0]

ax = sns.scatterplot(x="household_income", y="lflperpop", data=dfregr)
ax.set(ylim=(0, 0.005))

x = dfregr[['household_income']]
y = dfregr[['lflperpop']]
model = LinearRegression().fit(x, y)
r_sq = model.score(x, y)
print('coefficient of determination:', r_sq)
print('intercept:', model.intercept_)
print('slope:', model.coef_)
```

### Check for Normality

```
#Create list of values to check - standardized number of lfls
lflperpop = dfregr['lflperpop']
plt.hist(lflperpop)
plt.show()

#Create list of values to check - income
income = dfregr['household_income']
plt.hist(income)
plt.show()
```

#### Because the number of lfls is not normally distributed, use Spearman's correlation coefficient

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3576830/#:~:text=In%20summary%2C%20correlation%20coefficients%20are,otherwise%20use%20Spearman's%20correlation%20coefficient.

```
#Spearmans Correlation for all variables in the table
dfregr.corr(method='spearman')
```

household_income vs lflperpop is what we're interested in. 0.5 is considered moderate correlation, so there is a moderate positive correlation.

### Use SciPy to calculate spearman with p value for trend

```
#The %.3f sets the number of decimal places
coef, p = stats.spearmanr(dfregr['household_income'], dfregr['lflperpop'])
print('Spearmans correlation coefficient: %.3f' % coef, ' p-value: %.3f' % p)
```

## Collect diversity numbers

```
sample = censusdata.search('acs5', 2018, 'concept', 'families')
print(sample[0])

# This gets data for total, white, african american, american indian, asian, hawaiian, other, and three combination categories.
#https://api.census.gov/data/2016/acs/acs5/groups/B02001.html
divdata = censusdata.download('acs5', 2016, censusdata.censusgeo([('state', '53'), ('county', '033'), ('tract', '*')]), ['B02001_001E', 'B02001_002E', 'B02001_003E', 'B02001_004E', 'B02001_005E', 'B02001_006E', 'B02001_007E', 'B02001_008E', 'B02001_009E', 'B02001_010E'])
divdata.head()

#Create a new dataframe in case something gets messed up
df = divdata

#Rename columns and parse index
df['tract'] = df.index
#convert object type to string
df['tract'] = df['tract'].astype(str)
#Split out unneeded info
df[['tract']] = df['tract'].str.split(',').str[0] #Get the first value
df[['tract']] = df['tract'].str.split(' ').str[2] #Remove the words so only tract number remains
#convert object type to float
df['tract'] = df['tract'].astype(float)

df.rename(columns={'B02001_001E': 'tot', 'B02001_002E': 'wh', 'B02001_003E': 'afam', 'B02001_004E': 'amin', 'B02001_005E': 'as', 'B02001_006E': 'hw', 'B02001_007E': 'ot', 'B02001_008E': 'combo1', 'B02001_009E': 'combo2', 'B02001_010E': 'combo3'}, inplace=True)

#Drop any rows that have a zero for tot column
df.drop(df[df['tot'] == 0].index, inplace=True)
df.head()

df = df.reset_index()
df = df.drop(columns=['index'])
df.head()
```

#### Calculate Simpson's index (Gini index is 1 - Simpson's)

Simpson's index is the sum of the squared category proportions. A lower value means more diverse.
Gini-Simpson: a higher value means more diverse.

```
def simpsons(row):
    return (row['wh'] / row['tot'])**2 + (row['afam'] / row['tot'])**2 + (row['amin'] / row['tot'])**2 + (row['as'] / row['tot'])**2 + (row['hw'] / row['tot'])**2 + (row['ot'] / row['tot'])**2 + (row['combo1'] / row['tot'])**2 + (row['combo2'] / row['tot'])**2 + (row['combo3'] / row['tot'])**2

df['simpsons'] = df.apply(simpsons, axis=1)
df['gini-simp'] = 1 - df['simpsons']
df.head()

#Save file as csv
df.to_csv('diversity-seattle-census-tract-acs5-2018.csv', mode='w', index=False)
```

## Education Levels

```
data = censusdata.download('acs5', 2018, censusdata.censusgeo([('state', '53'), ('county', '033'), ('tract', '*')]), ['B06009_005E', 'B06009_006E'])
data['tract'] = data.index
df = data

df['collegePlus'] = df['B06009_005E'] + df['B06009_006E']
df.drop(columns=['B06009_005E', 'B06009_006E'], inplace=True)
df.head()

#convert object type to string
df['tract'] = df['tract'].astype(str)
#Split out unneeded info
df[['tract']] = df['tract'].str.split(',').str[0] #Get the first value
df[['tract']] = df['tract'].str.split(' ').str[2] #Remove the words so only tract number remains
#convert object type to float
df['tract'] = df['tract'].astype(float)

#There may be missing values listed as -666666. Delete those.
df = df[df['collegePlus'] >= 0]
df.head()

df = df.reset_index()
df = df.drop(columns=['index'])
df.head()

df.to_csv('seattle-census-tract-acs5-2018-edu.csv', mode='w', index=False)

#Open the edu tract file if needed
#dfedu = df
dfedu = pd.read_csv('seattle-census-tract-acs5-2018-edu.csv', encoding='utf-8')
```

## Calculate correlation between number of lfls per pop and education level

```
#Open census tract csv with the counts of lfls
#I used QGIS to make a csv with the number of lfls per area for each census tract
lflvtract = pd.read_csv('census-tracts-lfl-counts.csv', encoding='utf-8')

#Merge with the lfl number dataset
dfedulfl = pd.merge(lflvtract, dfedu, on='tract', how='inner')
dfedulfl.head()

#Open the population data
pop = pd.read_csv('pop-by-tract.csv', encoding='utf-8')

#Merge with the population dataframe
dfedulfl = pd.merge(dfedulfl, pop, on='tract', how='inner')
dfedulfl.head()

dfedulfl['lflperpop'] = dfedulfl['numlfls']/dfedulfl['pop']

ax = sns.scatterplot(x="collegePlus", y="lflperpop", data=dfedulfl)
ax.set(ylim=(0, 0.005))

#Create list of values to check - education
edu = dfedulfl['collegePlus']
plt.hist(edu)
plt.show()

#Create list of values to check - lfls per population
lflperpop = dfedulfl['lflperpop']
plt.hist(lflperpop)
plt.show()
```

Most likely neither is normally distributed, so use Spearman's.

```
dfedulfl.corr(method='spearman')
```

A value of 0.15 is a very weak positive correlation.

```
#Use SciPy
#The %.3f sets the number of decimal places
coef, p = stats.spearmanr(dfedulfl['collegePlus'], dfedulfl['lflperpop'])
print('Spearmans correlation coefficient: %.3f' % coef, ' p-value: %.3f' % p)
```

## Calculate correlation between number of lfls per pop and diversity

```
#Open census tract csv with the counts of lfls
#I used QGIS to make a csv with the number of lfls per area for each census tract
lflvtract = pd.read_csv('census-tracts-lfl-counts.csv', encoding='utf-8')

#Open the diversity tract file if needed
dfdiv =
pd.read_csv('diversity-seattle-census-tract-acs5-2018.csv', encoding='utf-8')

#Open the population data if needed
pop = pd.read_csv('pop-by-tract.csv', encoding='utf-8')

#Merge with the lfl number dataset
dfdivlfl = pd.merge(lflvtract, dfdiv, on='tract', how='inner')
dfdivlfl.head()

#Merge with the population dataframe
dfdivlfl = pd.merge(dfdivlfl, pop, on='tract', how='inner')
dfdivlfl.head()

#Calculate lfls per pop.
dfdivlfl['lflperpop'] = dfdivlfl['numlfls']/dfdivlfl['pop']

#Take only the useful columns (here tot is the total number included in diversity calculation)
dfdivlfl = dfdivlfl[['tract', 'tot', 'gini-simp', 'pop', 'lflperpop']].copy()

#Create list of values to check
div = dfdivlfl['gini-simp']
plt.hist(div)
plt.show()

#Not normal so use Spearman's
dfdivlfl.corr(method='spearman')
```

gini-simp is moderately and negatively (-0.5) correlated with lfls per population by tract, suggesting that as diversity decreases, the number of lfls increases.

```
#Use SciPy
#The %.3f sets the number of decimal places
coef, p = stats.spearmanr(dfdivlfl['gini-simp'], dfdivlfl['lflperpop'])
print('Spearmans correlation coefficient: %.3f' % coef, ' p-value: %.3f' % p)
```

## Calculate average median income for the study neighborhoods

#### I manually listed what census tracts match the neighborhood boundaries (Seattle's community reporting areas).
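Spearman's coefficient, as computed by `stats.spearmanr` above, is just Pearson's correlation applied to the ranks of the data. A pure-Python sketch on a tiny made-up sample (no tie handling, for simplicity):

```python
def ranks(values):
    """Rank each value 1..n (no tie handling in this sketch)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman's rho via the no-ties formula 1 - 6*sum(d^2)/(n*(n^2-1))."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# A perfectly monotone (but nonlinear) relationship has rho = 1,
# which is exactly why Spearman's suits non-normal, skewed data.
xs = [1, 2, 3, 4, 5]
ys = [1, 4, 9, 16, 25]
print(spearman(xs, ys))        # 1.0
print(spearman(xs, ys[::-1]))  # -1.0
```

Because only ranks enter the computation, outliers and skewed distributions (like the lfl counts here) do not distort the coefficient the way they would with Pearson's.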
I also got the population by census tract from here: https://www.census.gov/geographies/reference-files/2010/geo/2010-centers-population.html (divide the six-number code by 100 to get the tract number).

```
#Open census tract csv
hoodtracts = pd.read_csv('censustracts-neighborhoods.csv', encoding='utf-8')

#Open the median data if it's not already open
medians = pd.read_csv('seattle-census-tract-acs5-2018.csv', encoding='utf-8')

#Merge the dataframes
dflfl = pd.merge(hoodtracts, medians, on='tract', how='inner')
dflfl.head()

#Open the population data
pop = pd.read_csv('pop-by-tract.csv', encoding='utf-8')

#Merge with the population dataframe
dflfl = pd.merge(dflfl, pop, on='tract', how='inner')
dflfl.head()

#Open census tract csv with the counts of lfls
#I used QGIS to make a csv with the number of lfls per area for each census tract. However, sq km column may be incorrect!!!
lflvtract = pd.read_csv('census-tracts-lfl-counts.csv', encoding='utf-8')

#Merge with the lfl number dataset
dflfl = pd.merge(lflvtract, dflfl, on='tract', how='inner')
dflfl.head()

dflfl['lflperpop'] = dflfl['numlfls']/dflfl['pop']
dflfl.head()

#Save file as csv
dflfl.to_csv('census-compiled-data.csv', mode='w', index=False)

ax = sns.scatterplot(x="household_income", y="lflperpop", data=dflfl)
ax.set(ylim=(0, 0.005))
```

#### I used this site for the linear regression: https://realpython.com/linear-regression-in-python/#python-packages-for-linear-regression

The following is a regression only for neighborhoods in the study!
```
from sklearn.linear_model import LinearRegression

#Open the file if needed
dflfl = pd.read_csv('census-compiled-data.csv', encoding='utf-8')

x = dflfl[['household_income']]
y = dflfl[['lflperpop']]

model = LinearRegression().fit(x, y)
r_sq = model.score(x, y)
print('coefficient of determination:', r_sq)
print('intercept:', model.intercept_)
print('slope:', model.coef_)
```

# Examine lfls and neighborhoods

```
#Open the census compiled data if needed
dflfl = pd.read_csv('census-compiled-data.csv', encoding='utf-8')

dflflhood = dflfl.groupby('neighborhood').agg({'household_income': ['mean'], 'pop': ['sum'], 'numlfls': ['sum']})

# rename columns
dflflhood.columns = ['avg-median-income', 'pop', 'numlfls']

# reset index to get grouped columns back
dflflhood = dflflhood.reset_index()
dflflhood.head(8)
```

#### Diversity

```
#If necessary, open census tract csv with the counts of lfls
#I used QGIS to make a csv with the number of lfls per area for each census tract
lflvtract = pd.read_csv('censustracts-neighborhoods.csv', encoding='utf-8')

#Open up the diversity data
dfdiv = pd.read_csv('diversity-seattle-census-tract-acs5-2018.csv', encoding='utf-8')

#Merge with the lfl number dataset
dflfldiv = pd.merge(dfdiv, lflvtract, on='tract', how='inner')
dflfldiv = dflfldiv.drop(columns=['simpsons', 'gini-simp'])
dflfldiv.head()

dflfldiv = dflfldiv.groupby('neighborhood').agg({'tot': ['sum'], 'wh': ['sum'], 'afam': ['sum'], 'amin': ['sum'], 'as': ['sum'], 'hw': ['sum'], 'ot': ['sum'], 'combo1': ['sum'], 'combo2': ['sum'], 'combo3': ['sum']})

# rename columns
dflfldiv.columns = ['tot', 'wh', 'afam', 'amin', 'as', 'hw', 'ot', 'combo1', 'combo2', 'combo3']

# reset index to get grouped columns back
dflfldiv = dflfldiv.reset_index()
dflfldiv.head(8)

#Calculate Simpson's and Gini-Simpson indices
def simpsons(row):
    groups = ['wh', 'afam', 'amin', 'as', 'hw', 'ot', 'combo1', 'combo2', 'combo3']
    return sum((row[g] / row['tot'])**2 for g in groups)

dflfldiv['simpsons'] = dflfldiv.apply(simpsons, axis=1)
dflfldiv['gini-simp'] = 1 - dflfldiv['simpsons']
dflfldiv.head(8)

#Save file as csv
dflfldiv.to_csv('census-lflhood-compiled-data.csv', mode='w', index=False)

#Merge diversity and income tables
dfsocioecon = pd.merge(dflflhood, dflfldiv, on='neighborhood', how='inner')
dfsocioecon.head()

#Save file
dfsocioecon.to_csv('socioeconomic-by-neighborhood.csv', mode='w', index=False)
```
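The Simpson and Gini-Simpson calculation above can be sanity-checked on a toy row. This is a minimal sketch with made-up counts; the column names mirror the merged dataframe, and a higher Gini-Simpson value means more diversity.

```python
# Toy sanity check for the Simpson / Gini-Simpson calculation.
# The counts are made up; the keys mirror the merged dataframe's columns.
row = {'tot': 100, 'wh': 50, 'afam': 20, 'amin': 5, 'as': 10,
       'hw': 5, 'ot': 4, 'combo1': 3, 'combo2': 2, 'combo3': 1}

groups = ['wh', 'afam', 'amin', 'as', 'hw', 'ot', 'combo1', 'combo2', 'combo3']

# Simpson's index: sum of squared group proportions
simpson = sum((row[g] / row['tot']) ** 2 for g in groups)
gini_simpson = 1 - simpson  # higher = more diverse

print(round(simpson, 4), round(gini_simpson, 4))  # 0.308 0.692
```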
# Playing with Matplotlib

Please note I am making no assumptions nor drawing any conclusions, as I have not studied this data, its actual original source, the source I got it from, or even looked at most of the dataset itself. It is just some data to make graphs with, and part of a tutorial.

```
import pandas as pd

#Demo used a direct link to the hosted file, but that is a) slower and b) someone else's bandwidth
women_majors = pd.read_csv('percent_bachelors.csv', encoding='utf-8')

under_20 = women_majors.loc[0, women_majors.loc[0] < 20]
print(under_20)
```

## Calling Matplotlib ... magic

[Literally](https://ipython.readthedocs.io/en/stable/interactive/plotting.html#id1) referred to as magic. Then again, [Clarke's Third Law](https://en.wikipedia.org/wiki/Clarke%27s_three_laws). Calling it with the `inline` backend so the output (graphs) is shown under the code block that produced it.

```
%matplotlib inline
```

## Let us make a default graph

We are going to use the Year column for the x axis and the results of the under_20 index labels for the y axis. `figsize` requires a tuple and sets the size of the figure in inches, width followed by height.

```
under_20_graph = women_majors.plot(x = 'Year', y = under_20.index, figsize = (12,8))
print("Type is :", type(under_20_graph))
```

## Matplotlib refers to parts of a graph as below:

![Matplotlib graph labels](anatomy1.png)

Source: [matplotlib.org](https://matplotlib.org)

## Making it look like 538's style

FiveThirtyEight seems to be a US website and below is the Google description:

> Nate Silver's FiveThirtyEight uses statistical analysis — hard numbers — to tell compelling stories about elections, politics, sports, science, economics and ...
A copy of the style sheet is [here](https://github.com/matplotlib/matplotlib/blob/38be7aeaaac3691560aeadafe46722dda427ef47/lib/matplotlib/mpl-data/stylelib/fivethirtyeight.mplstyle) on GitHub, and the author of the style sheet discusses it [here](https://dataorigami.net/blogs/napkin-folding/17543615-replicating-538s-plot-styles-in-matplotlib) in a blog post.

```
import matplotlib.style as style

print(style.available)
style.use('fivethirtyeight')
women_majors.plot(x = 'Year', y = under_20.index, figsize = (12,8))
```

## Further work to do

- [ ] Add a title and a subtitle.
- [ ] Remove the block-style legend, and add labels near the relevant plot lines. We’ll also have to make the grid lines transparent around these labels.
- [ ] Add a signature bottom bar which mentions the author of the graph and the source of the data.
- Add a couple of other small adjustments:
  - [ ] increase the font size of the tick labels;
  - [ ] add a “%” symbol to one of the major tick labels of the y-axis;
  - [ ] remove the x-axis label;
  - [ ] bold the horizontal grid line at y = 0;
  - [ ] add an extra grid line next to the tick labels of the y-axis;
  - [ ] increase the lateral margins of the figure.

Going to assign the graph to fte_graph and start working on this list.

```
fte_graph = women_majors.plot(x = 'Year', y = under_20.index, figsize = (12,8))
```

# Matplotlib gotcha

Text is positioned based on the x and y coords of the generated graph, so finish playing with those first. Also, you need to have all the variables you are going to use on the graph in the same code block. Spacing them out as I have here doesn't produce a graph, but makes it easier to learn from.

## Custom tick labels

- [x] increase the font size of the tick labels;

`tick_params()` is a method for modifying the ticks.
- `axis` defines which axis to work on
- `which` selects the major or minor ticks
- `labelsize` sets the font size

- [x] add a “%” symbol to one of the major tick labels of the y-axis;

`set_yticklabels(labels = [list])` sets the labels (there is also a `get_yticks()`). We use some empty spaces in all the percentages that don't have the sign so they align.

```
fte_graph.tick_params(axis = 'both', which = 'major', labelsize = 18)
fte_graph.set_yticklabels(labels = [-10, '0 ', '10 ', '20 ', '30 ', '40 ', '50%'])
```

## Make the y = 0 line bold

- [x] bold the horizontal grid line at y = 0;

The title is a little bit of a lie, as what we are actually going to do is add a new horizontal line at y = 0 which will sit over the top of the existing one. `axhline()` is the method for this, and we are going to use the arguments below (doc for the method is [here](https://matplotlib.org/api/_as_gen/matplotlib.axes.Axes.axhline.html?highlight=axhline#matplotlib.axes.Axes.axhline)):

- `y` where on the y axis we want the line
- `color` what color we want the line
- `linewidth` how wide we want the line
- `alpha` how transparent we want the line, or can be used to drop the colour intensity

```
fte_graph.axhline(y = 0, color = 'black', linewidth = 1.3, alpha = .7)
```

## Add an extra vertical line

- [x] add an extra grid line next to the tick labels of the y-axis;

To add the additional grid line near the labels on the y axis, we are going to extend the x axis range using `set_xlim()`, with two simple and self-explanatory parameters:

- `left` where we want the start
- `right` where we want the end

```
fte_graph.set_xlim(left = 1969, right = 2011)
```

## Generating a sig bar

- [x] remove the x-axis label;
- [x] Add a signature bottom bar which mentions the author of the graph and the source of the data.

### Removing the x axis label

`set_visible(False)` is the method we are going to use on the x axis label.
### Generate the sig

This is slightly messy as it requires some testing on position, but at the same time it works nicely when it is sorted. Between the Author and Source details you will need to pad with spaces. We are going to set the background of a text area to the grey required and the text colour to the actual graph's background colour. The "white space" of the spacing will actually be the selected background colour. The method is `text()`:

- `x` coord
- `y` coord
- `s` the string you want to use
- `fontsize`
- `color` text colour
- `backgroundcolor`

```
fte_graph.xaxis.label.set_visible(False)
fte_graph.text(x = 1965.8, y = -7, s = 'In your general direction Source: National Center for Education Statistics', fontsize = 14, color = '#f0f0f0', backgroundcolor = 'grey')
```

We could also do things with multiple lines of text as per below:

```
fte_graph.text(x = 1967.1, y = -6.5, s = '________________________________________________________________________________________________________________', color = 'grey', alpha = .7)
fte_graph.text(x = 1966.1, y = -9, s = ' ©DATAQUEST Source: National Center for Education Statistics ', fontsize = 14, color = 'grey', alpha = .7)
```

## Adding a title and a subtitle

- [x] Add a title and a subtitle.

Apparently Matplotlib's inbuilt `title()` and `suptitle()` methods do not allow much control over the positioning, therefore it can be simpler to use the `text()` method. This will need some finessing into the final location, but allows a lot of precision. `weight` can be used within the `text()` method to make text bold.
Text inside `text()` doesn't word-wrap; you need to use `\n` when you want it to move onto a new line.

```
fte_graph.text(x = 1966.65, y = 62.7, s = "Some title for this graph that is interesting", fontsize = 26, weight = "bold", alpha = .75)
fte_graph.text(x = 1966.65, y = 57, s = "A sub title text that goes on and on and on so hopefully it will\nword wrap onto the second line and look awesome", fontsize = 19, alpha = .85)
```

## Colourblind friendly colours

![Colour blind friendly colours](cb_friendly.jpg)

Source: [Points of View: Color blindness by Bang Wong](http://www.nature.com/nmeth/journal/v8/n6/full/nmeth.1618.html#ref1)

We need to declare all the colours we want the graph to use. We are going to use the above colours as they are more friendly to people with colour blindness, but we are not going to use the yellow, as reading yellow on white or grey is generally difficult. We then need to modify the `plot()` call to include the argument `color =` assigned to whatever we call the colours variable.

- One odd thing is that for this `color` argument (not text), we need to pass the colours as floats between 0 and 1, hence the `[230/255,159/255,0]`
- It needs to be passed as a list of lists

```
colours = [[0,0,0], [230/255,159/255,0], [86/255,180/255,233/255], [0,158/255,115/255], [213/255,94/255,0], [0,114/255,178/255]]
```

## Remove the legend and place labels directly on the lines in the same colour

- [x] Remove the block-style legend, and add labels near the relevant plot lines. We’ll also have to make the grid lines transparent around these labels.

To remove the standard legend we add `legend = False` to the `plot()` call. To make the text labels easier to read we will set their background to the main background colour, which will effectively make the grid lines disappear around the labels. Lastly we will use the `rotation` parameter so the text lines up nicely with the lines.
```
#Defining our colour set
colours = [[0,0,0], [230/255,159/255,0], [86/255,180/255,233/255], [0,158/255,115/255], [213/255,94/255,0], [0,114/255,178/255]]

#Set the graph name and the data it is going to plot
fte_graph = women_majors.plot(x = 'Year', y = under_20.index, figsize = (12,8), color = colours, legend = False)

#Place some text in the correct position to be a title, with bold weight and larger size
fte_graph.text(x = 1966.65, y = 62.7, s = "Some title for this graph that is interesting", fontsize = 26, weight = "bold", alpha = .75)

#Place some text in the correct position to be a subtitle
fte_graph.text(x = 1966.65, y = 57, s = "A sub title text that goes on and on and on so hopefully it will\nword wrap onto the second line and look awesome", fontsize = 19, alpha = .85)

#Place some text in the correct position and colour to generate a sig bar
fte_graph.text(x = 1965.8, y = -7, s = ' In your general direction Source: National Center for Education Statistics ', fontsize = 14, color = '#f0f0f0', backgroundcolor = 'grey')

#Widen the x axis
fte_graph.set_xlim(left = 1969, right = 2011)

#Make the x axis label invisible
fte_graph.xaxis.label.set_visible(False)

#Put a bold line over y = 0
fte_graph.axhline(y = 0, color = 'black', linewidth = 1.3, alpha = .7)

#Making the axis tick labels larger
fte_graph.tick_params(axis = 'both', which = 'major', labelsize = 18)

#Changing the y axis tick labels
fte_graph.set_yticklabels(labels = [-10, '0 ', '10 ', '20 ', '30 ', '40 ', '50%'])

#Line labels
fte_graph.text(x = 1994, y = 44, s = 'Agriculture', color = colours[0], weight = 'bold', rotation = 33, backgroundcolor = '#f0f0f0')
fte_graph.text(x = 1985, y = 42.2, s = 'Architecture', color = colours[1], weight = 'bold', rotation = 18, backgroundcolor = '#f0f0f0')
fte_graph.text(x = 2004, y = 51, s = 'Business', color = colours[2], weight = 'bold', rotation = -5, backgroundcolor = '#f0f0f0')
fte_graph.text(x = 2001, y = 30, s = 'Computer Science', color = colours[3], weight = 'bold', rotation = -42.5, backgroundcolor = '#f0f0f0')
fte_graph.text(x = 1987, y = 11.5, s = 'Engineering', color = colours[4], weight = 'bold', backgroundcolor = '#f0f0f0')
fte_graph.text(x = 1976, y = 25, s = 'Physical Sciences', color = colours[5], weight = 'bold', rotation = 27, backgroundcolor = '#f0f0f0')
```

# So this is as per the tutorial

Thanks to [Dataquest.io](https://www.dataquest.io) for the blog post tutorial.
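The 0-255 to 0-1 conversion done inline above can also be wrapped in a tiny helper. This is a sketch of my own, not from the tutorial; `to_unit_rgb` is a hypothetical name, and the values are the Wong colour-blind-friendly palette used above.

```python
# Hypothetical helper (not from the tutorial): convert 0-255 RGB triples
# into the 0-1 floats that matplotlib's `color` argument expects.
def to_unit_rgb(rgb255):
    return [channel / 255 for channel in rgb255]

# The Wong colour-blind-friendly palette, in 0-255 form.
wong = [(0, 0, 0), (230, 159, 0), (86, 180, 233),
        (0, 158, 115), (213, 94, 0), (0, 114, 178)]

colours = [to_unit_rgb(c) for c in wong]
print(colours[1])  # the orange used for the Architecture label
```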
```
import azureml
from azureml.core import Workspace, Experiment, Datastore, Environment
from azureml.core.runconfig import RunConfiguration
from azureml.data.datapath import DataPath, DataPathComputeBinding
from azureml.data.data_reference import DataReference
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
from azureml.pipeline.core import Pipeline, PipelineData, PipelineParameter
from azureml.pipeline.steps import PythonScriptStep, EstimatorStep
from azureml.widgets import RunDetails
from azureml.train.estimator import Estimator
import os

print("Azure ML SDK Version: ", azureml.core.VERSION)
```

# Setup Variables

```
os.environ['STORAGE_ACCOUNT_KEY'] = 'YourAccountKeyHere'

datastorename = 'seerdata'
datastorepath = 'hardware'
containername = 'seer-container'
storageaccountname = 'aiml50setupstorage'
storageaccountkey = os.environ.get('STORAGE_ACCOUNT_KEY')
computetarget = 'twtcluster'
```

# Register/Reference a Datastore

```
# workspace
ws = Workspace.from_config(path='./azureml-config.json')
print(ws.datastores)

# See if that datastore already exists and unregister it if so
try:
    datastore = ws.datastores[datastorename]
    print('Unregistering existing datastore')
    datastore.unregister()
except:
    print('Data store doesn\'t exist, no need to remove')
finally:
    # register the datastore
    datastore = Datastore.register_azure_blob_container(workspace=ws,
                                                       datastore_name=datastorename,
                                                       container_name=containername,
                                                       account_name=storageaccountname,
                                                       account_key=storageaccountkey,
                                                       create_if_not_exists=True)
    print('Datastore registered: ', datastore)

# data
datastore = ws.datastores['seerdata']
datareference = DataReference(datastore=datastore,
                              data_reference_name="seerdata",
                              path_on_datastore=datastorepath)
```

# Create Compute Resources

```
try:
    cpu_cluster = ComputeTarget(workspace=ws, name=computetarget)
    print('Found existing cluster, use it.')
except ComputeTargetException:
    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
                                                           min_nodes=1,
                                                           max_nodes=4)
    cpu_cluster = ComputeTarget.create(ws, computetarget, compute_config)
    cpu_cluster.wait_for_completion(show_output=True)

compute = ws.compute_targets[computetarget]
print('Compute registered: ', compute)
```

# Define Pipeline!

The following will be created and then run:

1. Pipeline Parameters
2. Data Process Step
3. Training Step
4. Model Registration Step
5. Pipeline registration
6. Submit the pipeline for execution

## Pipeline Parameters

We need to tell the Pipeline what it needs to learn to see!

```
datapath = DataPath(datastore=datastore, path_on_datastore=datastorepath)
data_path_pipeline_param = (PipelineParameter(name="data", default_value=datapath),
                            DataPathComputeBinding(mode='mount'))
print(data_path_pipeline_param)

# Configuration for data prep and training steps
dataprepEnvironment = Environment.from_pip_requirements('dataprepenv', 'requirements-dataprepandtraining.txt')
dataprepRunConfig = RunConfiguration()
dataprepRunConfig.environment = dataprepEnvironment
```

## Data Process Step

```
seer_tfrecords = PipelineData(
    "tfrecords_set",
    datastore=datastore,
    is_directory=True
)

prepStep = PythonScriptStep(
    'parse.py',
    source_directory='.',
    name='Data Preparation',
    compute_target=compute,
    arguments=["--source_path", data_path_pipeline_param, "--target_path", seer_tfrecords],
    runconfig=dataprepRunConfig,
    inputs=[data_path_pipeline_param],
    outputs=[seer_tfrecords]
)
print(prepStep)
```

## Training Step

```
seer_training = PipelineData(
    "train",
    datastore=datastore,
    is_directory=True
)

train = Estimator(source_directory='.',
                  compute_target=compute,
                  entry_script='train.py',
                  pip_requirements_file='requirements-dataprepandtraining.txt')

trainStep = EstimatorStep(
    name='Model Training',
    estimator=train,
    estimator_entry_script_arguments=["--source_path", seer_tfrecords,
                                      "--target_path", seer_training,
                                      "--epochs", 5, "--batch", 10, "--lr", 0.001],
    inputs=[seer_tfrecords],
    outputs=[seer_training],
    compute_target=compute
)
print(trainStep)
```

# Register Model Step

```
registerEnvironment = Environment.from_pip_requirements('registerenv', 'requirements-registration.txt')
registerRunConfig = RunConfiguration()
registerRunConfig.environment = registerEnvironment

seer_model = PipelineData(
    "model",
    datastore=datastore,
    is_directory=True
)

registerStep = PythonScriptStep(
    'register.py',
    source_directory='.',
    name='Model Registration',
    arguments=["--source_path", seer_training, "--target_path", seer_model],
    inputs=[seer_training],
    outputs=[seer_model],
    compute_target=compute,
    runconfig=registerRunConfig
)
print(registerStep)
```

## Create and publish the Pipeline

```
pipeline = Pipeline(workspace=ws, steps=[prepStep, trainStep, registerStep])

published_pipeline = pipeline.publish(
    name="Seer Pipeline",
    description="Transfer learned image classifier. Uses folders as labels.")

# Submit the pipeline to be run
pipeline_run = Experiment(ws, 'seer').submit(published_pipeline)
print('Run created with ID: ', pipeline_run.id)

RunDetails(pipeline_run).show()
```
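The step wiring above is what defines the execution order: each step's `outputs` (a `PipelineData`) feeds the next step's `inputs`, and the SDK derives the dependency graph from that. The same idea can be illustrated with a toy model that has no AzureML dependency; everything here is illustrative, not SDK code.

```python
# Toy model of how input/output wiring implies step ordering.
# Not the AzureML SDK; just an illustration of the dependency idea.
steps = {
    'Data Preparation':   {'inputs': {'data'},          'outputs': {'tfrecords_set'}},
    'Model Training':     {'inputs': {'tfrecords_set'}, 'outputs': {'train'}},
    'Model Registration': {'inputs': {'train'},         'outputs': {'model'}},
}

def run_order(steps, available):
    """Repeatedly run any step whose inputs are all available (a tiny topological sort)."""
    order, pending = [], dict(steps)
    while pending:
        ready = [n for n, s in pending.items() if s['inputs'] <= available]
        if not ready:
            raise ValueError('cycle or missing input')
        for name in ready:
            available |= pending.pop(name)['outputs']
            order.append(name)
    return order

print(run_order(steps, {'data'}))
# ['Data Preparation', 'Model Training', 'Model Registration']
```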
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from datetime import datetime, timedelta

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader

from tqdm.auto import tqdm

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
plt.rcParams["figure.figsize"] = (13, 7)
```

# Wolt Data Science Assignment 2022

### Karl Hendrik Nurmeots

# Introduction

Hey! Thank you for taking the time to read through my assignment submission! In this assignment I will be exploring the provided Wolt orders data and developing models to forecast the number of orders per hour for the next 24 hours based on the preceding 48 hours. As Wolt is a platform that connects thousands of restaurants and vendors with thousands of customers by involving thousands of drivers, it is a big logistical challenge that could greatly benefit from having accurate predictions for the number of orders.

I begin by analyzing the Wolt orders dataset using visualizations that produce insights into how this problem should be modeled. Then, I propose a seq2seq recurrent neural network model for solving the problem and evaluate its performance. I improve the model further by adding additional, non-time-series features to the model. Finally, I explain a bit about what motivates me to work at Wolt and how my past experiences make me a great candidate for the position.
# Data Preparation

```
#### Data Preparation
df = pd.read_csv('https://raw.githubusercontent.com/woltapp/data-science-summer-intern-2022/master/orders_autumn_2020.csv')
df.columns = ['timestamp', 'delivery_delay', 'item_count', 'user_lat', 'user_long',
              'venue_lat', 'venue_long', 'delivery_estimate', 'delivery_actual',
              'cloud_coverage', 'temperature', 'wind_speed', 'precipitation']
df.timestamp = pd.to_datetime(df.timestamp)

# Timestamps are given with minute-precision, so round all timestamps down to their hour
df.timestamp = df.timestamp.apply(lambda x: datetime(x.year, x.month, x.day, x.hour, 0))

# Group by timestamp
gb = df.groupby('timestamp').agg({'timestamp': 'count'})
gb.columns = ['orders']

# There may be hours where no orders were placed, so generate all hours from Aug 1 to Sept 30 and join with data
hours = pd.Series(pd.date_range(start=datetime(2020, 8, 1), end=datetime(2020, 9, 30, 23), freq='1H'))
gb = gb.merge(hours.rename('hours'), how='right', left_index=True, right_on='hours')
gb = gb.set_index('hours')

# Replace NaN orders with 0
#### Here I am assuming that there is no reason for data to be missing besides no orders being made:
#### if this was not assumed, it might be better to replace the missing values with something like -1
#### so that the models can learn that these particular datapoints contain no useful information
gb.orders = gb.orders.fillna(0)

gb['hour'] = gb.index.hour
gb['weekday'] = gb.index.weekday
```

# Exploratory Data Analysis

In this section my goal is to visually analyze the data to understand its behaviour and what aspects of the data the forecasting models should capture.

```
# Beginning date for graph
date = datetime(2020, 8, 1)

temp = gb[(gb.index >= date) & (date + timedelta(days=7) > gb.index)]
plt.plot(temp.index, temp.orders)
plt.title(f"Number of Orders per Hour: {date.date()} to {(date + timedelta(days=7)).date()}", fontsize=20)
plt.ylabel('No. of orders', fontsize=16)
plt.show()
```

Looking at the first week of the dataset a clear pattern emerges: most days in this week display two distinct peaks in the number of orders, although some days also showcase a third peak.

```
dates = [datetime(2020, 8, 1), datetime(2020, 8, 15), datetime(2020, 9, 1), datetime(2020, 9, 15)]

fig, axs = plt.subplots(2, 2, sharey=True)
for i in range(len(axs.flatten())):
    ax = axs.flatten()[i]
    date = dates[i]
    temp = gb[(gb.index >= date) & (date + timedelta(days=1) > gb.index)]
    ax.plot(temp.index.hour, temp.orders)
    if i in (0, 2):
        ax.set_ylabel('No. of orders', fontsize=12)
    ax.set_title(str(date.date()))

plt.suptitle('Number of Orders per Hour', fontsize=16)
plt.show()
```

Looking at four days separately, we see that some days do not follow the previously observed bimodality at all. The peaks in these days also occur at different times. September 1, 2020 is a particularly anomalous day, as it displays a completely different pattern from most other days in the dataset. It is clear that this time series data is somewhat complex, and we want to model it using techniques that can obtain a good understanding of how the data behaves.

```
temp = gb.groupby('hour').agg({'orders': 'sum'})

plt.bar(temp.index, temp.orders)
plt.xticks(np.arange(24))
plt.xlabel("Hour", fontsize=12)
plt.ylabel("No. of Orders", fontsize=12)
plt.title("Total Orders per Hour of the Day", fontsize=16)
plt.show()
```

Looking at the distribution of orders per hour of the day, we once again see the bimodal pattern observed earlier. This shows that, on average, the first peak of orders occurs at 09:00, and the second peak tends to occur at 15:00.

With more time, I would dig deeper into how the distribution of orders differs across different weekdays, especially weekends. Taking holidays and otherwise significant days of the year into account could also yield valuable information. These insights could be useful in improving the models that follow.
# Modeling

If I were solving this problem while working for Wolt, and if I had more time, I would approach the problem by first trying out simple models and then seeing if more complicated ones can improve on the results. A great baseline model for time series forecasting is simply using the past 48 hours' data as the prediction for the next 24: if the patterns in the data have little variation, this super simple model can be a surprisingly tough baseline to beat. Other simple models to try on this problem would be linear regression and ARIMA.

However, for this assignment, for the sake of saving time, I will break my principle and jump directly to the cool stuff: recurrent neural networks (RNNs). RNNs are great at modeling complex patterns and behaviours in temporal data, and as such make a strong candidate for solving this problem. For this problem I will be using the following seq2seq RNN model:

![](https://raw.githubusercontent.com/Arturus/kaggle-web-traffic/master/images/encoder-decoder.png)

[Image credit](https://www.kaggle.com/c/web-traffic-time-series-forecasting/discussion/43795)

While writing this model from scratch would usually take a fair bit of time and testing, I am able to do it quickly because I initially created this model with a friend as part of [a time series school project](https://github.com/KNurmik/Spain-Electricity-Load-Timeseries-Analysis). We have licensed the model such that anyone is able to use it for any purpose, including commercial purposes. I have a good idea that this model will work not only because of the school project we did, but also because I was able to adapt this model with great success for a time series problem I solved at my most recent summer internship at MindTitan, where I used this model to forecast customer satisfaction metrics for over 500,000 Elisa Eesti mobile internet users.
As I have built up my [public GitHub portfolio](https://github.com/KNurmik) over time, I often find myself referring back to my earlier work and saving myself a lot of time and effort.

Again, I will use the past 48h worth of orders data to predict the following 24h. The model will take as input, for every observation, an array of 48 historical datapoints and output the predicted 24 future ones.

I have split the data into training, validation and testing sets with roughly an 80%/10%/10% split, where all of the training data chronologically precedes the validation data, which itself precedes the testing data. This way the model does not have any knowledge of the data that will be used for testing, and we get the best understanding of how the model behaves when facing previously unseen data. As is standard practice for ensuring numerical stability, all of the data is normalized based on the training data mean and standard deviation.

The model will use mean squared error as its loss function, as this is a natural choice for regression tasks. When training, the model state with the best validation loss will be saved and used for evaluation on the test set later.
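The persistence baseline mentioned earlier, where recent history is simply repeated as the forecast, can be sketched in a few lines. This is my framing of that idea, not code from the assignment:

```python
import numpy as np

def persistence_forecast(history, horizon=24):
    """Naive baseline: predict that the next `horizon` hours
    simply repeat the last `horizon` hours of the history."""
    history = np.asarray(history)
    return history[-horizon:]

# Two synthetic 'days' with a repeating daily shape (made-up numbers).
day = np.concatenate([np.zeros(8), np.arange(8), np.arange(8)[::-1]])  # 24 values
history = np.concatenate([day, day])  # 48 hours of input

pred = persistence_forecast(history)
print(pred.shape)               # (24,)
print(np.allclose(pred, day))   # True: the forecast repeats the last day
```

With perfectly repeating days this baseline is exact, which is precisely why it is a tough benchmark when the daily pattern is stable.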
```
# Model-specific data preparation
predict_interval = 24
history_interval = 48
trim = predict_interval + history_interval

# Create N x (48+24) array where every row is one observation of 48h of history + 24h of target data
indices = np.stack([np.arange(i, trim+i) for i in range(gb.shape[0] - trim + 1)])
data = np.stack([gb.orders.iloc[indices[i]] for i in range(indices.shape[0])])

X = data[:, :history_interval]
y = data[:, history_interval:]

# Train-test-validation split (roughly 80%/10%/10%)
X_train = X[:1100]
y_train = y[:1100]

X_val = X[1100:1250]
y_val = y[1100:1250]

X_test = X[1250:]
y_test = y[1250:]
y_test_unnorm = y_test.copy()

# Mean-std normalization
norm_mean = X_train.mean()
norm_std = X_train.std()

def normalize(arr):
    return (arr - norm_mean) / norm_std

vars = [X_train, y_train, X_val, y_val, X_test, y_test]
for var in vars:
    var[:] = normalize(var)

# PyTorch data preparation
class WoltDataset(Dataset):
    def __init__(self, X, y):
        self.X = X
        self.y = y

    def __getitem__(self, idx):
        return self.X[idx, :], self.y[idx, :]

    def __len__(self):
        return self.X.shape[0]

    def unscale(self, X):
        return (X * norm_std) + norm_mean

train_dataset = WoltDataset(X_train, y_train)
val_dataset = WoltDataset(X_val, y_val)
test_dataset = WoltDataset(X_test, y_test)
```

I played around with the hyperparameters below and kept the ones that gave me the best validation results. With more time it may be worthwhile to set up a grid search to find the best possible hyperparameters.
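The grid search just mentioned could be set up with `itertools.product`. A minimal sketch: `train_and_validate` is a hypothetical stand-in for a real training run returning validation loss, and the dummy score is rigged so the sketch is self-contained.

```python
from itertools import product

# Candidate values; the hand-picked ones below are BATCH_SIZE=32, LR=1e-3, HIDDEN_SIZE=256.
grid = {
    'batch_size': [16, 32, 64],
    'lr': [1e-4, 1e-3],
    'hidden_size': [128, 256],
}

def train_and_validate(batch_size, lr, hidden_size):
    # Hypothetical stand-in: a real version would train the model with these
    # hyperparameters and return the best validation loss. The dummy score
    # below just makes (32, 1e-3, 256) the minimum so the sketch runs.
    return (batch_size - 32) ** 2 + (lr - 1e-3) ** 2 + (hidden_size - 256) ** 2

best_params, best_loss = None, float('inf')
for combo in product(*grid.values()):
    params = dict(zip(grid.keys(), combo))
    val_loss = train_and_validate(**params)
    if val_loss < best_loss:
        best_params, best_loss = params, val_loss

print(best_params)  # {'batch_size': 32, 'lr': 0.001, 'hidden_size': 256}
```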
```
# Hyperparameters
BATCH_SIZE = 32
EPOCHS = 25
LR = 1e-3
HIDDEN_SIZE = 256

train_dataloader = DataLoader(train_dataset, batch_size=BATCH_SIZE)
val_dataloader = DataLoader(val_dataset, batch_size=BATCH_SIZE)
test_dataloader = DataLoader(test_dataset, batch_size=BATCH_SIZE)

class WoltEncoder(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)

    def forward(self, x):
        out, hidden = self.gru(x)
        return out, hidden.squeeze(0)

class WoltDecoder(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        self.grucell = nn.GRUCell(input_size=1, hidden_size=hidden_size)
        self.fc = nn.Sequential(
            nn.Linear(in_features=hidden_size, out_features=hidden_size),
            nn.LeakyReLU(),
            nn.Linear(in_features=hidden_size, out_features=1)
        )

    def forward(self, y, hidden):
        ht = self.grucell(y, hidden)
        out = self.fc(ht)
        return out, ht

class WoltEncoderDecoder(nn.Module):
    def __init__(self, hidden_size, output_size):
        super().__init__()
        self.encoder = WoltEncoder(hidden_size)
        self.decoder = WoltDecoder(hidden_size)
        self.output_size = output_size

    def forward(self, x):
        encoder_out, hidden = self.encoder(x)  # hidden: [batchsize, hidden_size]
        N = x.shape[0]
        outputs = torch.empty(N, self.output_size)
        curr = x[:, -1]
        for i in range(self.output_size):
            curr, hidden = self.decoder(curr, hidden)
            outputs[:, i] = curr.view(-1)
        return outputs

model = WoltEncoderDecoder(HIDDEN_SIZE, predict_interval)
optimizer = optim.AdamW(model.parameters(), lr=LR)
criterion = nn.MSELoss()
model = model.to(device)

train_losses = []
val_losses = []
best_model = model.state_dict()
best_val_loss = np.inf

for epoch in range(EPOCHS):
    #t = tqdm(train_dataloader, desc=f"Epoch: {epoch}")  # Uncomment and comment out the next line to get cool progress bars!
    t = train_dataloader
    cost = 0

    # Training
    for i, (prev, target) in enumerate(t):
        pred = model(prev.unsqueeze(-1).float().to(device))
        loss = criterion(pred, target.float())
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        cost += float(loss.item() * prev.shape[0])
    train_losses.append(cost/len(train_dataset))

    # Validation
    with torch.no_grad():
        val_cost = 0
        for prev, target in val_dataloader:
            pred = model(prev.unsqueeze(-1).float().to(device))
            loss = criterion(pred, target.float())
            val_cost += float(loss.item() * prev.shape[0])
        val_loss = val_cost/len(val_dataset)
        val_losses.append(val_loss)

        # Keep model with best validation loss
        if val_loss < best_val_loss:
            best_model = model.state_dict()
            best_val_loss = val_loss

plt.plot(train_losses, label='Train')
plt.plot(val_losses, label='Validation')
plt.title("Loss over time", fontsize=16)
plt.xlabel("Epoch", fontsize=12)
plt.ylabel("Loss", fontsize=12)
plt.legend()
plt.show()
```

It does not seem like the model has completely converged, but it is close to doing so; given the limited time, it is an adequate outcome. With more time it might be a good idea to set up an adaptive learning rate and early stopping, and let the model train for longer for optimal results.

# Evaluation

For evaluation of the model I will primarily rely on qualitative visual analysis, as the ideal model would perfectly capture the bimodal patterns that were observed previously. Root Mean Squared Error (RMSE) will be used to evaluate the model from a quantitative perspective.
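For reference, the RMSE metric just mentioned is the square root of the mean squared prediction error. A quick self-contained sketch with made-up hourly order counts:

```python
import numpy as np

def rmse(true, pred):
    # Root Mean Squared Error over all elements.
    true, pred = np.asarray(true, dtype=float), np.asarray(pred, dtype=float)
    return float(np.sqrt(np.mean((true - pred) ** 2)))

# Made-up hourly order counts vs. predictions.
true = [20, 35, 50, 30]
pred = [22, 30, 45, 33]

print(round(rmse(true, pred), 3))  # ≈ 3.969
```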
```
# Restore best model state and run it on the test set
torch.cuda.empty_cache()
model.load_state_dict(best_model)
model.eval()

X_test_tensor = torch.FloatTensor(test_dataset.X).unsqueeze(-1).to(device)
with torch.no_grad():
    rnn_pred = model(X_test_tensor)
rnn_pred = rnn_pred.detach().cpu().numpy()

predicted = test_dataset.unscale(rnn_pred[0, :])
predicted[predicted < 0] = 0

plt.plot(np.arange(history_interval+1),
         np.r_[test_dataset.unscale(test_dataset.X[0, :]), test_dataset.unscale(test_dataset.y[0, 0])],
         label="Input")
plt.plot(np.arange(history_interval, history_interval + predict_interval),
         test_dataset.unscale(test_dataset.y[0, :]), label="True", c="green")
plt.plot(np.arange(history_interval, history_interval + predict_interval),
         predicted, label="Predicted", c="red")
plt.title("Predicted Orders", fontsize=15)
plt.xlabel("Hour", fontsize=12)
plt.ylabel("No. of Orders", fontsize=12)
plt.legend()
plt.show()
```

The above graph looks at the first observation in the test set more closely. Clearly, the model has correctly learned about the bimodal structure in the daily data and makes a good guess about when the peaks happen, but it does not do a great job of predicting the correct values for the peaks.

```
def plot_predictions(true, pred, time, is_scaled=False, std=norm_std, mean=norm_mean):
    if not is_scaled:
        pred = (pred * std) + mean
    pred[pred < 0] = 0

    rmse = np.sqrt((np.sum((true[:, time] - pred[:, time])**2)) / true.shape[0])

    plt.plot(true[:, time], label="True")
    plt.plot(pred[:, time], label="Pred")
    plt.suptitle(f"Predictions {time+1} hour(s) in the future", fontsize=15)
    plt.title('Root mean squared error: {}'.format(round(rmse, 3)))
    plt.legend()
    plt.xlabel("Timestep")
    plt.ylabel("No. of Orders")

plot_predictions(y_test_unnorm, rnn_pred, 0)
```

Here, the model's predictions one hour into the future are shown for the entire test set.
Again we see that the model does a good job of predicting the average number of orders per day, but does not exactly capture the two daily peak values. ``` plot_predictions(y_test_unnorm, rnn_pred, 23) ``` When evaluating the model for predictions 24 hours into the future, its performance is expectedly worse, but not overly so: these predictions are not much worse than for 1 hour into the future. The same problem of not capturing peaks well remains an issue. Overall, I would say these results are promising for a first prototype model. The model has learned the patterns in the data and does a decent job of gauging whether a given day will have more or fewer total orders. As there are usually 20 to 50 orders in a given hour, an RMSE of 5 to 6 orders is not too bad either, as it would likely still allow for informed business decisions, but a model with a smaller error is obviously desirable. This model would probably not work as well on time periods further away from the training data - the total amount and distribution of orders likely differs across the year, so a model that has only seen how the data behaves in August and September might not generalize well to other time periods. This issue could be largely mitigated by using more data - at least a year's worth, to ensure the model learns yearly seasonal patterns. # Extending the Model As mentioned earlier, it is likely that the distributions of orders share similarities across the same weekdays. The current model does not have direct access to what day the target data belongs to, but it may be able to make a guess based on the input data. Still, we can feed the weekday info into the model in hopes of improving it. To do so, we take the weekday of the first day of the 48 hours of data fed into the model - this is sufficient for the model to know which days it is given and which day it will predict. The weekday information is one-hot encoded.
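The one-hot encoding step can be sketched in a few lines; `OneHotEncoder` performs the equivalent transformation over the whole weekday array:

```python
def one_hot(day_index, num_classes=7):
    """One-hot encode a weekday index (0 = Monday ... 6 = Sunday)."""
    vec = [0.0] * num_classes
    vec[day_index] = 1.0
    return vec

print(one_hot(2))  # → [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
```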
Then, when the encoder produces its hidden state output, the hidden state along with the one-hot weekday information is sent through a fully connected layer to produce a new hidden state that contains the weekday information. This state is then passed to the decoder as previously. The below code is largely the same as the code above. Any differences are pointed out by comments. ``` predict_interval = 24 history_interval = 48 trim = predict_interval + history_interval indices = np.stack([np.arange(i, trim+i) for i in range(gb.shape[0] - trim + 1)]) data = np.stack([gb.orders.iloc[indices[i]] for i in range(indices.shape[0])]) # Fetch weekday data for each observation and one-hot encode it weekday = np.stack([gb.weekday.iloc[indices[i][0]] for i in range(indices.shape[0])]) enc = OneHotEncoder() weekday = enc.fit_transform(weekday.reshape(-1, 1)).toarray() X = data[:, :history_interval] y = data[:, history_interval:] X_train = X[:1100] y_train = y[:1100] weekday_train = weekday[:1100] X_val = X[1100:1250] y_val = y[1100:1250] weekday_val = weekday[1100:1250] X_test = X[1250:] y_test = y[1250:] weekday_test = weekday[1250:] y_test_unnorm = y_test.copy() norm_mean = X_train.mean() norm_std = X_train.std() def normalize(arr): return (arr - norm_mean) / norm_std vars = [X_train, y_train, X_val, y_val, X_test, y_test] for var in vars: var[:] = normalize(var) class WoltDataset(Dataset): def __init__(self, X, y, additional): self.X = X self.y = y # Include matrix of additional information in the dataset self.additional = additional def __getitem__(self, idx): return self.X[idx, :], self.y[idx, :], self.additional[idx, :] def __len__(self): return self.X.shape[0] def unscale(self, X): return (X * norm_std) + norm_mean train_dataset = WoltDataset(X_train, y_train, weekday_train) val_dataset = WoltDataset(X_val, y_val, weekday_val) test_dataset = WoltDataset(X_test, y_test, weekday_test) BATCH_SIZE = 32 EPOCHS = 50 LR = 1e-4 HIDDEN_SIZE = 256 train_dataloader = 
DataLoader(train_dataset, batch_size=BATCH_SIZE) val_dataloader = DataLoader(val_dataset, batch_size=BATCH_SIZE) test_dataloader = DataLoader(test_dataset, batch_size=BATCH_SIZE) class WoltEncoderDecoder_additional(nn.Module): def __init__(self, hidden_size, output_size, additional_size): super().__init__() self.encoder = WoltEncoder(hidden_size) self.decoder = WoltDecoder(hidden_size) self.output_size = output_size # Additional fully connected layer to add weekday information self.fc = nn.Sequential( nn.Linear(hidden_size + additional_size, hidden_size), nn.LeakyReLU() ) def forward(self, x, additional): encoder_out, hidden = self.encoder(x) # Add weekday information to time series data to produce new hidden state hidden = self.fc(torch.cat((hidden, additional), dim=1)) N = x.shape[0] outputs = torch.empty(N, self.output_size) curr = x[:, -1] for i in range(self.output_size): curr, hidden = self.decoder(curr, hidden) outputs[:, i] = curr.view(-1) return outputs model = WoltEncoderDecoder_additional(HIDDEN_SIZE, predict_interval, weekday.shape[1]) optimizer = optim.AdamW(model.parameters(), lr=LR) criterion = nn.MSELoss() model = model.to(device) train_losses = [] val_losses = [] best_model = model.state_dict() best_val_loss = np.inf for epoch in range(EPOCHS): #t = tqdm(train_dataloader, desc=f"Epoch: {epoch}") # Uncomment and comment out the next line to get cool progress bars! 
t = train_dataloader cost = 0 # Training for i, (prev, target, additional) in enumerate(t): pred = model(prev.unsqueeze(-1).float().to(device), additional.float().to(device)) loss = criterion(pred, target.float()) loss.backward() optimizer.step() optimizer.zero_grad() cost += float(loss.item() * prev.shape[0]) train_losses.append(cost/len(train_dataset)) # Validation with torch.no_grad(): val_cost = 0 for prev, target, additional in val_dataloader: pred = model(prev.unsqueeze(-1).float().to(device), additional.float().to(device)) loss = criterion(pred, target.float()) val_cost += float(loss.item() * prev.shape[0]) val_loss = val_cost/len(val_dataset) val_losses.append(val_loss) # Keep model with best validation loss if val_loss < best_val_loss: best_model = model.state_dict() best_val_loss = val_loss plt.plot(train_losses, label='Train') plt.plot(val_losses, label='Validation') plt.title("Loss over time", fontsize=16) plt.xlabel("Epoch", fontsize=12) plt.ylabel("Loss", fontsize=12) plt.legend() plt.show() ``` It seems that just like the last model, this one hasn't fully converged and could likewise benefit from an adaptive learning rate and more training. 
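The early stopping mentioned above could be added with a small tracker like the following; this is a sketch, not part of the original training loop, and the patience value is illustrative:

```python
class EarlyStopper:
    """Signal a stop when validation loss hasn't improved for `patience` epochs."""
    def __init__(self, patience=5):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def should_stop(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

stopper = EarlyStopper(patience=2)
for loss in [1.0, 0.8, 0.9, 0.85, 0.95]:
    if stopper.should_stop(loss):
        break  # two epochs without improvement after 0.8
print(stopper.best)  # → 0.8
```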
``` # Restore best model state and run it on the test set torch.cuda.empty_cache() model.load_state_dict(best_model) model.eval() X_test_tensor = torch.FloatTensor(test_dataset.X).unsqueeze(-1).to(device) with torch.no_grad(): rnn_pred = model(X_test_tensor, torch.FloatTensor(weekday_test).to(device)) rnn_pred = rnn_pred.detach().cpu().numpy() predicted = test_dataset.unscale(rnn_pred[0, :]) predicted[predicted < 0] = 0 plt.plot(np.arange(history_interval+1), np.r_[test_dataset.unscale(test_dataset.X[0, :]), test_dataset.unscale(test_dataset.y[0, 0])], label="Input") plt.plot(np.arange(history_interval, history_interval + predict_interval), test_dataset.unscale(test_dataset.y[0, :]), label="True", c="green") plt.plot(np.arange(history_interval, history_interval + predict_interval), predicted, label="Predicted", c="red") plt.title("Predicted Orders", fontsize=15) plt.xlabel("Hour", fontsize=12) plt.ylabel("No. of Orders", fontsize=12) plt.legend() plt.show() ``` For this instance, the results do not differ much from the simpler model - the model still doesn't capture the peaks very well. ``` plot_predictions(y_test_unnorm, rnn_pred, 0) ``` On various runs I was able to obtain results that both beat and fail to beat the simpler model. On average, however, this model seems to be a slight improvement, but randomness plays a large part in the results. The issue of not capturing the daily peaks correctly is clearly more complicated to fix than just adding a little extra information. ``` plot_predictions(y_test_unnorm, rnn_pred, 23) ``` A bit surprisingly, this model tends to perform significantly worse than the simpler one on predictions 24 hours into the future. It might be worth reconsidering the way the additional weekday information is incorporated into the model. Overall, it is clear that this model can be improved upon. It would greatly benefit from much more training data.
Feeding additional non-time-series features into the model would likely help as well. However, even just the additional weekday information made training the model significantly slower - there is definitely a tradeoff between predictive performance and runtime performance here. It is also important to note that this model has been trained to predict a complete day from 00:00 to 23:59. It would likely not work as well if it had to predict, say, Wednesday 12:00 to Thursday 11:59. To allow for this, we could feed the clock-hour information of one or more datapoints to the model as well. # My Background ### Why Wolt? I am looking for an opportunity where I can put my data science skills into practice to deliver impactful business value and benefit stakeholders. An ideal role would allow me to communicate directly with stakeholders and then produce the most suitable solution together with my peers. Data science is central to Wolt's operations, so I am very excited about the idea of working as part of Wolt's data science team. What really excites me about working at Wolt is the thought of what a huge logistical challenge Wolt's operations must be, and how much data science could be used to solve these challenges. Wolt's data science team is world-class - I feel I would have so much to learn from them while hopefully also giving back to the team through my own unique knowledge and experiences. ### My Education I am currently finishing my Data Science Honours BSc degree at the University of Toronto, which is often ranked among the top 10 universities in the world in the field of computer science. As this is a four-year degree program, it is comparable to a BSc+MSc obtained in Europe. I feel confident in my skills in most of the common data science paradigms, but my strongest interests are in deep learning and its uses in regression, classification and computer vision.
Currently, I am focusing a large part of my effort on improving my natural language processing skills and knowledge. At Wolt, I do not have a particular problem I would want to solve or a particular technique I would want to use. What is most important to me is that I get to take on tasks that produce genuinely useful value and impact for Wolt, its partners, and its customers, using whatever tools are necessary to get the job done. ### My Professional Experience Nearly everything I have studied in school I have already put to use in industry through my numerous internships. I have solved problems with great success in domains as different as insurance, renewable energy, telecommunications, and virology. For these problems I have relied on vastly different techniques, ranging from deep learning to plain linear regression to simple visualization analysis. What was common to all of these experiences was the process of communicating with stakeholders to understand what is most important, and translating the issues raised into data science problems. What I would bring to Wolt is a strong ability to understand business needs and how data science can best be used to satisfy them. I have built up my skills and knowledge in a way that allows me to always find suitable tools for whatever problem may arise. ### My Extracurriculars When data science isn't on my mind, I really enjoy playing chess, swimming and working out, playing and watching football, and playing video games. In Toronto, I am also an active member of the local Estonian community, where I frequently volunteer to help host events in any way possible: bartending, lifting stuff backstage or hosting livestreams. ### ---------------------------------------------------------- Thanks again for checking out my assignment submission! I hope to hear back from you soon! #### - Karl
# Levy Stable models of Stochastic Volatility This tutorial demonstrates inference using the Levy [Stable](http://docs.pyro.ai/en/stable/distributions.html#stable) distribution through a motivating example of a non-Gaussian stochastic volatility model. Inference with stable distributions is tricky because the density `Stable.log_prob()` is not defined. In this tutorial we demonstrate two approaches to inference: (i) using the [poutine.reparam](http://docs.pyro.ai/en/latest/poutine.html#pyro.poutine.handlers.reparam) effect to transform models into a tractable form, and (ii) using the likelihood-free loss [EnergyDistance](http://docs.pyro.ai/en/latest/inference_algos.html#pyro.infer.energy_distance.EnergyDistance) with SVI. #### Summary - [Stable.log_prob()](http://docs.pyro.ai/en/stable/distributions.html#stable) is undefined. - Stable inference requires either reparameterization or a likelihood-free loss. - Reparameterization: - The [poutine.reparam()](http://docs.pyro.ai/en/latest/poutine.html#pyro.poutine.handlers.reparam) handler can transform models using various [strategies](http://docs.pyro.ai/en/latest/infer.reparam.html). - The [StableReparam](http://docs.pyro.ai/en/latest/infer.reparam.html#pyro.infer.reparam.stable.StableReparam) strategy can be used for Stable distributions in SVI or HMC. - The [LatentStableReparam](http://docs.pyro.ai/en/latest/infer.reparam.html#pyro.infer.reparam.stable.LatentStableReparam) strategy is a little cheaper, but cannot be used for likelihoods. - The [DiscreteCosineReparam](http://docs.pyro.ai/en/latest/infer.reparam.html#pyro.infer.reparam.discrete_cosine.DiscreteCosine) strategy improves geometry in batched latent time series models. - Likelihood-free loss with SVI: - The [EnergyDistance](http://docs.pyro.ai/en/latest/inference_algos.html#pyro.infer.energy_distance.EnergyDistance) loss allows stable distributions in the guide and in model likelihoods.
#### Table of contents - [Daily S&P data](#data) - [Fitting a single distribution to log returns](#fitting) using `EnergyDistance` - [Modeling stochastic volatility](#modeling) using `poutine.reparam` ## Daily S&P 500 data <a class="anchor" id="data"></a> The following daily closing prices for the S&P 500 were loaded from [Yahoo finance](https://finance.yahoo.com/quote/%5EGSPC/history/). ``` import math import os import torch import pyro import pyro.distributions as dist from matplotlib import pyplot from torch.distributions import constraints from pyro import poutine from pyro.contrib.examples.finance import load_snp500 from pyro.infer import EnergyDistance, Predictive, SVI, Trace_ELBO from pyro.infer.autoguide import AutoDiagonalNormal from pyro.infer.reparam import DiscreteCosineReparam, StableReparam from pyro.optim import ClippedAdam from pyro.ops.tensor_utils import convolve %matplotlib inline assert pyro.__version__.startswith('1.3.0') pyro.enable_validation(True) smoke_test = ('CI' in os.environ) df = load_snp500() dates = df.Date.to_numpy() x = torch.tensor(df["Close"]).float() x.shape pyplot.figure(figsize=(9, 3)) pyplot.plot(x) pyplot.yscale('log') pyplot.ylabel("index") pyplot.xlabel("trading day") pyplot.title("S&P 500 from {} to {}".format(dates[0], dates[-1])); ``` Of interest are the log returns, i.e. the log ratio of price on two subsequent days. ``` pyplot.figure(figsize=(9, 3)) r = (x[1:] / x[:-1]).log() pyplot.plot(r, "k", lw=0.1) pyplot.title("daily log returns") pyplot.xlabel("trading day"); pyplot.figure(figsize=(9, 3)) pyplot.hist(r, bins=200) pyplot.yscale('log') pyplot.ylabel("count") pyplot.xlabel("daily log returns") pyplot.title("Empirical distribution. mean={:0.3g}, std={:0.3g}".format(r.mean(), r.std())); ``` ## Fitting a single distribution to log returns <a class="anchor" id="fitting"></a> Log returns appear to be heavy-tailed. First let's fit a single distribution to the returns. 
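The log returns computed above (`r = (x[1:] / x[:-1]).log()`) are just logs of consecutive price ratios; in plain Python, on hypothetical closing prices:

```python
import math

prices = [100.0, 102.0, 101.0, 103.0]  # hypothetical closing prices
log_returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
print(len(log_returns))  # → 3, one return per pair of consecutive days
```

A convenient property is that log returns telescope: their sum equals the log ratio of the last price to the first.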
To fit the distribution, we'll use a likelihood free statistical inference algorithm [EnergyDistance](http://docs.pyro.ai/en/latest/inference_algos.html#pyro.infer.energy_distance.EnergyDistance), which matches fractional moments of observed data and can handle data with heavy tails. ``` def model(): stability = pyro.param("stability", torch.tensor(1.9), constraint=constraints.interval(0, 2)) skew = 0. scale = pyro.param("scale", torch.tensor(0.1), constraint=constraints.positive) loc = pyro.param("loc", torch.tensor(0.)) with pyro.plate("data", len(r)): return pyro.sample("r", dist.Stable(stability, skew, scale, loc), obs=r) %%time pyro.clear_param_store() pyro.set_rng_seed(1234567890) num_steps = 1 if smoke_test else 201 optim = ClippedAdam({"lr": 0.1, "lrd": 0.1 ** (1 / num_steps)}) svi = SVI(model, lambda: None, optim, EnergyDistance()) losses = [] for step in range(num_steps): loss = svi.step() losses.append(loss) if step % 20 == 0: print("step {} loss = {}".format(step, loss)) print("-" * 20) pyplot.figure(figsize=(9, 3)) pyplot.plot(losses) pyplot.yscale("log") pyplot.ylabel("loss") pyplot.xlabel("SVI step") for name, value in sorted(pyro.get_param_store().items()): if value.numel() == 1: print("{} = {:0.4g}".format(name, value.squeeze().item())) samples = poutine.uncondition(model)().detach() pyplot.figure(figsize=(9, 3)) pyplot.hist(samples, bins=200) pyplot.yscale("log") pyplot.xlabel("daily log returns") pyplot.ylabel("count") pyplot.title("Posterior predictive distribution"); ``` This is a poor fit, but that was to be expected since we are mixing all time steps together: we would expect this to be a scale-mixture of distributions (Normal, or Stable), but are modeling it as a single distribution (Stable in this case). ## Modeling stochastic volatility <a class="anchor" id="modeling"></a> We'll next fit a stochastic volatility model. 
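For intuition about the loss used in the fit above, the basic (non-fractional) sample-based energy distance can be written in a few lines; Pyro's `EnergyDistance` generalizes this with fractional moments to cope with heavy tails:

```python
def energy_distance(xs, ys):
    """Sample-based energy distance: 2*E|X-Y| - E|X-X'| - E|Y-Y'|."""
    def mean_abs_diff(a, b):
        return sum(abs(x - y) for x in a for y in b) / (len(a) * len(b))
    return 2 * mean_abs_diff(xs, ys) - mean_abs_diff(xs, xs) - mean_abs_diff(ys, ys)

print(energy_distance([1.0, 2.0], [1.0, 2.0]))  # → 0.0 for identical samples
```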
Let's begin with a constant volatility model where the log price $p$ follows Brownian motion $$ \log p_t = \log p_{t-1} + w_t \sqrt{h} $$ where $w_t$ is a sequence of standard white noise. We can rewrite this model in terms of the log returns $r_t=\log(p_t\,/\,p_{t-1})$: $$ r_t = w_t \sqrt{h} $$ Now to account for [volatility clustering](https://en.wikipedia.org/wiki/Volatility_clustering) we can generalize to a stochastic volatility model where the volatility $h$ depends on time $t$. Among the simplest such models is one where $h_t$ follows geometric Brownian motion $$ \log h_t = \log h_{t-1} + \sigma v_t $$ where again $v_t$ is a sequence of standard white noise. The entire model thus consists of a geometric Brownian motion $h_t$ that determines the diffusion rate of another geometric Brownian motion $p_t$: $$ \log h_t = \log h_{t-1} + \sigma v_t \\ \log p_t = \log p_{t-1} + w_t \sqrt{h_t} $$ Usually $v_t$ and $w_t$ are both Gaussian. We will generalize to a Stable distribution for $w_t$, learning three parameters (stability, skew, and location), but still scaling by $\sqrt{h_t}$. Our Pyro model will sample the increments $v_t$ and record the computation of $\log h_t$ via [pyro.deterministic](http://docs.pyro.ai/en/stable/primitives.html#pyro.deterministic). Note that there are many ways of implementing this model in Pyro, and geometry can vary depending on the implementation. The following version seems to have good geometry when combined with reparameterizers. ``` def model(data): # Note we avoid plates because we'll later reparameterize along the time axis using # DiscreteCosineReparam, breaking independence. This requires .unsqueeze()ing scalars.
h_0 = pyro.sample("h_0", dist.Normal(0, 1)).unsqueeze(-1) sigma = pyro.sample("sigma", dist.LogNormal(0, 1)).unsqueeze(-1) v = pyro.sample("v", dist.Normal(0, 1).expand(data.shape).to_event(1)) log_h = pyro.deterministic("log_h", h_0 + sigma * v.cumsum(dim=-1)) sqrt_h = log_h.mul(0.5).exp().clamp(min=1e-8, max=1e8) # Observed log returns, assumed to be a Stable distribution scaled by sqrt(h). r_loc = pyro.sample("r_loc", dist.Normal(0, 1e-2)).unsqueeze(-1) r_skew = pyro.sample("r_skew", dist.Uniform(-1, 1)).unsqueeze(-1) r_stability = pyro.sample("r_stability", dist.Uniform(0, 2)).unsqueeze(-1) pyro.sample("r", dist.Stable(r_stability, r_skew, sqrt_h, r_loc * sqrt_h).to_event(1), obs=data) ``` We use two reparameterizers: [StableReparam](http://docs.pyro.ai/en/latest/infer.reparam.html#pyro.infer.reparam.stable.StableReparam) to handle the `Stable` likelihood (since `Stable.log_prob()` is undefined), and [DiscreteCosineReparam](http://docs.pyro.ai/en/latest/infer.reparam.html#pyro.infer.reparam.discrete_cosine.DiscreteCosineReparam) to improve geometry of the latent Gaussian process for `v`. We'll then use `reparam_model` for both inference and prediction. 
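As a sanity check of the generative math above, the model can be simulated in plain Python with Gaussian noise for $w_t$ (the Stable case generalizes it); all constants here are illustrative:

```python
import math
import random

def simulate_sv(T=100, sigma=0.1, seed=0):
    """Simulate log h_t = log h_{t-1} + sigma * v_t and r_t = w_t * sqrt(h_t)."""
    rng = random.Random(seed)
    log_h, returns = 0.0, []
    for _ in range(T):
        log_h += sigma * rng.gauss(0, 1)           # log-volatility random walk
        returns.append(rng.gauss(0, 1) * math.exp(0.5 * log_h))  # scaled return
    return returns

r_sim = simulate_sv()
print(len(r_sim))  # → 100
```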
``` reparam_model = poutine.reparam(model, {"v": DiscreteCosineReparam(), "r": StableReparam()}) %%time pyro.clear_param_store() pyro.set_rng_seed(1234567890) num_steps = 1 if smoke_test else 1001 optim = ClippedAdam({"lr": 0.05, "betas": (0.9, 0.99), "lrd": 0.1 ** (1 / num_steps)}) guide = AutoDiagonalNormal(reparam_model) svi = SVI(reparam_model, guide, optim, Trace_ELBO()) losses = [] for step in range(num_steps): loss = svi.step(r) / len(r) losses.append(loss) if step % 50 == 0: median = guide.median() print("step {} loss = {:0.6g}".format(step, loss)) print("-" * 20) for name, (lb, ub) in sorted(guide.quantiles([0.325, 0.675]).items()): if lb.numel() == 1: lb = lb.detach().squeeze().item() ub = ub.detach().squeeze().item() print("{} = {:0.4g} ± {:0.4g}".format(name, (lb + ub) / 2, (ub - lb) / 2)) pyplot.figure(figsize=(9, 3)) pyplot.plot(losses) pyplot.ylabel("loss") pyplot.xlabel("SVI step") pyplot.xlim(0, len(losses)) pyplot.ylim(min(losses), 20) ``` It appears the log returns exhibit very little skew, but exhibit a stability parameter slightly but significantly less than 2. This contrasts the usual Normal model corresponding to a Stable distribution with skew=0 and stability=2. We can now visualize the estimated volatility: ``` fig, axes = pyplot.subplots(2, figsize=(9, 5), sharex=True) pyplot.subplots_adjust(hspace=0) axes[1].plot(r, "k", lw=0.2) axes[1].set_ylabel("log returns") axes[1].set_xlim(0, len(r)) # We will pull out median log returns using the autoguide's .median() and poutines. 
with torch.no_grad(): pred = Predictive(reparam_model, guide=guide, num_samples=20, parallel=True)(r) log_h = pred["log_h"] axes[0].plot(log_h.median(0).values, lw=1) axes[0].fill_between(torch.arange(len(log_h[0])), log_h.kthvalue(2, dim=0).values, log_h.kthvalue(18, dim=0).values, color='red', alpha=0.5) axes[0].set_ylabel("log volatility") stability = pred["r_stability"].median(0).values.item() axes[0].set_title("Estimated index of stability = {:0.4g}".format(stability)) axes[1].set_xlabel("trading day"); ``` Observe that volatility roughly follows areas of large absolute log returns. Note that the uncertainty is underestimated, since we have used an approximate `AutoDiagonalNormal` guide. For more precise uncertainty estimates, one could use [HMC](http://docs.pyro.ai/en/stable/mcmc.html#hmc) or [NUTS](http://docs.pyro.ai/en/stable/mcmc.html#nuts) inference.
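The median-and-band summary used in the plot (via `.median(0)` and `.kthvalue(k, dim=0)`) amounts to per-timestep order statistics across the posterior draws; a torch-free sketch:

```python
def quantile_band(samples, lo_k, hi_k):
    """Per-timestep (median, k-th lowest, k-th highest) across sample draws."""
    band = []
    for t in range(len(samples[0])):
        col = sorted(s[t] for s in samples)
        band.append((col[len(col) // 2], col[lo_k - 1], col[hi_k - 1]))
    return band

draws = [[1.0, 4.0], [3.0, 2.0], [2.0, 6.0]]  # 3 draws x 2 timesteps
print(quantile_band(draws, 1, 3))  # → [(2.0, 1.0, 3.0), (4.0, 2.0, 6.0)]
```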
### Send email client #### Importing all dependencies ``` # ! /usr/bin/python import smtplib from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText from email.header import Header from email.utils import formataddr import getpass ``` #### User details function ``` def user(): # ORG_EMAIL = "@gmail.com" # FROM_EMAIL = "your mail" + ORG_EMAIL # FROM_PWD = "your password" FROM_EMAIL = input("Email: ") FROM_PWD = getpass.getpass("Password: ") return FROM_EMAIL, FROM_PWD ``` ### Login function In this function we call the user details function to get the username and password, then we use those details for the SMTP login. **SMTP is the Simple Mail Transfer Protocol.** ``` def login(): gmail_user, gmail_pwd = user() # calling the user function to get user details smtpserver = smtplib.SMTP("smtp.gmail.com", 587) # declaring the Gmail SMTP server address and port smtpserver.starttls() # starting TLS; Transport Layer Security (TLS) is a cryptographic protocol that provides communications security over a computer network smtpserver.login(gmail_user, gmail_pwd) # login to the Gmail server using TLS print('Login successful') return smtpserver ``` ### Send mail function. This function takes 5 arguments: 1. Login data. 2. To email 3. From email 4. HTML-format message 5. Normal text **The HTML message is best and preferred.** ``` # text = "Hi!\n5633222222222222222http://www.python.org" # html = """\ # <html> # <head></head> # <body> # <p>Hi!<br> # How are you?<br> # Here is the <a href="http://www.python.org">link</a> you wanted. # </p> # </body> # </html> # """ def Send_Mail(smtpserver, TO_EMAIL, text=None, html=None, subject='Subject missing', FROM_EMAIL='Shahariar'): # Create message container - the correct MIME type is multipart/alternative. msg = MIMEMultipart('alternative') # In turn, use text/plain and text/html parts within the multipart/alternative part.
msg['Subject'] = subject # subject of the message msg['From'] = formataddr((str(Header(FROM_EMAIL, 'utf-8')), FROM_EMAIL)) # adding a custom sender name msg['To'] = TO_EMAIL # assigning the receiver email part1 = MIMEText(text, 'plain') # adding the plain-text part of the mail part2 = MIMEText(html, 'html') # adding the HTML part of the mail # Attach parts into message container. # According to RFC 2046, the last part of a multipart message, in this case # the HTML message, is best and preferred. msg.attach(part1) # attach plain text msg.attach(part2) # attach HTML text # sendmail function takes 3 arguments: sender's address, recipient's address # and message to send - here it is sent as one string. try: smtpserver.sendmail(FROM_EMAIL, TO_EMAIL, msg.as_string()) print("Message sent") smtpserver.quit() # stopping the server except Exception as e: print(e) ```
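The multipart assembly above can be exercised without any network connection — `as_string()` serializes both parts inside the multipart/alternative container:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart('alternative')
msg['Subject'] = 'Hello'
msg.attach(MIMEText('plain body', 'plain'))       # fallback part
msg.attach(MIMEText('<b>html body</b>', 'html'))  # preferred part goes last
raw = msg.as_string()
print('multipart/alternative' in raw)  # → True
```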
# healthy versus severe ``` import re import numpy as np import pandas as pd import matplotlib.pyplot as plt import pydotplus from IPython.display import Image from six import StringIO import matplotlib.image as mpimg #%pylab inline from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor, plot_tree, export_graphviz from sklearn import preprocessing, metrics from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report, confusion_matrix, mean_squared_error #!pip install biopython from Bio import Entrez from Bio import SeqIO url = "healthy_versus_severe/healthy.csv" healthy = pd.read_csv(url, sep='\t') healthy['type'] = 'healthy' healthy.shape[0] healthy.sample(n=2) healthy.columns url = "healthy_versus_severe/severe.csv" severe = pd.read_csv(url, sep=',' ) severe['type'] = 'severe' severe.shape[0] severe.sample(n=2) severe.columns df = pd.concat([healthy, severe], ignore_index=True) df.tail() def getChromossome( ncbi_id ): if "chr" in ncbi_id: return ncbi_id else: Entrez.email = "waldeyr@gmail.com" with Entrez.efetch( db="nucleotide", rettype="gb", id=ncbi_id ) as handle: record = SeqIO.read(handle, "gb") for f in record.features: if f.qualifiers['chromosome'][0]: return "chr" + str(f.qualifiers['chromosome'][0]) else: return ncbi_id df['Region'] = df['Region'].apply(lambda x: getChromossome(x)) def setRegion(Region): if Region == 'chrX': return 23 # chromossome X if Region == 'chrY': return 24 # chromossome Y if Region == 'chrM': return 25 # Mitochondrial return re.sub('chr', '', Region) df['Region'] = df['Region'].apply(lambda x: setRegion(str(x))) df = df.fillna(int(0)) # all NaN fields are strings type, so they will be factorize later and the zero will be a category df.Region = pd.factorize(df.Region, na_sentinel=None)[0] df.subs = pd.factorize(df.subs, na_sentinel=None)[0] df.defSubs = pd.factorize(df.defSubs, na_sentinel=None)[0] df.sym = pd.factorize(df.sym, na_sentinel=None)[0] df.ensembl_id = 
pd.factorize(df.ensembl_id, na_sentinel=None)[0] df.type = pd.factorize(df.type, na_sentinel=None)[0] df.genetic_var = pd.factorize(df.genetic_var, na_sentinel=None)[0] df.most_severe_cons = pd.factorize(df.most_severe_cons, na_sentinel=None)[0] df.aa_change = pd.factorize(df.aa_change, na_sentinel=None)[0] df.codons_change = pd.factorize(df.codons_change, na_sentinel=None)[0] df.all_cons = pd.factorize(df.all_cons, na_sentinel=None)[0] df.most_severe_cons = pd.factorize(df.most_severe_cons, na_sentinel=None)[0] df = df.drop(['Unnamed: 0'], axis=1) df.tail() y = df['type'].values y X = df.drop(['type'], axis=1) X.columns X.dtypes X X_treino, X_teste, y_treino, y_teste = train_test_split(X, y, test_size = 0.1, shuffle = True, random_state = 1) arvore = DecisionTreeClassifier(criterion='entropy', max_depth=10, min_samples_leaf=30, random_state=0) modelo = arvore.fit(X_treino, y_treino) %pylab inline previsao = arvore.predict(X_teste) np.sqrt(mean_squared_error(y_teste, previsao)) pylab.figure(figsize=(25,20)) plot_tree(arvore, feature_names=X_treino.columns) # Apply the trained model to the test set y_predicoes = modelo.predict(X_teste) # Model evaluation print(f"Decision tree accuracy: {metrics.accuracy_score(y_teste, y_predicoes)}") print(classification_report(y_teste, y_predicoes)) ```
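The chromosome-to-number mapping performed by `setRegion` earlier can be summarized as follows; this standalone sketch also casts the numeric result to `int`, which the notebook version leaves as a string:

```python
import re

def set_region(region):
    """Map chromosome labels to numeric codes: X=23, Y=24, mitochondrial=25."""
    special = {'chrX': 23, 'chrY': 24, 'chrM': 25}
    if region in special:
        return special[region]
    return int(re.sub('chr', '', region))

print(set_region('chr7'), set_region('chrX'), set_region('chrM'))  # → 7 23 25
```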
``` import base64 import imageio import IPython import matplotlib import matplotlib.pyplot as plt import PIL.Image import pyvirtualdisplay import tensorflow as tf from tf_agents.agents.dqn import dqn_agent from tf_agents.agents.dqn import q_network from tf_agents.drivers import dynamic_step_driver from tf_agents.environments import suite_gym from tf_agents.environments import tf_py_environment from tf_agents.environments import trajectory from tf_agents.metrics import metric_utils, tf_metrics from tf_agents.policies import random_tf_policy from tf_agents.replay_buffers import tf_uniform_replay_buffer from tf_agents.utils import common tf.compat.v1.enable_v2_behavior() display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start() # hyperparameters env_name = 'CartPole-v0' num_iterations = 20000 initial_collect_steps = 1000 collect_steps_per_iteration = 1 replay_buffer_capacity = 100000 fc_layer_params = (100,) batch_size = 64 learning_rate = 1e-3 log_interval = 200 num_eval_episodes = 10 eval_interval = 1000 env = suite_gym.load(env_name) env.reset() # PIL.Image.fromarray(env.render()) env.time_step_spec().observation env.action_spec() time_step = env.reset() time_step action = 1 next_time_step = env.step(action) next_time_step train_py_env = suite_gym.load(env_name) eval_py_env = suite_gym.load(env_name) train_env = tf_py_environment.TFPyEnvironment(train_py_env) eval_env = tf_py_environment.TFPyEnvironment(eval_py_env) q_net = q_network.QNetwork( train_env.observation_spec(), train_env.action_spec(), fc_layer_params=fc_layer_params, ) optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate) train_step_counter = tf.Variable(0) tf_agent = dqn_agent.DqnAgent( train_env.time_step_spec(), train_env.action_spec(), q_network=q_net, optimizer=optimizer, train_step_counter=train_step_counter, td_errors_loss_fn=dqn_agent.element_wise_squared_loss, ) tf_agent.initialize() eval_policy = tf_agent.policy collect_policy = tf_agent.collect_policy 
eval_policy collect_policy random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(), train_env.action_spec()) def compute_avg_return(environment, policy, num_episodes=10): total_return = 0.0 for _ in range(num_episodes): time_step = environment.reset() episode_return = 0.0 while not time_step.is_last(): action_step = policy.action(time_step) time_step = environment.step(action_step.action) episode_return += time_step.reward total_return += episode_return avg_return = total_return / num_episodes return avg_return.numpy()[0] compute_avg_return(eval_env, random_policy, num_eval_episodes) replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer( data_spec=tf_agent.collect_data_spec, batch_size=train_env.batch_size, max_length=replay_buffer_capacity ) def collect_step(environment, policy): time_step = environment.current_time_step() action_step = policy.action(time_step) next_time_step = environment.step(action_step.action) traj = trajectory.from_transition(time_step, action_step, next_time_step) replay_buffer.add_batch(traj) for _ in range(initial_collect_steps): collect_step(train_env, random_policy) dataset = replay_buffer.as_dataset( num_parallel_calls=3, sample_batch_size=batch_size, num_steps=2 ).prefetch(3) iterator = iter(dataset) %%time tf_agent.train = common.function(tf_agent.train) tf_agent.train_step_counter.assign(0) avg_return = compute_avg_return(eval_env, tf_agent.policy, num_eval_episodes) returns = [avg_return] for _ in range(num_iterations): for _ in range(collect_steps_per_iteration): collect_step(train_env, tf_agent.collect_policy) experience, unused_info = next(iterator) train_loss = tf_agent.train(experience) step = tf_agent.train_step_counter.numpy() if step % log_interval == 0: print('step = {0}: loss = {1}'.format(step, train_loss.loss)) if step % eval_interval == 0: avg_return = compute_avg_return(eval_env, tf_agent.policy, num_eval_episodes) print('step = {0}: Average Return = {1}'.format(step, avg_return)) 
returns.append(avg_return) steps = range(0, num_iterations+1, eval_interval) plt.plot(steps, returns) plt.ylabel('Average Return') plt.xlabel('Step') plt.ylim(top=250) ```
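The evaluation metric above (`compute_avg_return`) is worth understanding on its own. Below is a framework-free sketch of the same averaging logic; `MockEnv` and `compute_avg_return_simple` are hypothetical names for illustration, not part of TF-Agents:

```python
def compute_avg_return_simple(env_factory, policy, num_episodes=10):
    """Average undiscounted return of `policy` over `num_episodes` episodes."""
    total_return = 0.0
    for _ in range(num_episodes):
        env = env_factory()
        done = False
        episode_return = 0.0
        while not done:
            reward, done = env.step(policy(env))
            episode_return += reward
        total_return += episode_return
    return total_return / num_episodes


class MockEnv:
    """Toy episodic environment: three steps of reward 1.0, then done."""
    def __init__(self):
        self.t = 0

    def step(self, action):
        self.t += 1
        return 1.0, self.t >= 3


avg = compute_avg_return_simple(MockEnv, lambda env: 0, num_episodes=5)
print(avg)  # 3.0 -- every episode collects three rewards of 1.0
```

The TF-Agents version does the same thing, except that stepping and policy evaluation go through TensorFlow tensors and `TimeStep` objects.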
``` !pip install --upgrade tables !pip install eli5 !pip install xgboost import pandas as pd import numpy as np from sklearn.dummy import DummyRegressor from sklearn.tree import DecisionTreeRegressor from sklearn.ensemble import RandomForestRegressor import xgboost as xgb from sklearn.metrics import mean_absolute_error as mean from sklearn.model_selection import cross_val_score, KFold import eli5 from eli5.sklearn import PermutationImportance cd '/content/drive/My Drive/Colab Notebook/dw_matrix/matrix_2/dw_matrix_car' df = pd.read_hdf('data/car.h5') df.shape ``` ## Feature Engineering ``` SUFFIX_CAT ='__cat' for feat in df.columns: if isinstance(df[feat][0],list): continue factorize_values = df[feat].factorize()[0] if SUFFIX_CAT in feat: df[feat] = factorize_values else: df[feat + SUFFIX_CAT] = factorize_values cat_feats = [x for x in df.columns if SUFFIX_CAT in x ] cat_feats = [x for x in cat_feats if 'price' not in x ] len(cat_feats) def run_model(model, feats): X = df[feats].values y = df['price_value'].values scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error') return np.mean(scores),np.std(scores) ``` ## Decision Tree ``` run_model (DecisionTreeRegressor(max_depth=5), cat_feats) ``` ## Random Forest ``` model=RandomForestRegressor(max_depth=5, n_estimators=50) run_model (model, cat_feats) ``` ## XGBoost ``` xgb_params = { 'max_depth' : 5, 'n_estimators' : 50, 'learning_rate' :0.1, 'seed':0 } run_model (xgb.XGBRegressor(**xgb_params), cat_feats) X = df[cat_feats].values y = df['price_value'].values m = xgb.XGBRegressor(max_depth=5, n_estimators=50, learning_rate=0.1, seed=0) m.fit(X,y) imp = PermutationImportance(m, random_state=0).fit(X,y) eli5.show_weights(imp, feature_names= cat_feats) len(cat_feats) feats = ['param_napęd__cat', 'param_rok-produkcji__cat', 'param_stan__cat', 'param_skrzynia-biegów__cat', 'param_faktura-vat__cat', 'param_moc__cat', 'param_marka-pojazdu__cat','feature_kamera-cofania__cat', 'param_typ__cat', 'param_pojemność-skokowa__cat', 'seller_name__cat',
'feature_wspomaganie-kierownicy__cat', 'param_model-pojazdu__cat', 'param_wersja__cat','param_kod-silnika__cat', 'feature_system-start-stop__cat', 'feature_asystent-pasa-ruchu__cat', 'feature_czujniki-parkowania-przednie__cat', 'feature_łopatki-zmiany-biegów__cat', 'feature_regulowane-zawieszenie__cat'] len(feats) run_model (xgb.XGBRegressor(**xgb_params), feats) df['param_rok-produkcji'] = df['param_rok-produkcji'].map(lambda x: -1 if str(x) == 'None' else int(x)) feats = ['param_napęd__cat', 'param_rok-produkcji', 'param_stan__cat', 'param_skrzynia-biegów__cat', 'param_faktura-vat__cat', 'param_moc__cat', 'param_marka-pojazdu__cat','feature_kamera-cofania__cat', 'param_typ__cat', 'param_pojemność-skokowa__cat', 'seller_name__cat', 'feature_wspomaganie-kierownicy__cat', 'param_model-pojazdu__cat', 'param_wersja__cat','param_kod-silnika__cat', 'feature_system-start-stop__cat', 'feature_asystent-pasa-ruchu__cat', 'feature_czujniki-parkowania-przednie__cat', 'feature_łopatki-zmiany-biegów__cat', 'feature_regulowane-zawieszenie__cat'] run_model (xgb.XGBRegressor(**xgb_params),feats) df['param_moc'] = df['param_moc'].map(lambda x: -1 if str(x) == 'None' else int(x.split(' ')[0])) feats = ['param_napęd__cat', 'param_rok-produkcji', 'param_stan__cat', 'param_skrzynia-biegów__cat', 'param_faktura-vat__cat', 'param_moc', 'param_marka-pojazdu__cat','feature_kamera-cofania__cat', 'param_typ__cat', 'param_pojemność-skokowa__cat', 'seller_name__cat', 'feature_wspomaganie-kierownicy__cat', 'param_model-pojazdu__cat', 'param_wersja__cat','param_kod-silnika__cat', 'feature_system-start-stop__cat', 'feature_asystent-pasa-ruchu__cat', 'feature_czujniki-parkowania-przednie__cat', 'feature_łopatki-zmiany-biegów__cat', 'feature_regulowane-zawieszenie__cat'] run_model (xgb.XGBRegressor(**xgb_params),feats) df['param_pojemność-skokowa'] = df['param_pojemność-skokowa'].map(lambda x:
-1 if str(x) == 'None' else int( str(x).split('cm')[0].replace(' ', ''))) feats = ['param_napęd__cat', 'param_rok-produkcji', 'param_stan__cat', 'param_skrzynia-biegów__cat', 'param_faktura-vat__cat', 'param_moc', 'param_marka-pojazdu__cat','feature_kamera-cofania__cat', 'param_typ__cat', 'param_pojemność-skokowa', 'seller_name__cat', 'feature_wspomaganie-kierownicy__cat', 'param_model-pojazdu__cat', 'param_wersja__cat','param_kod-silnika__cat', 'feature_system-start-stop__cat', 'feature_asystent-pasa-ruchu__cat', 'feature_czujniki-parkowania-przednie__cat', 'feature_łopatki-zmiany-biegów__cat', 'feature_regulowane-zawieszenie__cat'] run_model (xgb.XGBRegressor(**xgb_params),feats) ```
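The `__cat` columns above come from `pandas.factorize`, which maps each distinct value to an integer code and missing values to -1. A minimal, self-contained illustration on toy data (not the car dataset):

```python
import pandas as pd

toy = pd.DataFrame({'fuel': ['diesel', 'petrol', 'diesel', None]})
SUFFIX_CAT = '__cat'

for feat in list(toy.columns):
    # factorize() returns (codes, uniques); NaN/None get code -1
    toy[feat + SUFFIX_CAT] = toy[feat].factorize()[0]

print(toy['fuel__cat'].tolist())  # [0, 1, 0, -1]
```

Note that factorize codes carry no order, which is why tree-based models (decision trees, random forests, XGBoost) are a natural fit here.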
<a href="https://colab.research.google.com/github/LeonardoQZ/handson-ml2/blob/master/CaliforniaGeostats.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # California Housing with Geostatistics ``` import pandas as pd import numpy as np from sklearn.datasets import fetch_california_housing from pandas.plotting import scatter_matrix ``` # Load, inspect and clean up the data ``` housing_dataset = fetch_california_housing() print(housing_dataset['DESCR']) source_columns = housing_dataset['feature_names'] + ['target'] housing_df = pd.DataFrame(data= np.c_[housing_dataset['data'], housing_dataset['target']], columns= source_columns) housing_df.describe() import matplotlib.pyplot as plt housing_df.plot(kind="scatter", x="Longitude", y="Latitude", alpha=0.4, s=housing_df["Population"]/100, label="population", figsize=(10,7), c="target", cmap=plt.get_cmap("jet"), colorbar=True, sharex=False) plt.legend() plt.show() ``` There is clearly clipping in the target data. It would make sense to remove the maximum value from the analysis. We will re-index to get a simple range compatible with downstream transformations. ``` fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12, 6)) housing_df['target'].hist(bins=100, ax=axes[0]) clipped_indexes = housing_df[ housing_df['target'].ge(5) ].index housing_clipped_df = housing_df.drop(index=clipped_indexes).reset_index(drop=True) housing_clipped_df['target'].hist(bins=100, ax=axes[1]); housing_clipped_df.info() housing_clipped_df.describe() ``` # Define training set with stratified sampling Before doing anything else, let us split the data into train and test sets. What kind of sampling to use is a non-trivial question. There is a high correlation between the median income and the target price. ``` scatter_matrix(housing_clipped_df[['MedInc', 'target']], figsize=(12, 8)); ``` We want to guarantee that the test set is representative of the income distribution.
In practical terms, we would want estimation errors for very-high-income populations to have acceptable confidence intervals. To achieve this we can stratify our sampling by breaking the dataset up into discrete median-income categories. The estimation error of statistics on each of these categories will be smaller if we randomly sample independently from them than if we randomly sample without stratification. Let us compare the strata proposed by Geron and by Pew Research with proportional quartile and decile stratification to quantify estimation errors. ``` income_category_geron, income_geron_bins = pd.cut(housing_clipped_df["MedInc"], bins=[0., 1.5, 3.0, 4.5, 6., np.inf], retbins=True, labels=[1, 2, 3, 4, 5]) income_category_pew, income_pew_bins = pd.cut(housing_clipped_df["MedInc"], bins=[0., 3.1, 4.2, 12.6, 18.8, np.inf], retbins=True, labels=[1, 2, 3, 4, 5]) income_category_quartiles, income_quartile_bins = pd.qcut( housing_clipped_df["MedInc"], 4, retbins=True, labels=[str(i+1) for i in range(4)]) income_category_deciles, income_decile_bins = pd.qcut( housing_clipped_df["MedInc"], 10, retbins=True, labels=[str(i+1) for i in range(10)]) fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(12, 6)) income_category_geron.hist(bins=10, ax=axes[0,0]) income_category_pew.hist(bins=10, ax=axes[0,1]) income_category_quartiles.hist(bins=10, ax=axes[1,0]) income_category_deciles.hist(bins=10, ax=axes[1,1]) plt.show() income_categories = pd.DataFrame({ 'Geron' : income_category_geron, 'Pew' : income_category_pew, 'Quartiles' : income_category_quartiles, 'Deciles' : income_category_deciles }) income_categories.describe() housing_categorized = housing_clipped_df.copy() housing_categorized[income_categories.keys()] = income_categories from sklearn.model_selection import train_test_split from sklearn.model_selection import StratifiedShuffleSplit train_set_random, test_set_random = train_test_split(housing_categorized, test_size=0.2, random_state=42) split = StratifiedShuffleSplit(n_splits=1,
test_size=0.2, random_state=42) train_sets = {"Random" : train_set_random } test_sets = {"Random" : test_set_random } for key in income_categories: for train_index, test_index in split.split(housing_categorized, housing_categorized[key]): train_sets[key] = housing_categorized.loc[train_index] test_sets[key] = housing_categorized.loc[test_index] def generateCategoryComparison(income_categories): comparison = {} for cat in income_categories: overall = f"Overall {cat}" random = f"Random {cat}" stratified = f"Stratified {cat}" randErr = f"RandErr {cat}" stratErr = f"StratErr {cat}" compare_props = pd.DataFrame({ overall : housing_categorized[cat].value_counts() / len(housing_categorized), random : train_sets['Random'][cat].value_counts() / len(train_sets['Random']), stratified : train_sets[cat][cat].value_counts() / len(train_sets[cat]) }).sort_index() compare_props[randErr] = 100*np.abs(compare_props[random]/compare_props[overall] - 1) compare_props[stratErr] = 100*np.abs(compare_props[stratified]/compare_props[overall] - 1) comparison[cat] = compare_props return comparison cat_comparison = generateCategoryComparison(income_categories) cat_comparison ``` The income stratification model of Pew Research gives the lowest sampling-bias errors, but on this particular dataset there are no units with a median income in category 5 and very few in category 4. Let us adapt the categories a bit.
``` income_strata, income_strata_bins = pd.cut(housing_clipped_df["MedInc"], bins=[0., 3.1, 4.2, 7.5, 9.3, np.inf], retbins=True, labels=[1, 2, 3, 4, 5]) housing_categorized["Strata"] = income_strata train_set_random, test_set_random = train_test_split(housing_categorized, test_size=0.2, random_state=42) train_sets['Random']=train_set_random test_sets['Random']=test_set_random for train_index, test_index in split.split(housing_categorized, housing_categorized["Strata"]): train_sets["Strata"] = housing_categorized.loc[train_index] test_sets["Strata"] = housing_categorized.loc[test_index] cat_comparison_final = generateCategoryComparison(['Pew', 'Strata']) cat_comparison_final income_strata.value_counts() ``` This seems like a good starting point for our train and test datasets. Let us make a copy with the categories removed. ``` train_set = train_sets['Strata'].copy() test_set = test_sets['Strata'].copy() train_set.drop(income_categories.columns, axis=1, inplace=True) train_set.drop('Strata', axis=1, inplace=True) test_set.drop(income_categories.columns, axis=1, inplace=True) test_set.drop('Strata', axis=1, inplace=True) ``` # Visualization Let us get a bird's-eye view of the correlations in the data. ``` scatter_matrix(train_set, figsize=(12, 8)); correlations = train_set.corr() print(correlations["target"].sort_values(ascending=False)) ``` Perhaps by combining some of the features in a sensible way we can find other correlations. ``` rooms_per_capita = train_set['AveRooms']/train_set['Population'] bedroom_ratio = train_set['AveBedrms']/train_set['AveRooms'] train_set_xtra = train_set.copy() train_set_xtra['RoomsPerCapita'] = rooms_per_capita train_set_xtra['BedroomRatio'] = bedroom_ratio correlations_xtra = train_set_xtra.corr() print(correlations_xtra["target"].sort_values(ascending=False)) corr_threshold = 0.15 high_corr = correlations_xtra["target"].sort_values(ascending=False).abs().ge(corr_threshold) high_corr_cols=list(high_corr[high_corr].keys()) scatter_matrix(train_set_xtra[high_corr_cols], figsize=(12, 8)); ```
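The stratification above hinges on `pandas.cut`; a small sketch with toy incomes and the same adapted bin edges shows how values map to strata:

```python
import pandas as pd

incomes = pd.Series([1.2, 3.5, 5.0, 8.0, 12.0])
# Same bin edges as the adapted "Strata" categories above
strata = pd.cut(incomes, bins=[0., 3.1, 4.2, 7.5, 9.3, float('inf')],
                labels=[1, 2, 3, 4, 5])
print(strata.tolist())  # [1, 2, 3, 4, 5] -- one toy income per stratum
```

Each value falls into the half-open interval (left, right] of its bin, so 3.1 would still land in stratum 1 while 3.11 lands in stratum 2.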
# Lab Three Ryan Gonfiantini --- For this lab we're going to be making and using a bunch of functions. Our Goals are: - Searching our Documentation - Using built in functions - Making our own functions - Combining functions - Structuring solutions ``` # For the following built in functions we didn't touch on them in class. I want you to look for them in the python documentation and implement them. # I want you to find a built in function to SWAP CASE on a string. Print it. # For example the string "HeY thERe HowS iT GoING" turns into "hEy THerE hOWs It gOing" sample_string = "HeY thERe HowS iT GoING" print(sample_string.swapcase()) # I want you to find a built in function to CENTER a string and pad the sides with 4 dashes(-) a side. Print it. # For example the string "Hey There" becomes "----Hey There----" sample_string = "Hey There" print(sample_string.center(17,"-")) # I want you to find a built in function to PARTITION a string. Print it. # For example the string "abcdefg.hijklmnop" would come out to be ["abcdefg",".","hijklmnop"] sample_string = "abcdefg.hijklmnop" print(sample_string.partition(".")) # I want you to write a function that will take in a number and raise it to the power given. # For example if given the numbers 2 and 3. The math that the function should do is 2^3 and should print out or return 8. Print the output. def power(number, exponent) -> int: return number ** exponent problem_one = power(2,3) print(problem_one) # I want you to write a function that will take in a list and see how many times a given number is in the list. # For example if the array given is [2,3,5,2,3,6,7,8,2] and the number given is 2 the function should print out or return 3. Print the output. array = [2,3,5,2,3,6,7,8,2] def number_list(array, target): count = 0 for number in array: if number == target: count += 1 return count problem_two = number_list(array, 2) print(problem_two) # Use the functions given to create a slope function. 
The function should be named slope and have 4 parameters. # If you don't remember the slope formula is (y2 - y1) / (x2 - x1) If this doesn't make sense look up `Slope Formula` on google. def division(x, y): return x / y def subtraction(x, y): return x - y def slope(x1, x2, y1, y2): return division(subtraction(y2,y1), subtraction(x2, x1)) slope_function = slope(5, 3, 10, 4) print(slope_function) # Use the functions given to create a distance function. The function should be named distance and have 4 parameters. # HINT: You'll need a built in function here too. You'll also be able to use functions written earlier in the notebook as long as you've run those cells. # If you don't remember the distance formula it is the square root of the following ((x2 - x1)^2 + (y2 - y1)^2). If this doesn't make sense look up `Distance Formula` on google. import math def addition(x, y): return x + y def distance(x1, x2, y1, y2): left_side = power(subtraction(x2, x1), 2) right_side = power(subtraction(y2, y1), 2) both_sides = addition(left_side, right_side) return math.sqrt(both_sides) print(distance(10, 4, 5, 3)) ```
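The same compose-small-functions idea extends to points of any dimension; the sketch below (names of my own choosing, not part of the lab) generalizes the distance function:

```python
import math

def subtraction(x, y):
    return x - y

def distance_nd(p, q):
    """Euclidean distance between two equal-length points."""
    return math.sqrt(sum(subtraction(b, a) ** 2 for a, b in zip(p, q)))

print(distance_nd((0, 0), (3, 4)))        # 5.0
print(distance_nd((1, 2, 2), (0, 0, 0)))  # 3.0
```

Passing points as tuples instead of four separate coordinates keeps the signature the same no matter how many dimensions the points have.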
# 6.7 Gated Recurrent Units (GRU) ## 6.7.2 Reading the Dataset ``` import numpy as np import torch from torch import nn, optim import torch.nn.functional as F import sys sys.path.append("..") import d2lzh_pytorch as d2l device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') (corpus_indices, char_to_idx, idx_to_char, vocab_size) = d2l.load_data_jay_lyrics() print(torch.__version__, device) ``` ## 6.7.3 Implementation from Scratch ### 6.7.3.1 Initializing Model Parameters ``` num_inputs, num_hiddens, num_outputs = vocab_size, 256, vocab_size print('will use', device) def get_params(): def _one(shape): ts = torch.tensor(np.random.normal(0, 0.01, size=shape), device=device, dtype=torch.float32) return torch.nn.Parameter(ts, requires_grad=True) def _three(): return (_one((num_inputs, num_hiddens)), _one((num_hiddens, num_hiddens)), torch.nn.Parameter(torch.zeros(num_hiddens, device=device, dtype=torch.float32), requires_grad=True)) W_xz, W_hz, b_z = _three() # update gate parameters W_xr, W_hr, b_r = _three() # reset gate parameters W_xh, W_hh, b_h = _three() # candidate hidden state parameters # output layer parameters W_hq = _one((num_hiddens, num_outputs)) b_q = torch.nn.Parameter(torch.zeros(num_outputs, device=device, dtype=torch.float32), requires_grad=True) return nn.ParameterList([W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q]) ``` ### 6.7.3.2 Defining the Model ``` def init_gru_state(batch_size, num_hiddens, device): return (torch.zeros((batch_size, num_hiddens), device=device), ) def gru(inputs, state, params): W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q = params H, = state outputs = [] for X in inputs: Z = torch.sigmoid(torch.matmul(X, W_xz) + torch.matmul(H, W_hz) + b_z) R = torch.sigmoid(torch.matmul(X, W_xr) + torch.matmul(H, W_hr) + b_r) H_tilda = torch.tanh(torch.matmul(X, W_xh) + R * torch.matmul(H, W_hh) + b_h) H = Z * H + (1 - Z) * H_tilda Y = torch.matmul(H, W_hq) + b_q outputs.append(Y) return outputs, (H,) ``` ### 6.7.3.3 Training the Model and Writing Lyrics ``` num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2 pred_period, pred_len, prefixes = 40, 50, ['分开',
'不分开'] d2l.train_and_predict_rnn(gru, get_params, init_gru_state, num_hiddens, vocab_size, device, corpus_indices, idx_to_char, char_to_idx, False, num_epochs, num_steps, lr, clipping_theta, batch_size, pred_period, pred_len, prefixes) ``` ## 6.7.4 Concise Implementation ``` lr = 1e-2 gru_layer = nn.GRU(input_size=vocab_size, hidden_size=num_hiddens) model = d2l.RNNModel(gru_layer, vocab_size).to(device) d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device, corpus_indices, idx_to_char, char_to_idx, num_epochs, num_steps, lr, clipping_theta, batch_size, pred_period, pred_len, prefixes) ```
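The shapes flowing through the `gru` function above can be sanity-checked with a single-step NumPy version of the same equations (a sketch with small random parameters, not the trained d2l model):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(X, H, params):
    """One GRU time step, mirroring the torch equations above."""
    W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h = params
    Z = sigmoid(X @ W_xz + H @ W_hz + b_z)              # update gate
    R = sigmoid(X @ W_xr + H @ W_hr + b_r)              # reset gate
    H_tilda = np.tanh(X @ W_xh + R * (H @ W_hh) + b_h)  # candidate state
    return Z * H + (1 - Z) * H_tilda

rng = np.random.default_rng(0)
n_in, n_h, batch = 4, 3, 2
shapes = [(n_in, n_h), (n_h, n_h), (n_h,)] * 3
params = [rng.normal(0, 0.01, s) for s in shapes]
H = gru_step(rng.normal(size=(batch, n_in)), np.zeros((batch, n_h)), params)
print(H.shape)  # (2, 3): one hidden state per batch element
```

Since Z starts near 0.5 with near-zero parameters, the new state is roughly an even blend of the old state and the candidate, which is exactly the gating behavior the update equation H = Z * H + (1 - Z) * H_tilda expresses.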
# The JupyterLab Interface The JupyterLab interface consists of a main work area containing tabs of documents and activities, a collapsible left sidebar, and a menu bar. The left sidebar contains a file browser, the list of running terminals and kernels, the table of contents, and the extension manager. ![jupyter_lab_startup_page](figures/jupyter_lab_startup.png) JupyterLab sessions always reside in a workspace. Workspaces contain the state of JupyterLab: the files that are currently open, the layout of the application areas and tabs, etc. Reference: [https://jupyterlab.readthedocs.io/en/latest/user/interface.html](https://jupyterlab.readthedocs.io/en/latest/user/interface.html) # Notebook Currently you are looking at a Jupyter Notebook. A Jupyter Notebook is an interactive environment for writing and running code. The notebook is capable of running code in a wide range of languages. However, each notebook is associated with a single kernel. This notebook is associated with the IPython kernel, therefore it runs Python code. Reference: [https://github.com/jupyter/notebook/blob/6.1.x/docs/source/examples/Notebook/Running%20Code.ipynb](https://github.com/jupyter/notebook/blob/6.1.x/docs/source/examples/Notebook/Running%20Code.ipynb) ## Notebook Cell Types In a Jupyter Notebook we can have text cells and code cells. In text cells we can write markdown ([Markdown cheat sheet](https://www.markdownguide.org/cheat-sheet/)). In code cells we can write program code which is executed by the IPython kernel associated with the notebook. Code cells have brackets `[ ]:` in front of them: * `[ ]:` means that the cell is empty. * `[*]:` means that the cell is currently being executed. * `[1]:` here the number indicates the execution step in the notebook. This execution step is updated every time the cell is executed. To render a text cell or execute a code cell you can press the run button in the toolbar above or press `Shift-Enter` on your keyboard.
``` 2 + 2 # If we want to write text in a code cell we have to comment it out with '#'. # Next we assign some variables. a = 2 b = 2 c = a + b print("a is", a) print("b is", b) print("a + b =", c) ``` # Displaying an Image ``` # import packages from tifffile import imread from matplotlib import pyplot as plt img = imread('imgs/t000.tif') plt.figure(figsize=(10, 10)) plt.imshow(img, cmap='gray') ``` # Python Basics ## If Statement The if statement allows us to branch code and act on different conditions. The statement has the following logic: ``` if condition_0: code_0 elif condition_1: code_1 else: code_2 ``` `code_0` is executed if `condition_0` holds true. If `condition_0` is false, `condition_1` is evaluated and `code_1` is executed if `condition_1` is true. If both conditions evaluate to false, `code_2` is executed. __Note:__ `elif` and `else` are optional. ``` # Assign value to number number = 3 # Test if the number is negative, zero or positive if number < 0: print("{} is a negative number.".format(number)) elif number == 0: print("The number is zero.") else: print("{} is a positive number.".format(number)) # The following code is outside of the if statement and always executed. print("Done") ``` ## Functions In Python we can define functions which can be reused in our code. It is good practice to define a function if we want to reuse the same code multiple times! ``` def categorize_number(number): """ Prints to standard output if the number is negative, zero or positive. Parameters: ---------- number: The number to categorize. """ if number < 0: print("{} is a negative number.".format(number)) elif number == 0: print("The number is zero.") else: print("{} is a positive number.".format(number)) categorize_number(number=-2) ``` ## Lists In Python we can easily define a list. ``` numbers = [-1, 0, 1, 2] type(numbers) ``` ## For Loop If we want to apply some code (e.g. a function) to all elements of a list we can use a for loop.
``` for number in numbers: print("Currently processing number = {}.".format(number)) categorize_number(number) ``` ## Range A typical use case is that we want to get all numbers from 0 up to a given number, e.g. 100. Luckily we don't have to type all 100 numbers into a list to iterate over them. We can just use the `range` function, which is part of Python. ``` for i in range(100): categorize_number(number=i) import this ```
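The loop-plus-function pattern above is so common that Python offers a shorthand, the list comprehension, which builds the result list in a single expression:

```python
def sign(number):
    """Return 'negative', 'zero' or 'positive' for a number."""
    if number < 0:
        return "negative"
    elif number == 0:
        return "zero"
    return "positive"

numbers = [-1, 0, 1, 2]
labels = [sign(n) for n in numbers]
print(labels)  # ['negative', 'zero', 'positive', 'positive']
```

Unlike `categorize_number`, which only prints, `sign` returns a value, so its results can be collected into a list and reused later.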
# Check Cell Population Heterogeneity ## Libraries ``` import MySQLdb import pandas import numpy as np from matplotlib import pylab as plt import os import seaborn as sns from scipy.stats import mannwhitneyu as mw from scipy import stats import operator from sklearn.preprocessing import StandardScaler,RobustScaler from sklearn.decomposition import PCA from scipy import stats import operator ``` ## Routine Functions ``` def ensure_dir(file_path): ''' Function to ensure a file path exists, else creates the path :param file_path: :return: ''' directory = os.path.dirname(file_path) if not os.path.exists(directory): os.makedirs(directory) # Effect size def cohen_d(x, y): nx = len(x) ny = len(y) dof = nx + ny - 2 return (np.mean(x) - np.mean(y)) / np.sqrt( ((nx - 1) * np.std(x, ddof=1) ** 2 + (ny - 1) * np.std(y, ddof=1) ** 2) / dof) # Some Easy Outlier detection def reject_outliers_2(data, m=6.): d = np.abs(data - np.median(data)) mdev = np.median(d) s = d / (mdev if mdev else 1.) #return s < m return [data[i] for i in range(0, len(data)) if s[i] < m] ``` ## Load list of significant perturbations - Load all significant perturbations - Load drug decay - Load list of images that are excluded - Load list of features to investigate ### Significant perturbations ``` #Save significant perturbations significant_perturbations = [] #open the file indicating which drug perturbations are significant in a matter of mahalanobis distance to DMSO fp = open('../data/Investigate_CellularHeterogeneity/Single_Perturbation_Significance.csv') fp.next() #go through whole file for line in fp: #split row tmp = line.strip().split(',') #check if mahalanobis distance large than 7 try: batch1_significance = float(tmp[1]) batch2_significance = float(tmp[3]) if batch1_significance > 7: significant_perturbations.append((tmp[0]+'_Batch1',batch1_significance)) if batch2_significance > 7: significant_perturbations.append((tmp[0]+'_Batch2',batch2_significance)) except: continue #sort all perturbations 
and take the top 10 significant_perturbations.sort(key = operator.itemgetter(1), reverse = True) significant_perturbations = significant_perturbations[0:10] print significant_perturbations ``` ### Drug Decay ``` # Both thresholds need to be true to set a drug as decayed during experiment; threshold_decay is steepness and threshold_MaxDifference absolute difference threshold_decay = 0.05 threshold_MaxDifference = 0.3 # Load all the drug decay regressions # Created by checking the single drug responses over the different plates (there is a temporal context between plate 1 and 123) # One is interested both in the decay as well as the maximum change e.g. if gradient between 0.1 to 0.2, still ok # Create a dic that tells about the status of drug decay i.e. True if drug WORKED CORRECTLY path = '../data/Investigate_CellularHeterogeneity/DrugDecay_Combined.csv' fp = open(path) fp.next() drug_decay = {} batch1_Failed = 0 batch2_Failed = 0 for line in fp: tmp = line.strip().split(',') batch1_decay = float(tmp[1]) batch1_diff = float(tmp[2]) batch2_decay = float(tmp[3]) batch2_diff = float(tmp[4]) batch1_Status = True if batch1_decay >= threshold_decay and batch1_diff >= threshold_MaxDifference: batch1_Status = False batch1_Failed += 1 batch2_Status = True if batch2_decay >= threshold_decay and batch2_diff >= threshold_MaxDifference: batch2_Status = False batch2_Failed += 1 drug_decay[tmp[0]] = {'Batch1':batch1_Status,'Batch2':batch2_Status} fp.close() print 'Number of drugs that decayed in batch1: %d' %batch1_Failed print 'Number of drugs that decayed in batch2: %d' %batch2_Failed ``` ### Load selected features ``` selected_Features = [] fp = open('../data/Investigate_CellularHeterogeneity/Selected_Features.csv') for line in fp: selected_Features.append(line.strip()[7:]) print 'Number of features: %d' %len(selected_Features) ``` ### Load Problematic Images ``` problematic_images = {'Batch1':[],'Batch2':[]} batches = ['1','2'] for batch_ in batches: fp = 
open('../data/Investigate_CellularHeterogeneity/BadImages/Batch'+batch_+'.csv','r') for line in fp: tmp = line.strip().split(',') problematic_images['Batch'+batch_].append(tmp[0]) ``` ## Actual Analysis ### Load corresponding images ``` # establish link db = MySQLdb.connect("menchelabdb.int.cemm.at","root","cqsr4h","ImageAnalysisDDI" ) ########### # DRUGS # ########### #this will contain all the image numbers that are associated with a specific drug (only singles!) Image_Number_For_Drugs = {} #go through the list of all significant perturbers for entry in significant_perturbations: drug,batch_ = entry[0].split('_') batch_ = batch_[5] # check if the drug is not decayed if drug_decay[drug]['Batch'+batch_] == True: #SQL string string = 'select ImageNumber,Image_Metadata_Plate from DPN1018Batch'+batch_+'Per_Image where Image_Metadata_ID_A like "'+drug+'" and Image_Metadata_ID_B like "DMSO";' #Extract data via pandas ImageNumbers = pandas.read_sql(string, con=db) #go through all rows for line in ImageNumbers.iterrows(): #extract ImageNumber and PlateNumber Drug_ImageNumber = line[1][0] Drug_PlateNumber = line[1][1] #add to dictionary if entry[0] not in Image_Number_For_Drugs: Image_Number_For_Drugs[entry[0]] = {Drug_PlateNumber:[Drug_ImageNumber]} elif Drug_PlateNumber not in Image_Number_For_Drugs[entry[0]]: Image_Number_For_Drugs[entry[0]][Drug_PlateNumber] = [Drug_ImageNumber] else: Image_Number_For_Drugs[entry[0]][Drug_PlateNumber].append(Drug_ImageNumber) ########### # DMSO # ########### # this will contain imagenumbers for DMSO Image_Number_For_DMSO = {} for batch_ in ['1','2']: #SQL string string = 'select ImageNumber,Image_Metadata_Plate from DPN1018Batch'+batch_+'Per_Image where Image_Metadata_ID_A like "DMSO" and Image_Metadata_ID_B like "None";' #Extract data via pandas ImageNumbers = pandas.read_sql(string, con=db) #go through all rows for line in ImageNumbers.iterrows(): #extract ImageNumber and PlateNumber Drug_ImageNumber = line[1][0] Drug_PlateNumber = 
line[1][1] #add to dictionary if batch_ not in Image_Number_For_DMSO: Image_Number_For_DMSO[batch_] = {Drug_PlateNumber:[Drug_ImageNumber]} elif Drug_PlateNumber not in Image_Number_For_DMSO[batch_]: Image_Number_For_DMSO[batch_][Drug_PlateNumber] = [Drug_ImageNumber] else: Image_Number_For_DMSO[batch_][Drug_PlateNumber].append(Drug_ImageNumber) db.close() ``` ### Definitions - drug colors - feature colors ``` # define color code for individual significant drugs (static) drug_colors = {'CLOUD031':'#8dd3c7','CLOUD053':'#ffffb3','CLOUD057':'#bebada','CLOUD089':'#fb8072','CLOUD112':'#80b1d3','CLOUD117':'#fdb462','CLOUD077':'#b3de69','CLOUD103':'#fccde5', 'CLOUD115':'#c51b8a','CLOUD129':'#bc80bd','DMSO':'grey'} feature_colors = {'AreaShape':'#D53D48', #red 'Intensity':'#BDCA27', 'RadialDistribution':'#BDCA27', #green 'Other':'grey', #grey 'Texture':'#F8B301', #orange 'Granularity':'#3AB9D1'} #blue #create the string for selecting all features selected_feature_string = ','.join(selected_Features) ## EXTRACT DMSO #### # Establish connections db = MySQLdb.connect("menchelabdb.int.cemm.at","root","cqsr4h","ImageAnalysisDDI" ) #define plate and batch plate = 1315101 batch_ = '2' # create SQL string images_dmso = Image_Number_For_DMSO[batch_][plate] imageNumberString_dmso = ','.join([str(x) for x in images_dmso]) string = 'select ImageNumber,ObjectNumber,'+selected_feature_string+' from DPN1018Batch'+batch_+'Per_Object where ImageNumber in ('+imageNumberString_dmso+');' # Extract only selected features (all DMSO cells) DMSO_allFeatures = pandas.read_sql(string, con=db) DMSO_allFeatures['Label'] = 'DMSO' DMSO_allFeatures = DMSO_allFeatures.dropna() db.close() DMSO_allFeatures.head() ## EXTRACT Drugs #### # Establish connections db = MySQLdb.connect("menchelabdb.int.cemm.at","root","cqsr4h","ImageAnalysisDDI" ) # Get all drugs for a chosen plate images_drugs = [] image_to_drug = {} for key in Image_Number_For_Drugs: for current_plate in Image_Number_For_Drugs[key]: if
current_plate == plate: images_drugs.extend(Image_Number_For_Drugs[key][current_plate]) for img in Image_Number_For_Drugs[key][current_plate]: image_to_drug[img] = key.split('_')[0] # Create SQL string imageNumberString_drug = ','.join([str(x) for x in images_drugs]) string = 'select ImageNumber,ObjectNumber,'+selected_feature_string+' from DPN1018Batch'+batch_+'Per_Object where ImageNumber in ('+imageNumberString_drug+');' # Extract only selected features (all drug-treated cells) Drug_allFeatures = pandas.read_sql(string, con=db) Drug_allFeatures['Label'] = 'Drug' for key in image_to_drug: Drug_allFeatures.loc[Drug_allFeatures['ImageNumber'] == key,['Label']] = image_to_drug[key] Drug_allFeatures = Drug_allFeatures.dropna() db.close() Drug_allFeatures.head() ``` #### Perform pooled scaling ``` DMSO_and_Drugs = pandas.concat([DMSO_allFeatures,Drug_allFeatures]) DMSO_and_Drugs_allFeatures_scaled = DMSO_and_Drugs.copy() #scaler = RobustScaler() scaler = StandardScaler() DMSO_and_Drugs_allFeatures_scaled[selected_Features] = scaler.fit_transform(DMSO_and_Drugs[selected_Features]) DMSO_and_Drugs_allFeatures_scaled.head() ``` ### Plot results for DMSO and selected drugs (Distributions) ``` sns.set_style("whitegrid", {'axes.grid' : False}) make_plots = True #check that folder exists ensure_dir('../results/Investigate_CellularHeterogeneity/Penetrance_PooledScaled/DMSO/') #sns.set() is_normal = 0 #go through all selected features for f in selected_Features: #extract DMSO values for this specific feature feature_values = DMSO_and_Drugs_allFeatures_scaled.loc[DMSO_and_Drugs_allFeatures_scaled['Label'] == 'DMSO'][f].values #Tests for normality are essentially useless (for small datasets there is a chance of not enough power, while for large datasets everything gets rejected as non-normal) pvals = [] for i in range(0,1000): pval_normal = stats.normaltest(np.random.choice(feature_values,50))[1] pvals.append(pval_normal) #pval_normal2 = stats.shapiro(feature_values)[1] if np.mean(pvals) >=
0.05: is_normal += 1 if make_plots: plt.hist(feature_values,bins=100, color='grey',density=True) plt.title(f + 'Normal: %.2f' % np.mean(pvals)) plt.savefig('../results/Investigate_CellularHeterogeneity/Penetrance_PooledScaled/DMSO/'+f+'.pdf') plt.close() print len(selected_Features) print is_normal ensure_dir('../results/Investigate_CellularHeterogeneity/Penetrance_PooledScaled/Drugs/') # Find drugs name all_drugs = list(set(image_to_drug.values())) #go through all selected features for f in selected_Features: #extract the DMSO values feature_values_DMSO = DMSO_and_Drugs_allFeatures_scaled.loc[DMSO_and_Drugs_allFeatures_scaled['Label'] == 'DMSO'][f].values for drug in all_drugs: #extract drug values feature_values = DMSO_and_Drugs_allFeatures_scaled.loc[DMSO_and_Drugs_allFeatures_scaled['Label'] == drug][f].values #overlay the two distributions plt.hist(feature_values_DMSO,bins='doane', color='grey', alpha=0.5, density=True) plt.hist(feature_values,bins='doane', color=drug_colors[drug], alpha=0.5, density=True) plt.savefig('../results/Investigate_CellularHeterogeneity/Penetrance_PooledScaled/Drugs/'+f+'_'+drug+'.pdf') plt.close() #colors for features feature_type_colors = [] compartment_type_colors = [] # contains KS results feature_results = [] # contains percentile results feature_results_effect = [] for f in selected_Features: compartment,featuretype,_ = f.split('_')[0:3] if featuretype in feature_colors.keys(): feature_type_colors.append(feature_colors[featuretype]) else: feature_type_colors.append(feature_colors['Other']) if compartment == 'Cells': compartment_type_colors.append('#a6611a') else: compartment_type_colors.append('#018571') #Get DMSO values for specific feature feature_values_DMSO = DMSO_and_Drugs_allFeatures_scaled.loc[DMSO_and_Drugs_allFeatures_scaled['Label'] == 'DMSO'][f].values #Define the top5 , top95 percentiles low_5 = np.percentile(feature_values_DMSO,5) top_95 = np.percentile(feature_values_DMSO,95) #temporary results (each row contains 
one feature - all drugs) tmp = [] tmp2 = [] #go through all drugs for drug in all_drugs: # Get Drug values for specific feature feature_values_drug = DMSO_and_Drugs_allFeatures_scaled.loc[DMSO_and_Drugs_allFeatures_scaled['Label'] == drug][f].values #Number of significant cells tmp2.append(len([x for x in feature_values_drug if x < low_5 or x > top_95])/float(len(feature_values_drug))) #Compare curves tmp.append(stats.ks_2samp(feature_values_drug,feature_values_DMSO)[0]) #add results to overall results lists feature_results.append(tmp) feature_results_effect.append(tmp2) #sns.set() sns.clustermap(data=feature_results, xticklabels=all_drugs,yticklabels=selected_Features, row_colors=[feature_type_colors,compartment_type_colors]) #sns.set(font_scale=0.5) plt.savefig('../results/Investigate_CellularHeterogeneity/Penetrance_PooledScaled/Clustermap_KS_Test.pdf') plt.close() #sns.set() sns.clustermap(data=feature_results_effect, xticklabels=all_drugs,yticklabels=selected_Features, row_colors=[feature_type_colors,compartment_type_colors]) sns.set(font_scale=5.5) plt.savefig('../results/Investigate_CellularHeterogeneity/Penetrance_PooledScaled/Clustermap_Percentiles.pdf') plt.close() sns.set() plt.scatter(feature_results,feature_results_effect) plt.plot([0,1],[0,1],ls='--',c='grey') plt.xlabel('Penetrance') plt.ylabel('Effect') plt.savefig('../results/Investigate_CellularHeterogeneity/Penetrance_PooledScaled/Penetrance_vs_Effect.pdf') plt.close() ``` ### Make PCA (all features) ``` db = MySQLdb.connect("menchelabdb.int.cemm.at","root","cqsr4h","ImageAnalysisDDI" ) plate = 1315101 batch_ = '2' images_dmso = Image_Number_For_DMSO[batch_][plate] imageNumberString_dmso = ','.join([str(x) for x in images_dmso]) string = 'select * from DPN1018Batch'+batch_+'Per_Object where ImageNumber in ('+imageNumberString_dmso+');' DMSO_allFeatures = pandas.read_sql(string, con=db) DMSO_allFeatures['Label'] = 'DMSO' for entry in list(Image_Number_For_Drugs.keys()): print entry drug,batch_ = 
entry.split('_') batch_ = batch_[5] images_drug = Image_Number_For_Drugs[entry][plate] imageNumberString_drug = ','.join([str(x) for x in images_drug]) string = 'select * from DPN1018Batch'+batch_+'Per_Object where ImageNumber in ('+imageNumberString_drug+');' drug_allFeatures = pandas.read_sql(string, con=db) drug_allFeatures['Label'] = 'Drug' # Put both dataframes together DMSO_drug_allFeatures = pandas.concat([drug_allFeatures,DMSO_allFeatures]) to_remove = [x for x in DMSO_drug_allFeatures.columns if 'Location' in x or 'Center' in x] DMSO_drug_allFeatures = DMSO_drug_allFeatures.drop(to_remove, axis=1) DMSO_drug_allFeatures = DMSO_drug_allFeatures.dropna() y = DMSO_drug_allFeatures['Label'].values x = DMSO_drug_allFeatures.iloc[:,3:-1].values # Standardizing the features x = StandardScaler().fit_transform(x) pca = PCA(n_components=2) Drug_DMSO_Fit = pca.fit_transform(x) pca_drug = [] pca_DMSO = [] for label,element in zip(y,list(Drug_DMSO_Fit)): if label == 'Drug': pca_drug.append(element) else: pca_DMSO.append(element) pca_drug = np.array(pca_drug) pca_DMSO = np.array(pca_DMSO) ensure_dir('../results/Investigate_CellularHeterogeneity/'+drug+'/') #plt.scatter(pca_drug[:,0],pca_drug[:,1], alpha=0.4) #plt.scatter(pca_DMSO[:,0],pca_DMSO[:,1], alpha=0.4) #plt.savefig('../results/Investigate_CellularHeterogeneity/'+drug+'/Scatter_AllFeatures.pdf') #plt.show() #plt.close() upper = 99.5 lower = 0.5 x_min = min([np.percentile(pca_drug[:,0],lower),np.percentile(pca_DMSO[:,0],lower)]) x_max = max([np.percentile(pca_drug[:,0],upper),np.percentile(pca_DMSO[:,0],upper)]) y_min = min([np.percentile(pca_drug[:,1],lower),np.percentile(pca_DMSO[:,1],lower)]) y_max = max([np.percentile(pca_drug[:,1],upper),np.percentile(pca_DMSO[:,1],upper)]) #bw = 1.5 sns.kdeplot(pca_drug[:,0],pca_drug[:,1],shade_lowest=False, alpha=0.5) sns.kdeplot(pca_DMSO[:,0],pca_DMSO[:,1],shade_lowest=False, alpha=0.5) plt.xlim([x_min,x_max]) plt.ylim([y_min,y_max]) 
plt.savefig('../results/Investigate_CellularHeterogeneity/'+drug+'/ContourPlot_AllFeatures.pdf') plt.close() sns.jointplot(pca_drug[:,0],pca_drug[:,1], kind='kde', bw = 'scott', color=drug_colors[drug], shade_lowest=False, alpha=0.5, xlim=[x_min,x_max], ylim=[y_min,y_max]) plt.savefig('../results/Investigate_CellularHeterogeneity/'+drug+'/JoinPlot_Drug_AllFaetures.pdf') plt.close() sns.jointplot(pca_DMSO[:,0],pca_DMSO[:,1], kind='kde', bw = 'scott', color="#D4D4D4", shade_lowest=False,alpha=0.5, xlim=[x_min,x_max], ylim=[y_min,y_max]) plt.savefig('../results/Investigate_CellularHeterogeneity/'+drug+'/JoinPlot_DMSO_AllFaetures.pdf') plt.close() ``` ### Make Violin plot selected features ``` db = MySQLdb.connect("menchelabdb.int.cemm.at","root","cqsr4h","ImageAnalysisDDI" ) #features = ['Cells_Intensity_StdIntensity_MitoTracker','Cells_Granularity_1_BetaTubulin','Nuclei_AreaShape_MaximumRadius','Cells_AreaShape_MaxFeretDiameter'] features = selected_Features plate = 1315101 #batch_ = 2 drug_feature_results_to_plot = {} for entry in Image_Number_For_Drugs: drug,batch_ = entry.split('_') batch_ = batch_[5] drug_feature_results_to_plot[entry] = {} print drug images_drug = Image_Number_For_Drugs[entry][plate] imageNumberString_drug = ','.join([str(x) for x in images_drug]) images_dmso = Image_Number_For_DMSO[batch_][plate] imageNumberString_dmso = ','.join([str(x) for x in images_dmso]) for feature in features: ensure_dir('../results/Investigate_CellularHeterogeneity/'+drug+'/'+feature+'/') string = 'select ImageNumber,ObjectNumber,'+feature+' from DPN1018Batch'+batch_+'Per_Object where ImageNumber in ('+imageNumberString_drug+');' result_drug = list(pandas.read_sql(string, con=db)[feature].values) result_drug = reject_outliers_2([x for x in result_drug if str(x) != 'nan'],6) string = 'select ImageNumber,ObjectNumber,'+feature+' from DPN1018Batch'+batch_+'Per_Object where ImageNumber in ('+imageNumberString_dmso+');' result_dmso = list(pandas.read_sql(string, 
con=db)[feature].values) result_dmso = reject_outliers_2([x for x in result_dmso if str(x) != 'nan'],6) drug_feature_results_to_plot[entry][feature] = {'Drug':result_drug, 'DMSO':result_dmso} db.close() #drug_colors = {'CLOUD031':'#8dd3c7','CLOUD053':'#ffffb3','CLOUD057':'#bebada','CLOUD089':'#fb8072','CLOUD112':'#80b1d3','CLOUD117':'#fdb462','CLOUD077':'#b3de69','CLOUD103':'#fccde5', # 'CLOUD115':'#d9d9d9','CLOUD129':'#bc80bd','DMSO':'grey',} for feature in features: data = [] drug_names = [] for entry in list(Image_Number_For_Drugs.keys()): drug,batch_ = entry.split('_') drug_names.append((drug,np.median(drug_feature_results_to_plot[entry][feature]['Drug']))) data.append((drug_feature_results_to_plot[entry][feature]['Drug'],np.median(drug_feature_results_to_plot[entry][feature]['Drug']))) #print data data.sort(key = operator.itemgetter(1)) drug_names.sort(key = operator.itemgetter(1)) data = [x[0] for x in data] drug_names = [x[0] for x in drug_names] data.append(drug_feature_results_to_plot[entry][feature]['DMSO']) drug_names.append('DMSO') Percent_95 = np.percentile(drug_feature_results_to_plot[entry][feature]['DMSO'],90) Percent_5 = np.percentile(drug_feature_results_to_plot[entry][feature]['DMSO'],10) my_pal = {0: drug_colors[drug_names[0]], 1: drug_colors[drug_names[1]], 2:drug_colors[drug_names[2]], 3:drug_colors[drug_names[3]],4:drug_colors[drug_names[4]],5:drug_colors[drug_names[5]], 6:drug_colors[drug_names[6]],7:drug_colors[drug_names[7]],8:drug_colors[drug_names[8]] ,9:drug_colors[drug_names[9]],10:drug_colors[drug_names[10]]} #sns.violinplot(data=data,scale='width',bw='scott', palette='Paired', orient='h') sns.violinplot(data=data,scale='width',bw='scott', palette=my_pal, orient='h') plt.axvline(Percent_95,ls='--',color='grey') plt.axvline(Percent_5,ls='--',color='grey') plt.yticks(range(0,len(data)+1),drug_names, fontsize=5) plt.ylabel('Treatment', fontsize=5) plt.xticks(fontsize=5) plt.xlabel(feature, fontsize=5) #sns.swarmplot(data=data) 
plt.savefig('../results/Investigate_CellularHeterogeneity/Final/'+str(feature)+'_Violin.pdf') #plt.show() plt.close() ``` ### Analyse Features for selected Drugs ``` fp_out = open('../results/Investigate_CellularHeterogeneity/Result_Overview.csv','w') fp_out.write('Batch,Drug,Plate,Feature,Cohens"D,Abs(CohenD),Coefficient_Variation,KS_Normality,MW_PVal\n') #selected_Features = ['Cells_Intensity_StdIntensity_MitoTracker','Cells_Granularity_12_BetaTubulin','Nuclei_AreaShape_MaximumRadius','Cells_AreaShape_MaxFeretDiameter'] selected_Features = ['Cells_AreaShape_FormFactor','Nuclei_AreaShape_MaxFeretDiameter','Cells_Granularity_1_BetaTubulin','Nuclei_Granularity_8_DAPI','Cells_Intensity_StdIntensity_MitoTracker','Nuclei_Intensity_IntegratedIntensity_DAPI'] db = MySQLdb.connect("menchelabdb.int.cemm.at","root","cqsr4h","ImageAnalysisDDI" ) #print Image_Number_For_Drugs for entry in Image_Number_For_Drugs: print entry drug,batch_ = entry.split('_') batch_ = batch_[5] #plates = list(Image_Number_For_Drugs[entry].keys()) plates = [1315101] for plate in plates: images_drug = Image_Number_For_Drugs[entry][plate] imageNumberString_drug = ','.join([str(x) for x in images_drug]) images_dmso = Image_Number_For_DMSO[batch_][plate] imageNumberString_dmso = ','.join([str(x) for x in images_dmso]) for feature in selected_Features: ensure_dir('../results/Investigate_CellularHeterogeneity/'+drug+'/'+feature+'/') string = 'select ImageNumber,ObjectNumber,'+feature+' from DPN1018Batch'+batch_+'Per_Object where ImageNumber in ('+imageNumberString_drug+');' result_drug = list(pandas.read_sql(string, con=db)[feature].values) result_drug = reject_outliers_2([x for x in result_drug if str(x) != 'nan'],6) string = 'select ImageNumber,ObjectNumber,'+feature+' from DPN1018Batch'+batch_+'Per_Object where ImageNumber in ('+imageNumberString_dmso+');' result_dmso = list(pandas.read_sql(string, con=db)[feature].values) result_dmso = reject_outliers_2([x for x in result_dmso if str(x) != 'nan'],6) 
#sns.violinplot(data=[result_drug,result_dmso],bw=0.5, cut=50) #plt.show() cd = cohen_d(result_drug,result_dmso) mw_Pval = min([1,mw(result_drug,result_dmso)[1] * (len(selected_Features) * len(list(Image_Number_For_Drugs[entry])) * 2)]) coev_var = np.std(result_drug)/np.mean(result_drug) #KS_Normality = stats.kstest(result_drug, 'norm')[1] KS_Normality = stats.shapiro(result_drug)[1] fp_out.write(batch_+','+drug+','+str(plate)+','+feature+','+str(cd)+','+str(abs(cd))+','+str(coev_var)+','+str(KS_Normality)+','+str(mw_Pval)+'\n') #continue #bins = 14 prettier plt.hist(result_drug, bins = 20, color = drug_colors[drug], alpha=0.3, density=True) plt.hist(result_dmso, bins = 20, color = 'grey', alpha=0.3,density=True) plt.xlim([min([np.percentile(result_drug,1),np.percentile(result_dmso,1)]),max([np.percentile(result_drug,99),np.percentile(result_dmso,99)])]) plt.savefig('../results/Investigate_CellularHeterogeneity/'+drug+'/'+feature+'/'+str(plate)+'_Hist.pdf') #plt.show() plt.close() plt.boxplot([result_drug,result_dmso], whis = 1.5, showfliers = True) plt.xticks([1,2],[drug,'DMSO']) plt.savefig('../results/Investigate_CellularHeterogeneity/'+drug+'/'+feature+'/'+str(plate)+'_Box.pdf') #plt.show() plt.close() db.close() ``` ### Load actual cells ``` fp_out = open('../results/Investigate_CellularHeterogeneity/Result_Overview.csv','w') fp_out.write('Batch,Drug,Plate,Feature,Cohens"D,Abs(CohenD),Coefficient_Variation,KS_Normality,MW_PVal\n') #selected_Features = ['Cells_Intensity_StdIntensity_MitoTracker','Cells_Granularity_12_BetaTubulin','Nuclei_AreaShape_MaximumRadius','Cells_AreaShape_MaxFeretDiameter'] db = MySQLdb.connect("menchelabdb.int.cemm.at","root","cqsr4h","ImageAnalysisDDI" ) #print Image_Number_For_Drugs for entry in Image_Number_For_Drugs: print entry drug,batch_ = entry.split('_') batch_ = batch_[5] plates = list(Image_Number_For_Drugs[entry].keys()) #print plates for plate in plates: images_drug = Image_Number_For_Drugs[entry][plate] 
imageNumberString_drug = ','.join([str(x) for x in images_drug]) images_dmso = Image_Number_For_DMSO[batch_][plate] imageNumberString_dmso = ','.join([str(x) for x in images_dmso]) for feature in selected_Features: ensure_dir('../results/Investigate_CellularHeterogeneity/'+drug+'/'+feature+'/') string = 'select ImageNumber,ObjectNumber,'+feature+' from DPN1018Batch'+batch_+'Per_Object where ImageNumber in ('+imageNumberString_drug+');' result_drug = list(pandas.read_sql(string, con=db)[feature].values) result_drug = reject_outliers_2([x for x in result_drug if str(x) != 'nan'],6) string = 'select ImageNumber,ObjectNumber,'+feature+' from DPN1018Batch'+batch_+'Per_Object where ImageNumber in ('+imageNumberString_dmso+');' result_dmso = list(pandas.read_sql(string, con=db)[feature].values) result_dmso = reject_outliers_2([x for x in result_dmso if str(x) != 'nan'],6) #sns.violinplot(data=[result_drug,result_dmso],bw=0.5, cut=50) #plt.show() cd = cohen_d(result_drug,result_dmso) mw_Pval = min([1,mw(result_drug,result_dmso)[1] * (len(selected_Features) * len(list(Image_Number_For_Drugs[entry])) * 2)]) coev_var = np.std(result_drug)/np.mean(result_drug) #KS_Normality = stats.kstest(result_drug, 'norm')[1] KS_Normality = stats.shapiro(result_drug)[1] fp_out.write(batch_+','+drug+','+str(plate)+','+feature+','+str(cd)+','+str(abs(cd))+','+str(coev_var)+','+str(KS_Normality)+','+str(mw_Pval)+'\n') #continue #bins = 14 prettier plt.hist(result_drug, bins = 20, color = '#3AB9D1', alpha=0.3, density=True) plt.hist(result_dmso, bins = 20, color = 'grey', alpha=0.3,density=True) plt.xlim([min([np.percentile(result_drug,1),np.percentile(result_dmso,1)]),max([np.percentile(result_drug,99),np.percentile(result_dmso,99)])]) plt.savefig('../results/Investigate_CellularHeterogeneity/'+drug+'/'+feature+'/'+str(plate)+'_Hist.pdf') #plt.show() plt.close() plt.boxplot([result_drug,result_dmso], whis = 1.5, showfliers = True) plt.xticks([1,2],[drug,'DMSO']) 
plt.savefig('../results/Investigate_CellularHeterogeneity/'+drug+'/'+feature+'/'+str(plate)+'_Box.pdf') #plt.show() plt.close() db.close() ``` ### Choose specific features / plate ``` db = MySQLdb.connect("menchelabdb.int.cemm.at","root","cqsr4h","ImageAnalysisDDI" ) features = ['Cells_Intensity_StdIntensity_MitoTracker','Cells_Granularity_12_BetaTubulin','Nuclei_AreaShape_MaximumRadius','Cells_AreaShape_MaxFeretDiameter'] plate = 1315111 #batch_ = 2 drug_feature_results_to_plot = {} for entry in Image_Number_For_Drugs: drug,batch_ = entry.split('_') batch_ = batch_[5] drug_feature_results_to_plot[entry] = {} print drug images_drug = Image_Number_For_Drugs[entry][plate] imageNumberString_drug = ','.join([str(x) for x in images_drug]) images_dmso = Image_Number_For_DMSO[batch_][plate] imageNumberString_dmso = ','.join([str(x) for x in images_dmso]) for feature in features: ensure_dir('../results/Investigate_CellularHeterogeneity/'+drug+'/'+feature+'/') string = 'select ImageNumber,ObjectNumber,'+feature+' from DPN1018Batch'+batch_+'Per_Object where ImageNumber in ('+imageNumberString_drug+');' result_drug = list(pandas.read_sql(string, con=db)[feature].values) result_drug = reject_outliers_2([x for x in result_drug if str(x) != 'nan'],6) string = 'select ImageNumber,ObjectNumber,'+feature+' from DPN1018Batch'+batch_+'Per_Object where ImageNumber in ('+imageNumberString_dmso+');' result_dmso = list(pandas.read_sql(string, con=db)[feature].values) result_dmso = reject_outliers_2([x for x in result_dmso if str(x) != 'nan'],6) drug_feature_results_to_plot[entry][feature] = {'Drug':result_drug, 'DMSO':result_dmso} db.close() for feature in features: data = [] drug_names = [] for entry in list(Image_Number_For_Drugs.keys()): drug,batch_ = entry.split('_') drug_names.append(drug) data.append(drug_feature_results_to_plot[entry][feature]['Drug']) data.append(drug_feature_results_to_plot[entry][feature]['DMSO']) drug_names.append('DMSO') Percent_95 = 
np.percentile(drug_feature_results_to_plot[entry][feature]['DMSO'],95) Percent_5 = np.percentile(drug_feature_results_to_plot[entry][feature]['DMSO'],5) sns.violinplot(data=data,scale='width') plt.axhline(Percent_95,ls='--',color='grey') plt.axhline(Percent_5,ls='--',color='grey') plt.xticks(range(0,len(data)+1),drug_names, fontsize=5) plt.xlabel('Treatment') plt.ylabel(feature) #sns.swarmplot(data=data) plt.savefig('../results/Investigate_CellularHeterogeneity/Final/'+str(feature)+'_Violin.pdf') #plt.show() plt.close() Image_Number_For_Drugs = {'Batch1':{},'Batch2':{}} db = MySQLdb.connect("menchelabdb.int.cemm.at","root","cqsr4h","ImageAnalysisDDI" ) batches = ['1','2'] for batch_ in batches: for drug in significant_perturbations['Batch'+batch_]: if drug == 'DMSO': string = 'select ImageNumber,Image_Metadata_Plate from DPN1018Batch'+batch_+'Per_Image where Image_Metadata_ID_A like "DMSO" and Image_Metadata_ID_B like "None";' ImageNumbers = pandas.read_sql(string, con=db) for line in ImageNumbers.iterrows(): Drug_ImageNumber = line[1][0] Drug_PlateNumber = line[1][1] if drug not in Image_Number_For_Drugs['Batch'+batch_]: Image_Number_For_Drugs['Batch'+batch_][drug] = {Drug_PlateNumber:[Drug_ImageNumber]} elif Drug_PlateNumber not in Image_Number_For_Drugs['Batch'+batch_][drug]: Image_Number_For_Drugs['Batch'+batch_][drug][Drug_PlateNumber] = [Drug_ImageNumber] else: Image_Number_For_Drugs['Batch'+batch_][drug][Drug_PlateNumber].append(Drug_ImageNumber) elif drug_decay[drug]['Batch'+batch_] == True: string = 'select ImageNumber,Image_Metadata_Plate from DPN1018Batch'+batch_+'Per_Image where Image_Metadata_ID_A like "'+drug+'" and Image_Metadata_ID_B like "DMSO";' ImageNumbers = pandas.read_sql(string, con=db) #print(ImageNumbers) for line in ImageNumbers.iterrows(): Drug_ImageNumber = line[1][0] Drug_PlateNumber = line[1][1] if drug not in Image_Number_For_Drugs['Batch'+batch_]: Image_Number_For_Drugs['Batch'+batch_][drug] = {Drug_PlateNumber:[Drug_ImageNumber]} 
elif Drug_PlateNumber not in Image_Number_For_Drugs['Batch'+batch_][drug]: Image_Number_For_Drugs['Batch'+batch_][drug][Drug_PlateNumber] = [Drug_ImageNumber] else: Image_Number_For_Drugs['Batch'+batch_][drug][Drug_PlateNumber].append(Drug_ImageNumber) db.close() fp_out = open('../results/Investigate_CellularHeterogeneity/Result_Overview.csv','w') fp_out.write('Batch,Drug,Plate,Feature,Cohens"D,Abs(CohenD),Coefficient_Variation,KS_Normality,MW_PVal\n') db = MySQLdb.connect("menchelabdb.int.cemm.at","root","cqsr4h","ImageAnalysisDDI" ) for batch_ in Image_Number_For_Drugs: print batch_ for drug in Image_Number_For_Drugs[batch_]: for plate in list(Image_Number_For_Drugs[batch_][drug])[0:1]: images_drug = Image_Number_For_Drugs[batch_][drug][plate] imageNumberString_drug = ','.join([str(x) for x in images_drug]) images_dmso = Image_Number_For_Drugs[batch_]['DMSO'][plate] imageNumberString_dmso = ','.join([str(x) for x in images_dmso]) for feature in selected_Features[0:2]: ensure_dir('../results/Investigate_CellularHeterogeneity/'+drug+'/'+feature+'/') string = 'select ImageNumber,ObjectNumber,'+feature+' from DPN1018'+batch_+'Per_Object where ImageNumber in ('+imageNumberString_drug+');' result_drug = list(pandas.read_sql(string, con=db)[feature].values) result_drug = [x for x in result_drug if str(x) != 'nan'] string = 'select ImageNumber,ObjectNumber,'+feature+' from DPN1018'+batch_+'Per_Object where ImageNumber in ('+imageNumberString_dmso+');' result_dmso = list(pandas.read_sql(string, con=db)[feature].values) result_dmso = [x for x in result_dmso if str(x) != 'nan'] #sns.violinplot(data=[result_drug,result_dmso],bw=0.5, cut=50) #plt.show() cd = cohen_d(result_drug,result_dmso) mw_Pval = min([1,mw(result_drug,result_dmso)[1] * (len(selected_Features) * len(list(Image_Number_For_Drugs[batch_][drug])) * 2)]) coev_var = np.std(result_drug)/np.mean(result_drug) #KS_Normality = stats.kstest(result_drug, 'norm')[1] KS_Normality = stats.shapiro(result_drug)[1] 
fp_out.write(batch_+','+drug+','+str(plate)+','+feature+','+str(cd)+','+str(abs(cd))+','+str(coev_var)+','+str(KS_Normality)+','+str(mw_Pval)+'\n') #continue #bins = 14 prettier plt.hist(result_drug, bins = 20, color = '#3AB9D1', alpha=0.3, density=True) plt.hist(result_dmso, bins = 20, color = 'grey', alpha=0.3,density=True) plt.xlim([min([np.percentile(result_drug,1),np.percentile(result_dmso,1)]),max([np.percentile(result_drug,99),np.percentile(result_dmso,99)])]) plt.savefig('../results/Investigate_CellularHeterogeneity/'+drug+'/'+feature+'/'+str(plate)+'_Hist.pdf') #plt.show() plt.close() plt.boxplot([result_drug,result_dmso], whis = 1.5, showfliers = False) plt.xticks([1,2],[drug,'DMSO']) plt.savefig('../results/Investigate_CellularHeterogeneity/'+drug+'/'+feature+'/'+str(plate)+'_Box.pdf') #plt.show() plt.close() fp_out.close() ```
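The penetrance analysis above scores each feature in two ways: the fraction of drug-treated cells falling outside the DMSO 5th–95th percentile band, and the KS statistic between the two distributions. A minimal self-contained sketch of those two scores, using synthetic control/treated arrays in place of the DMSO and drug cell populations pulled from the database:

```python
import numpy as np
from scipy import stats

def penetrance_and_effect(control, treated, lower=5, upper=95):
    """Fraction of treated cells outside the control [lower, upper] percentile
    band, plus the KS statistic between the two distributions."""
    lo, hi = np.percentile(control, lower), np.percentile(control, upper)
    outside = np.mean((treated < lo) | (treated > hi))
    ks_stat = stats.ks_2samp(treated, control)[0]
    return outside, ks_stat

# synthetic stand-ins: an untreated population and a shifted treated one
rng = np.random.RandomState(0)
control = rng.normal(0.0, 1.0, 5000)
treated = rng.normal(1.5, 1.0, 5000)
outside, ks_stat = penetrance_and_effect(control, treated)
print(round(outside, 2), round(ks_stat, 2))
```

By construction roughly 10% of control cells fall outside their own band, so values well above 0.1 indicate a penetrant phenotype.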
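The cells above repeatedly call a helper `cohen_d` that is defined elsewhere in the notebook and not shown in this excerpt. A common pooled-standard-deviation definition (an assumption, not necessarily the author's exact implementation) is:

```python
import numpy as np

def cohen_d(x, y):
    """Effect size: difference of sample means divided by the pooled std. dev.
    (This is an assumed definition; the notebook's own helper is not shown.)"""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# two unit-variance samples with means one apart should give d near 1
rng = np.random.RandomState(1)
d = cohen_d(rng.normal(1.0, 1.0, 1000), rng.normal(0.0, 1.0, 1000))
print(round(d, 2))
```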
# Time Series Forecasting on NYC_Taxi w MLflow - Objectives - Leverage MLflow to build some time series models - Simple forecast of aggregate daily data to start - Later we will need to look at splitting the datasets out into different segments ``` %load_ext autotime from setup import start_spark, extract_data sparksesh = start_spark() from tseries.taxi_daily import TaxiDaily taxi_daily = TaxiDaily(sparksesh) taxi_daily.load_data() ``` # Settings for MLflow ``` import os # credentials for storing our model artifacts # mlflow needs these to be set whenever it is being called os.environ['AWS_ACCESS_KEY_ID'] = os.environ.get('MINIO_ACCESS_KEY') os.environ['AWS_SECRET_ACCESS_KEY'] = os.environ.get('MINIO_SECRET_KEY') os.environ['MLFLOW_S3_ENDPOINT_URL'] = 'http://minio:9000' ``` # Create Our Train Set ``` from pyspark.sql import functions as F taxi_daily.dataset.agg(F.min(F.col('pickup_date')), F.max(F.col('pickup_date'))).collect() taxi_daily.dataset.printSchema() ``` Let's take two years to start ``` starting_dataset = taxi_daily.dataset.filter("pickup_date < '2015-09-01'") train, val = starting_dataset.filter("pickup_date < '2015-08-01'").toPandas(), \ starting_dataset.filter("pickup_date >= '2015-08-01'").toPandas() ``` ## Forecasting the Dataframe ``` import prophet import pandas as pd from prophet import Prophet from prophet.diagnostics import cross_validation from prophet.diagnostics import performance_metrics import mlflow ``` There was an error in hostname resolution, hence the switch to an IP address ``` #mlflow.delete_experiment('1') mlflow.set_tracking_uri("http://192.168.64.21:5000/") tracking_uri = mlflow.get_tracking_uri() print("Current tracking uri: {}".format(tracking_uri)) ### Quick test on creating experiments from mlflow.exceptions import RestException try: mlflow.create_experiment( name='taxi_daily_forecast' ) except RestException: print('already_created') experiment = mlflow.get_experiment(15) experiment.artifact_location # Build an evaluation function import numpy as np from sklearn.metrics import
mean_squared_error, mean_absolute_error, r2_score def eval_metrics(actual, pred): rmse = np.sqrt(mean_squared_error(actual, pred)) mae = mean_absolute_error(actual, pred) r2 = r2_score(actual, pred) return rmse, mae, r2 # To save models to mlflow we need to write a python wrapper # to make sure that it performs as mlflow expects import mlflow.pyfunc class ProphetModel(mlflow.pyfunc.PythonModel): def __init__(self, model): self.model = model super().__init__() def load_context(self, context): from prophet import Prophet return def predict(self, context, model_input): future = self.model.make_future_dataframe(periods=model_input['periods'][0]) return self.model.predict(future) ``` 44 seconds for training by default \ 3.62 seconds with processes parallelisation \ 13 seconds after we add the toPandas conversion here and run with parallelisation ``` train_prophet = train[['pickup_date', 'total_rides']] train_prophet.columns = ['ds', 'y'] #train_prophet.head(10) val_prophet = val[['pickup_date', 'total_rides']] val_prophet.columns = ['ds', 'y'] #val_prophet.head(10) %time rolling_window = 0.1 conda_env = { 'channels': ['conda-forge'], 'dependencies': [{ 'pip': [ 'prophet=={0}'.format(prophet.__version__) ] }], "name": "prophetenv" } with mlflow.start_run(experiment_id=15): m = prophet.Prophet(daily_seasonality=True) # need to adjust the fit function to suit m.fit(train_prophet) # cross validation is the thingy that is generating our different train sets # tqdm is glitchy with my setup so disabling for now df_cv = cross_validation(m, initial="28 days", period="7 days", horizon="14 days", disable_tqdm=True, parallel="processes") df_p = performance_metrics(df_cv, rolling_window=rolling_window) mlflow.log_param("rolling_window", rolling_window) mlflow.log_metric("rmse", df_p.loc[0, "rmse"]) mlflow.log_metric("mae", df_p.loc[0, "mae"]) mlflow.log_metric("mape", df_p.loc[0, "mape"]) print(" CV: {}".format(df_cv.head())) print(" Perf: {}".format(df_p.head())) 
mlflow.pyfunc.log_model("model", conda_env=conda_env, python_model=ProphetModel(m)) print( "Logged model with URI: runs:/{run_id}/model".format( run_id=mlflow.active_run().info.run_id ) ) ``` # Prophet Diagnostics ``` # Python from prophet.plot import plot_cross_validation_metric fig = plot_cross_validation_metric(df_cv, metric='mape') ``` We aren't seeing many differences with longer horizons ``` future = m.make_future_dataframe(periods=len(val_prophet)) forecast = m.predict(future) fig = m.plot_components(forecast) ``` # Testing out Uber Orbit ``` from orbit.models.dlt import DLTFull from orbit.diagnostics.plot import plot_predicted_data dlt = DLTFull( response_col='y', date_col='ds', #regressor_col=['trend.unemploy', 'trend.filling', 'trend.job'], seasonality=7, ) dlt.fit(df=train_prophet) # outcomes data frame predicted_df = dlt.predict(df=val_prophet) plot_predicted_data( training_actual_df=train_prophet, predicted_df=predicted_df, date_col=dlt.date_col, actual_col=dlt.response_col, test_actual_df=val_prophet ) ``` # Testing sktime ``` from sktime.performance_metrics.forecasting import mean_absolute_percentage_error from sktime.utils.plotting import plot_series import numpy as np print("min: {0}, max {1}".format(min(train.pickup_date), max(train.pickup_date))) print("min: {0}, max {1}".format(min(val.pickup_date), max(val.pickup_date))) train_tr = pd.date_range(min(train.pickup_date), max(train.pickup_date)) val_tr = pd.date_range(min(val.pickup_date), max(val.pickup_date)) assert len(train) == len(train_tr) assert len(val) == len(val_tr) train_skt_df = pd.Series(train['total_rides'].values.astype('float'), index=train_tr) val_skt_df = pd.Series(val['total_rides'].values, index=val_tr) plot_series(train_skt_df) plot_series(val_skt_df) # test pandas #pd.PeriodIndex(pd.date_range("2020-01-01", periods=30, freq="D")) from sktime.forecasting.base import ForecastingHorizon #fh = ForecastingHorizon(test_sktime.index, is_relative=False) fh_period = 
ForecastingHorizon( val_skt_df.index, is_relative=False ) from sktime.forecasting.naive import NaiveForecaster basic_forecaster = NaiveForecaster(strategy="last") forecaster = NaiveForecaster(strategy="last") forecaster.fit(train_skt_df) # stuck here for now preds = forecaster.predict(fh_period) fig, ax = plot_series(val_skt_df, preds, labels=["y", "y_pred"]) mean_absolute_percentage_error(preds, val_skt_df) from sktime.forecasting.theta import ThetaForecaster # theta forecasting th_forecaster = ThetaForecaster(sp=7) th_forecaster.fit(train_skt_df) alpha = 0.05 y_pred, y_pred_ints = th_forecaster.predict(fh_period, return_pred_int=True, alpha=alpha) fig, ax = plot_series(val_skt_df, y_pred, labels=["y", "y_pred"]) ax.fill_between( ax.get_lines()[-1].get_xdata(), y_pred_ints["lower"], y_pred_ints["upper"], alpha=0.2, color=ax.get_lines()[-1].get_c(), label=f"{1 - alpha}% prediction intervals", ) ax.legend(); mean_absolute_percentage_error(y_pred, val_skt_df) from sktime.forecasting.ets import AutoETS et_forecaster = AutoETS(auto=True, sp=7, n_jobs=-1) et_forecaster.fit(train_skt_df) y_pred = et_forecaster.predict(fh_period) plot_series(train_skt_df, val_skt_df, y_pred, labels=["y_train", "y_test", "y_pred"]) #mean_absolute_percentage_error(y_pred, val_skt_df) mean_absolute_percentage_error(y_pred, val_skt_df) from sktime.forecasting.arima import AutoARIMA ar_forecaster = AutoARIMA(sp=7, suppress_warnings=True) ar_forecaster.fit(train_skt_df) y_pred = ar_forecaster.predict(fh_period) plot_series(train_skt_df, val_skt_df, y_pred, labels=["y_train", "y_test", "y_pred"]) #mean_absolute_percentage_error(y_pred, val_skt_df) mean_absolute_percentage_error(y_pred, val_skt_df) from sktime.forecasting.tbats import TBATS tbats_forecaster = TBATS(sp=7, use_trend=True, use_box_cox=True) tbats_forecaster.fit(train_skt_df) y_pred = tbats_forecaster.predict(fh_period) plot_series(train_skt_df, val_skt_df, y_pred, labels=["y_train", "y_test", "y_pred"]) 
mean_absolute_percentage_error(y_pred, val_skt_df) ``` # Pytorch - Forecasting ``` from pytorch_forecasting import Baseline, NBeats, TimeSeriesDataSet from pytorch_forecasting.data import NaNLabelEncoder, TorchNormalizer import pytorch_lightning as pl from pytorch_lightning.callbacks import EarlyStopping import torch ``` We need to regig this: - need to merge the two datasets and split it by the time index - time index must be integer, cannot use date times etc ``` torch_train = train[['pickup_date', 'total_rides']].copy() torch_train['group'] = 'Only' torch_train['time_idx'] = torch_train.index.astype('int') max(torch_train['time_idx']) torch_val = val[['pickup_date', 'total_rides']].copy() torch_val['group'] = 'Only' torch_val.index = pd.RangeIndex(start=max(torch_train['time_idx'])+1, stop=max(torch_train['time_idx'])+len(torch_val)+1) torch_val['time_idx'] = torch_val.index.astype('int') merged = pd.concat([torch_train, torch_val]) merged.head() torch_train.total_rides.head(2) # create dataset and dataloaders max_encoder_length = 60 max_prediction_length = len(val_skt_df) training_cutoff = 730 context_length = max_encoder_length prediction_length = max_prediction_length training = TimeSeriesDataSet( merged[lambda x: x.time_idx < training_cutoff], time_idx="time_idx", target="total_rides", target_normalizer=TorchNormalizer(), #categorical_encoders={"group": NaNLabelEncoder().fit(torch_train.group)}, group_ids=["group"], # only unknown variable is "value" - and N-Beats can also not take any additional variables time_varying_unknown_reals=["total_rides"], max_encoder_length=context_length, max_prediction_length=prediction_length, ) validation = TimeSeriesDataSet.from_dataset(training, merged, min_prediction_idx=training_cutoff) batch_size = 128 train_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=0) val_dataloader = validation.to_dataloader(train=False, batch_size=batch_size, num_workers=0) training.target_normalizer 
pl.seed_everything(42) trainer = pl.Trainer(gradient_clip_val=0.01) net = NBeats.from_dataset(training, learning_rate=3e-2, weight_decay=1e-2, widths=[32, 512], backcast_loss_ratio=0.1) # find optimal learning rate res = trainer.tuner.lr_find(net, train_dataloader=train_dataloader, val_dataloaders=val_dataloader, min_lr=1e-5) print(f"suggested learning rate: {res.suggestion()}") fig = res.plot(show=True, suggest=True) fig.show() net.hparams.learning_rate = res.suggestion() early_stop_callback = EarlyStopping(monitor="val_loss", min_delta=1e-4, patience=10, verbose=False, mode="min") trainer = pl.Trainer( max_epochs=100, gpus=0, weights_summary="top", gradient_clip_val=0.01, callbacks=[early_stop_callback], limit_train_batches=30, ) net = NBeats.from_dataset( training, learning_rate=4e-3, log_interval=10, log_val_interval=1, weight_decay=1e-2, widths=[32, 512], backcast_loss_ratio=1.0, ) trainer.fit( net, train_dataloader=train_dataloader, val_dataloaders=val_dataloader, ) best_model_path = trainer.checkpoint_callback.best_model_path best_model = NBeats.load_from_checkpoint(best_model_path) actuals = torch.cat([y[0] for x, y in iter(val_dataloader)]) predictions = best_model.predict(val_dataloader) (actuals - predictions).abs().mean() raw_predictions, x = best_model.predict(val_dataloader, mode="raw", return_x=True) best_model.plot_prediction(x, raw_predictions, idx=0, add_loss_to_title=True); # we are only forecasting 1 series so idx is just 0 #for idx in range(10): # plot 10 examples # best_model.plot_prediction(x, raw_predictions, idx=idx, add_loss_to_title=True); ``` # GluonTS requires pandas at least 1.2 ``` pd.__version__ from gluonts.dataset.common import ListDataset import matplotlib.pyplot as plt print(train_skt_df.values.shape) print(val_skt_df.values.shape) # train dataset: cut the last window of length "prediction_length", add "target" and "start" fields train_ds = ListDataset( [{'target': train_skt_df.values.astype(int), 
'start':train_skt_df.index[0].to_pydatetime() }], freq="1D" ) # test dataset: use the whole dataset, add "target" and "start" fields test_ds = ListDataset( [{'target': val_skt_df.values.astype(int), 'start': val_skt_df.index[0].to_pydatetime() }], freq="1D" ) #type(train_skt_df.index[0]) type(train_skt_df.index[0].to_pydatetime()) train_skt_df.index[0].to_pydatetime() #train_skt_df.values from gluonts.model.simple_feedforward import SimpleFeedForwardEstimator from gluonts.mx import Trainer estimator = SimpleFeedForwardEstimator( num_hidden_dimensions=[10], prediction_length=len(val_skt_df), context_length=100, freq="D", trainer=Trainer( ctx="cpu", epochs=20, learning_rate=1e-3, num_batches_per_epoch=100 ) ) predictor = estimator.train(training_data=train_ds) from gluonts.evaluation import make_evaluation_predictions forecast_it, ts_it = make_evaluation_predictions( dataset=test_ds, # test dataset predictor=predictor, # predictor num_samples=100, # number of sample paths we want for evaluation ) forecasts = list(forecast_it) tss = list(ts_it) ts_entry = tss[0] forecast_entry = forecasts[0] def plot_prob_forecasts(ts_entry, forecast_entry): plot_length = 150 prediction_intervals = (50.0, 90.0) legend = ["observations", "median prediction"] + [f"{k}% prediction interval" for k in prediction_intervals][::-1] fig, ax = plt.subplots(1, 1, figsize=(10, 7)) ts_entry[-plot_length:].plot(ax=ax) # plot the time series forecast_entry.plot(prediction_intervals=prediction_intervals, color='g') plt.grid(which="both") plt.legend(legend, loc="upper left") plt.show() plot_prob_forecasts(ts_entry, forecast_entry) ``` # Statsmodels ``` import statsmodels.api as sm mod = sm.tsa.SARIMAX(train['total_rides'], order=(1, 0, 0), trend='c') # Estimate the parameters res = mod.fit() print(res.summary()) ``` ## Stopping Spark Session ``` spark.stop() ```
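Every model above is scored with MAPE. As a sanity check on what that number means, here is a hand-rolled version of the standard (non-symmetric) definition on made-up toy arrays, not the taxi data:

```python
import numpy as np

def mape(y_true, y_pred):
    # mean absolute percentage error: mean of |(y - yhat) / y|
    return np.mean(np.abs((y_true - y_pred) / y_true))

y_true = np.array([100.0, 200.0, 400.0])
y_pred = np.array([110.0, 180.0, 400.0])
score = mape(y_true, y_pred)  # (0.10 + 0.10 + 0.00) / 3
```

Note that the library calls above pass `(preds, val_skt_df)`; check your metric's documented argument order, since MAPE is not symmetric in its arguments.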
# Large Scale Kernel Ridge Regression

```
import sys
sys.path.insert(0, '/Users/eman/Documents/code_projects/kernellib')
sys.path.insert(0, '/home/emmanuel/code/kernellib')

import numpy as np
from kernellib.large_scale import RKSKernelRidge, KernelRidge as RKernelRidge
from kernellib.utils import estimate_sigma, r_assessment
from sklearn.model_selection import GridSearchCV
import matplotlib.pyplot as plt

%matplotlib inline
%load_ext autoreload
%autoreload 2
```

#### Sample Data

```
seed = 123
rng = np.random.RandomState(seed)

n_train, n_test = 10000, 1000
d_dimensions = 1
noise = 0.1

xtrain = rng.randn(n_train, d_dimensions)
ytrain = np.sin(xtrain) + noise * rng.randn(n_train, d_dimensions)

xtest = rng.randn(n_test, d_dimensions)
ytest = np.sin(xtest) + noise * rng.randn(n_test, d_dimensions)

# training
n_components = 10
alpha = 1e-3
sigma = estimate_sigma(xtrain)
```

## Random Kitchen Sinks Regression

In this method, I implement the Random Kitchen Sinks algorithm found [here](https://people.eecs.berkeley.edu/~brecht/kitchensinks.html). I don't try to transform the problem into a matrix approximation and then fit it into the KRR framework. This is largely because the RKS algorithm they implement uses complex values that need to be present when solving and transforming the data. If the complex values are taken out before the transformation, the results are garbage. Furthermore, some experiments that I ran (see below) show that RKS as a transformer does not approximate the kernel matrix very well. Therefore, this algorithm comes as is. It's a shame that you cannot write the function as a transformer, but the phenomenal results that you obtain make it worth it in my opinion.
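For intuition, here is a minimal, self-contained sketch of the real-valued random-feature variant of this idea. This is not kernellib's complex-valued RKS solver described above — the feature map, toy data, and ridge solve below are all illustrative:

```python
import numpy as np

def rff_features(X, W, b):
    # real-valued random Fourier feature map: z(x) = sqrt(2/D) * cos(x.W + b)
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

rng = np.random.RandomState(0)
X = rng.randn(500, 1)
y = np.sin(X).ravel() + 0.1 * rng.randn(500)

D, sigma, alpha = 100, 1.0, 1e-3
W = rng.randn(1, D) / sigma           # frequencies ~ N(0, 1/sigma^2) approximate an RBF kernel
b = rng.uniform(0, 2 * np.pi, D)      # random phases

Z = rff_features(X, W, b)
# ridge regression in the random feature space: (Z'Z + alpha I) w = Z'y
w = np.linalg.solve(Z.T @ Z + alpha * np.eye(D), Z.T @ y)

y_hat = Z @ w
mse = np.mean((y - y_hat) ** 2)
```

The linear solve is D×D rather than n×n, which is the whole point of these approximations: cost scales with the number of random features, not the number of samples.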
``` rks_model = RKSKernelRidge(n_components=n_components, alpha=alpha, sigma=sigma, random_state=seed) rks_model.fit(xtrain, ytrain) y_pred = rks_model.predict(xtest) r_assessment(y_pred, ytest, verbose=1); %timeit rks_model.fit(xtrain, ytrain); %timeit rks_model.predict(xtest); fig, ax = plt.subplots() xplot = np.linspace(xtrain.min(), xtest.max(), 100)[:, np.newaxis] yplot = rks_model.predict(xplot) ax.scatter(xtrain, ytrain, color='r', label='Training Data') ax.plot(xplot, yplot, color='k', linewidth=2, label='Predictions') ax.legend() ax.set_title('Random Kitchen Sinks Approximation') plt.show() ``` #### Cross Validation Compatibility ``` sigmaMin = np.log10(sigma*0.1); sigmaMax = np.log10(sigma*10); sigmas = np.logspace(sigmaMin,sigmaMax,20); param_grid = { 'n_components': [1, 5, 10, 25], 'alpha': [1e0, 1e-1, 1e-2, 1e-3], 'sigma': sigmas } n_jobs = 24 cv = 3 rks_grid_model = GridSearchCV(RKSKernelRidge(random_state=seed), param_grid=param_grid, n_jobs=n_jobs, cv=cv, verbose=1) rks_grid_model.fit(xtrain, ytrain); y_pred = rks_grid_model.predict(xtest) r_assessment(y_pred, ytest) fig, ax = plt.subplots() xplot = np.linspace(xtrain.min(), xtest.max(), 100)[:, np.newaxis] yplot = rks_grid_model.predict(xplot) ax.scatter(xtrain, ytrain, color='r', label='Training Data') ax.plot(xplot, yplot, color='k', linewidth=2, label='Predictions') ax.legend() ax.set_title('Random Kitchen Sinks Approximation w/ Grid Search') plt.show() ``` ## Nystrom Approximation ``` approximation = 'nystrom' nys_model = RKernelRidge(n_components=n_components, alpha=alpha, sigma=sigma, kernel='rbf', random_state=seed, approximation=approximation) nys_model.fit(xtrain, ytrain); y_pred = nys_model.predict(xtest) r_assessment(y_pred, ytest, verbose=1); %timeit nys_model.fit(xtrain, ytrain); %timeit nys_model.predict(xtest); fig, ax = plt.subplots() xplot = np.linspace(xtrain.min(), xtest.max(), 100)[:, np.newaxis] yplot = nys_model.predict(xplot) ax.scatter(xtrain, ytrain, color='r', 
label='Training Data')
ax.plot(xplot, yplot, color='k', linewidth=2, label='Predictions')
ax.legend()
ax.set_title('Nystrom Approximation')
plt.show()
```

### Nystrom w/ Grid Search

```
sigmaMin = np.log10(sigma*0.1);
sigmaMax = np.log10(sigma*10);
sigmas = np.logspace(sigmaMin,sigmaMax,20);

param_grid = {
    'kernel': ['rbf'],
    'n_components': [1, 5, 10, 25],
    'alpha': [1e0, 1e-1, 1e-2, 1e-3],
    'sigma': sigmas
}

n_jobs = 24
cv = 3

nys_grid_model = GridSearchCV(RKernelRidge(random_state=seed, approximation=approximation),
                              param_grid=param_grid, n_jobs=n_jobs, cv=cv, verbose=1)
nys_grid_model.fit(xtrain, ytrain);

# predict with the refitted best estimator before scoring
# (previously this scored the stale y_pred from the non-grid model)
y_pred = nys_grid_model.predict(xtest)
r_assessment(y_pred, ytest, verbose=1);

print('Best sigma:', nys_grid_model.best_estimator_.sigma)
print('Best alpha:', nys_grid_model.best_estimator_.alpha)
print('Best Number of features:', nys_grid_model.best_estimator_.n_components)
print('Best Kernel:', nys_grid_model.best_estimator_.kernel)

fig, ax = plt.subplots()
xplot = np.linspace(xtrain.min(), xtest.max(), 100)[:, np.newaxis]
yplot = nys_grid_model.predict(xplot)
ax.scatter(xtrain, ytrain, color='r', label='Training Data')
ax.plot(xplot, yplot, color='k', linewidth=2, label='Predictions')
ax.legend()
ax.set_title('Nystrom Approximation w/ Grid Search')
plt.show()
```

## Randomized Nystrom Matrix Approximation

```
approximation = 'rnystrom'
k_rank = 10

rnys_model = RKernelRidge(n_components=n_components, alpha=alpha, sigma=sigma, kernel='rbf',
                          random_state=seed, approximation=approximation, k_rank=k_rank)
rnys_model.fit(xtrain, ytrain);

y_pred = rnys_model.predict(xtest)
r_assessment(y_pred, ytest, verbose=1);

%timeit rnys_model.fit(xtrain, ytrain);
%timeit rnys_model.predict(xtest);

fig, ax = plt.subplots()
xplot = np.linspace(xtrain.min(), xtest.max(), 100)[:, np.newaxis]
yplot = rnys_model.predict(xplot)
ax.scatter(xtrain, ytrain, color='r', label='Training Data')
ax.plot(xplot, yplot, color='k', linewidth=2, label='Predictions')
ax.legend()
ax.set_title('Randomized Nystrom Approximation')
plt.show()
```

## Random Fourier Features Approximation

```
approximation = 'rff'

rff_model = RKernelRidge(n_components=n_components, alpha=alpha, sigma=sigma, kernel='rbf',
                         random_state=seed, approximation=approximation)
rff_model.fit(xtrain, ytrain);

y_pred = rff_model.predict(xtest)
r_assessment(y_pred, ytest, verbose=1);

%timeit rff_model.fit(xtrain, ytrain);
%timeit rff_model.predict(xtest);

fig, ax = plt.subplots()
xplot = np.linspace(xtrain.min(), xtest.max(), 100)[:, np.newaxis]
yplot = rff_model.predict(xplot)
ax.scatter(xtrain, ytrain, color='r', label='Training Data')
ax.plot(xplot, yplot, color='k', linewidth=2, label='Predictions')
ax.legend()
ax.set_title('Random Fourier Features')
plt.show()
```

### Fast Food

```
approximation = 'fastfood'

fastfood_model = RKernelRidge(n_components=n_components, alpha=alpha, sigma=sigma, kernel='rbf',
                              random_state=seed, approximation=approximation, trade_off='mem')
fastfood_model.fit(xtrain, ytrain);

y_pred = fastfood_model.predict(xtest)
r_assessment(y_pred, ytest, verbose=1);

%timeit fastfood_model.fit(xtrain, ytrain);
%timeit fastfood_model.predict(xtest);

fig, ax = plt.subplots()
xplot = np.linspace(xtrain.min(), xtest.max(), 100)[:, np.newaxis]
yplot = fastfood_model.predict(xplot)
ax.scatter(xtrain, ytrain, color='r', label='Training Data')
ax.plot(xplot, yplot, color='k', linewidth=2, label='Predictions')
ax.legend()
ax.set_title('Fast Food')
plt.show()
```

### Timing Comparison

#### Number of Features

```
from sklearn.datasets import make_regression
import seaborn; seaborn.set()

m_range = (2 ** (np.arange(12.3, 20))).astype(int)
print(m_range.shape, m_range.min(), m_range.max())

%%time
t_rks = list()
t_nys = list()
t_rnys = list()
t_rbf = list()
t_rff = list()

# training
n_components = 50
alpha = 1e-3
gamma = 1.0

for m in m_range:
    xtrain, ytrain = make_regression(n_samples=m, n_features=2000,
                                     n_informative=200, n_targets=1,
                                     effective_rank=50,
noise=0.2, random_state=seed) print(xtrain.shape) # ------------------------------- # Random Kitchen Sinks) # ------------------------------- rks_model = RKSKernelRidge(n_components=n_components, alpha=alpha, gamma=gamma, random_state=seed) t1 = %timeit -oq rks_model.fit(xtrain, ytrain) # ------------------------------ # Nystrom # ------------------------------ approximation = 'nystrom' nys_model = RKernelRidge(n_components=n_components, alpha=alpha, gamma=gamma, kernel='rbf', random_state=seed, approximation=approximation) t2 = %timeit -oq nys_model.fit(xtrain, ytrain); # ---------------------------- # Randomized Nystrom # ---------------------------- approximation = 'rnystrom' k_rank = n_components rnys_model = RKernelRidge(n_components=n_components, alpha=alpha, gamma=gamma, kernel='rbf', random_state=seed, approximation=approximation, k_rank=k_rank) t3 = %timeit -oq rnys_model.fit(xtrain, ytrain); # ----------------------------------- # RBF Sampler (Random Kitchen Sinks) # ----------------------------------- approximation = 'rks' rks_model = RKernelRidge(n_components=n_components, alpha=alpha, gamma=gamma, kernel='rbf', random_state=seed, approximation=approximation) t4 = %timeit -oq rks_model.fit(xtrain, ytrain); # ----------------------------- # Random Fourier Features # ----------------------------- approximation = 'rff' rff_model = RKernelRidge(n_components=n_components, alpha=alpha, gamma=gamma, kernel='rbf', random_state=seed, approximation=approximation) t5 = %timeit -oq rff_model.fit(xtrain, ytrain); t_rks.append(t1.best) t_nys.append(t2.best) t_rnys.append(t3.best) t_rbf.append(t4.best) t_rff.append(t5.best) plt.loglog(m_range, t_rks, label='Random Kitchen Sinks') plt.loglog(m_range, t_rff, label='Random Fourier Features') plt.loglog(m_range, t_nys, label='Nystrom') plt.loglog(m_range, t_rnys, label='Randomized Nystrom') plt.loglog(m_range, t_rbf, label='RBF Sampler') plt.legend(loc='upper left') plt.xlabel('Number of Elements') plt.ylabel('Execution 
Time (secs)'); plt.plot(m_range, t_rks, label='Random Kitchen Sinks') plt.plot(m_range, t_rff, label='Random Fourier Features') plt.plot(m_range, t_nys, label='Nystrom') plt.plot(m_range, t_rnys, label='Randomized Nystrom') plt.plot(m_range, t_rbf, label='RBF Sampler') plt.legend(loc='upper left') plt.xlabel('Number of Elements') plt.ylabel('Execution Time (secs)'); ```
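The Nyström timings above use kernellib's implementation. As a self-contained illustration of the underlying idea — approximate the RBF Gram matrix from `m` sampled landmark points — here is a minimal numpy sketch on toy data (everything in it is illustrative, not the benchmark above):

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, then exponentiate
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def nystrom_approx(X, m, gamma, rng):
    # K ≈ C W^+ C',  C = K(X, landmarks),  W = K(landmarks, landmarks)
    idx = rng.choice(len(X), m, replace=False)
    C = rbf_kernel(X, X[idx], gamma)
    Wm = C[idx]
    return C @ np.linalg.pinv(Wm) @ C.T

rng = np.random.RandomState(0)
X = rng.randn(200, 2)
K = rbf_kernel(X, X, gamma=0.5)

# relative Frobenius error shrinks as the number of landmarks grows
err = [np.linalg.norm(K - nystrom_approx(X, m, 0.5, rng)) / np.linalg.norm(K)
       for m in (10, 50, 150)]
```

The approximation costs O(nm² + m³) instead of the O(n³) of exact KRR, which is why the curves in the timing plots separate as the sample size grows.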
<a href="https://www.skills.network/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDL0120ENSkillsNetwork20629446-2021-01-01"><img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DL0120ENedX/labs/Template%20for%20Instructional%20Hands-on%20Labs/images/IDSNlogo.png" width="400px" align="center"></a>

<h1 align="center"><font size="5">RESTRICTED BOLTZMANN MACHINES</font></h1>

<h3>Introduction</h3>
<b>Restricted Boltzmann Machine (RBM):</b> RBMs are shallow neural nets that learn to reconstruct data by themselves in an unsupervised fashion.

<h4>Why are RBMs important?</h4>
An RBM is a basic form of autoencoder. It can automatically extract <b>meaningful</b> features from a given input.

<h4>How does it work?</h4>
An RBM is a two-layer neural network. Simply put, the RBM takes the inputs and translates them into a set of binary values that represents them in the hidden layer. Then, these numbers can be translated back to reconstruct the inputs. Through several forward and backward passes, the RBM is trained, and a trained RBM can reveal which features are the most important ones when detecting patterns.

<h4>What are the applications of an RBM?</h4>
RBMs are useful for <a href='http://www.cs.utoronto.ca/~hinton/absps/netflixICML.pdf?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDL0120ENSkillsNetwork20629446-2021-01-01'> Collaborative Filtering</a>, dimensionality reduction, classification, regression, feature learning, topic modeling and even <b>Deep Belief Networks</b>.

<h4>Is an RBM a generative or discriminative model?</h4>
An RBM is a generative model.
Let me explain by first looking at the difference between discriminative and generative models:

<b>Discriminative:</b> Consider a classification problem in which we want to learn to distinguish between sedan cars (y = 1) and SUV cars (y = 0), based on some features of the cars. Given a training set, an algorithm like logistic regression tries to find a straight line, or <i>decision boundary</i>, that separates the SUVs and sedans.

<b>Generative:</b> Looking at sedans, we can build a model of what sedan cars look like. Then, looking at SUVs, we can build a separate model of what SUV cars look like. Finally, to classify a new car, we can match it against the sedan model and against the SUV model, to see whether the new car looks more like an SUV or a sedan.

Generative models specify a probability distribution over a dataset of input vectors. We can carry out both supervised and unsupervised tasks with generative models:

<ul>
    <li>In an unsupervised task, we try to form a model for $P(x)$, the probability of observing an input vector $x$.</li>
    <li>In a supervised task, we first form a model for $P(x|y)$, the probability of $x$ given $y$ (the label for $x$). For example, if $y = 0$ indicates that a car is an SUV, and $y = 1$ indicates that a car is a sedan, then $p(x|y = 0)$ models the distribution of SUV features, and $p(x|y = 1)$ models the distribution of sedan features. If we manage to find $P(x|y)$ and $P(y)$, then we can use <b>Bayes' rule</b> to estimate $P(y|x)$, because: $$p(y|x) = \frac{p(x|y)p(y)}{p(x)}$$</li>
</ul>

Now the question is: can we build a generative model, and then use it to create synthetic data by directly sampling from the modeled probability distributions? Let's see.
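Bayes' rule is easy to check with a tiny numeric example. All the probabilities below are made-up toy values for the SUV/sedan setup, purely for illustration:

```python
# Hypothetical prior and likelihoods for the SUV-vs-sedan example
p_y1 = 0.4            # P(y=1): prior probability of a sedan
p_y0 = 0.6            # P(y=0): prior probability of an SUV
p_x_given_y1 = 0.7    # P(x|y=1): feature x under the sedan model
p_x_given_y0 = 0.2    # P(x|y=0): feature x under the SUV model

# total probability: P(x) = P(x|y=1)P(y=1) + P(x|y=0)P(y=0)
p_x = p_x_given_y1 * p_y1 + p_x_given_y0 * p_y0

# Bayes' rule: P(y=1|x) = P(x|y=1)P(y=1) / P(x)
p_y1_given_x = p_x_given_y1 * p_y1 / p_x
```

With these numbers, P(x) = 0.28 + 0.12 = 0.4, so the posterior probability of "sedan" given the observed feature is 0.28 / 0.4 = 0.7.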
<h2>Table of Contents</h2>
<ol>
    <li><a href="#ref1">Initialization</a></li>
    <li><a href="#ref2">RBM layers</a></li>
    <li><a href="#ref3">What RBM can do after training?</a></li>
    <li><a href="#ref4">How to train the model?</a></li>
    <li><a href="#ref5">Learned features</a></li>
</ol>
<p></p>
</div>
<br>

<hr>

<a id="ref1"></a>

<h3>Initialization</h3>

First, we have to load the utility file, which contains utility functions that are not connected to the networks presented in this tutorial, but rather help process the outputs into a more understandable form.

```
import urllib.request
with urllib.request.urlopen("https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DL0120EN-SkillsNetwork/labs/Week4/data/utils.py") as url:
    response = url.read()
target = open('utils.py', 'w')
target.write(response.decode('utf-8'))
target.close()
```

<h2>Installing TensorFlow</h2>

We will install TensorFlow version 2.2.0 and its required prerequisites, as well as Pillow.

```
!pip install grpcio==1.24.3
!pip install tensorflow==2.2.0
!pip install pillow==8.1.0
```

<b>Notice:</b> This notebook was created with TensorFlow version 2.2 and might not work with other versions. Therefore, we check:

```
import tensorflow as tf
from IPython.display import Markdown, display
def printmd(string):
    display(Markdown('# <span style="color:red">'+string+'</span>'))

if not tf.__version__ == '2.2.0':
    printmd('<<<<<!!!!! ERROR !!!! please upgrade to TensorFlow 2.2.0, or restart your Kernel (Kernel->Restart & Clear Output)>>>>>')
```

Now, we load all the packages that we use to create the net, including the TensorFlow package:

```
import tensorflow as tf
import numpy as np
from PIL import Image
from utils import tile_raster_images
import matplotlib.pyplot as plt
%matplotlib inline
```

<hr>
<a id="ref2"></a>

<h3>RBM layers</h3>

An RBM has two layers.
The first layer of the RBM is called the <b>visible</b> (or input) layer. Imagine that in our toy example each input vector has 7 values, so the visible layer must have $V=7$ input nodes.

The second layer is the <b>hidden</b> layer, which has $H$ neurons in our case. Each hidden node takes on values of either 0 or 1 (i.e., $h_i = 1$ or $h_i = 0$), with a probability that is a logistic function of the inputs it receives from the $V$ visible units, denoted $p(h_i = 1)$. For our toy sample, we'll use 2 nodes in the hidden layer, so $H = 2$.

<center><img src="https://ibm.box.com/shared/static/eu26opvcefgls6vnwuo29uwp0nudmokh.png" alt="RBM Model" style="width: 400px;"></center>

Each node in the first layer also has a <b>bias</b>. We collect the visible biases in a vector $v\_{bias}$, with one value per visible unit. The hidden-layer <b>bias</b> vector $h\_{bias}$ is defined similarly, with one value per hidden unit.

```
v_bias = tf.Variable(tf.zeros([7]), tf.float32)
h_bias = tf.Variable(tf.zeros([2]), tf.float32)
```

We have to define weights between the input layer and hidden layer nodes. In the weight matrix, the number of rows equals the number of input nodes, and the number of columns equals the number of output nodes. We define a tensor $\mathbf{W}$ of shape (7, 2), where the number of visible neurons = 7 and the number of hidden neurons = 2.

```
W = tf.constant(np.random.normal(loc=0.0, scale=1.0, size=(7, 2)).astype(np.float32))
```

<hr>
<a id="ref3"></a>

<h3>What RBM can do after training?</h3>

Think of an RBM as a model that has been trained on images from a dataset of many SUV and sedan cars. Also, imagine that the RBM network has only two hidden nodes, where one node encodes the weight and the other encodes the size. In a sense, the different configurations represent different cars, where one is an SUV and the other is a sedan.
In the training process, through many forward and backward passes, the RBM adjusts its weights to send a stronger signal to either the SUV node (0, 1) or the sedan node (1, 0) in the hidden layer, given the pixels of the images. Now, given an SUV configuration in the hidden layer, which distribution of pixels should we expect? An RBM can give you two things. First, it encodes your images in the hidden layer. Second, it gives you the probability of observing a case, given some hidden values.

<h3>The Inference Process</h3>

An RBM has two phases:

<ul>
    <li>Forward Pass</li>
    <li>Backward Pass or Reconstruction</li>
</ul>

<b>Phase 1) Forward pass:</b> Input one training sample (one image) $\mathbf{x}$ through all visible nodes, and pass it to all hidden nodes. Processing happens in each node in the hidden layer. This computation begins by making stochastic decisions about whether to transmit that input or not (i.e. to determine the state of each hidden unit). First, the probability vector is computed using the input feature vector $\mathbf{x}$, the weight matrix $\mathbf{W}$, and the bias term $h\_{bias}$, as

$$p({h_j}|\mathbf x)= \sigma\left( \sum\_{i=1}^V W\_{ij} x_i + h\_{bias} \right),$$

where $\sigma(z) = (1+e^{-z})^{-1}$ is the logistic function.

So, what does $p({h_j})$ represent? It is the <b>probability distribution</b> of the hidden units. That is, the RBM uses the inputs $x_i$ to make predictions about the hidden node activations. For example, imagine that the hidden node activation values are \[0.51 0.84] for the first training item. It tells you that the conditional probability for each hidden neuron in Phase 1 is:

$$p(h\_{1} = 1|\mathbf{v}) = 0.51$$

$$p(h\_{2} = 1|\mathbf{v}) = 0.84$$

As a result, for each row in the training set, a vector of probabilities is generated. In TensorFlow, this is a `tensor` with a shape of (1, 2).
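The probability formula above is just a matrix product pushed through the logistic function. In plain numpy, for the 7-visible/2-hidden toy setup (the random weights here are purely illustrative):

```python
import numpy as np

def sigmoid(z):
    # logistic function sigma(z) = 1 / (1 + exp(-z))
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.RandomState(0)
W = rng.normal(size=(7, 2))          # visible-to-hidden weights, shape (V, H)
h_bias = np.array([0.1, 0.1])        # one bias per hidden unit
x = np.array([[1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]])  # one input row

# p(h_j = 1 | x) for each hidden unit j, shape (1, H)
h_prob = sigmoid(x @ W + h_bias)
```

Each entry of `h_prob` is a valid probability in (0, 1), one per hidden unit, exactly as in the (1, 2) tensor described above.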
We then turn unit $j$ on with probability $p(h\_{j}|\mathbf{v})$, and off with probability $1 - p(h\_{j}|\mathbf{v})$, by generating a uniform random number vector $\mathbf{\xi}$ and comparing it to the activation probabilities:

<center>If $\xi_j < p(h_{j}|\mathbf{v})$, then $h_j=1$, else $h_j=0$.</center>

(Note the direction of the comparison: a unit with a high activation probability should be on most of the time, which is what the `relu(sign(p - ξ))` trick in the code below implements.)

Therefore, the conditional probability of a configuration of $\mathbf{h}$ given $\mathbf{v}$ (for a training sample) is:

$$p(\mathbf{h} \mid \mathbf{v}) = \prod\_{j=1}^H p(h_j \mid \mathbf{v}),$$

where $H$ is the number of hidden units.

Before we go further, let's look at a toy example for one case out of all the inputs. Assume that we have a trained RBM, and a very simple input vector, such as \[1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]. Let's see what the output of the forward pass would look like:

```
X = tf.constant([[1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]], tf.float32)
v_state = X
print ("Input: ", v_state)

h_bias = tf.constant([0.1, 0.1])
print ("hb: ", h_bias)
print ("w: ", W)

# Calculate the probabilities of turning the hidden units on:
h_prob = tf.nn.sigmoid(tf.matmul(v_state, W) + h_bias)  #probabilities of the hidden units
print ("p(h|v): ", h_prob)

# Draw samples from the distribution:
h_state = tf.nn.relu(tf.sign(h_prob - tf.random.uniform(tf.shape(h_prob))))  #states
print ("h0 states:", h_state)
```

<b>Phase 2) Backward Pass (Reconstruction):</b> The RBM reconstructs data by making several forward and backward passes between the visible and hidden layers. In the second phase (i.e. the reconstruction phase), the samples from the hidden layer (i.e. $\mathbf h$) become the input for the backward pass. The same weight matrix, together with the visible layer biases, is passed through the sigmoid function. The reproduced output is a reconstruction, which is an approximation of the original input.
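The `relu(sign(p - ξ))` expression used for sampling is just a Bernoulli draw in disguise: it yields 1 exactly when the uniform variate falls below the activation probability. A plain-numpy sketch (toy probabilities, purely illustrative) showing the two forms agree:

```python
import numpy as np

rng = np.random.RandomState(0)
p = np.array([0.51, 0.84])                  # activation probabilities p(h_j = 1 | v)
xi = rng.uniform(size=p.shape)              # uniform random variates

# the trick used in the notebook: relu(sign(p - xi))
h_trick = np.maximum(np.sign(p - xi), 0.0)

# the direct Bernoulli comparison: h_j = 1 iff xi_j < p_j
h_plain = (xi < p).astype(float)
```

Averaged over many draws, each unit is on a fraction of the time equal to its activation probability, which is what "sampling the hidden states" means.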
```
vb = tf.constant([0.1, 0.2, 0.1, 0.1, 0.1, 0.2, 0.1])
print ("b: ", vb)
v_prob = tf.nn.sigmoid(tf.matmul(h_state, tf.transpose(W)) + vb)
print ("p(vi∣h): ", v_prob)
v_state = tf.nn.relu(tf.sign(v_prob - tf.random.uniform(tf.shape(v_prob))))
print ("v states: ", v_state)
```

The RBM learns a probability distribution over the input, and then, after being trained, it can generate new samples from the learned probability distribution. A <b>probability distribution</b> is a mathematical function that provides the probabilities of occurrence of different possible outcomes in an experiment.

The (conditional) probability distribution over the visible units $\mathbf v$ is given by

$$p(\mathbf{v} \mid \mathbf{h}) = \prod\_{i=1}^V p(v_i \mid \mathbf{h}),$$

where,

$$p(v_i \mid \mathbf{h}) = \sigma\left(\sum\_{j=1}^H W\_{ji} h_j + v\_{bias} \right)$$

So, given the current state of the hidden units and the weights, what is the probability of generating \[1. 0. 0. 1. 0. 0. 0.] in the reconstruction phase, based on the above <b>probability distribution</b> function?

```
inp = X
print("input X:", inp.numpy())
print("probability vector:", v_prob[0].numpy())

v_probability = 1
for elm, p in zip(inp[0], v_prob[0]):
    if elm == 1:
        v_probability *= p
    else:
        v_probability *= (1 - p)
print("probability of generating X: ", v_probability.numpy())
```

How similar are the vectors $\mathbf{x}$ and $\mathbf{v}$? Of course, the reconstructed values most likely will not look anything like the input vector, because our network has not been trained yet. Our objective is to train the model in such a way that the input vector and the reconstructed vector become the same. Therefore, based on how different the input values look from the ones we just reconstructed, the weights are adjusted.

<hr>

<h2>MNIST</h2>

We will be using the MNIST dataset to practice the usage of RBMs. The following cell loads the MNIST dataset.
```
#loading training and test data
mnist = tf.keras.datasets.mnist
(trX, trY), (teX, teY) = mnist.load_data()

# flatten the images and rescale pixel values to [0, 1]
from tensorflow.keras.layers import Flatten
flatten = Flatten(dtype='float32')
trX = flatten(trX/255.0)
# the labels (trY, teY) are not used for unsupervised RBM training, so they are left as-is
```

Let's look at the dimensions of the images. MNIST images have 784 pixels, so the visible layer must have 784 input nodes. For our case, we'll use 50 nodes in the hidden layer, so $H = 50$.

```
vb = tf.Variable(tf.zeros([784]), tf.float32)
hb = tf.Variable(tf.zeros([50]), tf.float32)
```

Let $\mathbf W$ be the tensor of shape 784x50 (784 = number of visible neurons, 50 = number of hidden neurons) that represents the weights between the neurons.

```
W = tf.Variable(tf.zeros([784,50]), tf.float32)
```

Let's define the visible layer:

```
v0_state = tf.Variable(tf.zeros([784]), tf.float32)

#testing to see if the matrix product works
tf.matmul( [v0_state], W)
```

Now, we can define the hidden layer:

```
#computing the hidden nodes probability vector and checking its shape
h0_prob = tf.nn.sigmoid(tf.matmul([v0_state], W) + hb)  #probabilities of the hidden units
print("h0_prob shape: ", tf.shape(h0_prob))

#defining a function to return only the generated hidden states
def hidden_layer(v0_state, W, hb):
    h0_prob = tf.nn.sigmoid(tf.matmul([v0_state], W) + hb)  #probabilities of the hidden units
    h0_state = tf.nn.relu(tf.sign(h0_prob - tf.random.uniform(tf.shape(h0_prob))))  #sample_h_given_X
    return h0_state

h0_state = hidden_layer(v0_state, W, hb)
print("first 15 hidden states: ", h0_state[0][0:15])
```

Now, we define the reconstruction part:

```
def reconstructed_output(h0_state, W, vb):
    v1_prob = tf.nn.sigmoid(tf.matmul(h0_state, tf.transpose(W)) + vb)
    v1_state = tf.nn.relu(tf.sign(v1_prob - tf.random.uniform(tf.shape(v1_prob))))  #sample_v_given_h
    return v1_state[0]

v1_state = reconstructed_output(h0_state, W, vb)
print("hidden state shape: ", h0_state.shape)
print("v0 state shape: ", v0_state.shape)
print("v1 state shape: ", v1_state.shape)
```

<h3>What is the objective function?</h3>

<b>Goal</b>: Maximize the likelihood of our data being drawn from the learned distribution.

<b>Calculate error:</b> In each epoch, we compute the "error" as the mean squared difference between the input and its reconstruction, i.e. the error shows how far the reconstruction is from the data.

<b>Note:</b> tf.reduce_mean computes the mean of elements across dimensions of a tensor.

```
def error(v0_state, v1_state):
    return tf.reduce_mean(tf.square(v0_state - v1_state))

err = tf.reduce_mean(tf.square(v0_state - v1_state))
print("error", err.numpy())
```

<a id="ref4"></a>

<h3>Training the Model</h3>

<b>Warning...</b> The following part is math-heavy; you can skip it if you just want to run the cells in the next section.

As mentioned, we want to give a high probability to the input data we train on. So, in order to train an RBM, we have to maximize the product of the probabilities assigned to all rows $\mathbf{v}$ (images) in the training set $\mathbf{V}$ (a matrix where each row is treated as a visible vector $\mathbf{v}$):

$$\arg \max_W \prod\_{\mathbf{v}\in\mathbf{V}} p(\mathbf{v})$$

which is equivalent to maximizing the expectation of the log probability (the log of a product is the sum of the logs):

$$\arg\max_W\left\[ \mathbb{E} \left(\sum\_{\mathbf v\in \mathbf V}\log p(\mathbf v) \right) \right]$$

So, we have to update the weights $W\_{ij}$ to increase $p(\mathbf{v})$ for all $\mathbf{v}$ in our training data during training. To do so, we have to calculate the derivative:

$$\frac{\partial \log p(\mathbf v)}{\partial W\_{ij}}$$

This cannot be easily done by typical <b>stochastic gradient descent (SGD)</b>, so we use another approach, which has two steps:

<ol>
    <li>Gibbs Sampling</li>
    <li>Contrastive Divergence</li>
</ol>

<h3>Gibbs Sampling</h3>

<h4>Gibbs Sampling Step 1</h4>

Given an input vector $\mathbf{v}$, we use $p(\mathbf{h}|\mathbf{v})$ to predict the hidden values $\mathbf{h}$.
$$p({h_j}|\mathbf v)= \sigma\left(\sum_{i=1}^V W_{ij} v_i + h_{bias} \right)$$

The samples are generated from this distribution by generating a uniform random variate vector $\mathbf{\xi} \sim U\[0,1]$ of length $H$ and comparing it to the computed probabilities:

<center>If $\xi_j < p(h_{j}|\mathbf{v})$, then $h_j=1$, else $h_j=0$.</center>

<h4>Gibbs Sampling Step 2</h4>

Then, knowing the hidden values, we use $p(\mathbf v| \mathbf h)$ to reconstruct new input values $\mathbf v$:

$$p({v_i}|\mathbf h)= \sigma\left(\sum\_{j=1}^H W^{T}\_{ij} h_j + v\_{bias} \right)$$

The samples are generated from this distribution by generating a uniform random variate vector $\mathbf{\xi} \sim U\[0,1]$ of length $V$ and comparing it to the computed probabilities:

<center>If $\xi_i < p(v_{i}|\mathbf{h})$, then $v_i=1$, else $v_i=0$.</center>

Let $\mathbf v_k$ and $\mathbf h_k$ be the vectors for the $k$th iteration. In general, the $k$th state is generated as:

<b>Iteration</b> $k$:

$$\mathbf v\_{k-1} \Rightarrow p(\mathbf h\_{k-1}|\mathbf v\_{k-1})\Rightarrow \mathbf h\_{k-1}\Rightarrow p(\mathbf v\_{k}|\mathbf h\_{k-1})\Rightarrow \mathbf v_k$$

<h3>Contrastive Divergence (CD-k)</h3>

The update of the weight matrix is computed during the Contrastive Divergence step. Vectors $\mathbf v\_{k-1}$ and $\mathbf v_k$ are used to calculate the activation probabilities for the hidden values $\mathbf h\_{k-1}$ and $\mathbf h_k$. The difference between the outer products of those hidden activations with the input vectors $\mathbf v\_{k-1}$ and $\mathbf v_k$ gives the update matrix (positive phase minus negative phase, matching the training code below):

$$\Delta \mathbf W_k =\mathbf v\_{k-1} \otimes \mathbf h\_{k-1} - \mathbf v_k \otimes \mathbf h_k$$

Contrastive Divergence is really just this matrix of values, computed and used to adjust the $\mathbf W$ matrix. Changing $\mathbf W$ incrementally leads to training of the $\mathbf W$ values.
Then, at each step (epoch), $\mathbf W$ is updated using:

$$\mathbf W_k = \mathbf W_{k-1} + \alpha \, \Delta \mathbf W_k$$

Reconstruction steps:

<ul>
<li> Get one data point from the data set, say <i>x</i>, and pass it through the following steps:</li>

<b>Iteration</b> $k=1$: Sampling (starting with the input image)

$$\mathbf x = \mathbf v_0 \Rightarrow p(\mathbf h_0|\mathbf v_0)\Rightarrow \mathbf h_0 \Rightarrow p(\mathbf v_1|\mathbf h_0)\Rightarrow \mathbf v_1$$

followed by the CD-k step

$$\Delta \mathbf W_1 =\mathbf v_0 \otimes \mathbf h_0 - \mathbf v_1 \otimes \mathbf h_1$$

$$\mathbf W_1 = \mathbf W_0 + \alpha \, \Delta \mathbf W_1$$

<li> $\mathbf v_1$ is the reconstruction of $\mathbf x$ (and is passed on to the next iteration).</li>

<b>Iteration</b> $k=2$: Sampling (starting with $\mathbf v_1$)

$$\mathbf v_1 \Rightarrow p(\mathbf h_1|\mathbf v_1)\Rightarrow \mathbf h_1\Rightarrow p(\mathbf v_2|\mathbf h_1)\Rightarrow \mathbf v_2$$

followed by the CD-k step

$$\Delta \mathbf W_2 =\mathbf v_1 \otimes \mathbf h_1 - \mathbf v_2 \otimes \mathbf h_2$$

$$\mathbf W_2 = \mathbf W_1 + \alpha \, \Delta \mathbf W_2$$

<li> $\mathbf v_2$ is the reconstruction of $\mathbf v_1$ (and is passed on to the next iteration).</li>

<b>Iteration</b> $k=K$: Sampling (starting with $\mathbf v_{K-1}$)

$$\mathbf v_{K-1} \Rightarrow p(\mathbf h_{K-1}|\mathbf v_{K-1})\Rightarrow \mathbf h_{K-1}\Rightarrow p(\mathbf v_K|\mathbf h_{K-1})\Rightarrow \mathbf v_K$$

followed by the CD-k step

$$\Delta \mathbf W_K =\mathbf v_{K-1} \otimes \mathbf h_{K-1} - \mathbf v_K \otimes \mathbf h_K$$

$$\mathbf W_K = \mathbf W_{K-1} + \alpha \, \Delta \mathbf W_K$$
</ul>

<b>What is $\alpha$?</b>\
Here, $\alpha$ is a small step size, also known as the "learning rate". $K$ is adjustable, and good performance can already be achieved with $K=1$, i.e. just one set of sampling steps per image.
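The sampling and update steps above can be sketched in a few lines of standalone NumPy (layer sizes here are made up for illustration; the notebook's actual TensorFlow implementation follows below):

```python
import numpy as np

rng = np.random.default_rng(0)
nv, nh = 6, 4                      # visible / hidden sizes, made up for illustration
W = rng.normal(scale=0.01, size=(nv, nh))
vb, hb = np.zeros(nv), np.zeros(nh)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

v0 = rng.integers(0, 2, size=nv).astype(float)     # a binary "input image"

# Gibbs step 1: sample h0 ~ p(h|v0), unit turns on when xi < p
h0_prob = sigmoid(v0 @ W + hb)
h0 = (rng.uniform(size=nh) < h0_prob).astype(float)

# Gibbs step 2: reconstruct v1 ~ p(v|h0), then recompute hidden probabilities
v1_prob = sigmoid(h0 @ W.T + vb)
v1 = (rng.uniform(size=nv) < v1_prob).astype(float)
h1_prob = sigmoid(v1 @ W + hb)

# CD-1 update: data term minus reconstruction term, times the learning rate
alpha = 0.1
dW = np.outer(v0, h0_prob) - np.outer(v1, h1_prob)
W = W + alpha * dW
```

With $K>1$, the last two sampling steps and the update would simply be repeated, with `v1` feeding the next Gibbs iteration.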
```
h1_prob = tf.nn.sigmoid(tf.matmul([v1_state], W) + hb)
h1_state = tf.nn.relu(tf.sign(h1_prob - tf.random.uniform(tf.shape(h1_prob)))) #sample_h_given_X
```

Let's look at the error of this first pass:

```
print("error: ", error(v0_state, v1_state))

#Parameters
alpha = 0.01
epochs = 1
batchsize = 200
weights = []
errors = []
batch_number = 0
K = 1

#creating datasets
train_ds = \
    tf.data.Dataset.from_tensor_slices((trX, trY)).batch(batchsize)

for epoch in range(epochs):
    for batch_x, batch_y in train_ds:
        batch_number += 1
        for i_sample in range(batchsize):
            # start the Gibbs chain from the data sample; for K > 1 the chain
            # then continues from the previous reconstruction (v0_state = v1_state)
            v0_state = batch_x[i_sample]
            for k in range(K):
                h0_state = hidden_layer(v0_state, W, hb)
                v1_state = reconstructed_output(h0_state, W, vb)
                h1_state = hidden_layer(v1_state, W, hb)

                delta_W = tf.matmul(tf.transpose([v0_state]), h0_state) - tf.matmul(tf.transpose([v1_state]), h1_state)
                W = W + alpha * delta_W

                vb = vb + alpha * tf.reduce_mean(v0_state - v1_state, 0)
                hb = hb + alpha * tf.reduce_mean(h0_state - h1_state, 0)

                v0_state = v1_state

            if i_sample == batchsize-1:
                err = error(batch_x[i_sample], v1_state)
                errors.append(err)
                weights.append(W)
                print('Epoch: %d' % epoch,
                      "batch #: %i " % batch_number, "of %i" % int(60e3/batchsize),
                      "sample #: %i" % i_sample,
                      'reconstruction error: %f' % err)
```

Let's take a look at the errors at the end of each batch:

```
plt.plot(errors)
plt.xlabel("Batch Number")
plt.ylabel("Error")
plt.show()
```

What is the final weight matrix $W$ after training?

```
print(W.numpy()) # the weight matrix, of shape (784, 50)
```

<a id="ref5"></a>

<h3>Learned features</h3>

We can take each hidden unit and visualize the connections between that hidden unit and each element in the input vector. In our case, we have 50 hidden units, so let's visualize them by plotting the current weights. <b>tile_raster_images</b> helps in generating an easy-to-grasp image from a set of samples or weights.
It transforms the weight matrix <b>W.T</b> (one flattened image of size 784 per row) into an array of $28\times28$ images laid out like tiles on a floor.

```
tile_raster_images(X=W.numpy().T, img_shape=(28, 28), tile_shape=(5, 10), tile_spacing=(1, 1))

import matplotlib.pyplot as plt
from PIL import Image
%matplotlib inline

image = Image.fromarray(tile_raster_images(X=W.numpy().T, img_shape=(28, 28), tile_shape=(5, 10), tile_spacing=(1, 1)))

### Plot image
plt.rcParams['figure.figsize'] = (18.0, 18.0)
imgplot = plt.imshow(image)
imgplot.set_cmap('gray')
```

Each tile in the visualization above corresponds to the vector of connections between one hidden unit and the visible layer's units. As an example, let's look at the learned weights of a single hidden unit. In such a square, gray represents a weight of 0, whiter pixels correspond to more positive weights (closer to 1), and darker pixels to more negative weights. The positive pixels increase the probability of activation in hidden units (after multiplication by the input/visible pixels), while the negative pixels decrease the probability of a hidden unit being 1 (activated). Why is this important? It shows that this particular square (hidden unit) detects a feature (e.g. a "/"-like stroke) whenever it is present in the input.

```
from PIL import Image
image = Image.fromarray(tile_raster_images(X=W.numpy().T[10:11], img_shape=(28, 28), tile_shape=(1, 1), tile_spacing=(1, 1)))

### Plot image
plt.rcParams['figure.figsize'] = (4.0, 4.0)
imgplot = plt.imshow(image)
imgplot.set_cmap('gray')
```

Now let's look at the reconstruction of an image. Imagine that we have a corrupted image of the digit 3.
Let's see whether our trained network can fix it. First we plot the image:

```
!wget -O destructed3.jpg https://ibm.box.com/shared/static/vvm1b63uvuxq88vbw9znpwu5ol380mco.jpg
img = Image.open('destructed3.jpg')
img
```

Now let's pass this image through the neural net:

```
# convert the image to a 1d numpy array
sample_case = np.array(img.convert('I').resize((28,28))).ravel().reshape((1, -1))/255.0
sample_case = tf.cast(sample_case, dtype=tf.float32)
```

Feed the sample case into the network and reconstruct the output:

```
hh0_p = tf.nn.sigmoid(tf.matmul(sample_case, W) + hb)
hh0_s = tf.round(hh0_p)

print("Probability nodes in hidden layer:" ,hh0_p)
print("activated nodes in hidden layer:" ,hh0_s)

# reconstruct
vv1_p = tf.nn.sigmoid(tf.matmul(hh0_s, tf.transpose(W)) + vb)
print(vv1_p)
```

Here we plot the reconstructed image:

```
img = Image.fromarray(tile_raster_images(X=vv1_p.numpy(), img_shape=(28, 28), tile_shape=(1, 1), tile_spacing=(1, 1)))
plt.rcParams['figure.figsize'] = (4.0, 4.0)
imgplot = plt.imshow(img)
imgplot.set_cmap('gray')
```

<hr>

## Want to learn more?

Also, you can use **Watson Studio** to run these notebooks faster with bigger datasets. **Watson Studio** is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, **Watson Studio** enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of **Watson Studio** users today with a free account at [Watson Studio](https://cocl.us/ML0120EN_DSX). This is the end of this lesson. Thank you for reading this notebook, and good luck on your studies.

### Thanks for completing this lesson!
Notebook created by: <a href="https://ca.linkedin.com/in/saeedaghabozorgi">Saeed Aghabozorgi</a>

Updated to TF 2.X by <a href="https://ca.linkedin.com/in/nilmeier">Jerome Nilmeier</a><br />

### References:

[https://en.wikipedia.org/wiki/Restricted_Boltzmann_machine](https://en.wikipedia.org/wiki/Restricted_Boltzmann_machine)\
[http://deeplearning.net/tutorial/rbm.html](http://deeplearning.net/tutorial/rbm.html)\
[http://www.cs.utoronto.ca/~hinton/absps/netflixICML.pdf](http://www.cs.utoronto.ca/~hinton/absps/netflixICML.pdf)<br>
<http://imonad.com/rbm/restricted-boltzmann-machine/>

<hr>

Copyright © 2018 [Cognitive Class](https://cocl.us/DX0108EN_CC).
This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).
### Simulation Part

```
%pylab inline
from scipy.interpolate import interpn
from helpFunctions import surfacePlot
import numpy as np
from multiprocessing import Pool
from functools import partial
import warnings
import math
warnings.filterwarnings("ignore")
np.set_printoptions(precision=2)  # np.printoptions alone is a no-op unless used as a context manager

# time line
T_min = 0
T_max = 70
T_R = 45
# discounting factor
beta = 1/(1+0.02)
# utility function parameter
gamma = 2
# relative importance of housing consumption and non durable consumption
alpha = 0.8
# parameter used to calculate the housing consumption
kappa = 0.3
# depreciation parameter
delta = 0.025
# housing parameter
chi = 0.3
# uB associated parameter
B = 2
# # minimum consumption
# c_bar = 3
# constant cost
c_h = 20.0
# All money amounts are denoted in thousand dollars
earningShock = [0.8,1.2]
# Define transition matrix of economical states
# GOOD -> GOOD 0.8, BAD -> BAD 0.6
Ps = np.array([[0.6, 0.4],[0.2, 0.8]])
# current risk free interest rate
r_b = np.array([0.01 ,0.03])
# stock return depends on current and future econ states
# r_k = np.array([[-0.2, 0.15],[-0.15, 0.2]])
r_k = np.array([[-0.15, 0.20],[-0.15, 0.20]])
# expected return on stock market
# r_bar = 0.0667
r_bar = 0.02
# probability of survival
Pa = np.load("prob.npy")
# deterministic income
detEarning = np.load("detEarning.npy")
# probability of employment transition Pe[s, s_next, e, e_next]
Pe = np.array([[[[0.3, 0.7], [0.1, 0.9]],
                [[0.25, 0.75], [0.05, 0.95]]],
               [[[0.25, 0.75], [0.05, 0.95]],
                [[0.2, 0.8], [0.01, 0.99]]]])
# tax rate before and after retirement
tau_L = 0.2
tau_R = 0.1
# constant state variables: Purchase value 250k, down payment 50k, mortgage 200k, interest rate 3.6%,
# 55 payment periods, 8.4k per period. One housing unit is roughly one square foot.
# Housing price 0.25k/sf

# some variables associated with the 401k amount
Nt = [np.sum(Pa[t:]) for t in range(T_max-T_min)]
Dt = [np.ceil(((1+r_bar)**N - 1)/(r_bar*(1+r_bar)**N)) for N in Nt]
# wealth discretization
ws = np.array([10,25,50,75,100,125,150,175,200,250,500,750,1000,1500,3000])
w_grid_size = len(ws)
# 401k amount discretization
ns = np.array([1, 5, 10, 15, 25, 40, 65, 100, 150, 300, 400,1000])
n_grid_size = len(ns)
# Mortgage amount, * 0.25 is the housing price per unit
Ms = np.array([50, 100, 150, 200, 350])*0.25
M_grid_size = len(Ms)
# Improvement amount
gs = np.array([0,25,50,75,100])
g_grid_size = len(gs)
points = (ws,ns,Ms,gs)
# housing unit
H = 400
# mortgage rate
rh = 0.036
# mortgage payment
m = 3
# housing price constant
pt = 250/1000
# 30k rent 1000 sf
pr = 30/1000

xgrid = np.array([[w, n, M, g_lag, e, s]
                  for w in ws
                  for n in ns
                  for M in Ms
                  for g_lag in gs
                  for e in [0,1]
                  for s in [0,1]
                  ]).reshape((w_grid_size, n_grid_size, M_grid_size, g_grid_size, 2, 2, 6))

import quantecon as qe
import timeit
from sklearn.neighbors import KNeighborsRegressor as KN

mc = qe.MarkovChain(Ps)

#Vgrid = np.load("Vgrid_i.npy")
cgrid = np.load("cgrid_i.npy")
bgrid = np.load("bgrid_i.npy")
kgrid = np.load("kgrid_i.npy")
igrid = np.load("igrid_i.npy")
qgrid = np.load("qgrid_i.npy")

def action(t, x):
    w, n, M, g_lag, e, s = x
    c = interpn(points, cgrid[:,:,:,:,e,s,t], x[:4], method = "nearest", bounds_error = False, fill_value = None)[0]
    b = interpn(points, bgrid[:,:,:,:,e,s,t], x[:4], method = "nearest", bounds_error = False, fill_value = None)[0]
    k = interpn(points, kgrid[:,:,:,:,e,s,t], x[:4], method = "nearest", bounds_error = False, fill_value = None)[0]
    i = interpn(points, igrid[:,:,:,:,e,s,t], x[:4], method = "nearest", bounds_error = False, fill_value = None)[0]
    q = interpn(points, qgrid[:,:,:,:,e,s,t], x[:4], method = "nearest", bounds_error = False, fill_value = None)[0]
    return (c,b,k,i,q)

# Define the earning function, which applies for both employment and unemployment,
# good econ state and bad econ state
def y(t, x):
    w, n, M, g_lag, e, s = x
    if t <= T_R:
        welfare = 5
        return detEarning[t] * earningShock[int(s)] * e + (1-e) * welfare
    else:
        return detEarning[t]

# Define the evolution of the amount in the 401k account
def gn(t, n, x, s_next):
    w, n, M, g_lag, e, s = x
    if t <= T_R and e == 1:
        # if the person is employed, then 5 percent of his income goes into the 401k
        i = 0.05
        n_cur = n + y(t, x) * i
    elif t <= T_R and e == 0:
        # if the person is unemployed, then n does not change
        n_cur = n
    else:
        # t > T_R: n/(discounting factor) will be withdrawn from the 401k
        n_cur = n - n/Dt[t]
    return (1+r_k[int(s), s_next])*n_cur

def transition(x, a, t, s_next):
    '''
    Input: state and action and time
    Output: a sampled future state, drawn from the possible states and their probabilities
    '''
    w, n, M, g_lag, e, s = x
    c,b,k,i,q = a
    # variables used to collect possible states and probabilities
    x_next = []
    prob_next = []
    M_next = M*(1+rh) - m
    if q == 1:
        g = (1-delta)*g_lag + i
    else:
        g = (1-delta)*g_lag
    w_next = b*(1+r_b[int(s)]) + k*(1+r_k[int(s), s_next])
    n_next = gn(t, n, x, s_next)
    if t >= T_R:
        e_next = 0
        return [w_next, n_next, M_next, g, s_next, e_next]
    else:
        for e_next in [0,1]:
            x_next.append([w_next, n_next, M_next, g, s_next, e_next])
            prob_next.append(Pe[int(s),s_next,int(e),e_next])
        return x_next[np.random.choice(2, 1, p = prob_next)[0]]

'''
Start with:
Ms = 800 * 0.25
w = 20
n = 0
g_lag = 0
e = 1
s = 1
1000 agents for 1 economy, 100 economies.
use numpy array to contain the variable change:
wealth, rFund, Mortgage, hImprov, employment, sState, salary, consumption,
hConsumption, bond, stock, improve, hPercentage, life.
Shape: (T_max-T_min, numAgents*numEcons) ''' x0 = [20, 0, 80, 0, 1, 1] numAgents = 1000 numEcons = 500 import random as rd EconStates = [mc.simulate(ts_length=T_max - T_min, init=0) for _ in range(numEcons)] def simulation(i): track = np.zeros((T_max - T_min,14)) econState = EconStates[i//numAgents] alive = True x = x0 for t in range(len(econState)-1): if rd.random() > Pa[t]: alive = False if alive: track[t, 0] = x[0] track[t, 1] = x[1] track[t, 2] = x[2] track[t, 3] = x[3] track[t, 4] = x[4] track[t, 5] = x[5] track[t, 6] = y(t,x) a = action(t, x) track[t, 7] = a[0] track[t, 9] = a[1] track[t, 10] = a[2] track[t, 11] = a[3] track[t, 12] = a[4] track[t, 13] = 1 # calculate housing consumption if a[4] == 1: h = H + (1-delta)*x[3] + a[3] Vh = (1+kappa)*h else: h = H + (1-delta)*x[3] Vh = (1-kappa)*(h-(1-a[4])*H) track[t, 8] = Vh s_next = econState[t+1] x = transition(x, a, t, s_next) return track %%time pool = Pool() agentsHistory = pool.map(simulation, list(range(numAgents*numEcons))) pool.close() len(agentsHistory) np.save("agents", np.array(agentsHistory)) agents = np.load("agents.npy") wealth = np.zeros((T_max-T_min, numAgents*numEcons)) rFund = np.zeros((T_max-T_min, numAgents*numEcons)) Mortgage = np.zeros((T_max-T_min, numAgents*numEcons)) hImprove = np.zeros((T_max-T_min, numAgents*numEcons)) employment = np.zeros((T_max-T_min, numAgents*numEcons)) sState = np.zeros((T_max-T_min, numAgents*numEcons)) salary = np.zeros((T_max-T_min, numAgents*numEcons)) consumption = np.zeros((T_max-T_min, numAgents*numEcons)) hConsumption = np.zeros((T_max-T_min, numAgents*numEcons)) bond = np.zeros((T_max-T_min, numAgents*numEcons)) stock = np.zeros((T_max-T_min, numAgents*numEcons)) improve = np.zeros((T_max-T_min, numAgents*numEcons)) hPer = np.zeros((T_max-T_min, numAgents*numEcons)) life = np.zeros((T_max-T_min, numAgents*numEcons)) def separateAttributes(agents): for i in range(numAgents*numEcons): wealth[:,i] = agents[i][:,0] rFund[:,i] = agents[i][:,1] Mortgage[:,i] = 
agents[i][:,2] hImprove[:,i] = agents[i][:,3] employment[:,i] = agents[i][:,4] sState[:,i] = agents[i][:,5] salary[:,i] = agents[i][:,6] consumption[:,i] = agents[i][:,7] hConsumption[:,i] = agents[i][:,8] bond[:,i] = agents[i][:,9] stock[:,i] = agents[i][:,10] improve[:,i] = agents[i][:,11] hPer[:,i] = agents[i][:,12] life[:,i] = agents[i][:,13] separateAttributes(agents) np.save("wealth", wealth) np.save("rFund", rFund) np.save("Mortgage", Mortgage) np.save("hImprov", hImprove) np.save("employment", employment) np.save("sState", sState) np.save("salary", salary) np.save("consumption", consumption) np.save("hConsumption", hConsumption) np.save("bond", bond) np.save("stock", stock) np.save("improve", improve) np.save("hPer", hPer) np.save("life", life) ``` ### Summary Plot ``` wealth = np.load("wealth.npy") rFund = np.load("rFund.npy") Mortgage = np.load("Mortgage.npy") hImprove = np.load("hImprov.npy") employment = np.load("employment.npy") sState = np.load("sState.npy") salary = np.load("salary.npy") consumption = np.load("consumption.npy") hConsumption = np.load("hConsumption.npy") bond = np.load("bond.npy") stock = np.load("stock.npy") improve = np.load("improve.npy") hPer = np.load("hPer.npy") life = np.load("life.npy") # Population during the entire simulation period plt.plot(np.mean(life, axis = 1)) def quantileForPeopleWholive(attribute, quantiles = [0.25, 0.5, 0.75]): qList = [] for i in range(69): if len(np.where(life[i,:] == 1)[0]) == 0: qList.append(np.array([0] * len(quantiles))) else: qList.append(np.quantile(attribute[i, np.where(life[i,:] == 1)], q = quantiles)) return np.array(qList) def meanForPeopleWholive(attribute): means = [] for i in range(69): if len(np.where(life[i,:] == 1)[0]) == 0: means.append(np.array([0])) else: means.append(np.mean(attribute[i, np.where(life[i,:] == 1)])) return np.array(means) # plot the 0.25, 0.5, 0.75 quantiles of housing consumption plt.plot(quantileForPeopleWholive(hConsumption)) # plot the 0.25, 0.5, 0.75 
quantiles of wealth plt.plot(quantileForPeopleWholive(wealth)) # plot the 0.25, 0.5, 0.75 quantiles of 401k amount plt.plot(quantileForPeopleWholive(rFund)) # plot the 0.25, 0.5, 0.75 quantiles of Mortgage amount plt.plot(quantileForPeopleWholive(Mortgage)) # plot the 0.25, 0.5, 0.75 quantiles of housing improvement plt.plot(quantileForPeopleWholive(hImprove)) # plot the 0.25, 0.5, 0.75 quantiles of consumption plt.plot(quantileForPeopleWholive(consumption)) # plot the 0.25, 0.5, 0.75 quantiles of bond plt.plot(quantileForPeopleWholive(bond)) # plot the 0.25, 0.5, 0.75 quantiles of stock plt.plot(quantileForPeopleWholive(stock)) # plot the 0.25, 0.5, 0.75 quantiles of housing improvement at this one episode plt.plot(quantileForPeopleWholive(improve)) # plot the 0.25, 0.5, 0.75 quantiles of house occupation p, is p == 1 means no renting out plt.plot(quantileForPeopleWholive(hPer)) # plot the 0.25, 0.5, 0.75 quantiles of wealth plt.figure(figsize = [14,8]) plt.plot(meanForPeopleWholive(wealth), label = "wealth") # plt.plot(meanForPeopleWholive(rFund), label = "rFund") plt.plot(meanForPeopleWholive(consumption), label = "Consumption") plt.plot(meanForPeopleWholive(bond), label = "Bond") plt.plot(meanForPeopleWholive(stock), label = "Stock") #plt.plot(meanForPeopleWholive(rFund), label = "401k") plt.legend() ```
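As a quick sanity check on the constants quoted in the simulation setup comment (a 200k mortgage at a 3.6% per-period rate over 55 payment periods, stated as roughly 8.4k per period), the standard fixed-payment annuity formula reproduces that figure:

```python
# Fixed payment on an amortizing loan: principal * r / (1 - (1 + r)**-N)
principal = 200.0   # mortgage amount, in thousand dollars as in the notebook
r = 0.036           # per-period interest rate
N = 55              # number of payment periods

payment = principal * r / (1 - (1 + r)**-N)
print(round(payment, 2))   # about 8.4 (thousand dollars per period)
```

This is the same annuity logic used for the `Dt` discounting factors above, just applied to the mortgage instead of the 401k balance.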
# Burgers Optimization with a Differentiable Physics Gradient

To illustrate the process of computing gradients in a _differentiable physics_ (DP) setting, we target the same inverse problem (the reconstruction task) used for the PINN example in {doc}`physicalloss-code`. The choice of DP as a method has some immediate implications: we start with a discretized PDE, and the evolution of the system is now fully determined by the resulting numerical solver. Hence, the only real unknown is the initial state. We will still need to re-compute all the states between the initial and target state many times, just now we won't need an NN for this step. Instead, we can rely on our discretized model.

Also, as we choose an initial discretization for the DP approach, the unknown initial state consists of the sampling points of the involved physical fields, and we can simply represent these unknowns as floating point variables. Hence, even for the initial state we do not need to set up an NN. Thus, our Burgers reconstruction problem reduces to a gradient-based optimization without any NN when solving it with DP. Nonetheless, it's a very good starting point to illustrate the process.

First, we'll set up our discretized simulation. Here we can employ phiflow, as shown in the overview section on _Burgers forward simulations_. [[run in colab]](https://colab.research.google.com/github/tum-pbs/pbdl-book/blob/main/diffphys-code-burgers.ipynb)

## Initialization

phiflow directly gives us a sequence of differentiable operations, provided that we don't use the _numpy_ backend. The important step here is to include `phi.tf.flow` instead of `phi.flow` (for _pytorch_ you could use `phi.torch.flow`).

So, as a first step, let's set up some constants, and initialize a `velocity` field with zeros, and our constraint at $t=0.5$ (step 16), now as a `CenteredGrid` in phiflow. Both are using periodic boundary conditions (via `extrapolation.PERIODIC`) and a spatial discretization of $\Delta x = 1/128$.
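To build some intuition for what such a discretized solver computes, here is a minimal NumPy sketch of one explicit finite-difference step of the Burgers equation on a periodic grid. This is a simplified stand-in for phiflow's `diffuse.explicit` and `advect.semi_lagrangian` operators, using the same constants, not the actual implementation:

```python
import numpy as np

N, STEPS = 128, 32
DX = 2.0/N
DT = 1.0/STEPS
NU = 0.01/(N*np.pi)

def burgers_step(u, dt=DT, dx=DX, nu=NU):
    """One explicit step of u_t + u u_x = nu u_xx on a periodic grid."""
    u_left, u_right = np.roll(u, 1), np.roll(u, -1)
    diffusion = nu * (u_right - 2.0*u + u_left) / dx**2
    advection = -u * (u_right - u_left) / (2.0*dx)   # central difference for u u_x
    return u + dt * (diffusion + advection)

x = np.linspace(-1 + DX/2, 1 - DX/2, N)
u = -np.sin(np.pi * x)     # the initial condition of the ground-truth simulation
u_next = burgers_step(u)   # advance the state by one time step
```

Every operation here is differentiable with respect to `u`, which is what makes backpropagation through a chain of such steps possible in the phiflow version below.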
``` #!pip install --upgrade --quiet phiflow from phi.tf.flow import * N = 128 DX = 2/N STEPS = 32 DT = 1/STEPS NU = 0.01/(N*np.pi) # allocate velocity grid velocity = CenteredGrid(0, extrapolation.PERIODIC, x=N, bounds=Box[-1:1]) # and a grid with the reference solution REFERENCE_DATA = math.tensor([0.008612174447657694, 0.02584669669548606, 0.043136357266407785, 0.060491074685516746, 0.07793926183951633, 0.0954779141740818, 0.11311894389663882, 0.1308497114054023, 0.14867023658641343, 0.1665634396808965, 0.18452263429574314, 0.20253084411376132, 0.22057828799835133, 0.23865132431365316, 0.25673879161339097, 0.27483167307082423, 0.2929182325574904, 0.3109944766354339, 0.3290477753208284, 0.34707880794585116, 0.36507311960102307, 0.38303584302507954, 0.40094962955534186, 0.4188235294008765, 0.4366357052408043, 0.45439856841363885, 0.4720845505219581, 0.4897081943759776, 0.5072391070000235, 0.5247011051514834, 0.542067187709797, 0.5593576751669057, 0.5765465453632126, 0.5936507311857876, 0.6106452944663003, 0.6275435911624945, 0.6443221318186165, 0.6609900633731869, 0.67752574922899, 0.6939334022562877, 0.7101938106059631, 0.7263049537163667, 0.7422506131457406, 0.7580207366534812, 0.7736033721649875, 0.7889776974379873, 0.8041371279965555, 0.8190465276590387, 0.8337064887158392, 0.8480617965162781, 0.8621229412131242, 0.8758057344502199, 0.8891341984763013, 0.9019806505391214, 0.9143881632159129, 0.9261597966464793, 0.9373647624856912, 0.9476871303793314, 0.9572273019669029, 0.9654367940878237, 0.9724097482283165, 0.9767381835635638, 0.9669484658390122, 0.659083299684951, -0.659083180712816, -0.9669485121167052, -0.9767382069792288, -0.9724097635533602, -0.9654367970450167, -0.9572273263645859, -0.9476871280825523, -0.9373647681120841, -0.9261598056102645, -0.9143881718456056, -0.9019807055316369, -0.8891341634240081, -0.8758057205293912, -0.8621229450911845, -0.8480618138204272, -0.833706571569058, -0.8190466131476127, -0.8041372124868691, -0.7889777195422356, 
-0.7736033858767385, -0.758020740007683, -0.7422507481169578, -0.7263049162371344, -0.7101938950789042, -0.6939334061553678, -0.677525822052029, -0.6609901538934517, -0.6443222327338847, -0.6275436932970322, -0.6106454472814152, -0.5936507836778451, -0.5765466491708988, -0.5593578078967361, -0.5420672759411125, -0.5247011730988912, -0.5072391580614087, -0.4897082914472909, -0.47208460952428394, -0.4543985995006753, -0.4366355580500639, -0.41882350871539187, -0.40094955631843376, -0.38303594105786365, -0.36507302109186685, -0.3470786936847069, -0.3290476440540586, -0.31099441589505206, -0.2929180880304103, -0.27483158663081614, -0.2567388003912687, -0.2386513127155433, -0.22057831776499126, -0.20253089403524566, -0.18452269630486776, -0.1665634500729787, -0.14867027528284874, -0.13084990929476334, -0.1131191325854089, -0.09547794429803691, -0.07793928430794522, -0.06049114408297565, -0.0431364527809777, -0.025846763281087953, -0.00861212501518312] , math.spatial('x')) SOLUTION_T16 = CenteredGrid( REFERENCE_DATA, extrapolation.PERIODIC, x=N, bounds=Box[-1:1]) ``` We can verify that the fields of our simulation are now backed by TensorFlow. ``` type(velocity.values.native()) ``` ## Gradients The `record_gradients` function of phiflow triggers the generation of a gradient tape to compute gradients of a simulation via `math.gradients(loss, values)`. To use it for the Burgers case we need to specify a loss function: we want the solution at $t=0.5$ to match the reference data. Thus we simply compute an $L^2$ difference between step number 16 and our constraint array as `loss`. Afterwards, we evaluate the gradient of the initial velocity state `velocity` with respect to this loss. 
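Before running the real thing, the gradient we are about to compute can be demystified on a toy problem: a tiny linear "solver" applied several times, an $L^2$ loss against a target, and the gradient of the loss with respect to the initial state. The following standalone NumPy sketch (sizes and the smoothing matrix are made up for illustration, not phiflow code) checks the backpropagated analytic gradient against finite differences:

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps = 8, 4                      # made-up sizes for illustration
# a linear "solver" step: a periodic smoothing matrix
A = np.eye(n) + 0.1*(np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1))
target = rng.normal(size=n)

def loss(v0):
    v = v0.copy()
    for _ in range(steps):
        v = A @ v                    # run the solver forward
    return 0.5*np.sum((v - target)**2)

v0 = np.zeros(n)
# backpropagating through the chained solver steps gives the analytic gradient
grad = np.linalg.matrix_power(A.T, steps) @ (np.linalg.matrix_power(A, steps) @ v0 - target)

# finite-difference check of the gradient
eps = 1e-6
fd = np.array([(loss(v0 + eps*np.eye(n)[i]) - loss(v0 - eps*np.eye(n)[i]))/(2*eps)
               for i in range(n)])
```

`math.gradients` plays the role of the transposed-solver product here, just for the nonlinear Burgers steps instead of a fixed matrix.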
``` velocities = [velocity] with math.record_gradients(velocity.values): for time_step in range(STEPS): v1 = diffuse.explicit(1.0*velocities[-1], NU, DT, substeps=1) v2 = advect.semi_lagrangian(v1, v1, DT) velocities.append(v2) loss = field.l2_loss(velocities[16] - SOLUTION_T16)*2./N # MSE grad = math.gradients(loss, velocity.values) print('Loss: %f' % (loss)) ``` Because we're only constraining time step 16, we could actually omit steps 17 to 31 in this setup. They don't have any degrees of freedom and are not constrained in any way. However, for fairness regarding a comparison with the previous PINN case, we include them. Note that we've done a lot of calculations here: first the 32 steps of our simulation, and then another 16 steps backwards from the loss. They were recorded by the gradient tape, and used to backpropagate the loss to the initial state of the simulation. Not surprisingly, because we're starting from zero, there's also a significant initial error of ca. 0.38 for the 16th simulation step. So what do we get as a gradient here? It has the same dimensions as the velocity, and we can easily visualize it: Starting from the zero state for `velocity` (shown in blue), the first gradient is shown as a green line below. If you compare it with the solution it points in the opposite direction, as expected. The solution is much larger in magnitude, so we omit it here (see the next graph). 
``` import pylab as plt fig = plt.figure().gca() pltx = np.linspace(-1,1,N) # first gradient fig.plot(pltx, grad.numpy('x') , lw=2, color='green', label="Gradient") fig.plot(pltx, velocity.values.numpy('x'), lw=2, color='mediumblue', label="u at t=0") plt.xlabel('x'); plt.ylabel('u'); plt.legend(); # some (optional) other fields to plot: #fig.plot(pltx, (velocities[16]).values.numpy('x') , lw=2, color='cyan', label="u at t=0.5") #fig.plot(pltx, (SOLUTION_T16).values.numpy('x') , lw=2, color='red', label="solution at t=0.5") #fig.plot(pltx, (velocities[16] - SOLUTION_T16).values.numpy('x') , lw=2, color='blue', label="difference at t=0.5") ``` This gives us a "search direction" for each velocity variable. Based on a linear approximation, the gradient tells us how to change each of them to increase the loss function (gradients _always_ point "upwards"). Thus, we can use the gradient to run an optimization and find an initial state `velocity` that minimizes our loss. ## Optimization Equipped with the gradient we can run a gradient descent optimization. Below, we're using a learning rate of `LR=5`, and we're re-evaluating the loss for the updated state to track convergence. In the following code block, we're additionally saving the gradients in a list called `grads`, such that we can visualize them later on. For a regular optimization we could of course discard the gradient after performing an update of the velocity. ``` LR = 5. 
grads=[]
for optim_step in range(5):
    velocities = [velocity]
    with math.record_gradients(velocity.values):
        for time_step in range(STEPS):
            v1 = diffuse.explicit(1.0*velocities[-1], NU, DT)
            v2 = advect.semi_lagrangian(v1, v1, DT)
            velocities.append(v2)
        loss = field.l2_loss(velocities[16] - SOLUTION_T16)*2./N # MSE
        print('Optimization step %d, loss: %f' % (optim_step,loss))
        grads.append( math.gradients(loss, velocity.values) )
    velocity = velocity - LR * grads[-1]
```

Now we can check how well the 16th state of the simulation matches the target after the 5 update steps. This is what the loss measures, after all. The next graph shows the constraints (i.e. the solution we'd like to obtain) in green, and the reconstructed state after the initial state `velocity` (which we've updated five times via the gradient by now) was advanced 16 times by the solver.

```
fig = plt.figure().gca()

# target constraint at t=0.5
fig.plot(pltx, SOLUTION_T16.values.numpy('x'), lw=2, color='forestgreen', label="Reference")

# optimized state of our simulation after 16 steps
fig.plot(pltx, velocities[16].values.numpy('x'), lw=2, color='mediumblue', label="Simulated velocity")

plt.xlabel('x'); plt.ylabel('u'); plt.legend(); plt.title("After 5 Optimization Steps at t=0.5");
```

This seems to be going in the right direction! It's definitely not perfect, but we've only computed 5 GD update steps so far. The two peaks with a positive velocity on the left side of the shock and the negative peak on the right side are starting to show. This is a good indicator that the backpropagation of gradients through all of our 16 simulated steps is behaving correctly, and that it's driving the solution in the right direction.

The graph above only hints at how powerful the setup is: the gradient that we obtain from each of the simulation steps (and each operation within them) can easily be chained together into more complex sequences.
In the example above, we're backpropagating through all 16 steps of the simulation, and we could easily enlarge this "look-ahead" of the optimization with minor changes to the code.

## More optimization steps

Before moving on to more complex physics simulations, or involving NNs, let's finish the optimization task at hand, and run more steps to get a better solution.

```
import time
start = time.time()

for optim_step in range(45):
    velocities = [velocity]
    with math.record_gradients(velocity.values):
        for time_step in range(STEPS):
            v1 = diffuse.explicit(1.0*velocities[-1], NU, DT)
            v2 = advect.semi_lagrangian(v1, v1, DT)
            velocities.append(v2)
        loss = field.l2_loss(velocities[16] - SOLUTION_T16)*2./N # MSE
        if optim_step%5==0:
            print('Optimization step %d, loss: %f' % (optim_step,loss))
        grad = math.gradients(loss, velocity.values)
    velocity = velocity - LR * grad

end = time.time()
print("Runtime {:.2f}s".format(end-start))
```

Thinking back to the PINN version from {doc}`physicalloss-code`, we have a much lower error here after only 50 steps (by ca. an order of magnitude), and the runtime is also lower (roughly by a factor of 1.5 to 2). This behavior stems from the gradients of the differentiable solver: they encode the discretized physics directly, so the optimization does not additionally have to learn a representation of the solution with a neural network.

Let's plot again how well our solution at $t=0.5$ (blue) matches the constraints (green) now:

```
fig = plt.figure().gca()
fig.plot(pltx, SOLUTION_T16.values.numpy('x'), lw=2, color='forestgreen', label="Reference")
fig.plot(pltx, velocities[16].values.numpy('x'), lw=2, color='mediumblue', label="Simulated velocity")
plt.xlabel('x'); plt.ylabel('u'); plt.legend(); plt.title("After 50 Optimization Steps at t=0.5");
```

Not bad. But how well is the initial state recovered via backpropagation through the 16 simulation steps? This is what we're changing, and because it's only indirectly constrained via the observation later in time there is more room to deviate from a desired or expected solution.
This is shown in the next plot:

```
fig = plt.figure().gca()
pltx = np.linspace(-1,1,N)

# ground truth state at time=0
INITIAL_GT = np.asarray( [-np.sin(np.pi * x) for x in np.linspace(-1+DX/2,1-DX/2,N)] ) # 1D numpy array
fig.plot(pltx, INITIAL_GT.flatten() , lw=2, color='forestgreen', label="Ground truth initial state") # ground truth initial state of sim
fig.plot(pltx, velocity.values.numpy('x'), lw=2, color='mediumblue', label="Optimized initial state")

plt.xlabel('x'); plt.ylabel('u'); plt.legend(); plt.title("Initial State After 50 Optimization Steps");
```

Naturally, this is a tougher task: the optimization receives direct feedback about what the state at $t=0.5$ should look like, but due to the non-linear model equation, there is typically a large number of solutions that exactly or numerically very closely satisfy the constraints. Hence, our minimizer does not necessarily find the exact state we started from (we can observe some numerical oscillations from the diffusion operator here with the default settings). However, the solution is still quite close in this Burgers scenario.
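This non-uniqueness is easy to demonstrate in isolation. The plain-NumPy sketch below (parameters are illustrative, not the notebook's) shows that diffusion damps high frequencies, so two different initial states can become numerically indistinguishable by the observation time — which is exactly why the constraint at $t=0.5$ cannot pin down the initial state uniquely:

```python
import numpy as np

def diffuse_step(u, alpha):
    # explicit periodic diffusion step, alpha = nu*dt/dx^2 < 0.5 for stability
    return u + alpha * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

N = 128
dx = 2.0 / N
x = np.linspace(-1 + dx / 2, 1 - dx / 2, N)

u_a = -np.sin(np.pi * x)                   # smooth "ground truth" state
u_b = u_a + 0.05 * np.sin(16 * np.pi * x)  # same state plus a high-frequency wiggle

for _ in range(500):
    u_a = diffuse_step(u_a, 0.04)
    u_b = diffuse_step(u_b, 0.04)

# the 0.05 perturbation has decayed by several orders of magnitude
print(np.max(np.abs(u_a - u_b)))
```

Any optimizer that only sees the late-time state is free to reintroduce such damped components, which is one source of the small oscillations in the reconstructed initial state above.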
Before measuring the overall error of the reconstruction, let's visualize the full evolution of our system over time, as this also yields the solution in the form of a numpy array that we can compare to the other versions:

```
import pylab

def show_state(a):
    a=np.expand_dims(a, axis=2)
    for i in range(4):
        a = np.concatenate( [a,a] , axis=2)
    a = np.reshape( a, [a.shape[0],a.shape[1]*a.shape[2]] )
    fig, axes = pylab.subplots(1, 1, figsize=(16, 5))
    im = axes.imshow(a, origin='upper', cmap='inferno')
    pylab.colorbar(im)

# get numpy versions of all states
vels = [ x.values.numpy('x,vector') for x in velocities]
# concatenate along vector/features dimension
vels = np.concatenate(vels, axis=-1)

# save for comparison with other methods
import os; os.makedirs("./temp",exist_ok=True)
np.savez_compressed("./temp/burgers-diffphys-solution.npz", np.reshape(vels,[N,STEPS+1])) # remove batch & channel dimension

show_state(vels)
```

## Physics-informed vs. differentiable physics reconstruction

Now we have both versions, the one with the PINN, and the DP version, so let's compare both reconstructions in more detail. (Note: the following cells expect that the Burgers-forward and PINN notebooks were executed in the same environment beforehand.)

Let's first look at the solutions side by side. The code below generates an image with 3 versions, from top to bottom: the "ground truth" (GT) solution as given by the regular forward simulation, in the middle the PINN reconstruction, and at the bottom the differentiable physics version.

```
# note, this requires previous runs of the forward-sim & PINN notebooks in the same environment
sol_gt = np.load("./temp/burgers-groundtruth-solution.npz")["arr_0"]
sol_pi = np.load("./temp/burgers-pinn-solution.npz")["arr_0"]
sol_dp = np.load("./temp/burgers-diffphys-solution.npz")["arr_0"]

divider = np.ones([10,33])*-1.
# we'll sneak in a block of -1s to show a black divider in the image
sbs = np.concatenate( [sol_gt, divider, sol_pi, divider, sol_dp], axis=0)

print("\nSolutions Ground Truth (top), PINN (middle) , DiffPhys (bottom):")
show_state(np.reshape(sbs,[N*3+20,33,1]))
```

It's quite clearly visible here that the PINN solution (in the middle) recovers the overall shape of the solution, hence the temporal constraints are at least partially fulfilled. However, it doesn't manage to capture the amplitudes of the GT solution very well.

The reconstruction from the optimization with a differentiable solver (at the bottom) is much closer to the ground truth thanks to an improved flow of gradients over the whole course of the sequence. In addition, it can leverage the grid-based discretization for both forward as well as backward passes, and in this way provide a more accurate signal to the unknown initial state. It is nonetheless visible that the reconstruction lacks certain "sharper" features of the GT version, e.g., visible in the bottom left corner of the solution image.

Let's quantify these errors over the whole sequence:

```
err_pi = np.sum( np.abs(sol_pi-sol_gt)) / (STEPS*N)
err_dp = np.sum( np.abs(sol_dp-sol_gt)) / (STEPS*N)
print("MAE PINN: {:7.5f} \nMAE DP: {:7.5f}".format(err_pi,err_dp))

print("\nError GT to PINN (top) , DiffPhys (bottom):")
show_state(np.reshape( np.concatenate([sol_pi-sol_gt, divider, sol_dp-sol_gt],axis=0) ,[N*2+10,33,1]))
```

That's a pretty clear result: the PINN error is almost 4 times higher than the one from the Differentiable Physics (DP) reconstruction. This difference also shows clearly in the jointly visualized image at the bottom: the magnitudes of the errors of the DP reconstruction are much closer to zero, as indicated by the purple color above.

A simple direct reconstruction problem like this one is always a good initial test for a DP solver.
It can be tested independently before moving on to more complex setups, e.g., coupling it with an NN. If the direct optimization does not converge, there's probably still something fundamentally wrong, and there's no point in involving an NN.

Now we have a first example to show similarities and differences of the two approaches. In the next section, we'll present a discussion of the findings so far, before moving to more complex cases in the following chapter.

## Next steps

As with the PINN version, there's a variety of things that can be improved and experimented with using the code above:

* You can try to adjust the training parameters to further improve the reconstruction.
* As for the PINN case, you can switch to a different optimizer, and observe the changing (not necessarily improved) convergence behavior.
* Vary the number of steps, or the resolution of the simulation and reconstruction.
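For the different-optimizer experiment, a hand-rolled Adam update is a simple starting point. The sketch below uses the standard Adam formulas on a toy quadratic that merely stands in for the simulation loss (all names here are illustrative); in the training loop one would replace the plain `velocity = velocity - LR * grad` update with such a step:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    # standard Adam: biased first/second moment estimates + bias correction
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

target = np.array([1.0, -2.0, 3.0])   # stand-in for the desired state
theta = np.zeros(3)                   # stand-in for the optimized variable
m = np.zeros(3); v = np.zeros(3)
for t in range(1, 2001):
    grad = 2.0 * (theta - target)     # gradient of ||theta - target||^2
    theta, m, v = adam_step(theta, grad, m, v, t)

print(theta)  # close to target
```

Adam's per-parameter step normalization often changes the convergence behavior noticeably compared to plain gradient descent, though not necessarily for the better on a well-conditioned problem like this one.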
``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import random from pyBedGraph import BedGraph from pybedtools import BedTool import scipy.stats from scipy.stats import gaussian_kde as kde from matplotlib.colors import Normalize from matplotlib import cm from collections import Counter import networkx as nx def modLog(num, denom): if num==0 or denom==0: return 0 else: return float(format(np.log2(num/denom), '.4f')) def ShannonEnt(probList): """Compute entropy for a list of probabilities.""" if sum(probList)!=1: ## input is count or frequency instead of probability probList = [i/sum(probList) for i in probList] entropy = sum([x*modLog(1,x) for x in probList]) return float(format(entropy, '.6f')) def normShannonEnt(probList): """Compute normalized entropy for a list of probabilities.""" if sum(probList) != 1: ## input is count or frequency instead of probability probList = [i/sum(probList) for i in probList] entropy = sum([x*modLog(1,x) for x in probList])/np.log2(len(probList)) if len(probList) == 1: entropy = 0 return float(format(entropy, '.6f')) def read_regionfile(directory, file_name): with open(directory + file_name) as f: gems = {} for line in f: tmp = line.strip().split("\t") gemid = tmp[4] if gemid in gems.keys(): gems[gemid].append(tmp[5:]) else: gems[gemid] = [tmp[5:]] return gems # all fragments within a complex w/ same GEM ID def read_raidfile(directory, file_name): with open(directory + file_name) as f: raids = {} for line in f: tmp = line.strip().split("\t") tmp[1] = int(tmp[1]) tmp[2] = int(tmp[2]) raids[tmp[3]] = tmp[0:3] return raids def read_elementsfile(directory, file_name): with open(directory + file_name) as f: elements = {} ebyraid = {} for line in f: tmp = line.strip().split("\t") eid = tmp[5] raidid = tmp[17] tmp[1] = int(tmp[1]) tmp[2] = int(tmp[2]) if tmp[4] != ".": tmp.append(tmp[4]) # super-enhancer tmp.append('SE') else: if tmp[12] != '.' 
and float(tmp[13])>1: # active promoter tmp.append(tmp[12]) # add gene name tmp.append('P') else: tmp.append('E') tmp.append('E') if eid in elements.keys(): elements[eid].append(tmp) else: elements[eid] = tmp if raidid in ebyraid.keys(): ebyraid[raidid].append(eid) else: ebyraid[raidid] = [eid] sebyraid = dict([(key, list(dict.fromkeys([elements[x][4] for x in val if elements[x][4]!="."]))) for key, val in ebyraid.items()]) elements_se = {} for k,v in elements.items(): if v[3] != ".": if v[4] in elements_se.keys(): elements_se[v[4]].append(v[5]) else: elements_se[v[4]] = [v[5]] return elements, ebyraid, sebyraid, elements_se def read_loopfile(directory, file_name): with open(directory + file_name) as f: loops = {} for line in f: tmp = line.strip().split("\t") lid = tmp[10] rid = tmp[16] petcnt = int(tmp[6]) lpair = lid+","+rid if lpair in loops.keys(): loops[lpair] += petcnt else: loops[lpair] = petcnt return loops def get_lpbyse(loops, elements): lpby1se = {} lpby2se = {} for key, val in loops.items(): lanc = key.split(",")[0] lse = elements[lanc][4] lgene = elements[lanc][12] ltpm = elements[lanc][13] ranc = key.split(",")[1] rse = elements[ranc][4] rgene = elements[ranc][12] rtpm = elements[ranc][13] dist = elements[ranc][1]-elements[lanc][2] list2add = [lanc, lse, lgene, ltpm, ranc, rse, rgene, rtpm, val, dist] if lse=='.' and rse != ".": # only 1 se; Right anchor overlaps SE if rse in lpby1se.keys(): lpby1se[rse].append(list2add) else: lpby1se[rse] = [list2add] if lse!='.' and rse==".": # only 1 se; Left anchor overlaps SE if lse in lpby1se.keys(): lpby1se[lse].append(list2add) else: lpby1se[lse] = [list2add] if lse!='.' and rse!='.' 
and lse != rse: # 2 se concat = lse+';'+rse if concat in lpby2se.keys(): lpby2se[concat].append(list2add) else: lpby2se[concat] = [list2add] return lpby1se, lpby2se def plot_boxplot(dataset, dlabel, clr, tit, ylab, fig_name): fig = plt.figure(figsize = (8,6)) medianprops = dict(linewidth = 3, color=clr) i=0 boxprops = dict(linewidth = 1.5) toplot = [np.asarray([]) for i in range(len(dataset))] for d in dataset: #medianprops = dict(linewidth = 3, color=colcode[i]) datax = toplot datax[i] = np.asarray(dataset[i]) plt.boxplot(datax, widths = 0.6, medianprops = medianprops, boxprops = boxprops) i +=1 plt.xticks([i for i in range(1, len(dataset)+1)], dlabel, fontsize = 18) plt.yticks(fontsize = 18) plt.ylabel(ylab, fontsize = 18) #plt.ylim(bottom=2.5) plt.title(tit, fontsize = 18) plt.savefig(fig_name+'.pdf', dpi=150, bbox_inches="tight") plt.show() plt.close() def get_nodes(cldict): nodes = list(dict.fromkeys(list(cldict.values()))) nodecolors = [] nodecoldict = {} for x in nodes: if x.split("-")[1][0] == "S": # is super-enhancer nodecolors.append("darkorchid") nodecoldict[x] = "darkorchid" elif x.split("-")[1][0] == "E": # is intermediate element and is enhancer nodecolors.append("orange") nodecoldict[x] = "orange" elif x.split("-")[1][0] == "G": # is target gene nodecolors.append("green") nodecoldict[x] = "green" elif x.split("-")[1][0] == "O": # is intermediate element and is other super-enhancer nodecolors.append("darkorchid") nodecoldict[x] = "darkorchid" elif x.split("-")[1][0] == "P": # is intermediate element and is promoter nodecolors.append("green") nodecoldict[x] = "green" return nodes, nodecolors, nodecoldict def get_graph(cldict, compbycl): G = nx.Graph() nodes, nodecolors, nodecoldict = get_nodes(cldict) G.add_nodes_from(nodes) compbyclpair = {} edgetriplet = [] for key, val in compbycl.items(): vert = key.split(",") left = vert[0] right = vert[-1] edgetriplet.append([left, right, val]) if left != right: # exclude self-loops pair = left+","+right if pair 
in compbyclpair.keys(): compbyclpair[pair] += val else: compbyclpair[pair] = val for k,v in compbyclpair.items(): l = k.split(",")[0] r = k.split(",")[1] G.add_weighted_edges_from([(l,r,v)]) return nodes, nodecolors, nodecoldict, G, edgetriplet def get_compbychr(rnapiir): compbychr = {} for k, v in rnapiir.items(): tmp = [x[5] for x in v if x[5] != "."] chrom = [x[0] for x in v if x[5] != "."] if len(tmp) > 1: # at least 2 fragments overlapping elements if chrom[0] in compbychr.keys(): compbychr[chrom[0]].append(tmp) else: compbychr[chrom[0]] = [tmp] return compbychr def get_compcnt(se, target, elements_se, elements, compbychr): cnt = 0 selist = elements_se[se] sedict = dict.fromkeys(selist, 0) chrom = elements[target][0] for x in compbychr[chrom]: if target in x: for y in selist: if y in x: cnt += 1 sedict[y] += 1 return cnt, sedict def get_target(lpby1se, elements_se, elements, compbychr): setarget = {} for k, v in lpby1se.items(): for x in v: if x[1] == "." and x[3] != "." or x[5] == "." 
and x[7] != ".": if x[1] == ".": target = x[0] tpm = float(x[3]) elif x[5] == ".": target = x[4] tpm = float(x[7]) cmpcnt, sedict = get_compcnt(k, target, elements_se, elements, compbychr) if x[9] > 150000 and x[9] < 6000000 and cmpcnt > 0 and tpm > 1: # distance > 150 kb & < 6 Mbps if k in setarget.keys(): if setarget[k][0][1] == ".": currtpm = float(setarget[k][0][3]) else: currtpm = float(setarget[k][0][7]) if currtpm < tpm: # if expression is lower, replace setarget[k] = [x] else: setarget[k] = [x] return setarget def se2target_elements(setarget, elements, elements_se, compbychr): elist = list(elements.keys()) for k, v in setarget.items(): if v[0][1] == ".": # right super enhancer end = elements_se[v[0][5]][-1] start = v[0][0] target = start elif v[0][5] == ".": # left super enhancer start = elements_se[v[0][1]][0] end = v[0][4] target = end startindx = elist.index(start) endindx = elist.index(end) path = [] for i in range(startindx, endindx+1): tmp = elements[elist[i]] if tmp[4] != "." 
or tmp[2]-tmp[1] > 628: # either super-enhancer constituents or peak > 628 bp path.append(elist[i]) clusters = [] dum = [path[0]] for j in range(len(path)-1): nextstart = elements[path[j+1]][1] currend = elements[path[j]][2] nextse = elements[path[j+1]][4] currse = elements[path[j]][4] if nextstart-currend < 3000 or currse == nextse and currse != ".": # either nearby or same SE ID dum.append(path[j+1]) else: clusters.append(dum) dum = [path[j+1]] clusters.append(dum) cnt, sedict = get_compcnt(k, target, elements_se, elements, compbychr) setarget[k].append(path) setarget[k].append(clusters) setarget[k].append(sedict) setarget[k].append(cnt) return setarget def extract_compbyelm(tlist, elements, compbychr): extracted = [] chrom = elements[tlist[0]][0] for x in compbychr[chrom]: boolean = [i in tlist for i in x] # for each fragment, indicate if overlaps with elements of interest true_elm = [x[j] for j in range(len(boolean)) if boolean[j]==True] if len(true_elm) > 1: extracted.append(",".join(true_elm)) return extracted def get_compcntedges(G): alledges = list(G.edges.data()) se2gene = 0 se2enh = 0 se2prom = 0 enh2gene = 0 enh2enh = 0 enh2prom = 0 prom2prom = 0 prom2gene = 0 for x in alledges: left = x[0].split("-")[1][0] right = x[1].split("-")[1][0] pair = [left, right] if "S" in pair and "E" in pair: se2enh += x[2]['weight'] elif "S" in pair and "G" in pair: se2gene += x[2]['weight'] elif "S" in pair and "P" in pair: se2prom += x[2]['weight'] elif "G" in pair and "E" in pair: enh2gene += x[2]['weight'] elif pair[0]=="E" and pair[1]=="E": enh2enh += x[2]['weight'] elif "E" in pair and "P" in pair: enh2prom += x[2]['weight'] elif pair[0]=="P" and pair[1]=="P": prom2prom += x[2]['weight'] elif "P" in pair and "G" in pair: prom2gene += x[2]['weight'] tot = sum([x[2]['weight'] for x in alledges]) return se2gene, se2enh, se2prom, enh2gene, enh2enh, enh2prom, prom2prom, prom2gene, tot def get_cldict(test, elements): # test is one of setarget if test[0][1] != ".": # SE on 
left sepos = "L" gene = test[0][6]+"; "+test[0][7]+"; " + str(test[0][9]) seid = test[0][1] elif test[0][5] != ".": # SE on right sepos = "R" gene = test[0][2] + "; " + test[0][3] + "; " + str(test[0][9]) seid = test[0][5] cldict = {} for i in range(len(test[2])): #print(i) states = [elements[y][19] for y in test[2][i]] if 'SE' in states: label = 'OSE' elif 'P' in states: label = 'P' else: label = 'E' for x in test[2][i]: if sepos == "L" and i == 0: # left-most element & super-enhancer cldict[x] = "CL" + str(i) + "-" + seid if sepos == "L" and i == len(test[2])-1: # left-most element & target gene cldict[x] = "CL" + str(i) + "-G; " + gene if sepos == "R" and i == len(test[2])-1: # right-most element & super-enhancer cldict[x] = "CL" + str(i) + "-" + seid if sepos == "R" and i == 0: # right-most element & target gene cldict[x] = "CL" + str(i) + "-G; " + gene elif i != 0 and i != len(test[2])-1: ## intermediate elements cldict[x] = "CL" + str(i) + "-" + label + str(i-1) return cldict def get_summ_se2otherfrags(compbycl_dict): se2g_summ = [] # 1 frag in SE, 2, 3, 4, 5 to target gene se2p_summ = [] # 1 frag in SE, 2, 3, 4, 5 to promoter (intermediate element) se2e_summ = [] # 1 frag in SE, 2, 3, 4, 5 to enhancer (intermediate element) se2o_summ = [] # 1 frag in SE, 2, 3, 4, 5 to other super-enhancer (intermediate element) for key, val in compbycl_dict.items(): tcounts = val secntlistg = [0, 0, 0, 0, 0] secntlistp = [0, 0, 0, 0, 0] secntliste = [0, 0, 0, 0, 0] secntlisto = [0, 0, 0, 0, 0] for k, v in tcounts.items(): frags = k.split(",") fannot = [x.split("-")[1][0] for x in frags] #print(fannot) secnt = len([x for x in fannot if x=='S']) gcnt = len([x for x in fannot if x=='G']) ecnt = len([x for x in fannot if x=='E']) pcnt = len([x for x in fannot if x=='P']) ocnt = len([x for x in fannot if x=='O']) if secnt > 0: if gcnt > 0: secntlistg[secnt-1] += v if pcnt > 0: secntlistp[secnt-1] += v if ecnt > 0: secntliste[secnt-1] += v if ocnt > 0: secntlisto[secnt-1] += v if 
sum(secntlistg)>0: se2g_summ.append([x/sum(secntlistg) for x in secntlistg]) if sum(secntlistp)>0: se2p_summ.append([x/sum(secntlistp) for x in secntlistp]) if sum(secntliste)>0: se2e_summ.append([x/sum(secntliste) for x in secntliste]) if sum(secntlisto)>0: se2o_summ.append([x/sum(secntlisto) for x in secntlisto]) return se2g_summ, se2p_summ, se2e_summ, se2o_summ def plot_graph(nodes, nodecolors, G, ind, seid, coord, sedict, dist, cntse2g): se2gene, se2enh, se2prom, enh2gene, enh2enh, enh2prom, prom2prom, prom2gene, tot = get_compcntedges(G) plt.figure(figsize=(12, 12)) #colors = range(len(list(G.edges))) edges,weights = zip(*nx.get_edge_attributes(G,'weight').items()) nx.draw_circular(G, nodelist = nodes, with_labels=True, font_weight='bold', node_color=nodecolors, edge_color=weights, edge_cmap = plt.cm.Reds, width = 3) tit = "super-enhancer_graphs_plot"+str(ind) +"_"+ coord.split(":")[0]+"-"+coord.split(":")[1] + "_"+seid + "_" + str(len(nodes)) + "nodes" plt.title(seid+"; "+str(len(nodes)) + " nodes; " +"\n"+ coord + " (" + str(dist) + " bps)" + "\n" + str(cntse2g) + " complexes from SE to gene \n"+ str(sedict) + "\n" + "SE to gene: " + str(se2gene)+ " complexes \n" + "SE to enhancer: " + str(se2enh) + " complexes \n" + "SE to prom: " + str(se2prom)+ " complexes \n" + "Enhancer to gene: " + str(enh2gene) + " complexes \n" + "Enh to Enh: " + str(enh2enh) + " complexes \n" + "Enh to prom: " + str(enh2prom) + " complexes \n" + "Prom to prom: " + str(prom2prom) + " complexes \n" + "Prom to gene: " + str(prom2gene) + " complexes \n" + "Total: " + str(tot) +" complexes", fontsize=14) plt.savefig(directory+tit+'.png', dpi=300, bbox_inches='tight') #plt.show() plt.close() def position_MultiPartiteGraph( Graph, Parts , n_enh): # Graph is a networkX Graph object, where the nodes have attribute 'agentType' with part name as a value # Parts is a list of names for the parts (to be shown as columns) # returns list of dictionaries with keys being networkX Nodes, values being 
x,y coordinates for plottingxPos = {} xPos = {} yPos = {} for index1, agentType in enumerate(Parts): if agentType == 'SE': xPos[agentType] = index1 yPos[agentType] = int(n_enh/2) elif agentType == 'Target': xPos[agentType] = index1 yPos[agentType] = int(n_enh/2) else: xPos[agentType] = index1 yPos[agentType] = 0 pos = {} for node, attrDict in Graph.nodes(data=True): agentType = attrDict['agentType'] # print ('node: %s\tagentType: %s' % (node, agentType)) # print ('\t(x,y): (%d,%d)' % (xPos[agentType], yPos[agentType])) pos[node] = (xPos[agentType], yPos[agentType]) yPos[agentType] += 1 return pos def plot_tripartite(G, nodes, tit): TG = nx.Graph() TG.add_nodes_from([x for x in nodes if x.split("-")[1][0]=="S"], agentType='SE') # Add the node attribute "bipartite" TG.add_nodes_from([x for x in nodes if x.split("-")[1][0]!="S" and x.split("-")[1][0]!="G"], agentType='Enh') TG.add_nodes_from([x for x in nodes if x.split("-")[1][0]=="G"], agentType='Target') for x in G.edges.data(): TG.add_weighted_edges_from([(x[0], x[1], x[2]['weight'])]) edges,weights = zip(*nx.get_edge_attributes(TG,'weight').items()) nx.draw(TG,pos=position_MultiPartiteGraph(TG, ['SE', 'Enh', 'Target'], len(nodes)-2), nodelist = nodes, with_labels=True, font_weight='bold', node_color=nodecolors, edge_color=weights, edge_cmap = plt.cm.Reds, width = 3) plt.savefig(directory+tit+'.png', dpi=300, bbox_inches='tight') #plt.show() plt.close() def get_clbed(cllist, cldict, elements): # cllist is setarget[k][2] bed = [] for x in cllist: chrom = elements[x[0]][0] start = elements[x[0]][1] end = elements[x[-1]][2] eid = cldict[x[0]] bed.append([chrom, start, end, eid]) return bed def plot_barchart(x, y, ylab, clr, tit, fig_name): fig = plt.figure(figsize = (18,6)) plt.bar(x, y, width = 0.7, color = clr) plt.xticks(fontsize = 18) plt.yticks(fontsize = 18) plt.ylabel(ylab, fontsize = 18) plt.title(tit, fontsize = 18) plt.savefig(fig_name+'.png', dpi=150, bbox_inches="tight") #plt.show() plt.close() def 
plot_aggregate(obsProb, xlimit, obsCol, obsColDark, fig_title, fig_name): obsProb_adj = obsProb obsProbMean = list(np.average(np.array(obsProb_adj), axis = 0)) print([round(x,3) for x in obsProbMean]) ## plotting begins x = np.arange(0,xlimit) fig = plt.figure(figsize = (10,5)) ax = fig.add_subplot(111) std1 = np.std(np.array([x[:xlimit] for x in obsProb_adj]), axis = 0) #rects1 = ax.bar(x, obsProbMean[:xlimit], width = 0.35, color = obsCol, align = 'center', yerr = std1) rects1 = ax.bar(x, obsProbMean[:xlimit], width = 0.6, color = obsCol, align = 'center') plt.title(fig_title + "\n"+"Prob: "+ ", ".join([str(round(x,3)) for x in obsProbMean]), fontsize = 17) plt.xlabel("Categories", fontsize = 16) plt.ylabel("Proportion of \n RNAPII ChIA-Drop complexes", fontsize = 16) plt.xticks(np.arange(9),["SS", "ES","PS", "GS", "EP", "PP", "GP", "EG", "EE"], fontsize=16) plt.yticks(fontsize = 15) plt.savefig(directory+fig_name+'.png', dpi=300, bbox_inches='tight') plt.show() plt.close() def get_summ_9categories(edgetrip_dict): summ_raw = {} # 0: within SE, 1: SE2enh, 2: Enh2Enh, 3: Enh2Gene, 4: SE2gene for k, v in edgetrip_dict.items(): tmpdict = dict.fromkeys(["SS","ES", "PS", "GS", "EP", "PP", "GP", "EG", "EE"], 0) for x in v: e1 = x[0].split("-")[1][0] e2 = x[1].split("-")[1][0] tstr = ''.join(sorted([e1, e2])) if tstr in tmpdict.keys(): tmpdict[tstr] += x[2] summ_raw[k] = list(tmpdict.values()) summ_normalized = [] for x in list(summ_raw.values()): tot = sum(x) if tot > 0: summ_normalized.append([y/tot for y in x]) return summ_raw, summ_normalized def compute_degree_cent(g_dict): ## compute degree centrality se_deg = [] tg_deg = [] enh_deg = [] prom_deg = [] ose_deg = [] for k, v in g_dict.items(): G = v degrees = G.degree(weight = 'weight') tot = sum([d for n,d in degrees]) N = len(degrees) if tot > 0: for n,d in degrees: if n.split("-")[1][0] == "S": # is super-enhancer #se_deg.append(d/(N-1)) se_deg.append(d/(tot)) elif n.split("-")[1][0] == "E": # is enhancer 
#enh_deg.append(d/(N-1)) enh_deg.append(d/(tot)) elif n.split("-")[1][0] == "G": # is target gene #tg_deg.append(d/(N-1)) tg_deg.append(d/(tot)) elif n.split("-")[1][0] == "P": # is promoter #prom_deg.append(d/(N-1)) prom_deg.append(d/(tot)) elif n.split("-")[1][0] == "O": # is other super-enhancer #ose_deg.append(d/(N-1)) ose_deg.append(d/(tot)) return se_deg, ose_deg, enh_deg, prom_deg, tg_deg def get_clcoord(cldict, elements): clcoord = {} for k,v in cldict.items(): chrom = elements[k][0] start = elements[k][1] end = elements[k][2] if v in clcoord.keys(): if clcoord[v][1] > start: clcoord[v][1] = start if clcoord[v][2] < end: clcoord[v][2] = end else: clcoord[v] = [chrom, start, end] return clcoord def plot_1hist(x1,bin_lims, lab1, clr1, tit, xlab, fig_name): bin_centers = 0.5*(bin_lims[:-1]+bin_lims[1:]) bin_widths = bin_lims[1:]-bin_lims[:-1] hist1, _ = np.histogram(x1, bins=bin_lims) ##normalizing hist1b = hist1/np.max(hist1) fig, (ax2) = plt.subplots(nrows = 1, ncols = 1, figsize=(8, 6)) ax2.bar(bin_centers, hist1b, width = bin_widths, align = 'center', label = lab1, color = clr1, alpha = 0.2) ax2.legend(loc = 'upper right', fontsize = 18) plt.title(tit, fontsize = 18) plt.xlabel(xlab, fontsize = 18) plt.ylabel("Relative Proportion", fontsize = 18) plt.savefig(fig_name+'.pdf', dpi=300) plt.show() #plot_linear(G, nodes, cldict, ind, seid, coord, sedict, dist, cntse2g) def get_linear_positions(clcoord): left = min([x[1] for x in list(clcoord.values())]) right = max([x[2] for x in list(clcoord.values())]) span = right - left positions = {} for k,v in clcoord.items(): positions[k] = [int((v[1]+v[2])/2)-left, 0] return positions def plot_linear(G, nodes, cldict, ind, seid, coord, sedict, dist, cntse2g): clcoord = get_clcoord(cldict, elements) positions = get_linear_positions(clcoord) maxdeg = max([d[1] for d in G.degree(weight = 'weight')]) nodesizes = [int(d[1]*1000/maxdeg) for d in G.degree(weight = 'weight')] LG = nx.MultiDiGraph() LG.add_nodes_from(nodes) 
maxweight = max([x[2]['weight'] for x in G.edges.data()]) #Reds = cm.get_cmap('Reds', maxweight) for x in G.edges.data(): radius = (int(x[1].split("-")[0].split("CL")[1])-int(x[0].split("-")[0].split("CL")[1]))/(3*len(nodes)) #ecol = Reds(x[2]['weight']) ecol = 'navy' #if x[2]['weight'] >= 1 and x[2]['weight'] <= 20: # 1-20 complexes # width = x[2]['weight']/4 #elif x[2]['weight'] >= 21 and x[2]['weight'] <= 30: #21-30 complexes # width = 6 #elif x[2]['weight'] >= 31: # >= 31 complexes # width = 7 #width = x[2]['weight']*5/maxweight width = min(x[2]['weight']*8/maxweight, 7) if x[0].split("-")[1][0] == "S" or x[1].split("-")[1][0] == "S": radius = -radius LG.add_edge(x[0], x[1], weight = x[2]['weight'], rad=radius, width = width, col = ecol) plt.figure(figsize=(15, 18)) edges,weights = zip(*nx.get_edge_attributes(LG,'weight').items()) nx.draw_networkx_nodes(LG, pos = positions, nodelist = nodes, node_size = nodesizes, node_shape='o', with_labels=True, font_weight='bold', node_color=nodecolors) #nx.draw_networkx_labels(LG,pos,font_size=16) for edge in LG.edges(data=True): nx.draw_networkx_edges(LG, positions, edge_color = edge[2]["col"], arrowstyle="-", width = edge[2]["width"], edgelist=[(edge[0],edge[1])], connectionstyle=f'arc3, rad = {edge[2]["rad"]}') se2gene, se2enh, se2prom, enh2gene, enh2enh, enh2prom, prom2prom, prom2gene, tot = get_compcntedges(G) tit = "super-enhancer_linear-graphs_plot"+str(ind) +"_"+ coord.split(":")[0]+"-"+coord.split(":")[1] + "_"+seid + "_" + str(len(nodes)) + "nodes" plt.title(seid+"; "+str(len(nodes)) + " nodes; " +"\n"+ coord + " (" + str(dist) + " bps)" + "\n" + "Color bar max: " + str(maxweight) + "\n" + nodes[0] +"\n" +nodes[-1] + "\n" + str(cntse2g) + " complexes from SE to gene \n"+ str(sedict) + "\n" + "SE to gene: " + str(se2gene)+ " complexes \n" + "SE to enhancer: " + str(se2enh) + " complexes \n" + "SE to prom: " + str(se2prom)+ " complexes \n" + "Enhancer to gene: " + str(enh2gene) + " complexes \n" + "Enh to Enh: " + 
str(enh2enh) + " complexes \n" + "Enh to prom: " + str(enh2prom) + " complexes \n" + "Prom to prom: " + str(prom2prom) + " complexes \n" + "Prom to gene: " + str(prom2gene) + " complexes \n" + "Total: " + str(tot) +" complexes", fontsize=14) plt.savefig(directory+tit+'.png', dpi=300, bbox_inches='tight') #plt.show() plt.close() def write_result(directory, out_list, out_name): with open(directory+out_name, 'a') as file1: for i in range(len(out_list)): file1.write('\t'.join(map(str, out_list[i])) + '\n') file1.close() directory='/Users/kimm/Desktop/GM12878_files/' cohesin_rfile='GM12878-cohesin-pooled_comp_FDR_0.2_PASS.RNAPII-peaksoverlap.region' rnapii_rfile='GM12878-RNAPII-pooledv2_comp_FDR_0.2_PASS.RNAPII-peaksoverlap.region' elements_file='RNAPII-ChIA-PET-drop_peaks_merge500bp-superenhancer_const_chromHMM_ENCFF879KFK_RAID_20200729.bed' loop_file='LHG0035N_0035V_0045V.e500.clusters.cis.BE5.RNAPIIpeak.bothpksupport.bedpe' raid_file='GM12878_RAID_20200627.bed' raids = read_raidfile(directory, raid_file) loops = read_loopfile(directory, loop_file) rnapiir = read_regionfile(directory, rnapii_rfile) compbychr = get_compbychr(rnapiir) elements, ebyraid, sebyraid, elements_se = read_elementsfile(directory, elements_file) # elements, elem. by RAID, super-enh by RAID lpby1se, lpby2se = get_lpbyse(loops, elements) setarget = get_target(lpby1se, elements_se, elements, compbychr) len(setarget) setarget = se2target_elements(setarget, elements, elements_se, compbychr) ind = 0 summ_dict = {} edgetrip_dict = {} g_dict = {} compbycl_dict = {} intermprom = [] bed = [] table = [] # SE ID, target, TPM, span, coord, # interm. P, # interm. E, # interm. 
OSE, # se2gene, se2enh, se2prom, enh2gene, enh2enh, enh2prom, prom2prom, prom2gene, tot for k, v in setarget.items(): #print(k) if v[0][5] == k: # left target gene = v[0][2] tpm = float(v[0][3]) elif v[0][1] == k: # right target gene = v[0][6] tpm = float(v[0][7]) extracted = Counter(extract_compbyelm(v[1], elements, compbychr)) cldict = get_cldict(v, elements) compbycl = {} for k2, v2 in extracted.items(): klist = k2.split(",") cstr = ",".join([cldict[x] for x in klist]) if cstr in compbycl.keys(): compbycl[cstr] += v2 else: compbycl[cstr] = v2 bed.extend(get_clbed(v[2], cldict, elements)) nodes, nodecolors, nodecoldict, G, edgetriplet = get_graph(cldict, compbycl) se2gene, se2enh, se2prom, enh2gene, enh2enh, enh2prom, prom2prom, prom2gene, tot = get_compcntedges(G) summ_dict[k] = [se2gene, se2enh, se2prom, enh2gene, enh2enh, enh2prom, prom2prom, prom2gene] edgetrip_dict[k] = edgetriplet g_dict[k] = G compbycl_dict[k] = compbycl chrom = elements[v[1][0]][0] start = elements[v[1][0]][1] end = elements[v[1][-1]][2] dist = end-start seid = k sedict = v[3] cntse2g = v[4] coord = chrom +":" + str(start)+"-"+str(end) elmstates = [x.split("-")[1][0] for x in nodes] genelist = [] for i in range(1,len(v[2])-1): genelist.extend([elements[y][12] for y in v[2][i] if elements[y][19]=="P"]) genelistp = list(dict.fromkeys(sorted(genelist))) # list of intermediary promoter gene names for x in genelistp: intermprom.append([k, gene, tpm, dist, coord, x]) tmptable = [k, gene, tpm, dist, coord] tmptable.append(len([x for x in elmstates if x=="P"])) tmptable.append(len([x for x in elmstates if x=="E"])) tmptable.append(len([x for x in elmstates if x=="O"])) tmptable.extend([se2gene, se2enh, se2prom, enh2gene, enh2enh, enh2prom, prom2prom, prom2gene, tot]) table.append(tmptable) del tmptable if tot > 0: #plot_graph(nodes, nodecolors, G, ind, seid, coord, sedict, dist, cntse2g) tit = "super-enhancer_tripartite_graphs_plot"+str(ind) +"_"+ coord.split(":")[0]+"-"+coord.split(":")[1] + 
"_"+seid + "_" + str(len(nodes)) + "nodes" #plot_tripartite(G, nodes, tit) #plot_linear(G, nodes, cldict, ind, seid, coord, sedict, dist, cntse2g) ind += 1 del G write_result(directory, table, "GM12878_RNAPII_superenhancer_summarytable_20200824.txt") # check weight counts for k in edgetrip_dict.keys(): edgedict_sum = sum([x[2] for x in edgetrip_dict[k] if x[0] != x[1]]) graph_sum = sum([y['weight'] for y in [x[2] for x in g_dict[k].edges.data()]]) if edgedict_sum != graph_sum: print(k) se2g_summ, se2p_summ, se2e_summ, se2o_summ = get_summ_se2otherfrags(compbycl_dict) print("SE to other SE:") for i in range(5): print(str(i+1)+ " frag in SE: " + str(round(np.mean([x[i] for x in se2p_summ])*100, 3))+ " %") se2f_summ = se2g_summ se2f_summ.extend(se2p_summ) se2f_summ.extend(se2e_summ) se2f_summ.extend(se2o_summ) len(se2f_summ) print("SE to other functional elements:") for i in range(5): print(str(i+1)+ " frag in SE: " + str(round(np.mean([x[i] for x in se2f_summ])*100, 3))+ " %") totcnt = 0 for k,v in compbycl_dict.items(): tcounts = v for k1,v1 in tcounts.items(): totcnt += v1 totcnt se2f_raw = [0,0,0,0] # 1 frag in SE, 2, 3, 4, 5 to target gene for key, val in compbycl_dict.items(): tcounts = val for k, v in tcounts.items(): frags = k.split(",") fannot = [x.split("-")[1][0] for x in frags] #print(fannot) secnt = len([x for x in fannot if x=='S']) gcnt = len([x for x in fannot if x=='G']) ecnt = len([x for x in fannot if x=='E']) pcnt = len([x for x in fannot if x=='P']) ocnt = len([x for x in fannot if x=='O']) fcnt = gcnt + ecnt + pcnt + ocnt if fcnt > 0: if secnt == 1: se2f_raw[0] += v elif secnt == 2: se2f_raw[1] += v elif secnt == 3: se2f_raw[2] += v elif secnt > 3: se2f_raw[3] += v se2f_raw sum(se2f_raw) summ_raw, summ_normalized = get_summ_9categories(edgetrip_dict) ## plot degrees for each SE for k, v in g_dict.items(): G = v degrees = G.degree(weight = 'weight') degpair = sorted([[n,d] for n,d in degrees], key=lambda x: int(x[0].split("-")[0].split("CL")[1])) 
plot_barchart([x[0].split("-")[1] for x in degpair], [x[1] for x in degpair], "Degrees", "#590059", k + ": degrees", directory+k+"_degrees_barchart_20200820") se_deg, ose_deg, enh_deg, prom_deg, tg_deg = compute_degree_cent(g_dict) tit = "Normalized degrees RNAPII ChIA-Drop \n" + \ "SE: n = " + str(len(se_deg))+ "; mean = " + str(round(np.mean(se_deg), 2)) + "; median = " + str(round(np.median(se_deg), 2)) + "\n" \ "Other SE: n = " + str(len(ose_deg))+ "; mean = " + str(round(np.mean(ose_deg), 2)) + "; median = " + str(round(np.median(ose_deg), 2)) + "\n" \ "Enhancers: n = " + str(len(enh_deg))+ "; mean = " + str(round(np.mean(enh_deg), 2)) + "; median = " + str(round(np.median(enh_deg), 2)) + "\n" \ "Promoters: n = " + str(len(prom_deg))+ "; mean = " + str(round(np.mean(prom_deg), 2)) + "; median = " + str(round(np.median(prom_deg), 2)) + "\n" \ "Target gene: n = " + str(len(tg_deg))+ "; mean = " + str(round(np.mean(tg_deg), 2)) + "; median = " + str(round(np.median(tg_deg), 2)) + "\n" \ plot_boxplot([se_deg, ose_deg, enh_deg, prom_deg, tg_deg], ["SE", "Other SE", "Enh", "Prom", "Target Gene"], "#590059", tit, "Normalized degrees", "GM12878_RNAPII_SE_degree_centrality_normby-nEdges_boxplot_20200824") ```
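The weight-consistency check above can be illustrated with a toy version of the same bookkeeping (the node names and the `[node_u, node_v, weight]` triplet layout here are hypothetical, mirroring how `edgetrip_dict` values are used above): summing triplet weights while skipping self-loops must agree with the weights stored on the graph edges, and every non-loop edge contributes its weight to the degree of both endpoints.

```python
# Toy sketch of the edge-weight bookkeeping: accumulate weighted node degrees
# from [node_u, node_v, weight] triplets, skipping self-loops, and verify the
# degree sum equals twice the total edge weight.
from collections import defaultdict

edge_triplets = [
    ["CL1-S", "CL2-G", 5],   # SE fragment to target gene (hypothetical data)
    ["CL1-S", "CL3-E", 3],   # SE fragment to enhancer
    ["CL2-G", "CL3-E", 2],
    ["CL1-S", "CL1-S", 7],   # self-loop, excluded from the weight sum
]

degrees = defaultdict(int)
edge_weight_sum = 0
for u, v, w in edge_triplets:
    if u == v:
        continue
    degrees[u] += w
    degrees[v] += w
    edge_weight_sum += w

# Each non-loop edge is counted once per endpoint.
assert sum(degrees.values()) == 2 * edge_weight_sum
print(dict(degrees), edge_weight_sum)
```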
### Sources: 1) https://github.com/eitanrich/torch-mfa 2) https://github.com/eitanrich/gans-n-gmms ### Getting the CelebA dataset ``` !wget -q https://raw.githubusercontent.com/sayantanauddy/vae_lightning/main/data.py ``` ### Getting helper functions ``` !wget -q https://raw.githubusercontent.com/probml/pyprobml/master/scripts/mfa_celeba_helpers.py ``` ### Getting the Kaggle API token and uploading it to Colab. Follow the instructions [here](https://github.com/Kaggle/kaggle-api#api-credentials). ``` !pip install kaggle from google.colab import files uploaded = files.upload() !mkdir /root/.kaggle !cp kaggle.json /root/.kaggle/kaggle.json !chmod 600 /root/.kaggle/kaggle.json ``` ### Training and saving the checkpoint ``` !pip install torchvision !pip install pytorch-lightning import sys, os import torch from torchvision.datasets import CelebA, MNIST import torchvision.transforms as transforms from pytorch_lightning import LightningDataModule, LightningModule, Trainer from torch.utils.data import DataLoader, random_split import numpy as np from matplotlib import pyplot as plt from imageio import imwrite from packaging import version from mfa_celeba_helpers import * from data import CelebADataset, CelebADataModule """ MFA model training (data fitting) example. Note that the actual EM (and SGD) training code is part of the MFA class itself. 
""" def main(argv): assert version.parse(torch.__version__) >= version.parse("1.2.0") dataset = argv[1] if len(argv) == 2 else "celeba" print("Preparing dataset and parameters for", dataset, "...") if dataset == "celeba": image_shape = [64, 64, 3] # The input image shape n_components = 300 # Number of components in the mixture model n_factors = 10 # Number of factors - the latent dimension (same for all components) batch_size = 1000 # The EM batch size num_iterations = 30 # Number of EM iterations (=epochs) feature_sampling = 0.2 # For faster responsibilities calculation, randomly sample the coordinates (or False) mfa_sgd_epochs = 0 # Perform additional training with diagonal (per-pixel) covariance, using SGD init_method = "rnd_samples" # Initialize each component from few random samples using PPCA trans = transforms.Compose( [ CropTransform((25, 50, 25 + 128, 50 + 128)), transforms.Resize(image_shape[0]), transforms.ToTensor(), ReshapeTransform([-1]), ] ) train_set = CelebADataset(root="./data", split="train", transform=trans, download=True) test_set = CelebADataset(root="./data", split="test", transform=trans, download=True) elif dataset == "mnist": image_shape = [28, 28] # The input image shape n_components = 50 # Number of components in the mixture model n_factors = 6 # Number of factors - the latent dimension (same for all components) batch_size = 1000 # The EM batch size num_iterations = 30 # Number of EM iterations (=epochs) feature_sampling = False # For faster responsibilities calculation, randomly sample the coordinates (or False) mfa_sgd_epochs = 0 # Perform additional training with diagonal (per-pixel) covariance, using SGD init_method = "kmeans" # Initialize by using k-means clustering trans = transforms.Compose([transforms.ToTensor(), ReshapeTransform([-1])]) train_set = MNIST(root="./data", train=True, transform=trans, download=True) test_set = MNIST(root="./data", train=False, transform=trans, download=True) else: assert False, "Unknown dataset: " + 
dataset device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") model_dir = "./models/" + dataset os.makedirs(model_dir, exist_ok=True) figures_dir = "./figures/" + dataset os.makedirs(figures_dir, exist_ok=True) model_name = "c_{}_l_{}_init_{}".format(n_components, n_factors, init_method) print("Defining the MFA model...") model = MFA( n_components=n_components, n_features=np.prod(image_shape), n_factors=n_factors, init_method=init_method ).to(device=device) print("EM fitting: {} components / {} factors / batch size {} ...".format(n_components, n_factors, batch_size)) ll_log = model.batch_fit( train_set, test_set, batch_size=batch_size, max_iterations=num_iterations, feature_sampling=feature_sampling ) if mfa_sgd_epochs > 0: print("Continuing training using SGD with diagonal (instead of isotropic) noise covariance...") model.isotropic_noise = False ll_log_sgd = model.sgd_mfa_train( train_set, test_size=256, max_epochs=mfa_sgd_epochs, feature_sampling=feature_sampling ) ll_log += ll_log_sgd print("Saving the model...") torch.save(model.state_dict(), os.path.join(model_dir, "model_" + model_name + ".pth")) print("Visualizing the trained model...") model_image = visualize_model(model, image_shape=image_shape, end_component=10) imwrite(os.path.join(figures_dir, "model_" + model_name + ".jpg"), model_image) print("Generating random samples...") rnd_samples, _ = model.sample(100, with_noise=False) mosaic = samples_to_mosaic(rnd_samples, image_shape=image_shape) imwrite(os.path.join(figures_dir, "samples_" + model_name + ".jpg"), mosaic) print("Plotting test log-likelihood graph...") plt.plot(ll_log, label="c{}_l{}_b{}".format(n_components, n_factors, batch_size)) plt.grid(True) plt.savefig(os.path.join(figures_dir, "training_graph_" + model_name + ".jpg")) print("Done") if __name__ == "__main__": main(sys.argv) ```
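The `n_factors` setting above is the latent dimension of each mixture component. As a reference, here is the textbook generative model for a single factor-analyzer component in NumPy — a sketch of the underlying math, not of torch-mfa's internal code (the noise level `sigma` is an arbitrary assumption):

```python
# One MFA component generates an observation as: mean vector + low-rank factor
# loading matrix times a latent code, plus per-pixel noise.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_factors = 64 * 64 * 3, 10       # matches the celeba settings above

mu = rng.normal(size=n_features)              # component mean
A = rng.normal(size=(n_features, n_factors))  # factor loading matrix
sigma = 0.01                                  # isotropic noise std (assumed)

z = rng.normal(size=n_factors)                # latent code
x = mu + A @ z + sigma * rng.normal(size=n_features)
print(x.shape)  # one flattened 64x64x3 "image" sample
```

The mixture model then consists of `n_components` such triples `(mu, A, sigma)` plus mixing weights; EM fits them to the training images.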
# Classifying Fashion-MNIST Now it's your turn to build and train a neural network. You'll be using the [Fashion-MNIST dataset](https://github.com/zalandoresearch/fashion-mnist), a drop-in replacement for the MNIST dataset. MNIST is actually quite trivial with neural networks where you can easily achieve better than 97% accuracy. Fashion-MNIST is a set of 28x28 greyscale images of clothes. It's more complex than MNIST, so it's a better representation of the actual performance of your network, and a better representation of datasets you'll use in the real world. <img src='assets/fashion-mnist-sprite.png' width=500px> In this notebook, you'll build your own neural network. For the most part, you could just copy and paste the code from Part 3, but you wouldn't be learning. It's important for you to write the code yourself and get it to work. Feel free to consult the previous notebooks though as you work through this. First off, let's load the dataset through torchvision. ``` import torch from torchvision import datasets, transforms import helper # Define a transform to normalize the data transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) # Download and load the training data trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True) # Download and load the test data testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True) ``` Here we can see one of the images. ``` image, label = next(iter(trainloader)) helper.imshow(image[0,:]); ``` ## Building the network Here you should define your network. As with MNIST, each image is 28x28 which is a total of 784 pixels, and there are 10 classes. You should include at least one hidden layer. 
We suggest you use ReLU activations for the layers and to return the logits or log-softmax from the forward pass. It's up to you how many layers you add and the size of those layers. ``` from torch import nn, optim import torch.nn.functional as F # TODO: Define your network architecture here class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(784, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 64) self.fc4 = nn.Linear(64, 10) def forward(self, x): # make sure input tensor is flattened x = x.view(x.shape[0], -1) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) x = F.log_softmax(self.fc4(x), dim=1) return x ``` # Train the network Now you should create your network and train it. First you'll want to define [the criterion](http://pytorch.org/docs/master/nn.html#loss-functions) (something like `nn.CrossEntropyLoss` or `nn.NLLLoss`) and [the optimizer](http://pytorch.org/docs/master/optim.html) (typically `optim.SGD` or `optim.Adam`). Then write the training code. Remember the training pass is a fairly straightforward process: * Make a forward pass through the network to get the logits * Use the logits to calculate the loss * Perform a backward pass through the network with `loss.backward()` to calculate the gradients * Take a step with the optimizer to update the weights By adjusting the hyperparameters (hidden units, learning rate, etc), you should be able to get the training loss below 0.4. 
``` # TODO: Create the network, define the criterion and optimizer model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) # TODO: Train the network here epochs = 5 for e in range(epochs): running_loss = 0 for images, labels in trainloader: log_ps = model(images) loss = criterion(log_ps, labels) optimizer.zero_grad() loss.backward() optimizer.step() running_loss += loss.item() else: print(f"Training loss: {running_loss/len(trainloader)}") %matplotlib inline %config InlineBackend.figure_format = 'retina' import helper # Test out your network! dataiter = iter(testloader) images, labels = next(dataiter) img = images[1] # TODO: Calculate the class probabilities (softmax) for img ps = torch.exp(model(img)) # Plot the image and probabilities helper.view_classify(img, ps, version='Fashion') ```
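One detail worth spelling out: the model returns log-softmax values, so `torch.exp` recovers proper class probabilities. A tiny NumPy sketch with made-up logits shows why the exponentiated values form a distribution:

```python
# Exponentiating log-softmax outputs recovers softmax probabilities,
# which are non-negative and sum to 1.
import numpy as np

logits = np.array([2.0, -1.0, 0.5, 3.0])          # made-up raw scores
log_ps = logits - np.log(np.exp(logits).sum())    # log-softmax
ps = np.exp(log_ps)                               # probabilities

assert np.isclose(ps.sum(), 1.0)
print(ps)
```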
``` # Automatically reload imported modules that are changed outside this notebook %load_ext autoreload %autoreload 2 # More pixels in figures import matplotlib.pyplot as plt %matplotlib inline plt.rcParams["figure.dpi"] = 200 # Init PRNG with fixed seed for reproducibility import numpy as np np_rng = np.random.default_rng(1) import tensorflow as tf tf.random.set_seed(np_rng.integers(0, tf.int64.max)) ``` # Common Voice spoken language identification with a neural network **2020-11-08** This example is a thorough but simple walk-through of everything from loading mp3-files containing speech to preprocessing and transforming the speech data into something we can feed to a neural network classifier. Deep learning based speech analysis is a vast research topic and there are countless techniques that could possibly be applied to improve the results of this example. This example tries to avoid going into too much detail on these techniques and instead focuses on getting an end-to-end classification pipeline up and running with a small dataset. ## Data This example uses open speech data downloaded from the [Mozilla Common Voice](https://commonvoice.mozilla.org/en/datasets) project. See the readme file for downloading the data. In addition to the space needed for the downloaded data, you will need at least 10 GiB of free disk space for caching (can be disabled). 
Update `datadir` and `workdir` to match your setup. All output will be written to `workdir`. ``` import os workdir = "/data/exp/cv4" datadir = "/mnt/data/speech/common-voice/downloads/2020/cv-corpus" print("work dir:", workdir) print("data source dir:", datadir) os.makedirs(workdir, exist_ok=True) assert os.path.isdir(datadir), datadir + " does not exist" ``` Common Voice metadata is distributed as `tsv` files and all audio samples are mp3-files under `clips`. ``` dirs = sorted((f for f in os.scandir(datadir) if f.is_dir()), key=lambda f: f.name) print(datadir) for d in dirs: if d.name in languages: print(' ', d.name) for f in os.scandir(d): print(' ', f.name) missing_languages = set(languages) - set(d.name for d in dirs) assert missing_languages == set(), "missing languages: {}".format(missing_languages) ``` There's plenty of metadata, but it seems that the train-dev-test split has been predefined so lets use that. [pandas](https://pandas.pydata.org/pandas-docs/stable/index.html) makes it easy to read, filter, and manipulate metadata in tables. Lets try to preprocess all metadata here so we don't have to worry about it later. ``` import pandas as pd from IPython.display import display, Markdown # Lexicographic order of labels as a fixed index target to label mapping target2lang = tuple(sorted(languages)) lang2target = {lang: target for target, lang in enumerate(target2lang)} print("lang2target:", lang2target) print("target2lang:", target2lang) def expand_metadata(row): """ Update dataframe row by generating a unique utterance id, expanding the absolute path to the mp3 file, and adding an integer target for the label. 
""" row.id = "{:s}_{:s}".format( row.path.split(".mp3", 1)[0].split("common_voice_", 1)[1], row.split) row.path = os.path.join(datadir, row.lang, "clips", row.path) row.target = lang2target[row.lang] return row def tsv_to_lang_dataframe(lang, split): """ Given a language and dataset split (train, dev, test), load the Common Voice metadata tsv-file from disk into a pandas.DataFrame. Preprocess all rows by dropping unneeded columns and adding new metadata. """ df = pd.read_csv( os.path.join(datadir, lang, split + ".tsv"), sep='\t', # We only need these columns from the metadata usecols=("client_id", "path", "sentence")) # Add language label as column df.insert(len(df.columns), "lang", lang) # Add split name to every row for easier filtering df.insert(len(df.columns), "split", split) # Add placeholders for integer targets and utterance ids generated row-wise df.insert(len(df.columns), "target", -1) df.insert(len(df.columns), "id", "") # Create new metadata columns df = df.transform(expand_metadata, axis=1) return df split_names = ("train", "dev", "test") # Concatenate metadata for all 4 languages into a single table for each split splits = [pd.concat([tsv_to_lang_dataframe(lang, split) for lang in target2lang]) for split in split_names] # Concatenate split metadata into a single table, indexed by utterance ids meta = (pd.concat(splits) .set_index("id", drop=True, verify_integrity=True) .sort_index()) del splits for split in split_names: display(Markdown("### " + split)) display(meta[meta["split"]==split]) ``` ### Checking that all splits are disjoint by speaker To ensure our neural network will learn what language is being spoken and not who is speaking, we want to test it on data that does not have any voices present in the training data. The `client_id` should correspond to a unique, pseudonymized identifier for every speaker. Lets check all splits are disjoint by speaker id. 
``` def assert_splits_disjoint_by_speaker(meta): split2spk = {split: set(meta[meta["split"]==split].client_id.to_numpy()) for split in split_names} for split, spk in split2spk.items(): print("split {} has {} speakers".format(split, len(spk))) print() print("asserting all are disjoint") assert split2spk["train"] & split2spk["test"] == set(), "train and test, mutual speakers" assert split2spk["train"] & split2spk["dev"] == set(), "train and dev, mutual speakers" assert split2spk["dev"] & split2spk["test"] == set(), "dev and test, mutual speakers" print("ok") assert_splits_disjoint_by_speaker(meta) ``` We can see that none of the speakers are in two or more dataset splits. We also see that the test set has a lot of unique speakers who are not in the training set. This is good because we want to test that our neural network classifier knows how to classify input from unknown speakers. ### Checking that all audio files exist ``` for uttid, row in meta.iterrows(): assert os.path.exists(row["path"]), row["path"] + " does not exist" print("ok") ``` ## Balancing the language distribution Lets see how many samples we have per language. ``` import seaborn as sns sns.set(rc={'figure.figsize': (8, 6)}) ax = sns.countplot( x="split", order=split_names, hue="lang", hue_order=target2lang, data=meta) ax.set_title("Total amount of audio samples") plt.show() ``` We can see that the amount of samples with Mongolian, Tamil, and Turkish speech are quite balanced, but we have significantly larger amounts of Estonian speech. More data is of course always better, but if there is too much of one label compared to the others, our neural network might overfit on this label. But these are only the counts of audio files, how much speech do we have in total per language? We need to read every file to get a reliable answer. See also [SoX](http://sox.sourceforge.net/Main/HomePage) for a good command line tool. 
``` import miniaudio meta["duration"] = np.array([ miniaudio.mp3_get_file_info(path).duration for path in meta.path], np.float32) meta def plot_duration_distribution(data): sns.set(rc={'figure.figsize': (8, 6)}) ax = sns.boxplot( x="split", order=split_names, y="duration", hue="lang", hue_order=target2lang, data=data) ax.set_title("Median audio file duration in seconds") plt.show() ax = sns.barplot( x="split", order=split_names, y="duration", hue="lang", hue_order=target2lang, data=data, ci=None, estimator=np.sum) ax.set_title("Total amount of audio in seconds") plt.show() plot_duration_distribution(meta) ``` The median length of Estonian samples is approx. 2.5 seconds greater compared to Turkish samples, which have the shortest median length. We can also see that the total amount of Estonian speech is much larger compared to other languages in our datasets. Notice also the significant amount of outliers with long durations in the Tamil and Turkish datasets. Lets do simple random oversampling for the training split using this approach: 1. Select the target language according to maximum total amount of speech in seconds (Estonian). 2. Compute differences in total durations between the target language and the three other languages. 3. Compute median signal length by language. 4. Compute sample sizes by dividing the duration deltas with median signal lengths, separately for each language. 5. Draw samples with replacement from the metadata separately for each language. 6. Merge samples with rest of the metadata and verify there are no duplicate ids. 
``` def random_oversampling(meta): groupby_lang = meta[["lang", "duration"]].groupby("lang") total_dur = groupby_lang.sum() target_lang = total_dur.idxmax()[0] print("target lang:", target_lang) print("total durations:") display(total_dur) total_dur_delta = total_dur.loc[target_lang] - total_dur print("total duration delta to target lang:") display(total_dur_delta) median_dur = groupby_lang.median() print("median durations:") display(median_dur) sample_sizes = (total_dur_delta / median_dur).astype(np.int32) print("median duration weighted sample sizes based on total duration differences:") display(sample_sizes) samples = [] for lang in groupby_lang.groups: sample_size = sample_sizes.loc[lang][0] sample = (meta[meta["lang"]==lang] .sample(n=sample_size, replace=True, random_state=np_rng.bit_generator) .reset_index() .transform(update_sample_id, axis=1)) samples.append(sample) return pd.concat(samples).set_index("id", drop=True, verify_integrity=True) def update_sample_id(row): row["id"] = "{}_copy_{}".format(row["id"], row.name) return row # Augment training set metadata meta = pd.concat([random_oversampling(meta[meta["split"]=="train"]), meta]).sort_index() assert not meta.isna().any(axis=None), "NaNs in metadata after augmentation" plot_duration_distribution(meta) assert_splits_disjoint_by_speaker(meta) meta ``` Speech data augmentation is a common research topic. There are [better](https://www.isca-speech.org/archive/interspeech_2015/papers/i15_3586.pdf) ways to augment data than the simple duplication of metadata rows we did here. One approach (which we won't be doing here) which is easy to implement and might work well is to take copies of signals and make them randomly a bit faster or slower. For example, draw randomly speed ratios from `[0.9, 1.1]` and resample the signal by multiplying its sample rate with the random ratio. ## Inspecting the audio Lets take a look at the speech data and listen to a few randomly picked samples from each label. 
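As an aside, the speed-ratio augmentation mentioned above can be sketched as follows. This is a toy illustration that is not used elsewhere in this notebook; `np.interp` stands in for a proper polyphase resampler, and the `[0.9, 1.1]` range comes straight from the text:

```python
# Draw a random speed ratio and resample the signal to that speed by linear
# interpolation: a faster playback ratio yields a proportionally shorter signal.
import numpy as np

rng = np.random.default_rng(0)

def speed_perturb(signal, low=0.9, high=1.1):
    ratio = rng.uniform(low, high)
    new_len = int(round(len(signal) / ratio))        # faster -> shorter
    old_x = np.arange(len(signal))
    new_x = np.linspace(0, len(signal) - 1, new_len)
    return np.interp(new_x, old_x, signal), ratio

signal = np.sin(np.linspace(0, 100, 16000))          # 1 s of dummy audio at 16 kHz
perturbed, ratio = speed_perturb(signal)
print(len(signal), len(perturbed), round(ratio, 3))
```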
We pick 2 random samples for each language from the training set. ``` samples = (meta[meta["split"]=="train"] .groupby("lang") .sample(n=2, random_state=np_rng.bit_generator)) samples ``` Then lets read the mp3-files from disk, plot the signals, and listen to the audio. ``` from IPython.display import display, Audio, HTML import scipy.signal def read_mp3(path, resample_rate=16000): if isinstance(path, bytes): # If path is a tf.string tensor, it will be in bytes path = path.decode("utf-8") f = miniaudio.mp3_read_file_f32(path) # Downsample to target rate, 16 kHz is commonly used for speech data new_len = round(len(f.samples) * float(resample_rate) / f.sample_rate) signal = scipy.signal.resample(f.samples, new_len) # Normalize to [-1, 1] signal /= np.abs(signal).max() return signal, resample_rate def embed_audio(signal, rate): display(Audio(data=signal, rate=rate, embed=True, normalize=False)) def plot_signal(data, figsize=(6, 0.5), **kwargs): ax = sns.lineplot(data=data, lw=0.1, **kwargs) ax.set_axis_off() ax.margins(0) plt.gcf().set_size_inches(*figsize) plt.show() def plot_separator(): display(HTML(data="<hr style='border: 2px solid'>")) for sentence, lang, clip_path in samples[["sentence", "lang", "path"]].to_numpy(): signal, rate = read_mp3(clip_path) plot_signal(signal) print("length: {} sec".format(signal.size / rate)) print("lang:", lang) print("sentence:", sentence) embed_audio(signal, rate) plot_separator() ``` One of the most challenging aspects of the Mozilla Common Voice dataset is that the audio quality varies greatly: different microphones, background noise, the user speaking close to the device or far away, etc. It is difficult to ensure that a neural network will learn to classify different languages as opposed to classifying distinct acoustic artefacts from specific microphones. There's a [vast amount of research](https://www.isca-speech.org/archive/Interspeech_2020/) being done on developing techniques for solving these kinds of problems. 
However, these are well out of scope for this simple example and we won't be studying them here. ## Spectral representations It is usually not possible (at least not yet in 2020) to detect languages directly from the waveform. Instead, the [fast Fourier transform](https://en.wikipedia.org/wiki/Short-time_Fourier_transform) (FFT) is applied on small, overlapping windows of the signal to get a 2-dimensional representation of energies in different frequency bands. See [this](https://wiki.aalto.fi/display/ITSP/Spectrogram+and+the+STFT) for further details. However, output from the FFT is usually not usable directly and must be refined. Lets begin by selecting the first signal from our random sample and extracting the power spectrogram. ### Power spectrogram ``` from lidbox.features.audio import spectrograms def plot_spectrogram(S, cmap="viridis", figsize=None, **kwargs): if figsize is None: figsize = S.shape[0]/50, S.shape[1]/50 ax = sns.heatmap(S.T, cbar=False, cmap=cmap, **kwargs) ax.invert_yaxis() ax.set_axis_off() ax.margins(0) plt.gcf().set_size_inches(*figsize) plt.show() sample = samples[["sentence", "lang", "path"]].to_numpy()[0] sentence, lang, clip_path = sample signal, rate = read_mp3(clip_path) plot_signal(signal) powspec = spectrograms([signal], rate)[0] plot_spectrogram(powspec.numpy()) ``` This representation is very sparse, with zeros everywhere except in the lowest frequency bands. The main problem here is that relative differences between energy values are very large, making it difficult to compare large changes in energy. These differences can be reduced by mapping the values onto a logarithmic scale. The [decibel-scale](https://en.wikipedia.org/wiki/Decibel) is a common choice. We will use the maximum value of `powspec` as the reference power ($\text{P}_0$). 
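Concretely, the decibel mapping with the spectrogram maximum as reference power can be sketched like this — a NumPy illustration of the formula $10 \log_{10}(P / P_0)$, not lidbox's `power_to_db` implementation:

```python
# With the maximum as reference power P0, the strongest bin maps to 0 dB and
# every other bin to a negative dB value; the tiny floor avoids log10(0).
import numpy as np

powspec = np.array([[1e-8, 1e-4, 1.0],
                    [1e-2, 1e-6, 1e-1]])          # made-up power values
ref = powspec.max()                               # P0
dbspec = 10.0 * np.log10(np.maximum(powspec, 1e-12) / ref)
print(dbspec.max(), dbspec.min())
```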
### Decibel-scale spectrogram ``` from lidbox.features.audio import power_to_db dbspec = power_to_db([powspec])[0] plot_spectrogram(dbspec.numpy()) ``` This is an improvement, but the representation is still rather sparse. We also see that most speech information is in the lower bands, with a bit of energy in the higher frequencies. A common approach is to "squeeze together" the y-axis of all frequency bands by using a different scale, such as the [Mel-scale](https://en.wikipedia.org/wiki/Mel_scale). Lets "squeeze" the current 256 frequency bins into 40 Mel-bins. ### Log-scale Mel-spectrogram **Note** that we are scaling different things here. The Mel-scale warps the frequency bins (y-axis), while the logarithm is used to reduce relative differences between individual spectrogram values (pixels). ``` from lidbox.features.audio import linear_to_mel def logmelspectrograms(signals, rate): powspecs = spectrograms(signals, rate) melspecs = linear_to_mel(powspecs, rate, num_mel_bins=40) return tf.math.log(melspecs + 1e-6) logmelspec = logmelspectrograms([signal], rate)[0] plot_spectrogram(logmelspec.numpy()) ``` One common normalization technique is frequency channel standardization, i.e. normalization of rows to zero mean and unit variance. ``` from lidbox.features import cmvn logmelspec_mv = cmvn([logmelspec])[0] plot_spectrogram(logmelspec_mv.numpy()) ``` Or only mean-normalization if you think the variances contain important information. ``` logmelspec_m = cmvn([logmelspec], normalize_variance=False)[0] plot_spectrogram(logmelspec_m.numpy()) ``` ## Cepstral representations Another common representation are the Mel-frequency cepstral coefficients (MFCC), which are obtained by applying the [discrete cosine transform](https://en.wikipedia.org/wiki/Discrete_cosine_transform) on the log-scale Mel-spectrogram. 
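To make the DCT step concrete, here is a naive O(N²) DCT-II applied to a single made-up log-Mel frame (`tf.signal` uses a fast implementation internally; this sketch only illustrates the transform itself):

```python
# DCT-II of one frame: coefficient k correlates the frame with a cosine of
# frequency k. The 0th coefficient is just the sum of the frame (overall
# energy), which is one reason it is often dropped in favor of 1..20.
import numpy as np

def dct2(frame):
    N = len(frame)
    n = np.arange(N)
    return np.array([np.sum(frame * np.cos(np.pi * k * (2 * n + 1) / (2 * N)))
                     for k in range(N)])

log_mel_frame = np.log(np.array([3.0, 2.0, 2.0, 1.0]) + 1e-6)
coeffs = dct2(log_mel_frame)
print(coeffs)
```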
### MFCC ``` def plot_cepstra(X, figsize=None): if not figsize: figsize = (X.shape[0]/50, X.shape[1]/20) plot_spectrogram(X, cmap="RdBu_r", figsize=figsize) mfcc = tf.signal.mfccs_from_log_mel_spectrograms([logmelspec])[0] plot_cepstra(mfcc.numpy()) ``` Most of the information is concentrated in the lower coefficients. It is common to drop the 0th coefficient and select a subset starting at 1, e.g. 1 to 20. See [this post](http://practicalcryptography.com/miscellaneous/machine-learning/guide-mel-frequency-cepstral-coefficients-mfccs/) for more details. ``` mfcc = mfcc[:,1:21] plot_cepstra(mfcc.numpy()) ``` Now we have a very compact representation, but most of the variance is still in the lower coefficients and overshadows the smaller changes in higher coefficients. We can normalize the MFCC matrix row-wise by standardizing each row to zero mean and unit variance. This is commonly called cepstral mean and variance normalization (CMVN). ### MFCC + CMVN ``` mfcc_cmvn = cmvn([mfcc])[0] plot_cepstra(mfcc_cmvn.numpy()) ``` ### Which one is best? Speech feature extraction is a large, active research topic and it is impossible to choose one representation that would work well in all situations. Common choices in state-of-the-art spoken language identification are log-scale Mel-spectrograms and MFCCs, with different normalization approaches. For example, [here](https://github.com/swshon/dialectID_e2e) is an experiment in Arabic dialect identification, where log-scale Mel-spectra (referred to as FBANK) produced slightly better results compared to MFCCs. It is not obvious when to choose which representation, or if we should even use the FFT at all. You can read [this post](https://haythamfayek.com/2016/04/21/speech-processing-for-machine-learning.html) for a more detailed discussion. ## Voice activity detection It is common for speech datasets to contain audio samples with short segments of silence or sounds that are not speech. 
Since these are usually irrelevant for making a language classification decision, we would prefer to discard such segments. This is called voice activity detection (VAD) and it is another large, active research area. [Here](https://wiki.aalto.fi/pages/viewpage.action?pageId=151500905) is a brief overview of VAD. Non-speech segments can be either noise or silence. Separating non-speech noise from speech is non-trivial but possible, for example with [neural networks](https://www.isca-speech.org/archive/Interspeech_2019/pdfs/1354.pdf). Silence, on the other hand, shows up as zeros in our speech representations, since these segments contain lower energy values compared to segments with speech. Such non-speech segments are therefore easy to detect and discard, for example by comparing the energy of the segment to the average energy of the whole sample. If the samples in our example do not contain much background noise, a simple energy-based VAD technique should be enough to drop all silent segments. We'll use the [root mean square](https://en.wikipedia.org/wiki/Root_mean_square) (RMS) energy to detect short silence segments. `lidbox` has a simple energy-based VAD function, which we will use as follows: 1. Divide the signal into non-overlapping 10 ms long windows. 2. Compute RMS of each window. 3. Reduce all window RMS values by averaging to get a single mean RMS value. 4. Set a decision threshold at 0.1 for marking silence windows. In other words, if the window RMS is less than 0.1 of the mean RMS, mark the window as silence. 
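The four steps above can be written out from scratch in NumPy — an illustration of the idea only, not lidbox's `framewise_rms_energy_vad_decisions`:

```python
# Energy-based VAD: window the signal, compute per-window RMS, and mark a
# window as speech if its RMS exceeds `strength` times the mean RMS.
import numpy as np

def rms_energy_vad(signal, rate, window_ms=10, strength=0.1):
    frame_len = (window_ms * rate) // 1000
    n_frames = len(signal) // frame_len
    # Step 1: non-overlapping 10 ms windows
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    # Step 2: RMS of each window
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    # Step 3: mean RMS over all windows
    mean_rms = rms.mean()
    # Step 4: True = speech, False = silence
    return rms >= strength * mean_rms

rate = 16000
t = np.linspace(0, 1, rate, endpoint=False)
quiet = 0.001 * np.sin(2 * np.pi * 220 * t)   # near-silent second
loud = 0.5 * np.sin(2 * np.pi * 220 * t)      # speech-like energy
decisions = rms_energy_vad(np.concatenate([quiet, loud]), rate)
print(decisions.sum(), "of", len(decisions), "windows marked as speech")
```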
``` from lidbox.features.audio import framewise_rms_energy_vad_decisions import matplotlib.patches as patches sentence, lang, clip_path = sample signal, rate = read_mp3(clip_path) window_ms = tf.constant(10, tf.int32) window_frame_length = (window_ms * rate) // 1000 # Get binary VAD decisions for each 10 ms window vad_1 = framewise_rms_energy_vad_decisions( signal=signal, sample_rate=rate, frame_step_ms=window_ms, strength=0.1) # Plot unfiltered signal sns.set(rc={'figure.figsize': (6, 0.5)}) ax = sns.lineplot(data=signal, lw=0.1, legend=None) ax.set_axis_off() ax.margins(0) # Plot shaded area over samples marked as not speech (VAD == 0) for x, is_speech in enumerate(vad_1.numpy()): if not is_speech: rect = patches.Rectangle( (x*window_frame_length, -1), window_frame_length, 2, linewidth=0, color='gray', alpha=0.2) ax.add_patch(rect) plt.show() print("lang:", lang) print("sentence: '{}'".format(sentence)) embed_audio(signal, rate) # Partition the signal into 10 ms windows to match the VAD decisions windows = tf.signal.frame(signal, window_frame_length, window_frame_length) # Filter signal with VAD decision == 1 (remove gray areas) filtered_signal = tf.reshape(windows[vad_1], [-1]) plot_signal(filtered_signal) print("dropped {:d} out of {:d} frames, leaving {:.3f} of the original signal".format( signal.shape[0] - filtered_signal.shape[0], signal.shape[0], filtered_signal.shape[0]/signal.shape[0])) embed_audio(filtered_signal, rate) ``` The filtered signal has less silence, but some of the pauses between words sound too short and unnatural. We would prefer not to remove small pauses that normally occur between words, so lets say all pauses shorter than 300 ms should not be filtered out. Lets also move all VAD code into a function. 
```
def remove_silence(signal, rate):
    window_ms = tf.constant(10, tf.int32)
    window_frames = (window_ms * rate) // 1000

    # Get binary VAD decisions for each 10 ms window
    vad_1 = framewise_rms_energy_vad_decisions(
        signal=signal,
        sample_rate=rate,
        frame_step_ms=window_ms,
        # Do not return VAD = 0 decisions for sequences shorter than 300 ms
        min_non_speech_ms=300,
        strength=0.1)

    # Partition the signal into 10 ms windows to match the VAD decisions
    windows = tf.signal.frame(signal, window_frames, window_frames)
    # Filter signal with VAD decision == 1
    return tf.reshape(windows[vad_1], [-1])

sentence, lang, clip_path = sample
signal, rate = read_mp3(clip_path)
filtered_signal = remove_silence(signal, rate)

plot_signal(filtered_signal)
print("dropped {:d} out of {:d} frames, leaving {:.3f} of the original signal".format(
    signal.shape[0] - filtered_signal.shape[0],
    signal.shape[0],
    filtered_signal.shape[0]/signal.shape[0]))
print("lang:", lang)
print("sentence: '{}'".format(sentence))
embed_audio(filtered_signal, rate)
```

We dropped some silence segments but left most of the speech intact; perhaps this is enough for our example. Although this VAD approach is simple and works well enough for our data, it will not work for speech data with non-speech sounds in the background, such as music or noise. For such data we might need more powerful VAD filters, such as neural networks trained on a speech vs. non-speech classification task with large amounts of different noise. But let's not add more complexity to our example. We'll use the RMS-based filter for all other signals too.

## Comparison of representations

Let's extract these features for all signals in our random sample.
```
for sentence, lang, clip_path in samples[["sentence", "lang", "path"]].to_numpy():
    signal_before_vad, rate = read_mp3(clip_path)
    signal = remove_silence(signal_before_vad, rate)
    logmelspec = logmelspectrograms([signal], rate)[0]
    logmelspec_mvn = cmvn([logmelspec], normalize_variance=False)[0]
    mfcc = tf.signal.mfccs_from_log_mel_spectrograms([logmelspec])[0]
    mfcc = mfcc[:,1:21]
    mfcc_cmvn = cmvn([mfcc])[0]

    plot_width = logmelspec.shape[0]/50
    plot_signal(signal.numpy(), figsize=(plot_width, .6))
    print("VAD: {} -> {} sec".format(
        signal_before_vad.size / rate,
        signal.numpy().size / rate))
    print("lang:", lang)
    print("sentence:", sentence)
    embed_audio(signal.numpy(), rate)
    plot_spectrogram(logmelspec_mvn.numpy(), figsize=(plot_width, 1.2))
    plot_cepstra(mfcc_cmvn.numpy(), figsize=(plot_width, .6))
    plot_separator()
```

## Loading the samples to a `tf.data.Dataset` iterator

Our dataset is relatively small (2.5 GiB), so we might be able to read all files into signals and keep them in main memory. However, most speech datasets are much larger due to the amount of data needed for training neural network models that would be of any practical use. We need some kind of lazy iteration or streaming solution that views only one part of the dataset at a time. One such solution is to represent the dataset as a [TensorFlow iterator](https://www.tensorflow.org/api_docs/python/tf/data/Dataset), which evaluates its contents only when they are needed, similar to the [MapReduce](https://en.wikipedia.org/wiki/MapReduce) programming model for big data. The downside of lazy iteration or streaming is that we lose the capability of doing random access by row id. However, this shouldn't be a problem, since we can always keep the whole metadata table in memory and do random access on its rows whenever needed.
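The lazy-evaluation idea behind such dataset iterators can be illustrated with a plain Python generator. This is a conceptual sketch only; `tf.data` additionally parallelizes and pipelines the work, and the `read` and `transform` stand-ins below are hypothetical placeholders for file reading and feature extraction.

```python
def lazy_pipeline(paths, read, transform):
    """Yield transformed samples one at a time; nothing is read up front."""
    for path in paths:
        yield transform(read(path))

# Hypothetical stand-ins for reading a signal and extracting features
read = lambda path: list(range(3))
transform = lambda signal: [2 * x for x in signal]

pipeline = lazy_pipeline(["a.mp3", "b.mp3"], read, transform)
# No file has been "read" yet; work happens only when elements are requested
first = next(pipeline)  # -> [0, 2, 4]
```

Requesting the next element triggers the next read-and-transform step, so at any moment only one sample's worth of data needs to be in memory.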
Another benefit of TensorFlow dataset iterators is that we can map arbitrary [`tf.function`](https://www.tensorflow.org/api_docs/python/tf/function)s over the dataset, and TensorFlow will automatically parallelize the computations and place them on different devices, such as the GPU. The core architecture of `lidbox` has been organized around the `tf.data.Dataset` API, leaving all the heavy lifting for TensorFlow to handle.

But before we load all our speech data, let's warm up with our small random sample of 8 rows.

```
samples
```

Let's load it into a `tf.data.Dataset`.

```
def metadata_to_dataset_input(meta):
    # Create a mapping from column names to all values under the column as tensors
    return {
        "id": tf.constant(meta.index, tf.string),
        "path": tf.constant(meta.path, tf.string),
        "lang": tf.constant(meta.lang, tf.string),
        "target": tf.constant(meta.target, tf.int32),
        "split": tf.constant(meta.split, tf.string),
    }

sample_ds = tf.data.Dataset.from_tensor_slices(metadata_to_dataset_input(samples))
sample_ds
```

All elements produced by the `Dataset` iterator are `dict`s of (string, Tensor) pairs, where the string denotes the metadata type. Although the `Dataset` object is primarily for automating large-scale data processing pipelines, it is easy to extract all elements as `numpy` values:

```
for x in sample_ds.as_numpy_iterator():
    display(x)
```

### Reading audio files

Let's load the signals by [mapping](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map) a file-reading function over the whole dataset. We'll add a `tf.data.Dataset` function wrapper on top of `read_mp3`, which we defined earlier. TensorFlow will infer the input and output values of the wrapper as tensors from the type signature of the dataset elements. We must use `tf.numpy_function` if we want to allow calling the non-TensorFlow function `read_mp3` from inside the graph environment.
It might not be as efficient as using TensorFlow ops, but reading a file has a lot of latency anyway, so this is not a big performance hit. Besides, we can always hide the latency by reading several files in parallel.

```
def read_mp3_wrapper(x):
    signal, sample_rate = tf.numpy_function(
        # Function
        read_mp3,
        # Argument list
        [x["path"]],
        # Return value types
        [tf.float32, tf.int64])
    return dict(x, signal=signal, sample_rate=tf.cast(sample_rate, tf.int32))

for x in sample_ds.map(read_mp3_wrapper).as_numpy_iterator():
    print("id: {}".format(x["id"].decode("utf-8")))
    print("signal.shape: {}, sample rate: {}".format(x["signal"].shape, x["sample_rate"]))
    print()
```

### Removing silence and extracting features

Organizing all preprocessing steps as functions that can be mapped over the `Dataset` object allows us to represent complex transformations easily.

```
def remove_silence_wrapper(x):
    return dict(x, signal=remove_silence(x["signal"], x["sample_rate"]))

def batch_extract_features(x):
    with tf.device("GPU"):
        signals, rates = x["signal"], x["sample_rate"]
        logmelspecs = logmelspectrograms(signals, rates[0])
        logmelspecs_smn = cmvn(logmelspecs, normalize_variance=False)
        mfccs = tf.signal.mfccs_from_log_mel_spectrograms(logmelspecs)
        mfccs = mfccs[...,1:21]
        mfccs_cmvn = cmvn(mfccs)
    return dict(x, logmelspec=logmelspecs_smn, mfcc=mfccs_cmvn)

features_ds = (sample_ds.map(read_mp3_wrapper)
               .map(remove_silence_wrapper)
               .batch(1)
               .map(batch_extract_features)
               .unbatch())

for x in features_ds.as_numpy_iterator():
    print(x["id"])
    for k in ("signal", "logmelspec", "mfcc"):
        print("{}.shape: {}".format(k, x[k].shape))
    print()
```

### Inspecting dataset contents in TensorBoard

`lidbox` has a helper function for dumping element information into [`TensorBoard`](https://www.tensorflow.org/tensorboard) summaries. It converts all 2D features into images, writes signals as audio summaries, and extracts utterance ids.
```
import lidbox.data.steps as ds_steps

cachedir = os.path.join(workdir, "cache")

_ = ds_steps.consume_to_tensorboard(
    # Rename logmelspec as 'input', these will be plotted as images
    ds=features_ds.map(lambda x: dict(x, input=x["logmelspec"])),
    summary_dir=os.path.join(cachedir, "tensorboard", "data", "sample"),
    config={"batch_size": 1, "image_size_multiplier": 4})
```

Open a terminal and launch TensorBoard to view the summaries written to `workdir/cache/tensorboard/data/sample`:

```
tensorboard --logdir /data/exp/cv4/cache/tensorboard
```

Then open the URL in a browser and inspect the contents. You can leave the server running, since we'll log the training progress to the same directory.

## Loading all data

We'll now load everything from disk and prepare a pipeline from mp3 file paths to neural network input. We'll use the autotune feature of `tf.data` to let TensorFlow figure out automatically how much of the pipeline should be split into parallel calls.

```
import lidbox.data.steps as ds_steps

TF_AUTOTUNE = tf.data.experimental.AUTOTUNE

def signal_is_not_empty(x):
    return tf.size(x["signal"]) > 0

def pipeline_from_metadata(data, shuffle=False):
    if shuffle:
        # Shuffle metadata to get an even distribution of labels
        data = data.sample(frac=1, random_state=np_rng.bit_generator)
    ds = (
        # Initialize dataset from metadata
        tf.data.Dataset.from_tensor_slices(metadata_to_dataset_input(data))
        # Read mp3 files from disk in parallel
        .map(read_mp3_wrapper, num_parallel_calls=TF_AUTOTUNE)
        # Apply RMS VAD to drop silence from all signals
        .map(remove_silence_wrapper, num_parallel_calls=TF_AUTOTUNE)
        # Drop signals that VAD removed completely
        .filter(signal_is_not_empty)
        # Extract features in parallel
        .batch(1)
        .map(batch_extract_features, num_parallel_calls=TF_AUTOTUNE)
        .unbatch()
    )
    return ds

# Mapping from dataset split names to tf.data.Dataset objects
split2ds = {
    split: pipeline_from_metadata(meta[meta["split"]==split], shuffle=split=="train")
    for split in split_names
}
```

### Testing pipeline performance

Note that so far we have only constructed the pipeline with all the steps we want to compute. All TensorFlow ops are evaluated only when elements are requested from the iterator. Let's iterate over the training dataset from first to last element to ensure the pipeline will not be a performance bottleneck during training.

```
_ = ds_steps.consume(split2ds["train"], log_interval=2000)
```

### Caching pipeline state

We can [cache](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#cache) the iterator state as a single binary file at an arbitrary stage. This allows us to automatically skip all steps that precede the call to `tf.data.Dataset.cache`. Let's cache the training dataset and iterate over all elements once to fill the cache. **Note** that you will still be storing all data on disk (4.6 GiB of new data), so this optimization is a space-time tradeoff.

```
os.makedirs(os.path.join(cachedir, "data"))
split2ds["train"] = split2ds["train"].cache(os.path.join(cachedir, "data", "train"))
_ = ds_steps.consume(split2ds["train"], log_interval=2000)
```

If we iterate over the dataset again, TensorFlow should read all elements from the cache file.

```
_ = ds_steps.consume(split2ds["train"], log_interval=2000)
```

As a side note, if your training environment has fast read-write access to a file system configured for reading and writing very large files, this optimization can be a very significant performance improvement.

**Note** also that all the usual problems related to cache invalidation apply. When caching extracted features and metadata to disk, be extra careful in your experiments to ensure you are not interpreting results computed on data from an outdated cache.

### Dumping a few batches to TensorBoard

Let's extract the first 100 elements of every split to TensorBoard.
```
for split, ds in split2ds.items():
    _ = ds_steps.consume_to_tensorboard(
        ds.map(lambda x: dict(x, input=x["logmelspec"])),
        os.path.join(cachedir, "tensorboard", "data", split),
        {"batch_size": 1, "image_size_multiplier": 2, "num_batches": 100},
        exist_ok=True)
```

## Training a supervised, neural network language classifier

We have now configured an efficient data pipeline and extracted some data samples to summary files for TensorBoard. It is time to train a classifier on the data.

### Drop metadata from dataset

During training we only need a tuple of model input and target, so we can drop everything else from the dataset elements just before training starts. This is also a good place to decide whether we want to train on MFCCs or Mel-spectra.

```
model_input_type = "logmelspec"

def as_model_input(x):
    return x[model_input_type], x["target"]

train_ds_demo = list(split2ds["train"]
                     .map(as_model_input)
                     .shuffle(100)
                     .take(6)
                     .as_numpy_iterator())

for input, target in train_ds_demo:
    print(input.shape, target2lang[target])
    if model_input_type == "mfcc":
        plot_cepstra(input)
    else:
        plot_spectrogram(input)
    plot_separator()
```

### Asserting all input is valid

Since the training dataset is cached, we can quickly iterate over all elements and check that we don't have any NaNs or negative targets.

```
def assert_finite(x, y):
    tf.debugging.assert_all_finite(x, "non-finite input")
    tf.debugging.assert_non_negative(y, "negative target")
    return x, y

_ = ds_steps.consume(split2ds["train"].map(as_model_input).map(assert_finite), log_interval=5000)
```

It is also easy to compute statistics on the dataset elements, for example the global minimum and maximum values of the inputs.
```
x_min = split2ds["train"].map(as_model_input).reduce(
    tf.float32.max,
    lambda acc, elem: tf.math.minimum(acc, tf.math.reduce_min(elem[0])))
x_max = split2ds["train"].map(as_model_input).reduce(
    tf.float32.min,
    lambda acc, elem: tf.math.maximum(acc, tf.math.reduce_max(elem[0])))
print("input tensor global minimum: {}, maximum: {}".format(x_min.numpy(), x_max.numpy()))
```

### Selecting a model architecture

`lidbox` provides a small set of neural network architectures out of the box, many of which have good results in the literature on different datasets. These models are implemented in Keras, so you could replace the model we are using here with anything you want.

The ["x-vector"](http://danielpovey.com/files/2018_odyssey_xvector_lid.pdf) architecture has worked well in speaker and language identification, so let's create an untrained Keras x-vector model. One of its core features is learning fixed-length vector representations (x-vectors) for input of arbitrary length. These vectors are extracted from the first fully connected layer (`segment1`), without activation. This opens up opportunities for doing all kinds of statistical analysis on these vectors, but that is out of scope for our example.

We'll try to regularize the network by adding frequency [channel dropout](https://dl.acm.org/doi/abs/10.1016/j.patrec.2017.09.023) with probability 0.8. In other words, during training we randomly zero out entire frequency channels of the input with probability 0.8. This might help avoid overfitting the network on frequency channels containing noise that is irrelevant for deciding the language.
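What channel dropout does to a feature matrix can be sketched in NumPy. This is an illustration of the idea only; during training, Keras' `SpatialDropout1D` additionally rescales the kept channels by `1/(1 - rate)`, which this sketch omits.

```python
import numpy as np

def channel_dropout(features, rate, rng):
    """Zero out entire frequency channels (columns of a [time, freq] matrix)."""
    num_channels = features.shape[1]
    # One keep/drop decision per channel, shared across all time steps
    keep = rng.random(num_channels) >= rate
    return features * keep[np.newaxis, :]

rng = np.random.default_rng(42)
features = np.ones((100, 40))  # 100 time steps, 40 mel bins
dropped = channel_dropout(features, rate=0.8, rng=rng)

# Each surviving channel is kept at every time step; the rest are all-zero
kept_channels = dropped.any(axis=0)
```

Because the keep/drop decision is shared across time, a dropped channel is zero for the whole utterance, which forces the network not to rely on any single frequency band.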
```
import lidbox.models.xvector as xvector

def create_model(num_freq_bins, num_labels):
    model = xvector.create([None, num_freq_bins], num_labels, channel_dropout_rate=0.8)
    model.compile(
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5))
    return model

model = create_model(
    num_freq_bins=20 if model_input_type == "mfcc" else 40,
    num_labels=len(target2lang))
model.summary()
```

### Channel dropout demo

Here's what happens to the input during training.

```
channel_dropout = tf.keras.layers.SpatialDropout1D(model.get_layer("channel_dropout").rate)

for input, target in train_ds_demo:
    print(input.shape, target2lang[target])
    input = channel_dropout(tf.expand_dims(input, 0), training=True)[0].numpy()
    if model_input_type == "mfcc":
        plot_cepstra(input)
    else:
        plot_spectrogram(input)
    plot_separator()
```

### Training the classifier

The validation set is needed after every epoch, so we might as well cache it. **Note** that this writes 2.5 GiB of additional data to disk the first time the validation set is iterated over, i.e. at the end of epoch 1. Also, since our inputs have different lengths, we cannot batch them directly (one workaround would be [ragged tensors](https://www.tensorflow.org/versions/r2.3/api_docs/python/tf/data/experimental/dense_to_ragged_batch)), so we use a batch size of 1.
```
callbacks = [
    # Write scalar metrics and network weights to TensorBoard
    tf.keras.callbacks.TensorBoard(
        log_dir=os.path.join(cachedir, "tensorboard", model.name),
        update_freq="epoch",
        write_images=True,
        profile_batch=0,
    ),
    # Stop training if validation loss has not improved from the global minimum in 10 epochs
    tf.keras.callbacks.EarlyStopping(
        monitor='val_loss',
        patience=10,
    ),
    # Write model weights to cache every time we get a new global minimum loss value
    tf.keras.callbacks.ModelCheckpoint(
        os.path.join(cachedir, "model", model.name),
        monitor='val_loss',
        save_weights_only=True,
        save_best_only=True,
        verbose=1,
    ),
]

train_ds = split2ds["train"].map(as_model_input).shuffle(1000)
dev_ds = split2ds["dev"].cache(os.path.join(cachedir, "data", "dev")).map(as_model_input)

history = model.fit(
    train_ds.batch(1),
    validation_data=dev_ds.batch(1),
    callbacks=callbacks,
    verbose=2,
    epochs=100)
```

## Evaluating the classifier

Let's run all test set samples through the trained model, loading the best weights from the cache.

```
from lidbox.util import predict_with_model

test_ds = split2ds["test"].map(lambda x: dict(x, input=x["logmelspec"])).batch(1)
_ = model.load_weights(os.path.join(cachedir, "model", model.name))
utt2pred = predict_with_model(model, test_ds)

test_meta = meta[meta["split"]=="test"]
assert not test_meta.join(utt2pred).isna().any(axis=None), "missing predictions"
test_meta = test_meta.join(utt2pred)
test_meta
```

### Average detection cost ($\text{C}_\text{avg}$)

The de facto standard metric for evaluating spoken language classifiers might be the *average detection cost* ($\text{C}_\text{avg}$), which has been refined to its current form during past [language recognition competitions](https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=925272). `lidbox` provides this metric as a `tf.keras.Metric` subclass. Scikit-learn provides other commonly used metrics, so there is no need to compute those manually.
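To make the structure of the metric concrete, here is a deliberately simplified sketch of $\text{C}_\text{avg}$ computed from hard argmax decisions. The real metric thresholds per-language detection scores (and this sketch fixes $C_\text{miss} = C_\text{fa} = 1$, $P_\text{target} = 0.5$), so use the `lidbox` implementation for actual evaluations.

```python
import numpy as np

def avg_detection_cost(y_true, y_pred, num_langs, p_target=0.5, c_miss=1.0, c_fa=1.0):
    """Simplified hard-decision C_avg over integer language labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    costs = []
    for target in range(num_langs):
        # Miss: a target-language trial not classified as the target
        p_miss = np.mean(y_pred[y_true == target] != target)
        # False alarm: a non-target trial classified as the target,
        # averaged over the non-target languages
        p_fa = np.mean([
            np.mean(y_pred[y_true == lang] == target)
            for lang in range(num_langs) if lang != target])
        costs.append(c_miss * p_target * p_miss + c_fa * (1 - p_target) * p_fa)
    return float(np.mean(costs))

y_true = [0, 0, 1, 1, 2, 2, 3, 3]
perfect = avg_detection_cost(y_true, y_true, num_langs=4)   # 0.0 for a perfect classifier
```

A lower value is better: a perfect classifier scores 0, while a degenerate classifier that always predicts the same language scores 0.5 under these costs.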
```
from lidbox.util import classification_report
from lidbox.visualize import draw_confusion_matrix

true_sparse = test_meta.target.to_numpy(np.int32)
pred_dense = np.stack(test_meta.prediction)
pred_sparse = pred_dense.argmax(axis=1).astype(np.int32)

report = classification_report(true_sparse, pred_dense, lang2target)
for m in ("avg_detection_cost", "avg_equal_error_rate", "accuracy"):
    print("{}: {:.3f}".format(m, report[m]))

lang_metrics = pd.DataFrame.from_dict({k: v for k, v in report.items() if k in lang2target})
lang_metrics["mean"] = lang_metrics.mean(axis=1)
display(lang_metrics.T)

fig, ax = draw_confusion_matrix(report["confusion_matrix"], lang2target)
```

## Conclusions

This was an example of simple, deep-learning-based spoken language identification for 4 different languages from the Mozilla Common Voice free speech datasets. We managed to train a model that adequately recognizes the languages spoken by the test set speakers. However, there is clearly room for improvement. We did simple random oversampling to balance the language distribution in the training set, but perhaps there are better ways to do this. We also did not tune the optimization hyperparameters or try different neural network architectures or layer combinations. It might also be possible to increase robustness with audio feature engineering, such as [random FIR filtering](https://www.isca-speech.org/archive/Interspeech_2018/abstracts/1047.html) to simulate microphone differences.
### Load Dataset

```
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns

# Draw plots inside the notebook
%matplotlib inline

# Work around minus signs rendering incorrectly in plot labels
mpl.rcParams['axes.unicode_minus'] = False

import warnings
warnings.filterwarnings('ignore')

train = pd.read_csv("data/Bike Sharing Demand/train.csv", parse_dates=["datetime"])
train.shape

test = pd.read_csv("data/Bike Sharing Demand/test.csv", parse_dates=["datetime"])
test.shape
```

### Feature Engineering

```
train["year"] = train["datetime"].dt.year
train["month"] = train["datetime"].dt.month
train["day"] = train["datetime"].dt.day
train["hour"] = train["datetime"].dt.hour
train["minute"] = train["datetime"].dt.minute
train["second"] = train["datetime"].dt.second
train["dayofweek"] = train["datetime"].dt.dayofweek
train.shape

test["year"] = test["datetime"].dt.year
test["month"] = test["datetime"].dt.month
test["day"] = test["datetime"].dt.day
test["hour"] = test["datetime"].dt.hour
test["minute"] = test["datetime"].dt.minute
test["second"] = test["datetime"].dt.second
test["dayofweek"] = test["datetime"].dt.dayofweek
test.shape

# windspeed has far too many 0 values => this mis-recorded data needs fixing
fig, axes = plt.subplots(nrows=2)
fig.set_size_inches(18,10)

plt.sca(axes[0])
plt.xticks(rotation=30, ha='right')
axes[0].set(ylabel='Count',title="train windspeed")
sns.countplot(data=train, x="windspeed", ax=axes[0])

plt.sca(axes[1])
plt.xticks(rotation=30, ha='right')
axes[1].set(ylabel='Count',title="test windspeed")
sns.countplot(data=test, x="windspeed", ax=axes[1])

# Replace the 0 windspeed values with something more plausible.
# We could fill in the mean across the board, but that is unlikely to improve prediction accuracy.
# train.loc[train["windspeed"] == 0, "windspeed"] = train["windspeed"].mean()
# test.loc[test["windspeed"] == 0, "windspeed"] = test["windspeed"].mean()

# Split the data into rows where windspeed is 0 and rows where it is not
trainWind0 = train.loc[train['windspeed'] == 0]
trainWindNot0 = train.loc[train['windspeed'] != 0]
print(trainWind0.shape)
print(trainWindNot0.shape)

# So instead, predict the missing windspeed values with machine learning.
from sklearn.ensemble import RandomForestClassifier

def predict_windspeed(data):
    # Split into rows with windspeed 0 and rows without
    dataWind0 = data.loc[data['windspeed'] == 0]
    dataWindNot0 = data.loc[data['windspeed'] != 0]

    # Select the features used to predict windspeed
    wCol = ["season", "weather", "humidity", "month", "temp", "year", "atemp"]

    # Cast the non-zero windspeed values to strings so they act as class labels
    dataWindNot0["windspeed"] = dataWindNot0["windspeed"].astype("str")

    # Use a random forest classifier
    rfModel_wind = RandomForestClassifier()

    # Learn windspeed from the values of the features in wCol
    rfModel_wind.fit(dataWindNot0[wCol], dataWindNot0["windspeed"])

    # Predict windspeed for the rows where it was recorded as 0
    wind0Values = rfModel_wind.predict(X = dataWind0[wCol])

    # Make new dataframes so the predicted values can be compared afterwards
    predictWind0 = dataWind0
    predictWindNot0 = dataWindNot0

    # Fill in the predicted values where windspeed was recorded as 0
    predictWind0["windspeed"] = wind0Values

    # Append the filled-in rows back onto the rows with non-zero windspeed
    data = predictWindNot0.append(predictWind0)

    # Cast windspeed back to float
    data["windspeed"] = data["windspeed"].astype("float")

    data.reset_index(inplace=True)
    data.drop('index', inplace=True, axis=1)

    return data

# Fix the 0 values
train = predict_windspeed(train)
# test = predict_windspeed(test)

# Visualize the windspeed data after adjusting the 0 values
fig, ax1 = plt.subplots()
fig.set_size_inches(18,6)
plt.sca(ax1)
plt.xticks(rotation=30, ha='right')
ax1.set(ylabel='Count',title="train windspeed")
sns.countplot(data=train, x="windspeed", ax=ax1)
```

### Feature Selection

##### Separate the signal from the noise.
##### More features do not automatically mean better performance.
##### Add and change features one at a time, and drop the ones that do not improve performance.

```
# Continuous and categorical features
# Continuous features = ["temp","humidity","windspeed","atemp"]
# Change the dtype of the categorical features to category
categorical_feature_names = ["season","holiday","workingday","weather",
                             "dayofweek","month","year","hour"]

for var in categorical_feature_names:
    train[var] = train[var].astype("category")
    test[var] = test[var].astype("category")

feature_names = ["season", "weather", "temp", "atemp", "humidity", "windspeed",
                 "year", "hour", "dayofweek", "holiday", "workingday"]
feature_names

X_train = train[feature_names]
print(X_train.shape)
X_train.head()

X_test = test[feature_names]
print(X_test.shape)
X_test.head()

label_name = "count"
y_train = train[label_name]
print(y_train.shape)
y_train.head()
```

### Score

### RMSLE

```
from sklearn.metrics import make_scorer

def rmsle(predicted_values, actual_values):
    # Convert to numpy arrays
    predicted_values = np.array(predicted_values)
    actual_values = np.array(actual_values)

    # Add 1 to both predictions and actuals, then take the log
    log_predict = np.log(predicted_values + 1)
    log_actual = np.log(actual_values + 1)

    # Subtract the actual from the predicted log values and square the difference
    difference = log_predict - log_actual
    # difference = (log_predict - log_actual) ** 2
    difference = np.square(difference)

    # Take the mean
    mean_difference = difference.mean()

    # Take the square root
    score = np.sqrt(mean_difference)

    return score

rmsle_scorer = make_scorer(rmsle)
rmsle_scorer
```

### Cross Validation

```
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score

k_fold = KFold(n_splits=10, shuffle=True, random_state=0)
```

### RandomForest

```
from sklearn.ensemble import RandomForestRegressor

max_depth_list = []
model = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0)
model

%time score = cross_val_score(model, X_train, y_train, cv=k_fold, scoring=rmsle_scorer)
score = score.mean()
# The closer the score is to 0, the better
print("Score= {0:.5f}".format(score))
```

### Train

```
# Fit the model (think of "fitting" clothes): pass in the features and the label and it learns
model.fit(X_train, y_train)

# Predict
predictions = model.predict(X_test)
print(predictions.shape)
predictions[0:10]

# Visualize the predicted data
fig,(ax1,ax2)= plt.subplots(ncols=2)
fig.set_size_inches(12,5)
sns.distplot(y_train,ax=ax1,bins=50)
ax1.set(title="train")
sns.distplot(predictions,ax=ax2,bins=50)
ax2.set(title="test")
```

### Submit

```
submission = pd.read_csv("data/Bike Sharing Demand/sampleSubmission.csv")
submission

submission["count"] = predictions
print(submission.shape)
submission.head()

submission.to_csv("data/Score_{0:.5f}_submission.csv".format(score), index=False)
```

### References

EDA & Ensemble Model (Top 10 Percentile) | Kaggle

How to finish top 10 percentile in Bike Sharing Demand Competition In Kaggle? (part -1)

How to finish top 10 percentile in Bike Sharing Demand Competition In Kaggle? (part -2)
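As a closing note on the RMSLE metric used above: a common refinement, not done in this notebook, is to train on `log1p(count)` and invert the prediction with `expm1`, so that minimizing squared error in log space directly minimizes RMSLE. A minimal sketch of the transform pair:

```python
import numpy as np

def to_log_target(counts):
    # log1p(x) = log(1 + x); well-defined even when a count is 0
    return np.log1p(counts)

def from_log_prediction(log_preds):
    # expm1 inverts log1p; clipping guards against tiny negative outputs
    return np.clip(np.expm1(log_preds), 0, None)

counts = np.array([0, 1, 10, 100])
roundtrip = from_log_prediction(to_log_target(counts))  # recovers the original counts
```

The regressor would then be fit on `to_log_target(y_train)` and its predictions passed through `from_log_prediction` before writing the submission.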
# 数学函数、字符串和对象 ## 本章介绍Python函数来执行常见的数学运算 - 函数是完成一个特殊任务的一组语句,可以理解为一个函数相当于一个小功能,但是在开发中,需要注意一个函数的长度最好不要超过一屏 - Python中的内置函数是不需要Import导入的 <img src="../Photo/15.png"></img> ``` a = -10 print(abs(a)) max b = -10.1 print(abs(b)) c = 0 print(abs(c)) max(1, 2, 3, 4, 5) min(1, 2, 3, 4, 5) min(1, 2, 3, -4, 5) for i in range(10): print(i) pow(2, 4, 2) # 幂指数运算,第三个参数是取模运算 round(10.67, 1) # 一个参数就是四舍五入,保留小数位数 round(6.9) round(8.123456,3) import random a = random.randint(0,1000) print("随机数为",a) print("已经生成一个0-1000随机数") print("请输入你猜测的数字大小") zz = int(input("please input a num : " )) if a<zz: print("大了") if a>zz: print("小了") if a==zz: print("猜中了") import time start = time.time() num = 0 for i in range(1000000): num +=i end = time.time() print(end - start) ``` ## 尝试练习Python内置函数 ## Python中的math模块提供了许多数学函数 <img src="../Photo/16.png"></img> <img src="../Photo/17.png"></img> ``` #L = (yloga+(1-y)log(1-a)) import math #y=0 #a=1 import math # 导入数学包 a1 = math.fabs(-2) print(a1) print(math.log(2.71828)) print(math.asin(1.0)) b1 = math.cos(math.radians(90)) # cos代入的是弧度值,very important! 
print(b1) c1 = 3.1415926 print(math.degrees(c1)) math.sqrt(9) math.sin(2 * math.pi) math.cos(2 * math.pi) min(2, 2, 1) math.log(math.e ** 2) math.exp(1) max(2, 3, 4) math.ceil(-2.5) # 验证码系统 first_num, second_num = 3, 4 print('验证码', first_num ,'+', second_num, '= ?') answer = eval(input('写出结果: ')) if answer == first_num + second_num: print('验证码正确') else: print('验证码错误') import random import math first_num, second_num = 3, 4 list = ['+', '-', '*', '/'] randl = random.randint(0, 3) if list[randl]=='+': print('验证码', first_num ,'+', second_num, '= ?') right_answer = first_num + second_num elif list[randl]=='-': print('验证码', first_num ,'-', second_num, '= ?') right_answer = first_num - second_num elif list[randl]=='-': print('验证码', first_num ,'*', second_num, '= ?') right_answer = first_num * second_num else: print('验证码', first_num ,'/', second_num, '= ?') right_answer = first_num / second_num answer = eval(input('写出结果: ')) if answer == right_answer: print('验证码正确') else: print('验证码错误') # 验证码系统 import random first_num = random.randint(0, 9) second_num = random.randint(0, 9) fuhao = random.randint(0, 3) if fuhao==0: print('验证码', first_num ,'+', second_num, '= ?') right_answer = first_num + second_num elif fuhao==1: print('验证码', first_num ,'-', second_num, '= ?') right_answer = first_num - second_num elif fuhao==2: print('验证码', first_num ,'*', second_num, '= ?') right_answer = first_num * second_num else: print('验证码', first_num ,'/', second_num, '= ?') right_answer = first_num / second_num answer = eval(input('写出结果: ')) if answer == right_answer: print('验证码正确') else: print('验证码错误') import random list = ['+', '-', '*', '/'] c = random.sample(list, 1) print(c) import random import math first_num = random.randint(0, 9) second_num = random.randint(0, 9) list = ['+', '-', '*', '/'] fuhao = random.sample(list, 1) if fuhao=='+': print('验证码', first_num ,'+', second_num, '= ?') right_answer = first_num + second_num elif fuhao=='-': print('验证码', first_num ,'-', second_num, '= ?') 
right_answer = first_num - second_num elif fuhao=='-': print('验证码', first_num ,'*', second_num, '= ?') right_answer = first_num * second_num else: print('验证码', first_num ,'/', second_num, '= ?') right_answer = first_num / second_num answer = eval(input('写出结果: ')) if answer == right_answer: print('验证码正确') else: print('验证码错误') import PIL ``` ## 两个数学常量PI和e,可以通过使用math.pi 和math.e调用 ``` import math print(math.pi) print(math.e) ``` ## EP: - 通过math库,写一个程序,使得用户输入三个顶点(x,y)返回三个角度 - 注意:Python计算角度为弧度制,需要将其转换为角度 <img src="../Photo/18.png"> ``` import math x1 = int(input("请输入A点x1坐标: ")) y1 = int(input("请输入A点y1坐标: ")) x2 = int(input("请输入B点x1坐标: ")) y2 = int(input("请输入B点y1坐标: ")) x3 = int(input("请输入C点x1坐标: ")) y3 = int(input("请输入C点y1坐标: ")) c = math.sqrt((x2-x1)**2+(y2-y1)**2) b = math.sqrt((x3-x1)**2+(y3-y1)**2) a = math.sqrt((x3-x2)**2+(y3-y2)**2) A = math.degrees(math.acos((a*a-b*b-c*c)/(-2*b*c))) B = math.degrees(math.acos((b*b-a*a-c*c)/(-2*a*c))) print(A) import math x1, y1 = eval(input('输入A点坐标:')) x2, y2 = eval(input('输入B点坐标:')) x3, y3 = eval(input('输入C点坐标:')) a = math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2) b = math.sqrt((x1 - x3) ** 2 + (y1 - y3) ** 2) c = math.sqrt((x2 - x3) ** 2 + (y2 - y3) ** 2) A = math.degrees(math.acos((a * a - b * b - c * c) / (-2 * b * c))) B = math.degrees(math.acos((b * b - a * a - c * c) / (-2 * a * c))) C = math.degrees(math.acos((c * c - b * b - a * a) / (-2 * a * b))) print('三角形的三个角分别为', A, B, C) ``` ## 字符串和字符 - 在Python中,字符串必须是在单引号或者双引号内,在多段换行的字符串中可以使用“”“ - 在使用”“”时,给予其变量则变为字符串,否则当多行注释使用 ``` z = "hhhhh我好帅" a = "h" for i in z: print(i) count = 0 print(a) if i == a: count+=1 print(count) a = 'joker' b = "Kate" c = """在Python中,字符串必须是在单引号或者双引号内,在多段换行的字符串中可以使用“”“ 在使用”“”时,给予其变量则变为字符串,否则当多行注释使用""" #字符串有多行时,添加三个单引号或者三个双引号 """在Python中,字符串必须是在单引号或者双引号内,在多段换行的字符串中可以使用“”“ 在使用”“”时,给予其变量则变为字符串,否则当多行注释使用""" #三引号可以表示多行注释 # 当6个引号没有赋值时,那么它是注释的作用 # 6个引号的作用,多行文本 print(type(a), type(b), type(c)) ``` ## ASCII码与Unicode码 - <img src="../Photo/19.png"></img> - <img 
src="../Photo/20.png"></img> - <img src="../Photo/21.png"></img> ## 函数ord、chr - ord 返回ASCII码值 - chr 返回字符 ``` joker = 'A' ord(joker) print(ord('q'), ord('Z')) print(chr(65)) print(chr(90)) import numpy as np np.nonzero(1) cs = "1558789464@qq.com" print("加密后:",end = '') for i in cs: zz = ord(i) + 1 print(str(zz),end = '') ``` ## EP: - 利用ord与chr进行简单邮箱加密 ``` email = 'maomaochong@163.com' # 邮箱加密过程 j = 0 for i in email: text = ord(i) + 1 re_text = chr(text) print(re_text) import hashlib str1 = 'this is a test.' h1 = hashlib.md5() h1.update(str1.encode(encoding = 'utf-8')) print('MD5加密之后为:', h1.hexdigest()) ``` ## 转义序列 \ - a = "He said,"Johon's program is easy to read"" - 转掉原来的意思 - 一般情况下,只有当语句与默认语句相撞的时候,就需要转义 ``` a = "He said,\"Johon's program is easy to read\"" #z正则表达式中常用转义字符\ print(a) ``` ## 高级print - 参数 end: 以什么方式结束打印 - 默认换行打印 ``` email = 'maomaochong@163.com' # 邮箱加密过程 j = 0 for i in email: text = ord(i) + 1 re_text = chr(text) print(re_text, end = '') ``` ## 函数str - 将类型强制转换成字符串类型 - 其他一些以后会学到(list,set,tuple...) 
``` a = 100.12 type(str(a)) ``` ## 字符串连接操作 - 直接使用 “+” - join() 函数 ``` a1 = 'www.baidu.com/image.page=' a2 = '1' for i in range(0, 10): a2 = a1 + str(i) print(a2) %time '' .join(('a','b')) a = "abc" b = "def" %time print(a+b) joint = '^' %time joint.join(('a', 'b', 'c', 'd')) # join的参数需要在一个元组之中 %time '*'.join(('a', 'b', 'c', 'd')) # join的参数需要在一个元组之中 %time 'A' + 'B' + 'C' ``` ## EP: - 将 “Welcome” “to” "Python" 拼接 - 将int型 100 与 “joker is a bad man” 拼接 - 从控制台读取字符串 > 输入一个名字返回夸奖此人是一个帅哥 ``` a = "Welcome" b = "to" c = "Python" print(''.join((a,'\t',b,'\t',c))) a = 100 b = "jocker is a bad man" print('' .join((str(a),'\t',b))) text1 = ' '.join(('Welcome', 'to', 'Python')) i = 100 text2 = str(i) text3 = ' '.join((text2, 'Joker is a bad man')) print(text1, '\n', text2 ,'\n', text3) name = input('输入名字:') text = ' '.join((name, 'is a good boy.')) print(text) ``` ## 实例研究:最小数量硬币 - 开发一个程序,让用户输入总金额,这是一个用美元和美分表示的浮点值,返回一个由美元、两角五分的硬币、一角的硬币、五分硬币、以及美分个数 <img src="../Photo/22.png"></img> ``` amount = eval(input('Enter an amount, for example 11.56: ')) fenshuAmount = int(amount * 100) dollorAmount = fenshuAmount // 100 remainDollorAmount = fenshuAmount % 100 jiaoAmount = remainDollorAmount // 25 remainJiaoAmount = remainDollorAmount % 25 fenAmount = remainJiaoAmount // 10 remainFenAmount = remainJiaoAmount % 10 fenAmount2 = remainFenAmount // 5 remainFenAmount2 = remainFenAmount % 5 fenFinalAmount = remainFenAmount2 print('美元个数为',dollorAmount,'\n', '两角五分硬币个数为', jiaoAmount, '\n','一角个数为', fenAmount, '\n','五美分个数为', fenAmount2,'\n', '一美分个数为',fenFinalAmount) amount = eval(input('Ennter an amount,for example 11.56:')) remainingAmount = int(amount * 100) print(remainingAmount) numberOfOneDollars = remainingAmount //100 remainingAmount = remainingAmount % 100 numberOfQuarters = remainingAmount // 25 remainingAmount = remainingAmount % 25 numberOfDimes = remainingAmount // 10 remainingAmount = remainingAmount % 10 numberOfNickls = remainingAmount // 5 remainingAmount = remainingAmount % 5 
numberOfPenies = remainingAmount
print(numberOfOneDollars, numberOfQuarters, numberOfDimes, numberOfNickls, numberOfPenies)
```

- A weak point of Python: its handling of floating-point values is not great, so numeric data work usually relies on NumPy types instead

<img src="../Photo/23.png"></img>

```
remainingAmount = eval(input('Enter an amount, for example 11.56: '))
print(remainingAmount)
numberOfOneDollars = remainingAmount // 100
remainingAmount = remainingAmount % 100
numberOfQuarters = remainingAmount // 25
remainingAmount = remainingAmount % 25
numberOfDimes = remainingAmount // 10
remainingAmount = remainingAmount % 10
numberOfNickls = remainingAmount // 5
remainingAmount = remainingAmount % 5
numberOfPenies = remainingAmount
print(numberOfOneDollars, numberOfQuarters, numberOfDimes, numberOfNickls, numberOfPenies)
```

## id and type
- id shows an object's memory address; it will come up in comparison statements
- type shows an element's type

```
a = 100
id(a)
id(True)
100 == 100
112345678800000000 is '112345678800000000'
112345678800000000 is 112345678800000000
a = True
b = False
print(id(a), id(b))
a is b
```

## See the book for other formatting statements

# Homework
- 1
<img src="../Photo/24.png"></img>
<img src="../Photo/25.png"></img>

```
import math
r = float(input("please input the r: "))
s = 2 * r * math.sin(math.pi / 5)
area = 5 * s * s / (4 * math.tan(math.pi / 5))
print(round(area, 2))
```

- 2
<img src="../Photo/26.png"></img>

```
import math
x1 = math.radians(float(input("please input the x1: ")))
y1 = math.radians(float(input("please input the y1: ")))
x2 = math.radians(float(input("please input the x2: ")))
y2 = math.radians(float(input("please input the y2: ")))
radius = 6371.01
d = radius * math.acos(math.sin(x1) * math.sin(x2) + math.cos(x1) * math.cos(x2) * math.cos(y1 - y2))
print(d)
```

- 3
<img src="../Photo/27.png"></img>

```
import math
s = float(input("please input the s: "))
area = 5 * s * s / (4 * math.tan(math.pi / 5))
print(area)
```

- 4
<img src="../Photo/28.png"></img>

```
import math
n = int(input("please input the n: "))
s = float(input("please input the s: "))
area = n * s * s / (4 * math.tan(math.pi / n))
print(area)
```

- 5
<img src="../Photo/29.png"></img>
<img
src="../Photo/30.png"></img>

```
a = int(input("please input an ASCII code: "))
print(chr(a))
```

- 6
<img src="../Photo/31.png"></img>

```
name = input("please input your name: ")
worked = float(input("please input your hours worked: "))
pay = float(input("please input your hourly pay rate: "))
federal = float(input("please input federal tax withholding: "))
state = float(input("please input state tax: "))
Cross = worked * pay
Federal = (worked * pay) * 1 / 5
State = (worked * pay) * 9 / 100
total = Federal + State
Pay = (worked * pay) - total
print("Employee Name: " + name)
print("Hours Worked: ", worked)
print("Pay rate: ", pay)
print("Gross Pay: ", Cross)
print("Deductions:")
print("Federal Withholding(20%): ", Federal)
print("State Withholding(9%): ", round(State, 2))
print("Total Deduction: ", round(total, 2))
print("Net Pay: ", round(Pay, 2))
```

- 7
<img src="../Photo/32.png"></img>

```
a = input("please input a 4-digit number: ")
result = a[::-1]
print(result)
```

- 8 advanced:
> Encrypt a string of text and write the encrypted result to a local file

```
a = 'abcd'
reves_text = ''
for i in a:
    text = ord(i) + 1
    re_text = chr(text)
    reves_text += re_text
    print(re_text, end='')

outputfile = open("day02.txt", 'w')
outputfile.write(reves_text)
outputfile.close()
```
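The shift-by-one cipher used throughout this notebook can be packaged as a reusable pair of functions. Below is a hedged sketch of problem 8 (the filename `day02.txt` mirrors the one used above; the function names and the `shift` parameter are illustrative choices, not from the assignment):

```python
def encrypt(text, shift=1):
    # shift every character's code point up by `shift`
    return ''.join(chr(ord(ch) + shift) for ch in text)

def decrypt(text, shift=1):
    # reverse the shift to recover the original text
    return ''.join(chr(ord(ch) - shift) for ch in text)

cipher = encrypt('abcd')
print(cipher)           # every letter moves one step: 'bcde'
print(decrypt(cipher))  # round-trips back to 'abcd'

# write the encrypted text to a local file, as problem 8 asks
with open("day02.txt", "w") as outputfile:
    outputfile.write(cipher)
```

Using `with open(...)` closes the file automatically, which avoids the dangling-handle issue in the hand-rolled version above.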
##### Copyright 2019 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License");

```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```

# Get started with TensorFlow 2.0: for experts

<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://www.tensorflow.org/alpha/tutorials/quickstart/advanced"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/ko/alpha/tutorials/quickstart/advanced.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/ko/alpha/tutorials/quickstart/advanced.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
</table>

Note: This document was translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that they exactly reflect the up-to-date [official English documentation](https://www.tensorflow.org/?hl=en). If you have improvements for this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to translate or review documents, please email [docs-ko@tensorflow.org](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ko).

This document is a [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) notebook file. Because Python programs run directly in the browser, Colab is a great tool for learning and using TensorFlow:

1. Connect to a Python runtime: at the top right of the menu bar, select *CONNECT*.
2. Run all of the notebook's code cells: select *Runtime* > *Run all*.

For more examples and detailed guides, see the [TensorFlow tutorials](https://www.tensorflow.org/alpha/tutorials/).

First, import the TensorFlow library into your program:

```
from __future__ import absolute_import, division, print_function, unicode_literals

!pip install tensorflow-gpu==2.0.0-alpha0
import tensorflow as tf

from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model
```

Load and prepare the [MNIST dataset](http://yann.lecun.com/exdb/mnist/).

```
mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Add a channel dimension.
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]
```

Use tf.data to shuffle the dataset and create batches:

```
train_ds = tf.data.Dataset.from_tensor_slices(
    (x_train, y_train)).shuffle(10000).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)
```

Build the `tf.keras` model using the Keras [model subclassing API](https://www.tensorflow.org/guide/keras#model_subclassing):

```
class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = Conv2D(32, 3, activation='relu')
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10, activation='softmax')

    def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)

model = MyModel()
```

Choose an optimizer and a loss function for training:

```
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
```

Select metrics to measure the model's loss and performance. These metrics accumulate values as the epochs progress, and the final result is printed from what was collected.
```
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')

test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')
```

Train the model using `tf.GradientTape`:

```
@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        predictions = model(images)
        loss = loss_object(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    train_loss(loss)
    train_accuracy(labels, predictions)
```

Now test the model:

```
@tf.function
def test_step(images, labels):
    predictions = model(images)
    t_loss = loss_object(labels, predictions)

    test_loss(t_loss)
    test_accuracy(labels, predictions)

EPOCHS = 5

for epoch in range(EPOCHS):
    for images, labels in train_ds:
        train_step(images, labels)

    for test_images, test_labels in test_ds:
        test_step(test_images, test_labels)

    template = 'Epoch: {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
    print(template.format(epoch + 1,
                          train_loss.result(),
                          train_accuracy.result() * 100,
                          test_loss.result(),
                          test_accuracy.result() * 100))
```

The trained image classifier reaches about 98% accuracy on this dataset. To learn more, see the [TensorFlow tutorials](https://www.tensorflow.org/alpha/tutorials/).
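The `tf.keras.metrics.Mean` objects above accumulate a running average over every batch in an epoch and are read out once at the end. Below is a pure-Python sketch of that accumulate/result/reset pattern (the class and its method names are my own illustration, not a TensorFlow API):

```python
class RunningMean:
    """Accumulates values and reports their mean, in the spirit of
    tf.keras.metrics.Mean: call to update, result() to read, reset() per epoch."""
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def __call__(self, value):
        # update the accumulator with one batch's value
        self.total += value
        self.count += 1

    def result(self):
        # mean over everything seen since the last reset
        return self.total / self.count

    def reset(self):
        # start fresh for the next epoch
        self.total, self.count = 0.0, 0

train_loss = RunningMean()
for batch_loss in [0.9, 0.7, 0.5]:
    train_loss(batch_loss)
print(train_loss.result())  # ~0.7, the mean of the three batch losses
```

Keeping a sum and a count (rather than a list of values) is what lets such metrics run over an arbitrarily long epoch in constant memory.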
<img src="https://cybersecurity-excellence-awards.com/wp-content/uploads/2017/06/366812.png">

<h1><center>Darwin Supervised Classification Model Building </center></h1>

Prior to getting started, there are a few things you want to do:
1. Set the dataset path.
2. Enter your username and password to ensure that you're able to log in successfully.

Once you're up and running, here are a few things to be mindful of:
1. For every run, look up the job status (i.e. requested, failed, running, completed) and wait for the job to complete before proceeding.
2. If you're not satisfied with your model and think that Darwin can do better by exploring a larger search space, use the resume function.

## Import libraries

```
# Import necessary libraries
import warnings
warnings.filterwarnings("ignore", message="numpy.dtype size changed")
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
from IPython.display import Image
from time import sleep
import os
import numpy as np
from sklearn.metrics import classification_report
from amb_sdk.sdk import DarwinSdk
```

## Setup
**Login to Darwin**<br>
Enter your registered username and password below to log in to Darwin.

```
# Login
ds = DarwinSdk()
ds.set_url('https://amb-trial-api.sparkcognition.com/v1/')
status, msg = ds.auth_login_user('username', 'password')
if not status:
    print(msg)
```

**Data Path** <br>
In the cell below, set the path to your dataset; the default is Darwin's example datasets.

```
path = '../../sets/'
```

## Data Upload and Clean
**Read dataset and view a file snippet**

After setting up the dataset path, the next step is to upload the dataset from your local device to the server. <br>
In the cell below, you need to specify the dataset_name if you want to use your own data.
```
dataset_name = 'cancer_train.csv'
df = pd.read_csv(os.path.join(path, dataset_name))
df.head()
```

**Upload dataset to Darwin**

```
# Upload dataset
status, dataset = ds.upload_dataset(os.path.join(path, dataset_name))
if not status:
    print(dataset)
```

**Clean dataset**

```
# clean dataset
target = "Diagnosis"
status, job_id = ds.clean_data(dataset_name, target = target)

if status:
    ds.wait_for_job(job_id['job_name'])
else:
    print(job_id)
```

## Create and Train Model
We will now build a model that will learn the class labels in the target column.<br>
In the default cancer dataset, the target column is "Diagnosis". <br>
You will have to specify your own target name for your custom dataset. <br>
You can also increase max_train_time for longer training.

```
model = target + "_model0"
status, job_id = ds.create_model(dataset_names = dataset_name, \
                                 model_name = model, \
                                 max_train_time = '00:02')
if status:
    ds.wait_for_job(job_id['job_name'])
else:
    print(job_id)
```

## Extra Training (Optional)
Run the following cell for extra training; there is no need to specify parameters.

```
# Train some more
status, job_id = ds.resume_training_model(dataset_names = dataset_name,
                                          model_name = model,
                                          max_train_time = '00:05')
if status:
    ds.wait_for_job(job_id['job_name'])
else:
    print(job_id)
```

## Analyze Model
Analyze model provides feature importance as ranked by the model. <br>
It gives a general view of which features have the biggest impact on the model.

```
# Retrieve feature importance of built model
status, artifact = ds.analyze_model(model)
sleep(1)
if status:
    ds.wait_for_job(artifact['job_name'])
else:
    print(artifact)
status, feature_importance = ds.download_artifact(artifact['artifact_name'])
```

Show the 10 most important features of the model.
```
feature_importance[:10]
```

## Predictions
**Perform model prediction on the training dataset.**

```
status, artifact = ds.run_model(dataset_name, model)
sleep(1)
ds.wait_for_job(artifact['job_name'])
```

Download predictions from Darwin's server.

```
status, prediction = ds.download_artifact(artifact['artifact_name'])
prediction.head()
```

Create plots comparing predictions with the actual target.

```
unq = prediction[target].unique()[::-1]
p = np.zeros((len(prediction),))
a = np.zeros((len(prediction),))
for i, q in enumerate(unq):
    p += i * (prediction[target] == q).values
    a += i * (df[target] == q).values

# Plot predictions vs actual
plt.plot(a)
plt.plot(p)
plt.legend(['Actual', 'Predicted'])
plt.yticks([i for i in range(len(unq))], [q for q in unq]);
print(classification_report(df[target], prediction[target]))
```

**Perform model prediction on a test dataset that wasn't used in training.** <br>
Upload test dataset

```
test_data = 'cancer_test.csv'
status, dataset = ds.upload_dataset(os.path.join(path, test_data))
if not status:
    print(dataset)
```

Clean test data

```
# clean test dataset
status, job_id = ds.clean_data(test_data, target = target, model_name = model)

if status:
    ds.wait_for_job(job_id['job_name'])
else:
    print(job_id)
```

Run model on test dataset.
```
status, artifact = ds.run_model(test_data, model)
sleep(1)
ds.wait_for_job(artifact['job_name'])
```

Create plots comparing predictions with the actual target.

```
# Create plots comparing predictions with actual target
status, prediction = ds.download_artifact(artifact['artifact_name'])
df = pd.read_csv(os.path.join(path, test_data))
unq = prediction[target].unique()[::-1]
p = np.zeros((len(prediction),))
a = np.zeros((len(prediction),))
for i, q in enumerate(unq):
    p += i * (prediction[target] == q).values
    a += i * (df[target] == q).values

# Plot predictions vs actual
plt.plot(a)
plt.plot(p)
plt.legend(['Actual', 'Predicted'])
plt.yticks([i for i in range(len(unq))], [q for q in unq]);
print(classification_report(df[target], prediction[target]))
```

## Find out which machine learning model Darwin used:

```
status, model_type = ds.lookup_model_name(model)
print(model_type['description']['best_genome'])
```
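The plotting cells above convert string class labels into integer codes (via `unique()` and index arithmetic) so that predictions and actual labels can be drawn on the same axes. A stdlib-only sketch of that encoding step, with no pandas involved (the function name is illustrative):

```python
def encode_labels(labels):
    """Map each distinct label to an integer code, in first-seen order,
    mirroring the unique()-plus-index lookup used in the plotting cells."""
    order = []
    for lab in labels:
        if lab not in order:
            order.append(lab)
    # each label becomes its position in the discovered class order
    return [order.index(lab) for lab in labels], order

codes, classes = encode_labels(['M', 'B', 'B', 'M', 'B'])
print(codes)    # [0, 1, 1, 0, 1]
print(classes)  # ['M', 'B'] -- these become the y-tick labels
```

The same integer codes can then be fed to `plt.plot` and `plt.yticks` exactly as the notebook does with its `p` and `a` arrays.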
# Hashing

## Linear probing

On a collision, we look for the next free slot in our hash table. This behavior is expressed by the formula:<br>
$h(k, i) = (h'(k) + i) \bmod m$ with $h'(k) = k \bmod m$,<br>
where m is the size of the hash table.<br>
On the first attempt, i = 0. If a collision occurs, we compute a new address by adding i to the original result of the hash function h'. This can produce addresses that lie outside the address space, which is why h also needs a 'mod m'.
<br>
Example sequence:<br>
The 9 elements 10, 22, 31, 4, 15, 28, 17, 88, 59 are to be inserted into a hash table with m = 11.<br>
Computation:<br>
Element 10: 10 mod 11 = 10 -> 10 goes to table position 10<br>
Element 22: 22 mod 11 = 0 -> 22 goes to table position 0<br>
Element 31: 31 mod 11 = 9 -> 31 goes to table position 9<br>
Element 4: 4 mod 11 = 4 -> 4 goes to table position 4<br>
Element 15: 15 mod 11 = 4 -> collision!<br>
(h'(15) + 1) mod 11 = (4 + 1) mod 11 = 5 -> 15 goes to table position 5<br>
...<br>

| adr | value |
|-----|-------|
| 0   | 22    |
| 1   | 88    |
| 2   |       |
| 3   |       |
| 4   | 4     |
| 5   | 15    |
| 6   | 28    |
| 7   | 17    |
| 8   | 59    |
| 9   | 31    |
| 10  | 10    |

## Quadratic probing

Here we instead use $h(k,i) = (h'(k) + c_{1}*i + c_{2}*i^{2}) \bmod m$. h' is the same as in linear probing.<br>
We again work with the example numbers from above and choose $c_{1} = 1$ and $c_{2} = 3$ (this choice is arbitrary; the constants can be picked freely).

Computation:<br>
Element 10: 10 mod 11 = 10 -> 10 goes to table position 10<br>
...<br>
Element 15: 15 mod 11 = 4 -> collision! <br>
(h'(15) + 1 + 3) mod 11 = (4 + 1 + 3) mod 11 = 8 -> 15 goes to position 8<br>
Element 28: 28 mod 11 = 6 -> 28 goes to table position 6<br>
Element 17: 17 mod 11 = 6 -> collision! <br>
(h'(17) + 1 + 3) mod 11 = 10 -> collision! <br>
(h'(17) + 2 + 12) mod 11 = 9 -> collision!
<br>
(h'(17) + 3 + 27) mod 11 = 3 -> 17 goes to position 3 <br>
...

| adr | value |
|-----|-------|
| 0   | 22    |
| 1   |       |
| 2   | 88    |
| 3   | 17    |
| 4   | 4     |
| 5   |       |
| 6   | 28    |
| 7   | 59    |
| 8   | 15    |
| 9   | 31    |
| 10  | 10    |

## Double hashing

Here the hash function h is composed of two hash functions, where $h_{1}$ is the h' from above.<br>
$h(k,i)=(h_{1}(k) + h_{2}(k)*i) \bmod m$<br>
$h_{1}(k) = k \bmod m$<br>
$h_{2}(k) = 1 + (k \bmod (m-1))$<br>
The computation proceeds exactly as above, except that on a collision we use this new h(k, i).

| adr | value |
|-----|-------|
| 0   | 22    |
| 1   |       |
| 2   | 59    |
| 3   | 17    |
| 4   | 4     |
| 5   | 15    |
| 6   | 28    |
| 7   | 88    |
| 8   |       |
| 9   | 31    |
| 10  | 10    |
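The linear-probing insertion walked through above can be sketched in Python (a minimal sketch; the function name and the full-table error handling are my own choices, not from the text):

```python
def insert_linear(table, key):
    """Insert key into table (a fixed-size list, None = free slot)
    using linear probing: h(k, i) = (k mod m + i) mod m."""
    m = len(table)
    for i in range(m):
        addr = (key % m + i) % m
        if table[addr] is None:
            table[addr] = key
            return addr
    raise RuntimeError("hash table is full")

# replay the worked example: 9 elements, m = 11
m = 11
table = [None] * m
for key in [10, 22, 31, 4, 15, 28, 17, 88, 59]:
    insert_linear(table, key)

print(table)  # [22, 88, None, None, 4, 15, 28, 17, 59, 31, 10]
```

The printed table matches the linear-probing table above: 88 collides with 22 at address 0 and lands at 1, and 59 probes 4, 5, 6, 7 before landing at 8.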
# Mandala: self-managing experiments

## What is Mandala?

Mandala enables new, simpler patterns for working with complex and evolving computational experiments. It eliminates low-level code and decisions for how to save, load, query, delete and otherwise organize results. To achieve this, it lets computational code "manage itself" by organizing and addressing its own data storage.

```{admonition} Under construction
:class: warning
This project is under active development
```

### Features at a glance
- **concise**: code computations in pure Python (w/ control flow, collections, ...) -- results are automatically tracked and queriable
- **iterate rapidly**: add/edit parameters/logic and rerun code -- past results are loaded on demand, and only new computations are executed
- **pattern-match against Python code**: query across complex, branching projects by reusing computational code itself

### Quick start

#### Installation

```console
pip install git+https://github.com/amakelov/mandala
```

#### Recommended introductions
To build some understanding, check these out:
- 2-minute introduction: [intro to self-managing code](2mins)
- 10-minute introduction: [manage a small ML project](10mins)

#### Minimal working examples
If you want to jump right into code, below are a few minimal, somewhat interesting examples to play with and extend:

```
from typing import List
from mandala.all import *
set_logging_level('warning')

# create a storage for results
storage = Storage(in_memory=True)  # can also be persistent (on disk)

@op(storage)  # memoization decorator
def inc(x) -> int:
    return x + 1

@op(storage)
def mean(x: List[int]) -> float:
    # you can operate on / return collections of memoized results
    return sum(x) / len(x)

with run(storage):
    # calls inside `run` block are memoized
    nums = [inc(i) for i in range(5)]
    result = mean(nums)  # memoization composes through lists without copying data
    print(f'Mean of 5 nums: {result}')

# add logic/parameters directly on top of memoized code without re-doing past work
with run(storage, lazy=True):
    nums = [inc(i) for i in range(10)]
    result = mean(nums)

# walk over chains of calls without loading intermediate data
# to traverse storage and collect results flexibly
with run(storage, lazy=True):
    nums = [inc(i) for i in range(10)]
    result = mean(nums)
print(f'Reference to mean of 10 nums: {result}')
storage.attach(result)  # load the value in-place
print(f'Loaded mean of 10 nums: {result}')

# pattern-match to memoized compositions of calls
with query(storage) as q:
    # this may not make sense unless you read the tutorials
    i = Query()
    inc_i = inc(i).named('inc_i')
    nums = MakeList(containing=inc_i, at_index=0).named('nums')
    result = mean(nums).named('result')
    df = q.get_table(inc_i, nums, result)
df
```

## Why Mandala?

### Advantages
Compared to other tools for tracking and managing computations, the features that most set Mandala apart are the direct and concise patterns in which complex Python code can interact with its own storage. This manifests in several ways:
- **Python code as interface to its own storage**: you just write the code to compute what you want to compute (freely using Python's control flow and collections), and directly add more parameters and logic to it over time. Mandala takes care of the rest:
  - **the organization of storage mirrors the structure of code**, and Mandala provides you with the tools to make maximum use of this -- retracing memoized code with on-demand data loading, and declarative code-based pattern-matching.
  - this leads to **simple, intuitive and flexible ways to query and iterate on experiments**, even when their logic gets quite complex -- without any data organization efforts on your part.
  - it also allows you to **query relationships between any variables in your projects**, even when they are separated by many computational steps -- **without explicitly annotating these relationships**.
- **refactor code and data will follow**: Mandala makes it easy to apply familiar software refactorings to code *without* losing the relationship to this code's existing results. This gives you high-level tools to manage the complexity of both the code and its data as the project grows. - **organize all results and their relationships**: Mandala manages all the artifacts produced by computations, not just a set of human-readable metrics. It lets you use pure Python idioms to - compute with **data structures with shared substructure** - **index and view data in multiple ways** and on multiple levels of analysis without storage duplication. This gives you much flexibility in manipulating the contents of storage to express your intent. ### Comparisons Mandala takes inspiration from many other programming tools and concepts. Below is an (incomplete but growing) list of comparisons with relevant tools: - [algebraicjulia](https://www.algebraicjulia.org/): [conjunctive](https://www.algebraicjulia.org/blog/post/2020/12/cset-conjunctive-queries/) [queries](https://www.algebraicjulia.org/blog/post/2020/11/sql-as-hypergraph/) are integral to Mandala's declarative interface, and are generalized in several ways to make them practical for complex experiments: - a single table of values is used to enable polymorphism - operations on lists/dicts are integrated with query construction - queries can use the hierarchical structure of computations - constraints can be partitioned (to avoid interaction) while using some shared base (to enable code reuse) - dynamic query generation can use conditionals to enable disjunctive queries, and even loops (though this quickly becomes inefficient) - [koji](https://arxiv.org/abs/1901.01908) and [content-addressable computation](https://research.protocol.ai/publications/ipfs-fan-a-function-addressable-computation-network/delarocha2021a.pdf): Mandala uses causal hashing to - ensure correct, deterministic and idempotent behavior; - avoid hashing 
large (or unhashable) Python objects; - avoid discrepancies between object hashes across library versions Mandala can be thought of as a single-node, Python-only implementation of general-purpose content-addressable computation with two extra features: - hierarchical organization of computation, - declarative queries - [funsies](https://github.com/aspuru-guzik-group/funsies) is a workflow engine for Python scripts that also uses causal hashing. Mandala differs by integrating more closely with Python (by using functions instead of scripts as the units of work), and thus enabling more fine-grained control and expressiveness over what gets computed and how. - [joblib.Memory](https://joblib.readthedocs.io/en/latest/memory.html#memory) implements persistent memoization for Python functions that overcomes some of the issues naive implementations have with large and complex Python objects. Mandala augments `joblib.Memory` in some key ways: - memoized calls can be queried/deleted declaratively - collections and memoized functions calling other memoized functions can reuse storage - you can modify and refactor memoized functions while retaining connection to memoized calls - you can avoid the latency of hashing large/complex objects - [incpy](https://dl.acm.org/doi/abs/10.1145/2001420.2001455?casa_token=ahM2UC4Uk-4AAAAA:9lZXVDS7nYEHzHPJk-UCTOAICGb2astAh2hrL00VB125nF6IGG90OwA-ujbe-cIg2hT4T1MOpbE2) augments the Python interpreter with automatic persistent memoization. Mandala also enables automatic persistent memoization, but it is different from `incpy` in some key ways: - uses decorators to explicitly designate memoized functions (which can be good or bad depending on your goals) - allows for lazy retracing of memoized calls - provides additional features like the ones mentioned in the comparison with `joblib.Memory` ### Philosophy When can we declare data management for computational experiments a solved problem? 
It's unclear how to turn this question into a measurable goal, but there is a somewhat objective *lower bound* on how simple data management can get: > At the end of the day, we have to *at least* write down the (Python) code to express > the computations we want to run, *regardless* of data management concerns. > Can this be *all* the code we have to write, and *still* be able to achieve > the goals of data management? Mandala aims to bring us to this idealized lower bound. It adopts the view that Python itself is flexible and expressive enough to capture our intentions about experiments. There shouldn't be a ton of extra interfaces, concepts and syntax between your thoughts, their expression in code, and its results. By mirroring the structure of computational code in the organization of data, and harmoniously extending Python's tools for capturing intention and managing complexity, we can achieve a more flexible, natural and immediate way to interact with computations. This echoes the design goals of some other tools. For example, [dask](https://dask.org) and [ray](https://ray.io) (both of which Mandala integrates with) aim to let you write Python code the way you are used to, and take care of parallelization for you. ## Limitations This project is under active development, and not ready for production. Its goal so far has been to demonstrate that certain high-level programming patterns are viable by building a sufficiently useful working prototype. Limitations can be summarized as follows: - it is easy to get started, but effective use in complex projects requires some getting used to; - much of the code does what it does in very simple and often inefficient ways; - interfaces and (more importantly) storage formats may change in backward incompatible ways. - bugs likely still exist; That being said, Mandala is already quite usable in many practical situations. 
Below is a detailed outline of current limitations you should be aware of if you consider using this library in your work.

### "Missing" features
There are some things you may be used to seeing in projects like this that currently don't exist:
- **functions over scripts**: Mandala focuses on functions as the basic building blocks of experiments as opposed to Python scripts. There is no fundamental conceptual distinction between the two, but:
  - functions provide a better-behaved interface, especially when it comes to typing, refactoring, and hierarchical organization
  - using functions makes it much easier to use projects such as [ray](https://www.ray.io/) and [dask](https://dask.org/) alongside Mandala
  - if you don't need to do something extra complicated involving different Python processes or virtual environments, it is easy to wrap a script as a function that takes in some settings and resource descriptions (e.g., paths to input files) and returns other resource descriptions (e.g., paths to output files). However, the burden of refactoring the script's interface manually and organizing its input/output resources would still be on you. So, always use a function where you can.
- **no integration with git**: version control data is not automatically included in Mandala's records at this point, though this would be an easy addition. There are other programming patterns available for working with multiple versions of code.
- **no GUI**: for now, the library leans heavily towards using computational code itself as a highly programmable interface to results, and visualization is left to other tools.
Such queries may take a very long time or run out of RAM even for moderately-sized projects (`sqlite` will usually complain about this at the start of the query). Certain ways to define and compose memoized functions promote such queries, so a good understanding of this issue may be needed depending on the project. - **deletions**: deleting anything from storage is subject to invariants that prevent the existence of "mysterious" objects (ones without a computational history tracing back to user inputs) from existing. This means that you must understand well how deletion works to avoid deleting more things than you really intend. ### Performance The library has not been optimized much for performance. A few things to keep in mind for now: - When using disk-based persistence, Mandala introduces an overhead of a few 10s of ms for each call to a memoized function, on top of any work to serialize inputs/outputs and run the function. - Storing and loading large collections can be slow (a list of 1000 integers already leads to a visible ~1s delay)
# notebook for processing fully reduced m3 data "triplets"

This is a notebook for processing L0 / L1B / L2 triplets (i.e., the observations that got reduced).

## general notes

We process the reduced data in triplets simply to improve the metadata on the L0 and L2 products. We convert L1B first to extract several attributes to fill out their metadata. This data is scratched to disk in [./directories/m3/m3_index.csv](./directories/m3/m3_index.csv), because it also serves as a useful user-facing index to the archive. A complete version of this index is provided in this repository, but this index was originally created during this conversion process, and will be recreated if you run it again. This index is read into the ```m3_index``` variable below; its path is also soft-coded in several ```m3_conversion``` classes, so make sure you change that or feed them the correct path as an argument if you change this location.

This notebook does not apply programmatic rules to iterate over the file structure of the mirrored archive. It uses an index that was partly manually generated: [/src/directories/m3/m3_data_mappings.csv](/src/directories/m3/m3_data_mappings.csv). This was manually manipulated to manage several small idiosyncrasies in the PDS3 archive. 35 of the V3 L1B products in the PDS3 archive are duplicated: one copy in the correct month-by-year directory, and one copy in some incorrect month-by-year directory. We pick the 'first' one in all cases (see the line ```pds3_label_file = input_directory + group_files[product_type][0]``` below). Each pair's members have identical md5sums, so it *probably* doesn't matter which member of the pair we use.

## performance tips

The most likely bottlenecks for this process are I/O throughput and CPU. We recommend both using a high-throughput disk and parallelizing this, either using ```pathos``` (vanilla Python ```multiprocessing``` will probably fail during a pickling step) or simply by running multiple copies of this notebook.
If you do parallelize this process on a single machine, note that working memory can suddenly catch you off-guard as a constraint. While many of the M3 observational data files are small, some are over 4 GB, and the method presented here requires them to be completely loaded into memory in order to convert them to FITS and strip the prefix tables from the L0 files. When passed ```clean=True```, the ```m3_converter``` observational data writer class constructors aggressively delete data after using it, but this still results in a pretty high -- and spiky -- working memory burden.

```
import datetime as dt
import os
from types import MappingProxyType

from more_itertools import distribute
import pandas as pd
import sh

from m3_bulk import basenamer, make_m3_triplet, \
    m3_triplet_bundle_paths, crude_time_log, fix_end_object_tags
from m3_conversion import M3L0Converter, M3L1BConverter, M3L2Converter
from pvl.decoder import ParseError

m3_index = pd.read_csv('./directories/m3/m3_index.csv')

# directory of file mappings, grouped into m3 basename clusters
file_mappings = pd.read_csv('./directories/m3/m3_data_mappings.csv')
file_mappings["basename"] = file_mappings["filepath"].apply(basenamer)
basename_groups = list(file_mappings.groupby("basename"))

# what kind of files does each pds4 product have?
# paths to the locally-written versions are stored in the relevant attributes of
# the associated PDSVersionConverter instance.
pds4_filetypes = MappingProxyType({
    'l0': ('pds4_label_file', 'clock_file', 'fits_image_file'),
    'l1b': ('pds4_label_file', 'loc_file', 'tim_file', 'rdn_file', 'obs_file'),
    'l2': ('pds4_label_file', 'sup_file', 'rfl_file')
})

# root directories of PDS3 and PDS4 data sets respectively
input_directory = '/home/ubuntu/m3_input/'
output_directory = '/home/ubuntu/m3_output/'

# all the triplets: what we are converting here.
reduced_groups = [group for group in basename_groups if len(group[1]) >= 3] # the edr_groups = [group for group in basename_groups if len(group[1]) == 1] # lonesome EDR images triplet_product_types = ('l1b', 'l0', 'l2') # initialize our mapping of product types to # product-writer class constructors. # MappingProxyType is just a safety mechanism # to make sure constructors don't get messed with converters = MappingProxyType({ 'l0': M3L0Converter, 'l1b': M3L1BConverter, 'l2': M3L2Converter }) writers = {} # dict to hold instances of the converter classes # initialize iteration, control execution in whatever way # this is a place to split your index up however you like # if you're parallelizing using multiple copies of this # notebook. chunk_ix_of_this_notebook = 0 total_chunks = 40 chunks = distribute(total_chunks, reduced_groups) # eagerly evaluate so we know how long it is, # and what all is in it if we have an error chunk = list(chunks[chunk_ix_of_this_notebook]) log_string = "_" + str(chunk_ix_of_this_notebook) group_enumerator = enumerate(chunk) for ix, group in group_enumerator: print(ix, len(chunk)) print("beginning product conversion") triplet_start_time = dt.datetime.now() group_files = make_m3_triplet(group) # what are the correct output paths (relative to # the root of the pds4 bundle) for these products? 
bundle_paths = m3_triplet_bundle_paths(group) for product_type in triplet_product_types: # read the PDS3 product and perform file conversions pds3_label_file = input_directory + group_files[product_type][0] try: writers[product_type] = converters[product_type]( pds3_label_file, suppress_warnings=True, clean=True ) except ParseError: # fix broken END_OBJECT tags in some of the target-mode files print("fixing broken END_OBJECT tags") temp_label_file = fix_end_object_tags(pds3_label_file) writers[product_type] = converters[product_type]( temp_label_file, suppress_warnings=True, clean=True ) os.remove(temp_label_file) # write PDS4 label and product files # don't actually need to shave the extra / here but... # this would be more safely rewritten with PyFilesystem # (see clem-conversion) output_path = output_directory + bundle_paths[product_type][1:] sh.mkdir("-p", output_path) writers[product_type].write_pds4(output_path, write_product_files=True, clean=True) # occasionally (slow but very useful) spot-check with validate tool # note that this just invokes a one-line script at /usr/bin/validate # that links to the local install of the PDS Validate Tool; this # allows us to avoid throwing java stuff all over our environment if ix % 20 == 1: print("1-mod-20th triplet: running Validate Tool") validate_results = sh.validate("-t", writers[product_type].pds4_label_file) with open("validate_dump.txt", "a") as file: file.write(validate_results.stdout.decode()) print("validated successfully") # log transfer crudely crude_time_log( "m3_data_conversion_log" + log_string + ".csv", writers[product_type], str((dt.datetime.now() - triplet_start_time).total_seconds()) ) print( "done with this triplet; total seconds " + str((dt.datetime.now() - triplet_start_time).total_seconds()) ) ```
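The chunking step above (```chunks = distribute(total_chunks, reduced_groups)```) deals the triplet groups out round-robin, one sub-iterable per notebook copy. A stdlib-only sketch of the same idea — the helper name here is ours, not part of ```more_itertools```, and it is eager where ```distribute``` returns lazy sub-iterables:

```python
def distribute_round_robin(n, items):
    """Deal items into n groups round-robin, approximating
    more_itertools.distribute (but eagerly evaluated)."""
    groups = [[] for _ in range(n)]
    for i, item in enumerate(items):
        groups[i % n].append(item)
    return groups

# e.g., 10 triplet groups split across 4 notebook copies
chunks = distribute_round_robin(4, list(range(10)))
# a copy with chunk_ix_of_this_notebook = 0 would process chunks[0]
```

Running several copies of the notebook, each with a distinct ```chunk_ix_of_this_notebook```, covers every group exactly once.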
``` import os from ipywidgets import Output, HBox, Layout import jupyter_cadquery icon_path = os.path.join(os.path.dirname(jupyter_cadquery.__file__), "icons") ``` # ipywidgets ``` from ipywidgets import interact, interactive, fixed, interact_manual import ipywidgets as widgets def f(x): return x interact(f, x=10); ``` # pythreejs ``` from pythreejs import * BoxGeometry( width=5, height=10, depth=15, widthSegments=5, heightSegments=10, depthSegments=15) ``` # Sidecar ``` from sidecar import Sidecar from ipywidgets import IntSlider sc = Sidecar(title='Sidecar Output') sl = IntSlider(description='Some slider') with sc: display(sl) ``` # Image Button ``` from jupyter_cadquery.widgets import ImageButton output = Output() def handler(out): def f(b): with out: print("Pressed", b.type) return f def create_button(icon): button = ImageButton( width=36, height=28, image_path="%s/%s.png" % (icon_path, icon), tooltip="Change view to %s" % icon, type=icon ) button.on_click(handler(output)) return button button1 = create_button("fit") button2 = create_button("isometric") HBox([button1, button2, output]) ``` # Tree View ``` from ipywidgets import Checkbox, Layout, HBox, Output from jupyter_cadquery.widgets import TreeView, UNSELECTED, SELECTED, MIXED, EMPTY, state_diff tree = { 'type': 'node', 'name': 'Root', 'id': "n1", 'children': [ {'type': 'leaf', 'name': 'Red box', 'id': "R", 'color': 'rgba(255, 0, 0, 0.6)'}, {'type': 'node', 'name': 'Sub', 'id': "n2", 'children': [ {'type': 'leaf', 'name': 'Green box', 'id': "G", 'color': 'rgba(0, 255, 0, 0.6)'}, {'type': 'leaf', 'name': 'blue box', 'id': "B", 'color': 'rgba(0, 0, 255, 0.6)'}]}, {'type': 'leaf', 'name': 'Yellow box', 'id': "Y", 'color': 'rgba(255, 255, 0, 0.6)'} ]} state = { "R": [EMPTY, UNSELECTED], "G": [UNSELECTED, SELECTED], "B": [SELECTED, UNSELECTED], "Y": [SELECTED, SELECTED] } image_paths = [ {UNSELECTED: "%s/no_shape.png" % icon_path, SELECTED: "%s/shape.png" % icon_path, MIXED: "%s/mix_shape.png" % icon_path, 
EMPTY: "%s/empty.png" % icon_path}, {UNSELECTED: "%s/no_mesh.png" % icon_path, SELECTED: "%s/mesh.png" % icon_path, MIXED: "%s/mix_mesh.png" % icon_path, EMPTY: "%s/empty.png" % icon_path} ] height = "300px" output = Output(layout=Layout(height=height, width="800px", overflow_y="scroll", overflow_x="scroll")) output.add_class("mac-scrollbar") def handler(out): def f(states): diff = state_diff(states.get("old"), states.get("new")) with out: #print(states.get("old")) #print(states.get("new")) print(diff) return f t = TreeView(image_paths=image_paths, tree=tree, state=state, layout=Layout(height=height, width="200px", overflow_y="scroll", overflow_x="scroll")) t.add_class("mac-scrollbar") t.observe(handler(output), "state") HBox([t, output]) ```
# Measure Watson Assistant Performance

![overall_measure.png](https://raw.githubusercontent.com/watson-developer-cloud/assistant-improve-recommendations-notebook/master/notebook/imgs/measure_overall.png)

## Introduction

This notebook demonstrates how to set up automated metrics that help you measure, monitor, and understand the behavior of your Watson Assistant system. As described in <a href="https://github.com/watson-developer-cloud/assistant-improve-recommendations-notebook/raw/master/notebook/IBM%20Watson%20Assistant%20Continuous%20Improvement%20Best%20Practices.pdf" target="_blank" rel="noopener noreferrer">Watson Assistant Continuous Improvement Best Practices</a>, this is the first step of your continuous improvement process. The goal of this step is to understand where your assistant is doing well vs where it isn't, and to potentially focus your improvement effort on one of the problem areas identified. We define two measures to achieve this goal: **Coverage** and **Effectiveness**.

- **Coverage** is the portion of total user messages your assistant is attempting to respond to.
- **Effectiveness** refers to how well your assistant is handling the conversations it is attempting to respond to.

The pre-requisite for running this notebook is Watson Assistant (formerly Watson Conversation). This notebook assumes familiarity with Watson Assistant and concepts such as workspaces, intents and training examples.

### Programming language and environment

Some familiarity with Python is recommended. This notebook runs on Python 3.5 with the Default Python 3.5 XS environment.

***

## Table of contents

1. [Configuration and setup](#setup)<br>
1.1 [Import and apply global CSS styles](#css)<br>
1.2 [Install required Python libraries](#python)<br>
1.3 [Import functions used in the notebook](#function)<br>
2.
[Load and format data](#load)<br> 2.1 [Option one: from a Watson Assistant instance](#load_remote)<br> 2.2 [Option two: from JSON files](#load_local)<br> 2.3 [Format the log data](#format_data)<br> 3. [Define coverage and effectiveness metrics](#set_metrics)<br> 3.1 [Customize coverage](#set_coverage)<br> 3.2 [Customize effectiveness](#set_effectiveness)<br> 4. [Calculate overall coverage and effectiveness](#overall)<br> 4.1 [Calculate overall metrics](#overall1)<br> 4.2 [Display overall results](#overall2)<br> 5. [Analyze coverage](#msg_analysis)<br> 5.1 [Display overall coverage](#msg_analysis1)<br> 5.2 [Calculate coverage over time](#msg_analysis2)<br> 6. [Analyze effectiveness](#conv_analysis)<br> 6.1 [Generate excel file and upload to our project](#conv_analysis1)<br> 6.2 [Plot breakdown by effectiveness graph](#conv_analysis2)<br> 7. [Root cause analysis of non coverage](#root_cause)<br> 8. [Abandoned and resolved intent analysis](#abandoned_resolved_intents)<br> 8.1 [Count of all started intents](#started_intents)<br> 8.2 [Analyze resolved intents](#resolved_intents)<br> 8.3 [Analyze abandoned intents](#abandoned_intents)<br> 9. [Summary and next steps](#summary)<br> <a id="setup"></a> ## 1. Configuration and Setup In this section, we add data and workspace access credentials, import required libraries and functions. ### <a id="css"></a> 1.1 Import and apply global CSS styles ``` from IPython.display import HTML !curl -O https://raw.githubusercontent.com/watson-developer-cloud/assistant-improve-recommendations-notebook/master/src/main/css/custom.css HTML(open('custom.css', 'r').read()) ``` ### <a id="python"></a> 1.2 Install required Python libraries ``` # install watson-developer-cloud python SDK # After running this cell once, comment out the following code. Packages only need to be installed once. 
!pip3 install --upgrade "watson-developer-cloud>=1.0,<2.0"; # Import required libraries import pandas as pd import matplotlib.pyplot as plt import json from pandas.io.json import json_normalize from watson_developer_cloud import AssistantV1 import matplotlib.dates as mdates import re from IPython.display import display ``` ### <a id="function"></a> 1.3 Import functions used in the notebook ``` # Import function module files !curl -O https://raw.githubusercontent.com/watson-developer-cloud/assistant-improve-recommendations-notebook/master/src/main/python/cos_op.py !curl -O https://raw.githubusercontent.com/watson-developer-cloud/assistant-improve-recommendations-notebook/master/src/main/python/watson_assistant_func.py !curl -O https://raw.githubusercontent.com/watson-developer-cloud/assistant-improve-recommendations-notebook/master/src/main/python/visualize_func.py !curl -O https://raw.githubusercontent.com/watson-developer-cloud/assistant-improve-recommendations-notebook/master/src/main/python/computation_func.py # Import the visualization related functions from visualize_func import make_pie from visualize_func import coverage_barh from visualize_func import width_bar # Import Cloud Object Storage related functions from cos_op import generate_link from cos_op import generate_excel_measure # Import Watson Assistant related functions from watson_assistant_func import get_logs_jupyter # Import Dataframe computation related functions from computation_func import get_effective_df from computation_func import get_coverage_df from computation_func import chk_is_valid_node from computation_func import format_data ``` ## <a id="load"></a> 2. Load and format data ### <a id="load_remote"></a> 2.1 Option one: from a Watson Assistant instance #### 2.1.1 Add Watson Assistant configuration Provide your Watson Assistant credentials and the workspace id that you want to fetch data from. 
- For more information about obtaining Watson Assistant credentials, see [Service credentials for Watson services](https://console.bluemix.net/docs/services/watson/getting-started-credentials.html#creating-credentials).
- API requests require a version parameter that takes a date in the format version=YYYY-MM-DD. For more information about version, see [Versioning](https://www.ibm.com/watson/developercloud/assistant/api/v1/curl.html?curl#versioning).

```
# Provide credentials to connect to assistant
creds = {'username':'YOUR_USERNAME', 'password':'YOUR_PASSWORD', 'version':'2018-05-03'}

# Connect to Watson Assistant
conversation = AssistantV1(username=creds['username'], password=creds['password'], version=creds['version'])

# You only need to give api key and url to connect. If you can connect using this, you will not need the above code
# conversation = AssistantV1(version='2019-04-28',
#                            iam_apikey='',
#                            url='')
```

#### 2.1.2 Fetch and load a workspace

Fetch the workspace for the workspace id given in the `workspace_id` variable.

```
# Provide the workspace id you want to analyze
workspace_id = ''

if len(workspace_id) > 0:
    # Fetch the workspace info for the input workspace id
    workspace = conversation.get_workspace(workspace_id = workspace_id, export=True)

    # Store the workspace details in a dataframe
    df_workspace = json_normalize(workspace)

    # Get all intents present in current version of workspace
    workspace_intents = [intent['intent'] for intent in df_workspace['intents'].values[0]]

    # Get all dialog nodes present in current version of workspace
    workspace_nodes = pd.DataFrame(df_workspace['dialog_nodes'].values[0])

    # Mark the workspace loaded
    workspace_loaded = True
else:
    workspace_loaded = False
```

#### 2.1.3 Fetch and load workspace logs

Fetch the logs for the workspace id given in the `workspace_id` variable.
Any necessary filter can be specified in the `filter` variable.<br>
Note that if the logs were already fetched in a previous run, they will be read from a cache file.

```
if len(workspace_id) > 0:
    # Filter to be applied while fetching logs, e.g., removing empty input:
    # 'meta.summary.input_text_length_i>0', 'response_timestamp>=2018-09-18'
    filter = 'meta.summary.input_text_length_i>0'

    # Send this info into the get_logs function
    workspace_creds={'sdk_object':conversation, 'ws_id':workspace['workspace_id'], 'ws_name':workspace['name']}

    # Fetch the logs for the workspace
    df = get_logs_jupyter(num_logs=10000, log_list=[], workspace_creds=workspace_creds, log_filter=filter)

    # Mark the logs loaded
    logs_loaded = True
else:
    logs_loaded = False
```

### <a id="load_local"></a> 2.2 Option two: from JSON files

#### 2.2.1 Load a workspace JSON file

```
if not workspace_loaded:
    # The following code is for using demo workspace
    import requests
    print('Loading workspace data from Watson developer cloud Github repo ... 
', end='')
    workspace_data = requests.get("https://raw.githubusercontent.com/watson-developer-cloud/assistant-improve-recommendations-notebook/master/notebook/data/workspace.json").text
    print('completed!')
    df_workspace = json_normalize(json.loads(workspace_data))

    # Specify a workspace JSON file
    # workspace_file = 'workspace.json'

    # Store the workspace details in a dataframe
    # df_workspace = json_normalize(json.load(open(workspace_file)))

    # Get all intents present in current version of workspace
    workspace_intents = [intent['intent'] for intent in df_workspace['intents'].values[0]]

    # Get all dialog nodes present in current version of workspace
    workspace_nodes = pd.DataFrame(df_workspace['dialog_nodes'].values[0])
```

#### 2.2.2 Load a log JSON file

```
if not logs_loaded:
    # The following code is for using demo logs
    import requests
    print('Loading demo log data from Watson developer cloud Github repo ... 
', end='')
    log_raw_data = requests.get("https://raw.githubusercontent.com/watson-developer-cloud/assistant-improve-recommendations-notebook/master/notebook/data/sample_logs.json").text
    print('completed!')
    df = pd.DataFrame.from_records(json.loads(log_raw_data))

    # # The following code is for loading your log file
    # # Specify a log JSON file
    # log_file = 'sample_logs.json'

    # # Create a dataframe for logs
    # df = pd.DataFrame.from_records(json.load(open(log_file)))
```

### <a id="format_data"></a> 2.3 Format the log data

```
# Format the logs data from the workspace
df_formated = format_data(df)
```

<a id="set_metrics"></a>
## 3. Define coverage and effectiveness metrics

As described in Watson Assistant Continuous Improvement Best Practices, **Coverage** and **Effectiveness** are the two measures that provide a reliable understanding of your assistant's overall performance. Both measures are customizable based on your preferences. In this section, we provide a guideline for setting each of them.

### <a id="set_coverage"></a> 3.1 Customize coverage

Coverage measures your Watson Assistant system at the utterance level. You may include automated metrics that help identify utterances that your service is not answering. Example metrics include:

- Confidence threshold
- Dialog information

For Confidence threshold, you can set a threshold to include utterances with confidence values below this threshold. For more information regarding Confidence, see [Absolute scoring](https://console.bluemix.net/docs/services/conversation/intents.html#absolute-scoring-and-mark-as-irrelevant).

For Dialog information, you can specify what the notebook should look for in your logs to determine that a message is not covered by your assistant.

- Use the node_ids list to include the identifiers of any dialog nodes you've used to model that a message is out of scope.
- Similarly, use the node_names list to include the names of any such dialog nodes.
- Use node_conditions for dialog conditions that indicate a message is out of scope.

Note that these lists are treated as "OR" conditions - any occurrence of any of them will signify that a message is not covered.

__Where to find node id, node name, and node condition?__ You can find the values of these variables in your workspace JSON file based on the following mappings.

- node id: `dialog_node`
- node name: `title`
- node condition: `conditions`

You can also find node name and node condition in your workspace dialog editor. For more information, see [Dialog Nodes](https://console.bluemix.net/docs/services/conversation/dialog-overview.html#dialog-nodes).

Below we provide example code for identifying coverage based on confidence and dialog node.

```
# Specify the confidence threshold you want to look for in the logs
confidence_threshold = .20

# Add coverage node ids, if any, to list
node_ids = ['node_1_1467910920863', 'node_1_1467919680248']

# Add coverage node names, if any, to list
node_names = []

# Add coverage node conditions, if any, to list
node_conditions = ['#out_of_scope || #off_topic', 'anything_else']

# Check if the dialog nodes are present in the current version of workspace
df_coverage_nodes = chk_is_valid_node(node_ids, node_names, node_conditions, workspace_nodes)
df_coverage_nodes
```

### <a id="set_effectiveness"></a> 3.2 Customize effectiveness

Effectiveness measures your Watson Assistant system at the conversation level. You may include automated metrics that help identify problematic conversations. Example metrics include:

- Escalations to live agent: conversations escalated to a human agent for quality reasons.
- Poor NPS: conversations that received a poor NPS (Net Promoter Score), or other explicit user feedback.
- Task not completed: conversations that failed to complete the task the user was attempting.
- Implicit feedback: conversations containing implicit feedback that suggests failure, such as links provided not being clicked.
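As a toy illustration of how the utterance-level coverage flag and a conversation-level effectiveness flag combine, here is a minimal sketch; the dataframe and column names are invented for this example, not the schema produced by `format_data`:

```python
import pandas as pd

# invented example logs: one row per user message
logs = pd.DataFrame({
    "conversation_id":      ["a", "a", "b", "b", "c"],
    "confidence":           [0.95, 0.60, 0.10, 0.80, 0.05],
    "hit_escalation_node":  [False, False, False, True, False],
})

confidence_threshold = 0.20

# utterance level: a message counts as covered if confidence clears the threshold
logs["covered"] = logs["confidence"] >= confidence_threshold
coverage = logs["covered"].mean()  # fraction of messages the assistant attempted

# conversation level: a conversation counts as ineffective if any message escalated
escalated = logs.groupby("conversation_id")["hit_escalation_node"].any()
effectiveness = 1 - escalated.mean()  # fraction of conversations handled well
```

In the real notebook these flags are computed by `get_coverage_df` and `get_effective_df` using the node lists and threshold defined above.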
Below we provide example code for identifying escalations based on intents and dialog information.

#### <a id="set_escalation_intent"></a> 3.2.1 Specify intents to identify escalations

If you have specific intents that point to escalation or any other effectiveness measure, specify them in the `chk_effective_intents` list below. <br> **Note:** If you don't have specific intents to capture effectiveness, leave the chk_effective_intents list empty.

```
# Add your escalation intents to the list
chk_effective_intents = ['connect_to_agent']

# Store the intents in a dataframe
df_chk_effective_intents = pd.DataFrame(chk_effective_intents, columns=['Intent'])
# Add a 'valid' flag to the dataframe
df_chk_effective_intents['Valid'] = True
# Add count column for selected intents
df_chk_effective_intents['Count'] = 0

# Check the validity of the specified intents. Look out for the `Valid` column in the table displayed below.
# Iterate over a copy of the list, because removing items from a list
# while iterating over it would skip elements.
for intent in list(chk_effective_intents):
    # Check if intent is present in workspace
    if intent not in workspace_intents:
        # If not present, mark it as 'not valid'
        df_chk_effective_intents.loc[df_chk_effective_intents['Intent'] == intent, ['Valid']] = False
        # Remove intent from the chk_effective_intents list
        chk_effective_intents.remove(intent)
    else:
        # Calculate number of times each intent is hit
        count = df_formated.loc[df_formated['response.top_intent_intent'] == intent]['log_id'].nunique()
        df_chk_effective_intents.loc[df_chk_effective_intents['Intent'] == intent, ['Count']] = count

# Display intents and validity
df_chk_effective_intents
```

#### <a id="set_escalation_dialog"></a> 3.2.2 Specify dialog nodes to identify escalations

If you have specific dialog nodes that point to escalation or any other effectiveness measure, you can automatically capture them based on three variables: node id, node name, and node condition.

- Use the node_ids list to include the identifiers of any dialog nodes you've used to model that a message indicates an escalation.
- Similarly, use the node_names list to include the names of any such dialog nodes.
- Use node_conditions for dialog conditions that indicate an escalation.

Note that these lists are treated as "OR" conditions - any occurrence of any of them will signify that a conversation is escalated.

__Where to find node id, node name, and node condition?__ You can find the values of these variables in your workspace JSON file based on the following mappings:

- node id: `dialog_node`
- node name: `title`
- node condition: `conditions`

You can also find node names and node conditions in your workspace dialog editor. For more information, see [Dialog Nodes](https://console.bluemix.net/docs/services/conversation/dialog-overview.html#dialog-nodes).

**Note:** If your assistant does not incorporate escalations and you do not have any other automated conversation-level quality metrics to identify problematic conversations (e.g., poor NPS, task not completed), you can simply track coverage and average confidence over a recent sample of your entire production logs. Leave an empty list for node_ids, node_names and node_conditions.

```
# Add effectiveness node ids, if any, to list
node_ids = []
# Add effectiveness node names, if any, to list
node_names = ['not_trained']
# Add effectiveness node conditions, if any, to list
node_conditions = ['#connect_to_agent', '#answer_not_helpful']

# If your assistant does not incorporate escalations and you do not have any
# other automated conversation-level quality metrics, uncomment lines below
# node_ids = []
# node_names = []
# node_conditions = []

# Check if the dialog nodes are present in the current version of workspace
df_chk_effective_nodes = chk_is_valid_node(node_ids, node_names, node_conditions, workspace_nodes)
df_chk_effective_nodes
```

## 4. Calculate overall coverage and effectiveness<a id="overall"></a>

The combination of effectiveness and coverage is very powerful for diagnostics.
If your effectiveness and coverage metrics are high, it means that your assistant is responding to most inquiries and responding well. If either effectiveness or coverage is low, the metrics provide you with the information you need to start improving your assistant.

### 4.1 Calculate overall metrics<a id="overall1"></a>

```
df_formated_copy = df_formated.copy(deep=True)

# Mark if a message is covered and store results in df_coverage dataframe
df_coverage = get_coverage_df(df_formated_copy, df_coverage_nodes, confidence_threshold)

# Mark if a conversation is effective and store results in df_effective dataframe
df_effective = get_effective_df(df_formated_copy, chk_effective_intents, df_chk_effective_nodes, filter_non_intent_node=True, workspace_nodes=workspace_nodes)

# Calculate average confidence
avg_conf = float("{0:.2f}".format(df_coverage[df_coverage['Covered']==True]['response.top_intent_confidence'].mean()*100))

# Calculate coverage
coverage = float("{0:.2f}".format((df_coverage['Covered'].value_counts().to_frame()['Covered'][True]/df_coverage['Covered'].value_counts().sum())*100))

# Calculate effectiveness
effective_perc = float("{0:.2f}".format((df_effective.loc[df_effective['Escalated_conversation']==False]['response.context.conversation_id'].nunique()/df_effective['response.context.conversation_id'].nunique())*100))

# Plot pie graphs for coverage and effectiveness
coverage_pie = make_pie(coverage, "Percent of total messages covered")
effective_pie = make_pie(effective_perc, 'Percent of non-escalated conversations')
```

The messages displayed alongside the coverage and effectiveness metrics are given below.

```
# Messages to be displayed with effectiveness and coverage
coverage_msg = '<h2>Coverage</h2></br>A message that is not covered would either be a \
message your assistant responded to with some form \
of “I’m not trained” or that it immediately handed over \
to a human agent without attempting to respond'
effectiveness_msg =
'<h2>Effectiveness</h2></br>This notebook provides a list of metrics customers \
can use to assess how effective their assistant is at \
responding to conversations'
```

### 4.2 Display overall results<a id="overall2"></a>

```
# Display the coverage and effectiveness pie charts
HTML('<tr><th colspan="4"><div align="center"><h2>Coverage and Effectiveness<hr/></h2></div></th></tr>\
<tr>\
<td style="width:500px">{c_pie}</td>\
<td style="width:450px"><div align="left"> {c_msg} </div></td>\
<td style="width:500px">{e_pie}</td>\
<td style="width:450px"><div align="left"> {e_msg} </div></td>\
</tr>'.format(c_pie=coverage_pie, c_msg=coverage_msg, e_pie=effective_pie, e_msg=effectiveness_msg))
```

Here, we can see our assistant's coverage and effectiveness. We will have to take a deeper look at both of these metrics to understand the nuances and decide where we should focus next.

Note the distinction between a user message and a conversation. A conversation in Watson Assistant represents a session of one or more messages from a user and the associated responses returned to the user from the assistant. A conversation includes a conversation id for the purpose of grouping a sequence of messages and responses.

<a id="msg_analysis"></a>
## 5. Analyze coverage

Here, we take a deeper look at the coverage of our Watson Assistant.
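Before diving deeper, note that the coverage and effectiveness numbers computed above are simple ratios. A minimal pandas-free sketch of the arithmetic (the flag values below are made up, standing in for the `Covered` column and the per-conversation `Escalated_conversation` flag):

```python
# Pandas-free sketch of the percentage arithmetic used above.
covered_flags = [True, True, False, True, False, True]      # one per message
escalated = {'conv1': True, 'conv2': False, 'conv3': False}  # per conversation

# Coverage: percent of messages marked covered
coverage = round(100 * sum(covered_flags) / len(covered_flags), 2)

# Effectiveness: percent of conversations that were NOT escalated
effective_perc = round(100 * sum(not e for e in escalated.values())
                       / len(escalated), 2)

print(coverage, effective_perc)  # 66.67 66.67
```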
### 5.1 Display overall coverage<a id="msg_analysis1"></a> ``` %matplotlib inline # Compute the number of conversations in the log convs = df_coverage['response.context.conversation_id'].nunique() # Compute the number of messages in the log msgs = df_coverage['response.context.conversation_id'].size #Display the results print('Overall messages\n', "=" * len('Overall messages'), '\nTotal Conversations: ', convs, '\nTotal Messages: ', msgs, '\n\n', sep = '') #Display the coverage bar chart display(coverage_barh(coverage, avg_conf, 'Coverage & Average confidence', False)) ``` Here, we see the percentage of messages covered and their average confidence. Now, let us take a look at the coverage over time. ### 5.2 Calculate coverage over time<a id="msg_analysis2"></a> ``` # Make a copy of df_coverage dataframe df_Tbot_raw1 = df_coverage.copy(deep=True) # Group by date and covered and compute the count covered_counts = df_Tbot_raw1[['Date','Covered']].groupby(['Date','Covered']).agg({'Covered': 'count'}) # Convert numbers to percentage coverage_grp = covered_counts.groupby(level=0).apply(lambda x:round(100 * x / float(x.sum()),2)).rename(columns = {'Covered':'Coverage'}).reset_index() # Get only covered messages coverage_time = coverage_grp[coverage_grp['Covered']==True].reset_index(drop = True) # Determine the number of xticks required xticks = [d for d in coverage_time['Date']] # Plot the coverage over time graph fig, ax = plt.subplots(figsize=(30,8)) # Format the date on x-axis ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d')) ax.xaxis_date() ax.set_xticks(xticks) # Rotate x-axis labels for ax in fig.axes: plt.sca(ax) plt.xticks(rotation = 315) # Plot a line plot if there are more data points if len(coverage_time) >1: ax.plot_date(coverage_time['Date'], coverage_time['Coverage'], fmt = '-', color = '#4fa8f6', linewidth=6) # Plot a scatter plot if there is only one date on x-axis else: ax.plot_date(coverage_time['Date'], coverage_time['Coverage'], color = 
'#4fa8f6', linewidth=6) # Set axis labels and title ax.set_xlabel("Time", fontsize=20, fontweight='bold') ax.set_ylabel("Coverage %", fontsize=20, fontweight='bold') ax.set_title('Coverage over time', fontsize=25, fontweight = 'bold') ax.tick_params(axis='both', labelsize=15) # Hide the right and top spines ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ``` **Note:** Compare the coverage over time with any major updates to your assistant, to see if the changes affected the performance. <a id="conv_analysis"></a> ## 6. Analyze effectiveness Here, we will take a deeper look at effectiveness of the assistant ``` # Get the escalated conversations df_effective_true = df_effective.loc[df_effective['Escalated_conversation']==True] # Get the non-escalated conversations df_not_effective = df_effective.loc[df_effective['Escalated_conversation']==False] # Calculate percentage of escalated conversations ef_escalated = float("{0:.2f}".format(100-effective_perc)) # Calculate coverage and non-coverage in escalated conversations if len(df_effective_true) > 0: escalated_covered = float("{0:.2f}".format((df_effective_true['Covered'].value_counts().to_frame()['Covered'][True]/df_effective_true['Covered'].value_counts().sum())*100)) escalated_not_covered = float("{0:.2f}".format(100- escalated_covered)) else: escalated_covered = 0 escalated_not_covered = 0 # Calculate coverage and non-coverage in non-escalated conversations if len(df_not_effective) > 0: not_escalated_covered = float("{0:.2f}".format((df_not_effective['Covered'].value_counts().to_frame()['Covered'][True]/df_not_effective['Covered'].value_counts().sum())*100)) not_escalated_not_covered = float("{0:.2f}".format(100 - not_escalated_covered)) else: not_escalated_covered = 0 not_escalated_not_covered = 0 # Calculate average confidence of escalated conversations if len(df_effective_true) > 0: esc_avg_conf = 
float("{0:.2f}".format(df_effective_true[df_effective_true['Covered']==True]['response.top_intent_confidence'].mean()*100)) else: esc_avg_conf = 0 # Calculate average confidence of non-escalated conversations if len(df_not_effective) > 0: not_esc_avg_conf = float("{0:.2f}".format(df_not_effective[df_not_effective['Covered']==True]['response.top_intent_confidence'].mean()*100)) else: not_esc_avg_conf = 0 ``` ### 6.1 Generate excel file and upload to our project<a id="conv_analysis1"></a> ``` # Copy the effective dataframe df_excel = df_effective.copy(deep=True) # Rename columns to generate excel df_excel = df_excel.rename(columns={'log_id':'Log ID', 'response.context.conversation_id':'Conversation ID', 'response.timestamp':'Response Timestamp', 'request_input':'Utterance Text', 'response_text':'Response Text', 'response.top_intent_intent':'Detected top intent', 'response.top_intent_confidence':'Detected top intent confidence', 'Intent 2 intent': 'Intent 2', 'Intent 2 confidence':'Intent 2 Confidence', 'Intent 3 intent': 'Intent 3', 'Intent 3 confidence':'Intent 3 Confidence', 'response_entities':'Detected Entities', 'Escalated_conversation':'Escalated conversation?', 'Covered':'Covered?', 'Not Covered cause':'Not covered - cause', 'response.output.nodes_visited_s':'Dialog Flow', 'response_dialog_stack':'Dialog stack', 'response_dialog_request_counter':'Dialog request counter', 'response_dialog_turn_counter':'Dialog turn counter' }) existing_columns = ['Log ID', 'Conversation ID', 'Response Timestamp', 'Customer ID (must retain for delete)', 'Utterance Text', 'Response Text', 'Detected top intent', 'Detected top intent confidence', 'Intent 2', 'Intent 2 Confidence', 'Confidence gap (between 1 and 2)', 'Intent 3', 'Intent 3 Confidence', 'Detected Entities', 'Escalated conversation?', 'Covered?', 'Not covered - cause', 'Dialog Flow', 'Dialog stack', 'Dialog request counter', 'Dialog turn counter'] # Add new columns for annotating problematic logs new_columns_excel = 
['Response Correct (Y/N)?', 'Response Helpful (Y/N)?', 'Root cause (Problem with Intent, entity, dialog)', 'Wrong intent? If yes, put the correct intent. Otherwise leave it blank', 'New intent needed? (A new intent. Otherwise leave blank)', 'Add Utterance to Training data (Y/N)', 'Entity missed? If yes, put the missed entity value. Otherwise leave it blank', 'New entity needed? If yes, put the entity name', 'New entity value? If yes, put the entity value', 'New dialog logic needed?', 'Wrong dialog node? If yes, put the node name. Otherwise leave it blank','No dialog node triggered'] # Add the new columns to the dataframe df_excel = df_excel.reindex(columns=[*existing_columns, *new_columns_excel], fill_value='') # Set maximum sampling size SAMPLE_SIZE = 200 # Set output filename all_file = 'All.xlsx' escalated_sample_file = 'Escalated_sample.xlsx' non_escalated_sample_file = 'NotEscalated_sample.xlsx' # Generate all covered sample file df_covered = df_excel[df_excel['Covered?']==True].reset_index(drop=True) # Generate all not covered sample file df_not_covered = df_excel[df_excel['Covered?']==False].reset_index(drop=True) # Convert to Excel format and upload to COS generate_excel_measure([df_covered,df_not_covered], ['Covered', 'Not_Covered'], filename=all_file, project_io=None) # Generate escalated and covered sample file df_escalated_true = df_excel.loc[df_excel['Escalated conversation?']==True] df_escalated_covered = df_escalated_true[df_escalated_true['Covered?']==True] if len(df_escalated_covered) > 0: df_escalated_covered = df_escalated_covered.sample(n=min(len(df_escalated_covered), SAMPLE_SIZE), random_state=1).reset_index(drop=True) # Generate escalated but not covered sample file df_escalated_not_covered = df_escalated_true[df_escalated_true['Covered?']==False] if len(df_escalated_not_covered) > 0: df_escalated_not_covered = df_escalated_not_covered.sample(n=min(len(df_escalated_not_covered), SAMPLE_SIZE), random_state=1).reset_index(drop=True) # Covert to 
Excel format and upload to COS
generate_excel_measure([df_escalated_covered, df_escalated_not_covered], ['Covered', 'Not_Covered'], filename=escalated_sample_file, project_io=None)

# Generate not escalated but covered sample file
df_not_escalated = df_excel.loc[df_excel['Escalated conversation?']==False]
df_not_escalated_covered = df_not_escalated[df_not_escalated['Covered?']==True]
if len(df_not_escalated_covered) > 0:
    df_not_escalated_covered = df_not_escalated_covered.sample(n=min(len(df_not_escalated_covered), SAMPLE_SIZE), random_state=1).reset_index(drop=True)

# Generate not escalated and not covered sample file
df_not_escalated_not_covered = df_not_escalated[df_not_escalated['Covered?']==False]
if len(df_not_escalated_not_covered) > 0:
    df_not_escalated_not_covered = df_not_escalated_not_covered.sample(n=min(len(df_not_escalated_not_covered), SAMPLE_SIZE), random_state=1).reset_index(drop=True)

# Convert to Excel format and upload to COS
generate_excel_measure([df_not_escalated_covered, df_not_escalated_not_covered], ['Covered', 'Not_Covered'], filename=non_escalated_sample_file, project_io=None)
```

### 6.2 Plot breakdown by effectiveness graph<a id="conv_analysis2"></a>

```
# Get the links to the excel files
all_html_link = '<a href={} target="_blank">All.xlsx</a>'.format(all_file)
escalated_html_link = '<a href={} target="_blank">Escalated_sample.xlsx</a>'.format(escalated_sample_file)
not_escalated_html_link = '<a href={} target="_blank">NotEscalated_sample.xlsx</a>'.format(non_escalated_sample_file)

# Embed the links in HTML table format
link_html = '<tr><th colspan="4"><div align="left"><a id="file_list"></a>View the lists here: {}&nbsp;&nbsp;&nbsp;{}&nbsp;&nbsp;&nbsp;{}</div></th></tr>'.format(all_html_link, escalated_html_link, not_escalated_html_link)

if 100-effective_perc > 0:
    escalated_bar = coverage_barh(escalated_covered, esc_avg_conf, '', True, 15, width_bar(100-effective_perc))
else:
    escalated_bar = ''

if effective_perc > 0:
    non_escalated_bar =
coverage_barh(not_escalated_covered, not_esc_avg_conf, '', True, 15, width_bar(effective_perc))
else:
    non_escalated_bar = ''

# Plot the results
HTML('<tr><th colspan="4"><div align="left"><h2>Breakdown by effectiveness<hr/></h2></div></th></tr>\
'+ link_html + '<tr><td style= "border-right: 1px solid black; border-bottom: 1px solid black; width : 400px"><div align="left"><strong>Effectiveness (Escalated)&nbsp;</br>\
<font size="5">{ef_escalated}%</strong></font size></br></div></td>\
<td style="width:1000px; height=100;">{one}</td></tr>\
<tr><td style= "border-right: 1px solid black; border-bottom: 1px solid black; width : 400px;"><div align="left"><strong>Effectiveness (Not escalated)&nbsp;</br>\
<font size="5">{effective_perc}%</strong></font size></br></div></td>\
<td style="width:1000px; height=100;border-bottom: 1px solid black;">{two}</td>\
</tr>'.format(ef_escalated=ef_escalated, one=escalated_bar, effective_perc=effective_perc, two=non_escalated_bar))
```

You can download all the analyzed data from `All.xlsx`. Samples of escalated and non-escalated conversations are available in `Escalated_sample.xlsx` and `NotEscalated_sample.xlsx` respectively.

<a id="root_cause"></a>
## 7. Root cause analysis of non-coverage

Let's take a look at the reasons for non-coverage of messages.

```
# Count the causes for non-coverage and store results in dataframe
not_covered = pd.DataFrame(df_coverage['Not Covered cause'].value_counts().reset_index())
# Name the columns in the dataframe
not_covered.columns = ['Messages', 'Total']
not_covered
```

<a id="abandoned_resolved_intents"></a>
## 8. Abandoned and resolved intent analysis

When users engage in a conversation session, an assistant identifies the intent of each message from the user. Based on the logic flow defined in a dialog tree, the assistant communicates with users and performs actions. The assistant may succeed or fail to satisfy users' intent.
One way to identify patterns of success or failure is by analyzing which intents most often lead to a dialog node associated with resolution, and which intents most often lead to users abandoning the session. Analyzing resolved and abandoned intents can help you identify issues in your assistant to improve, such as a problematic dialog flow or imprecise intents. In this section, we demonstrate a method of conducting intent analysis using context variables. We introduce two context variables: `response_context_IntentStarted` and `response_context_IntentCompleted`. You will need to modify your dialog skill definition (workspace) to introduce these variables in your dialog flow. After you modify your dialog skill definition, your logs will be marked such that when users trigger a conversation with an intent, the assistant will use `response_context_IntentStarted` to record the intent. During the conversation, the assistant will use `response_context_IntentCompleted` to record if the intent is satisfied. Follow the steps below to add the context variables for an intent in your dialog skill definition. 1. Open Dialog Overview page, see <a href="https://cloud.ibm.com/docs/services/assistant?topic=assistant-dialog-overview" target="_blank">Dialog Overview</a> for more information. 2. Click the entry point node of the dialog node that is associated with the intent you want to analyze 3. Open the context editor, see <a href="https://cloud.ibm.com/docs/services/assistant?topic=assistant-dialog-runtime#dialog-runtime-context-variables" target="_blank">Context Variables</a> for more information 4. Add `response_context_IntentStarted` as a variable and \[intent_name\] as the value 5. Follow the dialog flow to locate the satisfying node of the intent 6. Open the context editor 7. Add `response_context_IntentCompleted` as variable and \[intent_name\] as the value 8. 
Repeat step 5-7 to mark all satisfying nodes of the intent if necessary Then repeat the above steps for every intent you want to analyze in this way. After completing the above steps, run the following code for intent analysis. Note that the analysis requires logs generated after the above changes. You will need to reload the updated workspace and logs. ### 8.1 Count of all started intents<a id="started_intents"></a> ``` # Define context variables start_intent_variable = 'response_context_IntentStarted' if start_intent_variable in df_formated: # Group dataframe by conversation_id and start_intent_variable df_intent_started = df_formated.groupby(['response.context.conversation_id', start_intent_variable]).count().reset_index() # Refactors data to show only columns of conversation_id and start_intent_variable df_intent_started = df_intent_started[['response.context.conversation_id', start_intent_variable]] # Count the number of conversation_ids with each start_intent_variable intent_started = df_intent_started[start_intent_variable].value_counts().reset_index() intent_started.columns = ['Intent', 'Count'] display(HTML(intent_started.to_html())) else: print('Cannot find \'response_context_IntentStarted\' and \'response_context_IntentCompleted\' in logs. 
Please check step 4 and make sure updated logs are reloaded.') ``` ### 8.2 Analyze resolved intents<a id="resolved_intents"></a> ``` end_intent_variable = 'response_context_IntentCompleted' if end_intent_variable in df_formated: # Group dataframe by conversation_id and end_intent_variable df_intent_completed = df_formated.groupby(['response.context.conversation_id',end_intent_variable]).count().reset_index() # Refactor data to show columns of conversation_id and end_intent_variable only df_intent_completed = df_intent_completed[['response.context.conversation_id',end_intent_variable]] # Count the number of conversation_ids with each end_intent_variable intent_completed = df_intent_completed[end_intent_variable].value_counts().reset_index() intent_completed.columns = ['Intent', 'Count'] # Show counts of resolved intents intent_completed_title = '\nCount of resolved intents in all conversations\n' print(intent_completed_title, "=" * len(intent_completed_title),'', sep = '') display(HTML(intent_completed.to_html())) # Convert dataframe to a list res_intent_list = intent_completed.values.tolist() # Get list of started intents all_intent = df_intent_started[start_intent_variable].value_counts().reset_index().values.tolist() # Loop over resolved intents list. Each element contains a pair of intent and count data = [] for pair_ab in res_intent_list: # Loop over each row of started intents. 
Each row contains a pair of intent and count for pair_all in all_intent: # Check if the intent name matches in started and resolved intents if pair_ab[0] == pair_all[0]: # Then acccesses the count from that matched intent, and calculate percentage perc = (pair_ab[1]/pair_all[1])*100 # Add the matched intent name and percentage to data list data.append([pair_ab[0],perc]) # Create a new dataframe with data list resolved_percentage = pd.DataFrame(data=data).reset_index(drop=True) # Format the dataframe, and orders data in descending order (shows highest percentage first) resolved_percentage.columns = ['Intent','Percentage'] resolved_percentage.sort_values(ascending=False,inplace=True, by='Percentage') # Format the data in the percentage column to include '%', and 1 decimal point resolved_percentage['Percentage'] = resolved_percentage['Percentage'].apply(lambda x: "{0:.1f}%".format(x)) resolved_percentage.reset_index(drop=True, inplace=True) # Show most resolved intents most_resolved_intents = "\nMost resolved intents (%)\n" print(most_resolved_intents, "=" * len(most_resolved_intents),'', sep = '') display(HTML(resolved_percentage.to_html())) else: print('Cannot find \'response_context_IntentStarted\' and \'response_context_IntentCompleted\' in logs. Please check step 4 and make sure updated logs are reloaded.') ``` ### 8.3 Analyze abandoned intents<a id="abandoned_intents"></a> ``` if start_intent_variable in df_formated and end_intent_variable in df_formated: # Create lists of started and end_intent_variable intent_complete_list = df_intent_completed.values.tolist() intent_started_list = df_intent_started.values.tolist() # Looping over completed intents list. 
Each element contains a pair of conversation id and end_intent_variable for pair in intent_complete_list: # Checks if any element is found in list of started intents if pair in intent_started_list: # If found, remove that pair from the list of started intents intent_started_list.remove(pair) # Create a new dataframe with updated dataset. # This updated dataset contains intents that have been started but not completed, thus categorised as abandoned df_intent_abandoned = pd.DataFrame(data=intent_started_list) # Group each pair (conversation id, intent abandoned), and show number of occurances of each abandoned intent final_intent_abandoned = df_intent_abandoned[1].value_counts().reset_index() final_intent_abandoned.columns = ['Intent','Count'] # Show counts of abandoned intents intent_abandoned_title = '\nCount of abandoned intents in all conversations\n' print(intent_abandoned_title, "=" * len(intent_abandoned_title),'', sep = '') display(HTML(final_intent_abandoned.to_html())) # Convert dataframe to a list aban_intent_list = final_intent_abandoned.values.tolist() # Get list of started intents all_intent = df_intent_started[start_intent_variable].value_counts().reset_index().values.tolist() # Loop over resolved intents list. Each element contains a pair of intent and count data = [] for pair_ab in aban_intent_list: # Loop over each row of started intents. 
Each row contains a pair of intent and count for pair_all in all_intent: # Check if the intent name matches in started and resolved intents if pair_ab[0] == pair_all[0]: # Then acccesse the count from that matched intent, and calculate percentage perc = (pair_ab[1]/pair_all[1])*100 # Add the matched intent name and percentage to data list data.append([pair_ab[0],perc]) # Create a new dataframe with data list abandoned_percentage = pd.DataFrame(data=data).reset_index(drop=True) # Format the dataframe, and orders data in descending order (shows highest percentage first) abandoned_percentage.columns = ['Intent','Percentage'] abandoned_percentage.sort_values(ascending=False,inplace=True, by='Percentage') abandoned_percentage.reset_index(drop=True, inplace=True) # Format the data in the percentage column to include '%', and 1 decimal point abandoned_percentage['Percentage'] = abandoned_percentage['Percentage'].apply(lambda x: "{0:.1f}%".format(x)) # Show most abandoned intents most_abandoned_intents = "\nMost abandoned intents (%)\n" print(most_abandoned_intents, "=" * len(most_abandoned_intents),'', sep = '') display(HTML(abandoned_percentage.to_html())) else: print('Cannot find \'response_context_IntentStarted\' and \'response_context_IntentCompleted\' in logs. Please check step 4 and make sure updated logs are reloaded.') ``` Finally, we generate an Excel file that lists all conversations for which there are abandoned and resolved intents for further analysis. 
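As an aside, the pair-removal loop in section 8.3 (started pairs minus completed pairs) is essentially a multiset difference, which `collections.Counter` expresses directly. A compact pandas-free sketch, with hypothetical conversation ids and intent names:

```python
from collections import Counter

# Started and completed (conversation id, intent) pairs; the data is
# hypothetical, standing in for the context variables in the logs.
started = [('conv1', 'reset_password'), ('conv2', 'reset_password'),
           ('conv3', 'billing')]
completed = [('conv1', 'reset_password')]

# Multiset difference: pairs that were started but never completed
abandoned_pairs = list((Counter(started) - Counter(completed)).elements())
abandoned_counts = Counter(intent for _, intent in abandoned_pairs)
started_counts = Counter(intent for _, intent in started)

# Abandonment percentage per intent, mirroring the percentage step above
abandoned_pct = {intent: 100 * count / started_counts[intent]
                 for intent, count in abandoned_counts.items()}
print(abandoned_pct)  # {'reset_password': 50.0, 'billing': 100.0}
```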
```
if df_intent_abandoned is not None and df_intent_completed is not None:
    # Rename columns
    df_intent_abandoned.columns = ['Conversation_id', 'Intent']
    df_intent_completed.columns = ['Conversation_id', 'Intent']
    # Generate excel file
    file_name = 'Abandoned_Resolved.xlsx'
    generate_excel_measure([df_intent_abandoned, df_intent_completed], ['Abandoned', 'Resolved'], filename=file_name, project_io=None)
    link_html = 'Abandoned and resolved intents: <b><a href={} target="_blank">Abandoned_Resolved.xlsx</a></b>'.format(file_name)
    display(HTML(link_html))
else:
    print('Cannot find \'response_context_IntentStarted\' and \'response_context_IntentCompleted\' in logs. Please check step 4 and make sure updated logs are reloaded.')
```

<a id="summary"></a>
## 9. Summary and next steps

The metrics described above help you narrow your immediate focus of improvement. We suggest the following two strategies:

- **Toward improving Effectiveness** We suggest focusing on a group of problematic conversations, e.g., escalated conversations, then performing a deeper analysis on these conversations as follows. <br> 1. Choose to download either the complete conversations ([All.xlsx](#file_list)), sampled escalated conversations ([Escalated_sample.xlsx](#file_list)), or non-escalated conversations ([NotEscalated_sample.xlsx](#file_list)).<br> 2. Perform a manual assessment of these conversations.<br> 3. Analyze the results using our __Analyze Watson Assistant Effectiveness__ Jupyter Notebook.
- **Toward improving Coverage** For utterances where an intent was found but no response was given, we suggest performing a deeper analysis to identify root causes, e.g., missing entities or a lack of dialog logic. For utterances where no intent was found, we suggest expanding intent coverage as follows. 1. Examine utterances from the production log, focusing especially on utterances below the confidence threshold (0.2 by default). 2.
If you set a confidence threshold significantly higher than 0.2, we suggest looking at utterances that are below but close to the threshold. 3. Once you select a collection of utterances, you can focus on intent expansion using two methods:
  - One-by-one: examine each utterance and either assign it to an existing intent or add a new intent.
  - Unsupervised learning: perform semantic clustering to generate utterance clusters; examine each cluster and decide between (1) adding its utterances to an existing intent or (2) creating a new intent.

For more information, please check <a href="https://github.com/watson-developer-cloud/assistant-improve-recommendations-notebook/raw/master/notebook/IBM%20Watson%20Assistant%20Continuous%20Improvement%20Best%20Practices.pdf" target="_blank" rel="noopener noreferrer">Watson Assistant Continuous Improvement Best Practices</a>.

### <a id="authors"></a>Authors

**Zhe Zhang**, Ph.D. in Computer Science, is an Advisory Software Engineer for IBM Watson AI. Zhe has a research background in Natural Language Processing, Sentiment Analysis, Text Mining, and Machine Learning. His research has been published at leading conferences and journals including ACL and EMNLP.

**Sherin Varughese** is a Data Scientist for IBM Watson AI. Sherin has her graduate degree in Business Intelligence and Data Analytics and has experience in Data Analysis, Warehousing and Machine Learning.

### <a id="acknowledgement"></a> Acknowledgement

The authors would like to thank the following members of the IBM Research and Watson Assistant teams for their contributions and reviews of the notebook: Matt Arnold, Adam Benvie, Kyle Croutwater, Eric Wayne.

Copyright © 2019 IBM. This notebook and its source code are released under the terms of the MIT License.
# Modeling and Simulation in Python

Chapter 4

Copyright 2017 Allen Downey

License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)

```
# Configure Jupyter so figures appear in the notebook
%matplotlib inline

# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'

# import functions from the modsim library
from modsim import *
```

## Returning values

Here's a simple function that returns a value:

```
def add_five(x):
    return x + 5
```

And here's how we call it.

```
y = add_five(3)
```

If you run a function on the last line of a cell, Jupyter displays the result:

```
add_five(5)
```

But that can be a bad habit, because usually if you call a function and don't assign the result to a variable, the result gets discarded. In the following example, Jupyter shows the second result, but the first result just disappears.

```
add_five(3)
add_five(5)
```

When you call a function that returns a value, it is generally a good idea to assign the result to a variable.

```
y1 = add_five(3)
y2 = add_five(5)
print(y1, y2)
```

**Exercise:** Write a function called `make_state` that creates a `State` object with the state variables `olin=10` and `wellesley=2`, and then returns the new `State` object. Write a line of code that calls `make_state` and assigns the result to a variable named `init`.

```
def make_state():
    return State(olin=10, wellesley=2)

init = make_state()
```

## Running simulations

Here's the code from the previous notebook.

```
def step(state, p1, p2):
    """Simulate one minute of time.

    state: bikeshare State object
    p1: probability of an Olin->Wellesley customer arrival
    p2: probability of a Wellesley->Olin customer arrival
    """
    if flip(p1):
        bike_to_wellesley(state)
    if flip(p2):
        bike_to_olin(state)

def bike_to_wellesley(state):
    """Move one bike from Olin to Wellesley.
    state: bikeshare State object
    """
    if state.olin == 0:
        state.olin_empty += 1
        return
    state.olin -= 1
    state.wellesley += 1

def bike_to_olin(state):
    """Move one bike from Wellesley to Olin.

    state: bikeshare State object
    """
    if state.wellesley == 0:
        state.wellesley_empty += 1
        return
    state.wellesley -= 1
    state.olin += 1

def decorate_bikeshare():
    """Add a title and label the axes."""
    decorate(title='Olin-Wellesley Bikeshare',
             xlabel='Time step (min)',
             ylabel='Number of bikes')
```

Here's a modified version of `run_simulation` that creates a `State` object, runs the simulation, and returns the `State` object.

```
def run_simulation(p1, p2, num_steps):
    """Simulate the given number of time steps.

    p1: probability of an Olin->Wellesley customer arrival
    p2: probability of a Wellesley->Olin customer arrival
    num_steps: number of time steps
    """
    state = State(olin=10, wellesley=2,
                  olin_empty=0, wellesley_empty=0)

    for i in range(num_steps):
        step(state, p1, p2)

    return state
```

Now `run_simulation` doesn't plot anything:

```
state = run_simulation(0.4, 0.2, 60)
```

But after the simulation, we can read the metrics from the `State` object.

```
state.olin_empty
```

Now we can run simulations with different values for the parameters. When `p1` is small, we probably don't run out of bikes at Olin.

```
state = run_simulation(0.2, 0.2, 60)
state.olin_empty
```

When `p1` is large, we probably do.

```
state = run_simulation(0.6, 0.2, 60)
state.olin_empty
```

## More for loops

`linspace` creates a NumPy array of equally spaced numbers.

```
p1_array = linspace(0, 1, 5)
```

We can use an array in a `for` loop, like this:

```
for p1 in p1_array:
    print(p1)
```

This will come in handy in the next section.

`linspace` is defined in `modsim.py`. You can get the documentation using `help`.

```
help(linspace)
```

`linspace` is based on a NumPy function with the same name. [Click here](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html) to read more about how to use it.
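Since `modsim`'s `linspace` wraps the NumPy function of the same name, its behavior can be previewed with plain NumPy (a quick sketch using `numpy` directly rather than `modsim`):

```python
import numpy as np

# five equally spaced numbers from 0 to 1, endpoints included
arr = np.linspace(0, 1, 5)
print(arr)  # [0.   0.25 0.5  0.75 1.  ]
```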
**Exercise:** Use `linspace` to make an array of 10 equally spaced numbers from 1 to 10 (including both).

```
# Solution goes here
```

**Exercise:** The `modsim` library provides a related function called `linrange`. You can view the documentation by running the following cell:

```
help(linrange)
```

Use `linrange` to make an array of numbers from 1 to 11 with a step size of 2.

```
# Solution goes here
```

## Sweeping parameters

`p1_array` contains a range of values for `p1`.

```
p2 = 0.2
num_steps = 60
p1_array = linspace(0, 1, 11)
```

The following loop runs a simulation for each value of `p1` in `p1_array`; after each simulation, it prints the number of unhappy customers at the Olin station:

```
for p1 in p1_array:
    state = run_simulation(p1, p2, num_steps)
    print(p1, state.olin_empty)
```

Now we can do the same thing, but storing the results in a `SweepSeries` instead of printing them.

```
sweep = SweepSeries()

for p1 in p1_array:
    state = run_simulation(p1, p2, num_steps)
    sweep[p1] = state.olin_empty
```

And then we can plot the results.

```
plot(sweep, label='Olin')

decorate(title='Olin-Wellesley Bikeshare',
         xlabel='Arrival rate at Olin (p1 in customers/min)',
         ylabel='Number of unhappy customers')

savefig('figs/chap02-fig02.pdf')
```

## Exercises

**Exercise:** Wrap this code in a function named `sweep_p1` that takes an array called `p1_array` as a parameter. It should create a new `SweepSeries`, run a simulation for each value of `p1` in `p1_array`, store the results in the `SweepSeries`, and return the `SweepSeries`.

Use your function to plot the number of unhappy customers at Olin as a function of `p1`. Label the axes.

```
# Solution goes here

# Solution goes here
```

**Exercise:** Write a function called `sweep_p2` that runs simulations with `p1=0.5` and a range of values for `p2`.

It should store the results in a `SweepSeries` and return the `SweepSeries`.
```
# Solution goes here

# Solution goes here
```

## Optional exercises

The following two exercises are a little more challenging. If you are comfortable with what you have learned so far, you should give them a try. If you feel like you have your hands full, you might want to skip them for now.

**Exercise:** Because our simulations are random, the results vary from one run to another, and the results of a parameter sweep tend to be noisy. We can get a clearer picture of the relationship between a parameter and a metric by running multiple simulations with the same parameter and taking the average of the results.

Write a function called `run_multiple_simulations` that takes as parameters `p1`, `p2`, `num_steps`, and `num_runs`.

`num_runs` specifies how many times it should call `run_simulation`.

After each run, it should store the total number of unhappy customers (at Olin or Wellesley) in a `TimeSeries`. At the end, it should return the `TimeSeries`.

Test your function with parameters

```
p1 = 0.3
p2 = 0.3
num_steps = 60
num_runs = 10
```

Display the resulting `TimeSeries` and use the `mean` function provided by the `TimeSeries` object to compute the average number of unhappy customers.

```
# Solution goes here

# Solution goes here
```

**Exercise:** Continuing the previous exercise, use `run_multiple_simulations` to run simulations with a range of values for `p1` and

```
p2 = 0.3
num_steps = 60
num_runs = 20
```

Store the results in a `SweepSeries`, then plot the average number of unhappy customers as a function of `p1`. Label the axes.

What value of `p1` minimizes the average number of unhappy customers?

```
# Solution goes here

# Solution goes here
```
# Training baseline model

This notebook shows the implementation of a baseline model for our movie genre classification problem.

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import re
import json
import nltk

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_multilabel_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score
from sklearn.metrics import precision_recall_fscore_support
```

Load the data and keep the title information: we append the lower-cased title to the overview, then drop the title column.

```
path = '../data/movies_data_ready.csv'
df = pd.read_csv(path)
df['genres'] = df['genres'].apply(lambda x: x.split(','))
df['overview'] = df['title'].apply(lambda x: x.lower()).astype(str) + ' ' + df['overview']
del df['title']
df.head()
```

### One-hot vector representation and TF-IDF:

```
multilabel_binarizer = MultiLabelBinarizer()
multilabel_binarizer.fit(df['genres'])

# transform target variable
y = multilabel_binarizer.transform(df['genres'])
print(multilabel_binarizer.classes_)
print('size = ', len(multilabel_binarizer.classes_))

# Split the data into train and test sets
testing_size = 0.15
x_train, x_test, y_train, y_test = train_test_split(df['overview'], y, test_size=testing_size, random_state=42)
validation_size_relative = testing_size/(1-testing_size)

tfidf_vectorizer = TfidfVectorizer(max_df=0.8, max_features=100000)

# create TF-IDF features
x_train_tfidf = tfidf_vectorizer.fit_transform(x_train.values.astype('U'))
x_test_tfidf = tfidf_vectorizer.transform(x_test.values.astype('U'))
```

### ML model: Logistic regression

```
lr = LogisticRegression(solver='saga', n_jobs=-1, max_iter=1000)
clf = OneVsRestClassifier(lr)

# fit model on train data
clf.fit(x_train_tfidf, y_train)

# predict probabilities
y_pred_prob = clf.predict_proba(x_test_tfidf)

t = 0.5  # threshold value
y_pred_new = np.where(y_pred_prob >= t, 1, 0)

precision_recall_fscore_support(y_test, y_pred_new, average='micro')
```

### ML model: SVC

```
svc = LinearSVC()
clf_svm = OneVsRestClassifier(svc)

# fit model on train data
clf_svm.fit(x_train_tfidf, y_train)

# make predictions for validation set
y_pred = clf_svm.predict(x_test_tfidf)

precision_recall_fscore_support(y_test, y_pred, average='micro')
```

## ROC curves

```
from itertools import cycle
from sklearn.metrics import roc_curve, auc
from sklearn.metrics import roc_auc_score
from matplotlib import colors as mcolors
import seaborn as sns

def plot_multilabel_ROC(classes_lab, y_score, title, hide_classes=False):
    # Compute ROC curve and ROC area for each class
    fpr = dict()
    tpr = dict()
    thresholds = dict()
    roc_auc = dict()
    n_classes = len(classes_lab)
    for i in range(n_classes):
        fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
        roc_auc[i] = auc(fpr[i], tpr[i])

    # Compute micro-average ROC curve and ROC area
    fpr["micro"], tpr["micro"], thresholds['micro'] = roc_curve(y_test.ravel(), y_score.ravel())
    roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])

    # First aggregate all false positive rates
    all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))

    # Then interpolate all ROC curves at these points
    # (np.interp replaces scipy.interp, which was removed from recent SciPy versions)
    mean_tpr = np.zeros_like(all_fpr)
    for i in range(n_classes):
        mean_tpr += np.interp(all_fpr, fpr[i], tpr[i])

    # Finally average it and compute AUC
    mean_tpr /= n_classes

    fpr["macro"] = all_fpr
    tpr["macro"] = mean_tpr
    roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])

    lw = 3
    # Plot all ROC curves
    plt.figure(figsize=(12,12))
    if not hide_classes:
        colors = cycle(sns.color_palette('bright', 8))
        for i, color in zip(range(n_classes), colors):
            lab = classes_lab[i] + ' (auc = ' + str(round(roc_auc[i], 2)) + ')'
            plt.plot(fpr[i], tpr[i], color=color, label=lab, linewidth=1)
    plt.plot(fpr["micro"], tpr["micro"],
             label='micro-average ROC curve (area = {0:0.2f})'.format(roc_auc["micro"]),
             color='deeppink', linestyle=':', linewidth=8)

    plt.plot(fpr["macro"], tpr["macro"],
             label='macro-average ROC curve (area = {0:0.2f})'.format(roc_auc["macro"]),
             color='navy', linestyle=':', linewidth=8)

    plt.plot([0, 1], [0, 1], 'k--', lw=lw)
    plt.xlim([0.0, 1.0])
    plt.ylim([0.0, 1.05])
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.title(title)
    plt.legend(loc="lower right")
    plt.show()

    return thresholds['micro'], fpr['macro'], tpr['macro']

multilabel_binarizer.classes_

title = 'ROC curve for logistic regression'
thresh, fpr, tpr = plot_multilabel_ROC(multilabel_binarizer.classes_, y_pred_prob, title, hide_classes=False)
```

### Find the best threshold for logistic regression

```
optimal_idx = np.argmax(tpr - fpr)
optimal_threshold = thresh[optimal_idx]
optimal_threshold

y_pred_new = np.where(y_pred_prob > optimal_threshold, 1, 0)
precision_recall_fscore_support(y_test, y_pred_new, average='micro')
```

### Precision recall curve

```
from sklearn.metrics import precision_recall_curve

# precision, recall, thresholds = precision_recall_curve(y_test, y_pred_prob)
step = 0.01
prec = {'micro': {}}
recall = {'micro': {}}
f1 = {'micro': {}}
thresholds = np.arange(0.0, 1.0, step)
for t in thresholds:
    pred_t = np.where(y_pred_prob >= t, 1, 0)
    prec['micro'][t], recall['micro'][t], f1['micro'][t], _ = precision_recall_fscore_support(y_test, pred_t, average='micro')

plt.plot(list(recall['micro'].values()), list(prec['micro'].values()), lw=2, label='micro avg')
plt.xlabel("recall")
plt.ylabel("precision")
plt.legend(loc="best")
plt.title("precision vs. recall curve")
plt.show()

optimal_idx_micro = np.argmax(list(f1['micro'].values()))
print('micro optimal threshold:', thresholds[optimal_idx_micro])

y_pred_new = np.where(y_pred_prob > thresholds[optimal_idx_micro], 1, 0)
print(precision_recall_fscore_support(y_test, y_pred_new, average='micro'))
```
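The ROC-based threshold search above (Youden's J statistic, i.e. the threshold maximizing `tpr - fpr`) can be reproduced on a toy binary problem; this is a sketch with made-up labels and scores, not the movie data:

```python
import numpy as np
from sklearn.metrics import roc_curve

# made-up labels and predicted scores for illustration only
y_true = np.array([0, 0, 0, 1, 1, 1])
scores = np.array([0.1, 0.3, 0.6, 0.4, 0.8, 0.9])

fpr_toy, tpr_toy, thr_toy = roc_curve(y_true, scores)

# Youden's J: the ROC threshold that maximizes tpr - fpr
best_threshold = thr_toy[np.argmax(tpr_toy - fpr_toy)]
print('optimal threshold:', best_threshold)
```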
# Tensors

[![](https://gitee.com/mindspore/docs/raw/master/resource/_static/logo_source.png)](https://gitee.com/mindspore/docs/blob/master/tutorials/source_zh_cn/tensor.ipynb)&emsp;[![](https://gitee.com/mindspore/docs/raw/master/resource/_static/logo_notebook.png)](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/master/tutorials/zh_cn/mindspore_tensor.ipynb)&emsp;[![](https://gitee.com/mindspore/docs/raw/master/resource/_static/logo_modelarts.png)](https://authoring-modelarts-cnnorth4.huaweicloud.com/console/lab?share-url-b64=aHR0cHM6Ly9vYnMuZHVhbHN0YWNrLmNuLW5vcnRoLTQubXlodWF3ZWljbG91ZC5jb20vbWluZHNwb3JlLXdlYnNpdGUvbm90ZWJvb2svbW9kZWxhcnRzL3F1aWNrX3N0YXJ0L21pbmRzcG9yZV90ZW5zb3IuaXB5bmI=&imageid=65f636a0-56cf-49df-b941-7d2a07ba8c8c)

Tensors are the basic data structure in MindSpore network computation.

First, import the modules and interfaces needed in this document:

```
import numpy as np
from mindspore import Tensor, context
from mindspore import dtype as mstype
context.set_context(mode=context.GRAPH_MODE, device_target="CPU")
```

## Initializing a Tensor

A tensor can be initialized in several ways. When constructing a tensor, the `Tensor`, `float`, `int`, `bool`, `tuple`, `list`, and `NumPy.array` types can be passed in.

- **Generating directly from data**

  A tensor can be created from data; its data type can be set explicitly or inferred automatically.

```
x = Tensor(0.1)
```

- **Generating from a NumPy array**

  A tensor can be created from a NumPy array.

```
arr = np.array([1, 0, 1, 0])
x_np = Tensor(arr)
```

  If the initial value is a `NumPy.array`, the generated `Tensor` has the corresponding data type.

- **Inheriting the attributes of another tensor to form a new tensor**

```
from mindspore import ops
oneslike = ops.OnesLike()
x = Tensor(np.array([[0, 1], [2, 1]]).astype(np.int32))
output = oneslike(x)
print(output)
```

- **Producing a constant-value tensor of a specified size**

  `shape` is the size tuple of the tensor and determines the dimensions of the output tensor.

```
from mindspore.ops import operations as ops
shape = (2, 2)
ones = ops.Ones()
output = ones(shape, mstype.float32)
print(output)

zeros = ops.Zeros()
output = zeros(shape, mstype.float32)
print(output)
```

When a `Tensor` is initialized, its dtype can be specified, e.g. `mstype.int32`, `mstype.float32`, or `mstype.bool_`.

## Tensor Attributes

Tensor attributes include the shape and the data type (dtype).

- Shape: the shape of the `Tensor`, a tuple.
- Data type: the dtype of the `Tensor`, one of MindSpore's data types.

```
t1 = Tensor(np.zeros([1,2,3]), mstype.float32)
print("Datatype of tensor: {}".format(t1.dtype))
print("Shape of tensor: {}".format(t1.shape))
```

## Tensor Operations

There are many operations between tensors, including arithmetic, linear algebra, matrix processing (transposition, indexing, slicing), sampling, and so on. A few of them are introduced below; tensor operations are used in much the same way as NumPy.

NumPy-style indexing and slicing:

```
tensor = Tensor(np.array([[0, 1], [2, 3]]).astype(np.float32))

print("First row: {}".format(tensor[0]))
print("First column: {}".format(tensor[:, 0]))
print("Last column: {}".format(tensor[..., -1]))
```

`Concat` joins a sequence of tensors along a given dimension.

```
data1 = Tensor(np.array([[0, 1], [2, 3]]).astype(np.float32))
data2 = Tensor(np.array([[4, 5], [6, 7]]).astype(np.float32))
op = ops.Concat()
output = op((data1, data2))
print(output)
```

`Stack`, by contrast, combines two tensors along a new dimension.

```
data1 = Tensor(np.array([[0, 1], [2, 3]]).astype(np.float32))
data2 = Tensor(np.array([[4, 5], [6, 7]]).astype(np.float32))
op = ops.Stack()
output = op([data1, data2])
print(output)
```

Ordinary arithmetic:

```
input_x = Tensor(np.array([1.0, 2.0, 3.0]), mstype.float32)
input_y = Tensor(np.array([4.0, 5.0, 6.0]), mstype.float32)
mul = ops.Mul()
output = mul(input_x, input_y)
print(output)
```

## Conversion to and from NumPy

Tensors can be converted to and from NumPy arrays.

### Tensor to NumPy array

```
zeros = ops.Zeros()
output = zeros((2,2), mstype.float32)
print("output: {}".format(type(output)))

n_output = output.asnumpy()
print("n_output: {}".format(type(n_output)))
```

### NumPy array to Tensor

```
output = np.array([1, 0, 1, 0])
print("output: {}".format(type(output)))

t_output = Tensor(output)
print("t_output: {}".format(type(t_output)))
```
# Speed benchmarks

A quick reference for how the running time of the program scales.

```
from __future__ import print_function
import pprint
import subprocess
import sys
sys.path.append('../')
# sys.path.append('/home/heberto/learning/attractor_sequences/benchmarking/')

import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.gridspec as gridspec
from mpl_toolkits.axes_grid1 import make_axes_locatable

import seaborn as sns
%matplotlib inline

np.set_printoptions(suppress=True, precision=2)
sns.set(font_scale=2.0)
```

#### Git machine

```
run_old_version = False
if run_old_version:
    hash_when_file_was_written = '321620ef1b753fe42375bbf535c9ab941b72ae26'
    hash_at_the_moment = subprocess.check_output(["git", 'rev-parse', 'HEAD']).strip()
    print('Actual hash', hash_at_the_moment)
    print('Hash of the commit used to run the simulation', hash_when_file_was_written)

    subprocess.call(['git', 'checkout', hash_when_file_was_written])
```

#### Load the libraries

```
from benchmarking.standard_program import run_standard_program, calculate_succes_program, training_program
import timeit

def wrapper(func, *args, **kwargs):
    def wrapped():
        return func(*args, **kwargs)
    return wrapped
```

## Standard program

#### Minicolumns

```
hypercolumns = 4
minicolumns_range = np.arange(10, 100, 5)
epochs = 1

times_minicolumns = []
for minicolumns in minicolumns_range:
    function = wrapper(run_standard_program, hypercolumns=hypercolumns,
                       minicolumns=minicolumns, epochs=epochs)
    time = timeit.timeit(function, number=1)
    times_minicolumns.append(time)

fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.plot(minicolumns_range, times_minicolumns, '*-', markersize=14)
ax.set_xlabel('Minicolumns')
ax.set_ylabel('Seconds the program ran');
```

#### Hypercolumns

```
hypercolumns_range = np.arange(4, 20, 2)
minicolumns = 20
epochs = 1

times_hypercolumns = []
for hypercolumns in hypercolumns_range:
    function = wrapper(run_standard_program,
                       hypercolumns, minicolumns, epochs)
    time = timeit.timeit(function, number=1)
    times_hypercolumns.append(time)

sns.set(font_scale=2.0)
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.plot(hypercolumns_range, times_hypercolumns, '*-', markersize=14)
ax.set_xlabel('Hypercolumns')
ax.set_ylabel('Seconds the program ran');
```

#### Epochs

```
hypercolumns = 4
minicolumns = 20
epochs_range = np.arange(1, 10, 1)

times_epochs = []
for epochs in epochs_range:
    function = wrapper(run_standard_program, hypercolumns, minicolumns, epochs)
    time = timeit.timeit(function, number=1)
    times_epochs.append(time)

sns.set(font_scale=2.0)
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.plot(epochs_range, times_epochs, '*-', markersize=14)
ax.set_xlabel('Epochs')
ax.set_ylabel('Seconds the program ran')
```

#### Everything to compare

```
fig = plt.figure(figsize=(16, 12))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)

ax1.plot(minicolumns_range, times_minicolumns, '*-', markersize=14)
ax2.plot(hypercolumns_range, times_hypercolumns, '*-', markersize=14)
ax3.plot(epochs_range, times_epochs, '*-', markersize=14)

ax1.set_title('Minicolumn scaling')
ax2.set_title('Hypercolumn scaling')
ax3.set_title('Epoch scaling')

ax1.set_ylabel('Time (s)');
```

## Training and recalling times

Here we run the standard program first and then test how long it takes to run recalls and to test recall success.

```
hypercolumns = 4
minicolumns = 10
epochs = 3

manager = run_standard_program(hypercolumns, minicolumns, epochs)
```

#### Recall only

```
T_recall_range = np.arange(3, 20, 1)

time_recall = []
for T_recall in T_recall_range:
    function = wrapper(training_program, manager=manager, T_recall=T_recall)
    time = timeit.timeit(function, number=1)
    time_recall.append(time)

# Plot
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.plot(T_recall_range, time_recall, '*-', markersize=14)
ax.set_xlabel('T_recall')
ax.set_ylabel('Seconds that the program took to run')
ax.set_title('Normal recall profile')
plt.show()
```

#### Success recall

```
T_recall_range = np.arange(3, 20, 1)

time_success = []
for T_recall in T_recall_range:
    function = wrapper(calculate_succes_program, manager=manager, T_recall=T_recall)
    time = timeit.timeit(function, number=1)
    time_success.append(time)

# Plot
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.plot(T_recall_range, time_success, '*-', markersize=14)
ax.plot(T_recall_range, time_recall, '*-', markersize=14)
ax.set_xlabel('T_recall')
ax.set_ylabel('Seconds that the program took to run')
ax.set_title('Recall Success profiling')
plt.show()
```
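As a sanity check of the timing pattern used throughout, the `wrapper` + `timeit` combination can be exercised on a toy function (ours, not part of the benchmarked code):

```python
import timeit

def wrapper(func, *args, **kwargs):
    # freeze the arguments so timeit gets a zero-argument callable
    def wrapped():
        return func(*args, **kwargs)
    return wrapped

def busy_sum(n):
    return sum(range(n))

timed = wrapper(busy_sum, 100_000)
seconds = timeit.timeit(timed, number=10)
print(seconds)  # wall-clock seconds for 10 calls
```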
<!--- <div style="text-align: center;"> <font size="5"> <b>Data-driven Design and Analyses of Structures and Materials (3dasm)</b> </font> </div> <br> </br> <div style="text-align: center;"> <font size="5"> <b>Lecture 1</b> </font> </div> <center> <img src=docs/tudelft_logo.jpg width=550px> </center> <div style="text-align: center;"> <font size="4"> <b>Miguel A. Bessa | <a href = "mailto: M.A.Bessa@tudelft.nl">M.A.Bessa@tudelft.nl</a> | Associate Professor</b> </font> </div> --> <img src=docs/tudelft_logo.jpg width=50%> ## Data-driven Design and Analyses of Structures and Materials (3dasm) ## Lecture 1 ### Miguel A. Bessa | <a href = "mailto: M.A.Bessa@tudelft.nl">M.A.Bessa@tudelft.nl</a> | Associate Professor ## Introduction **What:** A lecture of the "3dasm" course **Where:** This notebook comes from this [repository](https://github.com/bessagroup/3dasm_course) **Reference for entire course:** Murphy, Kevin P. *Probabilistic machine learning: an introduction*. MIT press, 2022. Available online [here](https://probml.github.io/pml-book/book1.html) **How:** We try to follow Murphy's book closely, but the sequence of Chapters and Sections is different. The intention is to use notebooks as an introduction to the topic and Murphy's book as a resource. * If working offline: Go through this notebook and read the book. * If attending class in person: listen to me (!) but also go through the notebook in your laptop at the same time. Read the book. * If attending lectures remotely: listen to me (!) via Zoom and (ideally) use two screens where you have the notebook open in 1 screen and you see the lectures on the other. Read the book. **Optional reference (the "bible" by the "bishop"... pun intended 😆) :** Bishop, Christopher M. *Pattern recognition and machine learning*. Springer Verlag, 2006. 
**References/resources to create this notebook:** * [Figure (Car stopping distance)](https://korkortonline.se/en/theory/reaction-braking-stopping/) * Snippets of code from this awesome [repo](https://github.com/gerdm/prml) by Gerardo Duran-Martin that replicates many figures in Bishop's book Apologies in advance if I missed some reference used in this notebook. Please contact me if that is the case, and I will gladly include it here. ## **OPTION 1**. Run this notebook **locally in your computer**: 1. Install miniconda3 [here](https://docs.conda.io/en/latest/miniconda.html) 2. Open a command window and create a virtual environment called "3dasm": ``` conda create -n 3dasm python=3 numpy scipy jupyter nb_conda matplotlib pandas scikit-learn rise tensorflow -c conda-forge ``` 3. Install [git](https://github.com/git-guides/install-git), open command window & clone the repository to your computer: ``` git clone https://github.com/bessagroup/3dasm_course ``` 4. Load jupyter notebook by typing in (anaconda) command window (it will open in your internet browser): ``` conda activate 3dasm jupyter notebook ``` 5. Open notebook (3dasm_course/Lectures/Lecture1/3dasm_Lecture1.ipynb) **Short note:** My personal environment also has other packages that help me while teaching. > conda install -n 3dasm -c conda-forge jupyter_contrib_nbextensions hide_code Then in the 3dasm conda environment: > jupyter nbextension install --py hide_code --sys-prefix > > jupyter nbextension enable --py hide_code > > jupyter serverextension enable --py hide_code > > jupyter nbextension enable splitcell/splitcell ## **OPTION 2**. Use **Google's Colab** (no installation required, but times out if idle): 1. go to https://colab.research.google.com 2. login 3. File > Open notebook 4. click on Github (no need to login or authorize anything) 5. paste the git link: https://github.com/bessagroup/3dasm_course 6. 
click search and then click on the notebook (*3dasm_course/Lectures/Lecture1/3dasm_Lecture1.ipynb*) ``` # Basic plotting tools needed in Python. import matplotlib.pyplot as plt # import plotting tools to create figures import numpy as np # import numpy to handle a lot of things! %config InlineBackend.figure_format = "retina" # render higher resolution images in the notebook plt.style.use("seaborn") # style for plotting that comes from seaborn plt.rcParams["figure.figsize"] = (8,4) # rescale figure size appropriately for slides ``` ## Outline for today * Introduction - Taking a probabilistic perspective on machine learning * Basics of univariate statistics - Continuous random variables - Probabilities vs probability densities - Moments of a probability distribution * The mindblowing Bayes' rule - The rule that spawns almost every ML model (even when we don't realize it) **Reading material**: This notebook + Chapter 2 until Section 2.3 ## Get hyped about Artificial Intelligence... ``` from IPython.display import display, YouTubeVideo, HTML YouTubeVideo('RNnZwvklwa8', width=512, height=288) # show that slides are interactive: # rescale video to 768x432 and back to 512x288 ``` **Well...** This class *might* not make you break the world (yet!). Let's focus on the fundamentals: * Probabilistic perspective on machine learning * Supervised learning (especially regression) ## Machine learning (ML) * **ML definition**: A computer program that learns from experience $E$ wrt tasks $T$ such that the performance $P$ at those tasks improves with experience $E$. * We'll treat ML from a **probabilistic perspective**: - Treat all unknown quantities as **random variables** * What are random variables? - Variables endowed with probability distributions! 
## The car stopping distance problem <img src="docs/reaction-braking-stopping.svg" title="Car stopping distance" width="50%" align="right"> <br></br> Car stopping distance ${\color{red}y}$ as a function of its velocity ${\color{green}x}$ before it starts braking: ${\color{red}y} = {\color{blue}z} x + \frac{1}{2\mu g} {\color{green}x}^2 = {\color{blue}z} x + 0.1 {\color{green}x}^2$ - ${\color{blue}z}$ is the driver's reaction time (in seconds) - $\mu$ is the road/tires coefficient of friction (assume $\mu=0.5$) - $g$ is the acceleration of gravity (assume $g=10$ m/s$^2$). ## The car stopping distance problem ### How to obtain this formula? $y = d_r + d_{b}$ where $d_r$ is the reaction distance, and $d_b$ is the braking distance. ### Reaction distance $d_r$ $d_r = z x$ with $z$ being the driver's reaction time, and $x$ being the velocity of the car at the start of braking. ## The car stopping distance problem ### Braking distance $d_b$ Kinetic energy of moving car: $E = \frac{1}{2}m x^2$ &nbsp; &nbsp; &nbsp; where $m$ is the car mass. Work done by braking: $W = \mu m g d_b$ &nbsp; &nbsp; &nbsp; where $\mu$ is the coefficient of friction between the road and the tire, $g$ is the acceleration of gravity, and $d_b$ is the car braking distance. 
The braking distance follows from $E=W$:

$d_b = \frac{1}{2\mu g}x^2$

Therefore, if we add the reaction distance $d_r$ to the braking distance $d_b$ we get the stopping distance $y$:

$$y = d_r + d_b = z x + \frac{1}{2\mu g} x^2$$

## The car stopping distance problem

<img src="docs/reaction-braking-stopping.svg" title="Car stopping distance" width="25%" align="right">

$y = {\color{blue}z} x + 0.1 x^2$

The driver's reaction time ${\color{blue}z}$ is a **random variable (rv)**

* Every driver has their own reaction time $z$
* Assume the distribution associated to $z$ is Gaussian with **mean** $\mu_z=1.5$ seconds and **standard deviation** $\sigma_z=0.5$ seconds

$$ z \sim \mathcal{N}(\mu_z=1.5,\sigma_z^2=0.5^2) $$

where $\sim$ means "sampled from", and $\mathcal{N}$ indicates a Gaussian **probability density function (pdf)**

## Univariate Gaussian <a title="probability density function">pdf</a>

The Gaussian <a title="probability density function">pdf</a> is defined as:

$$ \mathcal{N}(z | \mu_z, \sigma_z^2) = \frac{1}{\sqrt{2\pi\sigma_z^2}}e^{-\frac{1}{2\sigma_z^2}(z - \mu_z)^2} $$

Alternatively, we can write it using the **precision** term $\lambda_z := 1 / \sigma_z^2$ instead of using $\sigma_z^2$:

$$ \mathcal{N}(z | \mu_z, \lambda_z^{-1}) = \frac{\lambda_z^{1/2}}{\sqrt{2\pi}}e^{-\frac{\lambda_z}{2}(z - \mu_z)^2} $$

Anyway, recall what this <a title="probability density function">pdf</a> looks like...
``` def norm_pdf(z, mu_z, sigma_z2): return 1 / np.sqrt(2 * np.pi * sigma_z2) * np.exp(-(z - mu_z)**2 / (2 * sigma_z2)) zrange = np.linspace(-8, 4, 200) # create a list of 200 z points between z=-8 and z=4 fig, ax = plt.subplots() # create a plot ax.plot(zrange, norm_pdf(zrange, 0, 1), label=r"$\mu_z=0; \ \sigma_z^2=1$") # plot norm_pdf(z|0,1) ax.plot(zrange, norm_pdf(zrange, 1.5, 0.5**2), label=r"$\mu_z=1.5; \ \sigma_z^2=0.5^2$") # plot norm_pdf(z|1.5,0.5^2) ax.plot(zrange, norm_pdf(zrange, -1, 2**2), label=r"$\mu_z=-1; \ \sigma_z^2=2^2$") # plot norm_pdf(z|-1,2^2) ax.set_xlabel("z", fontsize=20) # create x-axis label with font size 20 ax.set_ylabel("probability density", fontsize=20) # create y-axis label with font size 20 ax.legend(fontsize=15) # create legend with font size 15 ax.set_title("Three different Gaussian pdfs", fontsize=20); # create title with font size 20 ``` The <span style="color:green">green</span> curve shows the Gaussian <a title="probability density function">pdf</a> of the <a title="random variable">rv</a> $z$ **conditioned** on the mean $\mu_z=1.5$ and variance $\sigma_z^2=0.5^2$ for the car stopping distance problem. ## Univariate Gaussian <a title="probability density function">pdf</a> $$ p(z) = \mathcal{N}(z | \mu_z, \sigma_z^2) = \frac{1}{\sqrt{2\pi\sigma_z^2}}e^{-\frac{1}{2\sigma_z^2}(z - \mu_z)^2} $$ The output of this expression is the **PROBABILITY DENSITY** of $z$ **given** (or conditioned to) a particular $\mu_z$ and $\sigma_z^2$. * **Important**: Probability Density $\neq$ Probability So, what is a probability? ## Probability The probability of an event $A$ is denoted by $\text{Pr}(A)$. * $\text{Pr}(A)$ means the probability with which we believe event A is true * An event $A$ is a binary variable saying whether or not some state of the world holds. Probability is defined such that: $0 \leq \text{Pr}(A) \leq 1$ where $\text{Pr}(A)=1$ if the event will definitely happen and $\text{Pr}(A)=0$ if it definitely will not happen. 
## Joint probability **Joint probability** of two events: $\text{Pr}(A \wedge B)= \text{Pr}(A, B)$ If $A$ and $B$ are **independent**: $\text{Pr}(A, B)= \text{Pr}(A) \text{Pr}(B)$ For example, suppose $z_1$ and $z_2$ are chosen uniformly at random from the set $\mathcal{Z} = \{1, 2, 3, 4\}$. Let $A$ be the event that $z_1 \in \{1, 2\}$ and $B$ be the event that **another** <a title="random variable">rv</a> denoted as $z_2 \in \{3\}$. Then we have: $\text{Pr}(A, B) = \text{Pr}(A) \text{Pr}(B) = \frac{1}{2} \cdot \frac{1}{4}$. ## Probability of a union of two events Probability of event $A$ or $B$ happening is: $\text{Pr}(A \vee B)= \text{Pr}(A) + \text{Pr}(B) - \text{Pr}(A \wedge B)$ If these events are mutually exclusive (they can't happen at the same time): $$ \text{Pr}(A \vee B)= \text{Pr}(A) + \text{Pr}(B) $$ For example, suppose an <a title="random variable">rv</a> denoted as $z_1$ is chosen uniformly at random from the set $\mathcal{Z} = \{1, 2, 3, 4\}$. Let $A$ be the event that $z_1 \in \{1, 2\}$ and $B$ be the event that the **same** <a title="random variable">rv</a> $z_1 \in \{3\}$. Then we have $\text{Pr}(A \vee B) = \frac{2}{4} + \frac{1}{4}$. ## Conditional probability of one event given another We define the **conditional probability** of event $B$ happening given that $A$ has occurred as follows: $$ \text{Pr}(B | A)= \frac{\text{Pr}(A,B)}{\text{Pr}(A)} $$ This is not defined if $\text{Pr}(A) = 0$, since we cannot condition on an impossible event. ## Conditional independence of one event given another We say that event $A$ is conditionally independent of event $B$ if we have $\text{Pr}(A | B)= \text{Pr}(A)$ This implies $\text{Pr}(B|A) = \text{Pr}(B)$. Hence, the joint probability becomes $\text{Pr}(A, B) = \text{Pr}(A) \text{Pr}(B)$ The book uses the notation $A \perp B$ to denote this property. 
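The joint-probability example above can be verified by brute-force enumeration (a quick check, not part of the lecture slides):

```python
from itertools import product

# z1 and z2 uniform on {1, 2, 3, 4}; A is the event z1 in {1, 2}, B is z2 == 3
outcomes = list(product([1, 2, 3, 4], repeat=2))

pr_A = sum(1 for z1, z2 in outcomes if z1 in (1, 2)) / len(outcomes)
pr_B = sum(1 for z1, z2 in outcomes if z2 == 3) / len(outcomes)
pr_AB = sum(1 for z1, z2 in outcomes if z1 in (1, 2) and z2 == 3) / len(outcomes)

pr_B_given_A = pr_AB / pr_A  # conditional probability Pr(B | A)

print(pr_A, pr_B, pr_AB)  # 0.5 0.25 0.125, so Pr(A, B) = Pr(A) Pr(B)
```

Since the events are independent, `pr_B_given_A` comes out equal to `pr_B`, as the conditional-probability definition predicts.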
## Coming back to our car stopping distance problem <img src="docs/reaction-braking-stopping.svg" title="Car stopping distance" width="25%" align="right"> $y = {\color{blue}z} x + 0.1 x^2$ where $z$ is a **continuous** <a title="random variable">rv</a> such that $z \sim \mathcal{N}(\mu_z=1.5,\sigma_z^2=0.5^2)$. * What is the probability of an event $Z$ defined by a reaction time $z \leq 0.52$ seconds? $$ \text{Pr}(Z)=\text{Pr}(z \leq 0.52)= P(z=0.52) $$ where $P(z)$ denotes the **cumulative distribution function (cdf)**. Note that <a title="cumulative distribution function">cdf</a> is denoted with a capital $P$. Likewise, we can compute the probability of being in any interval as follows: $\text{Pr}(a \leq z \leq b)= P(z=b)-P(z=a)$ * But how do we compute the cdf at a particular value $b$, e.g. $P(z=b)$? ## <a title="Cumulative distribution functions">Cdf's</a> result from <a title="probability density functions">pdf's</a> A <a title="probability density functions">pdf</a> $p(z)$ is defined as the derivative of the <a title="cumulative distribution functions">cdf</a> $P(z)$: $$ p(z)=\frac{d}{d z}P(z) $$ So, given a <a title="probability density function">pdf</a> $p(z)$, we can compute the following probabilities: $$\text{Pr}(z \leq b)=\int_{-\infty}^b p(z) dz = P(b)$$ $$\text{Pr}(z \geq a)=\int_a^{\infty} p(z) dz = 1 - P(a)$$ $$\text{Pr}(a \leq z \leq b)=\int_a^b p(z) dz = P(b) - P(a)$$ **IMPORTANT**: $\int_{-\infty}^{\infty} p(z) dz = 1$ ### Some notes about <a title="probability density functions">pdf's</a> The integration to unity is important! $$\int_{-\infty}^{\infty} p(z) dz = 1$$ **Remember:** the integral of a <a title="probability density function">pdf</a> leads to a probability, and probabilities cannot be larger than 1. 
For example, from this property we can derive the following: $$ \int_{-\infty}^{\infty} p(z) dz = \int_{-\infty}^{a} p(z) dz + \int_{a}^{\infty} p(z) dz $$ $$ \Rightarrow \text{Pr}(z \geq a)= 1 - \text{Pr}(z \leq a) = 1 - \text{P}(a) = 1 - \int_{-\infty}^a p(z) dz $$ In some cases we will work with probability distributions that are **unnormalized**, so this comment is important! * Being unnormalized means that the probability density of the distribution does not integrate to 1. * In this case, we cannot call such function a <a title="probability density function">pdf</a>, even though its output is a probability density. ## <a title="Cumulative distribution functions">Cdf's</a> result from <a title="probability density functions">pdf's</a> Key point? * Given a <a title="probability density function">pdf</a> $p(z)$, we can compute the probability of a continuous <a title="random variable">rv</a> $z$ being in a finite interval as follows: $$ \text{Pr}(a \leq z \leq b)=\int_a^b p(z) dz = P(b) - P(a) $$ As the size of the interval gets smaller, we can write $$ \text{Pr}\left(z - \frac{dz}{2} \leq z \leq z + \frac{dz}{2}\right) \approx p(z) dz $$ Intuitively, this says the probability of $z$ being in a small interval around $z$ is the density at $z$ times the width of the interval. 
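A short sketch of the normalization point, using a hypothetical unnormalized density $f(z) = e^{-z^2}$ (chosen only for illustration — it integrates to $\sqrt{\pi}$, not 1):

```python
import numpy as np
from scipy.integrate import quad

# An unnormalized density: f(z) = exp(-z^2) is not a pdf as written
f = lambda z: np.exp(-z**2)

Zc, _ = quad(f, -np.inf, np.inf)   # normalizing constant
assert np.isclose(Zc, np.sqrt(np.pi))

p = lambda z: f(z) / Zc            # now a proper pdf
total, _ = quad(p, -np.inf, np.inf)
assert np.isclose(total, 1.0)

# Complement rule: Pr(z >= a) = 1 - Pr(z <= a)
a = 0.3
left, _ = quad(p, -np.inf, a)
right, _ = quad(p, a, np.inf)
assert np.isclose(right, 1.0 - left)
```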
```
import numpy as np                 # needed for linspace below
import matplotlib.pyplot as plt    # needed for the subplots below
from scipy.stats import norm       # import from scipy.stats the normal distribution

zrange = np.linspace(-3, 3, 100)   # 100 values for plot
fig_std_norm, (ax1, ax2) = plt.subplots(1, 2)  # create a plot with 2 subplots side-by-side

ax1.plot(zrange, norm.cdf(zrange, 0, 1), label=r"$\mu_z=0; \ \sigma_z=1$")  # plot cdf of standard normal
ax1.set_xlabel("z", fontsize=20)
ax1.set_ylabel("probability", fontsize=20)
ax1.legend(fontsize=15)
ax1.set_title("Standard Gaussian cdf", fontsize=20)

ax2.plot(zrange, norm.pdf(zrange, 0, 1), label=r"$\mu_z=0; \ \sigma_z=1$")  # plot pdf of standard normal
ax2.set_xlabel("z", fontsize=20)
ax2.set_ylabel("probability density", fontsize=20)
ax2.legend(fontsize=15)
ax2.set_title("Standard Gaussian pdf", fontsize=20)

fig_std_norm.set_size_inches(25, 5)  # scale figure to be wider (since there are 2 subplots)
```

## Note about scipy.stats

[scipy](https://docs.scipy.org/doc/scipy/index.html) is open-source software for mathematics, science, and engineering. It's brilliant and widely used for many things!

**In particular**, [scipy.stats](https://docs.scipy.org/doc/scipy/reference/stats.html) is a module within scipy with very useful statistical functions and operations, so we don't need to code all the functions ourselves. That's why we are using it to plot the cdf and pdf of the Gaussian distribution from now on, and we will use it for other things later.

* In case you are interested, scipy.stats has a nice [tutorial](https://docs.scipy.org/doc/scipy/tutorial/stats.html)

## Coming back to our car stopping distance problem

<img src="docs/reaction-braking-stopping.svg" title="Car stopping distance" width="25%" align="right">

$y = {\color{blue}z} x + 0.1 x^2$ where $z$ is a continuous <a title="random variable">rv</a> such that $p(z)= \mathcal{N}(z | \mu_z=1.5,\sigma_z^2=0.5^2)$.

* What is the probability of an event $Z$ defined by a reaction time $z \leq 0.52$ seconds?
$$ \text{Pr}(Z) = \text{Pr}(z \leq 0.52) = P(z=0.52) = \int_{-\infty}^{0.52} p(z) dz $$ ``` Pr_Z = norm.cdf(0.52, 1.5, 0.5) # using scipy norm.cdf(z=0.52 | mu_z=1.5, sigma_z=0.5) print("The probability of event Z is: Pr(Z) = ",round(Pr_Z,3)) z_value = 0.52 # z = 0.52 seconds zrange = np.linspace(0, 3, 200) # 200 values for plot fig_car_norm, (ax1, ax2) = plt.subplots(1, 2) # create subplot (two figures in 1) ax1.plot(zrange, norm.cdf(zrange, 1.5, 0.5), label=r"$\mu_z=1.5; \ \sigma_z=0.5$") # Figure 1 is cdf ax1.plot(z_value, norm.cdf(z_value, 1.5, 0.5), 'r*',markersize=15, linewidth=2, label=u'$P(z=0.52~|~\mu_z=1.5, \sigma_z^2=0.5^2)$') ax1.set_xlabel("z", fontsize=20) ax1.set_ylabel("probability", fontsize=20) ax1.legend(fontsize=15) ax1.set_title("Gaussian cdf of $z$ for car problem", fontsize=20) ax2.plot(zrange, norm.pdf(zrange, 1.5, 0.5), label=r"$\mu_z=1.5; \ \sigma_z=0.5$") # figure 2 is pdf ax2.plot(z_value, norm.pdf(z_value, 1.5, 0.5), 'r*', markersize=15, linewidth=2, label=u'$p(z=0.52~|~\mu_z=1.5, \sigma_z^2=0.5^2)$') ax2.set_xlabel("z", fontsize=20) ax2.set_ylabel("probability density", fontsize=20) ax2.legend(fontsize=15) ax2.set_title("Gaussian pdf of $z$ for car problem", fontsize=20) fig_car_norm.set_size_inches(25, 5) # scale figure to be wider (since there are 2 subplots) ``` ### Why is the Gaussian distribution so widely used? Several reasons: 1. It has two parameters which are easy to interpret, and which capture some of the most basic properties of a distribution, namely its mean and variance. 2. The central limit theorem (Sec. 2.8.6 of the book) tells us that sums of independent random variables have an approximately Gaussian distribution, making it a good choice for modeling residual errors or “noise”. 3. The Gaussian distribution makes the least number of assumptions (has maximum entropy), subject to the constraint of having a specified mean and variance (Sec. 3.4.4 of the book); this makes it a good default choice in many cases. 4. 
It has a simple mathematical form, which results in easy to implement, but often highly effective, methods. ## Car stopping distance problem <img src="docs/reaction-braking-stopping.svg" title="Car stopping distance" width="25%" align="right"> $y = {\color{blue}z} x + 0.1 x^2$ where $z$ is a continuous <a title="random variable">rv</a> such that $z \sim \mathcal{N}(\mu_z=1.5,\sigma_z^2=0.5^2)$. * What is the **expected** value for the reaction time $z$? This is not a trick question! It's the mean $\mu_z$, of course! * But how do we compute the expected value for any distribution? ## Moments of a distribution ### First moment: Expected value or mean The expected value (mean) of a distribution is the **first moment** of the distribution: $$ \mathbb{E}[z]= \int_{\mathcal{Z}}z p(z) dz $$ where $\mathcal{Z}$ indicates the support of the distribution (the $z$ domain). * Often, $\mathcal{Z}$ is omitted as it is usually between $-\infty$ to $\infty$ * The expected value $\mathbb{E}[z]$ is often denoted by $\mu_z$ As you might expect (pun intended 😆), the expected value is a linear operator: $$ \mathbb{E}[az+b]= a\mathbb{E}[z] + b $$ where $a$ and $b$ are fixed variables (NOT rv's). Additionally, for a set of $n$ rv's, one can show that the expectation of their sum is as follows: $\mathbb{E}\left[\sum_{i=1}^n z_i\right]= \sum_{i=1}^n \mathbb{E}[z_i]$ If they are **independent**, the expectation of their product is given by $\mathbb{E}\left[\prod_{i=1}^n z_i\right]= \prod_{i=1}^n \mathbb{E}[z_i]$ ## Moments of a distribution ### Second moment (and relation to Variance) The 2nd moment of a distribution $p(z)$ is: $$ \mathbb{E}[z^2]= \int_{\mathcal{Z}}z^2 p(z) dz $$ #### Variance can be obtained from the 1st and 2nd moments The variance is a measure of the “spread” of the distribution: $$ \mathbb{V}[z] = \mathbb{E}[(z-\mu_z)^2] = \int (z-\mu_z)^2 p(z) dz = \mathbb{E}[z^2] - \mu_z^2 $$ * It is often denoted by the square of the standard deviation, i.e. 
$\sigma_z^2 = \mathbb{V}[z] = \mathbb{E}[(z-\mu_z)^2]$ #### Elaboration of the variance as a result of the first two moments of a distribution $$ \begin{align} \mathbb{V}[z] & = \mathbb{E}[(z-\mu_z)^2] \\ & = \int (z-\mu_z)^2 p(z) dz \\ & = \int z^2 p(z) dz + \mu_z^2 \int p(z) dz - 2\mu_z \int zp(z) dz \\ & = \mathbb{E}[z^2] - \mu_z^2 \end{align} $$ where $\mu_z = \mathbb{E}[z]$ is the first moment, and $\mathbb{E}[z^2]$ is the second moment. Therefore, we can also write the second moment of a distribution as $$\mathbb{E}[z^2] = \sigma_z^2 + \mu_z^2$$ #### Variance and standard deviation properties The standard deviation is defined as $ \sigma_z = \text{std}[z] = \sqrt{\mathbb{V}[z]}$ The variance of a shifted and scaled version of a random variable is given by $\mathbb{V}[a z + b] = a^2\mathbb{V}[z]$ where $a$ and $b$ are fixed variables (NOT rv's). If we have a set of $n$ independent rv's, the variance of their sum is given by the sum of their variances $$ \mathbb{V}\left[\sum_{i=1}^n z_i\right] = \sum_{i=1}^n \mathbb{V}[z_i] $$ The variance of their product can also be derived, as follows: $$ \begin{align} \mathbb{V}\left[\prod_{i=1}^n z_i\right] & = \mathbb{E}\left[ \left(\prod_i z_i\right)^2 \right] - \left( \mathbb{E}\left[\prod_i z_i \right]\right)^2\\ & = \mathbb{E}\left[ \prod_i z_i^2 \right] - \left( \prod_i\mathbb{E}\left[ z_i \right]\right)^2\\ & = \prod_i \mathbb{E}\left[ z_i^2 \right] - \prod_i\left( \mathbb{E}\left[ z_i \right]\right)^2\\ & = \prod_i \left( \mathbb{V}\left[ z_i \right] +\left( \mathbb{E}\left[ z_i \right]\right)^2 \right)- \prod_i\left( \mathbb{E}\left[ z_i \right]\right)^2\\ & = \prod_i \left( \sigma_{z,\,i}^2 + \mu_{z,\,i}^2 \right)- \prod_i\mu_{z,\,i}^2 \\ \end{align} $$ ## Note about higher-order moments * The $k$-th moment of a distribution $p(z)$ is defined as the expected value of the $k$-th power of $z$, i.e. 
$z^k$: $$ \mathbb{E}[z^k]= \int_{\mathcal{Z}}z^k p(z) dz $$ ## Mode of a distribution The mode of an <a title="random variable">rv</a> $z$ is the value of $z$ for which $p(z)$ is maximum. Formally, this is written as, $$ \mathbf{z}^* = \underset{z}{\mathrm{argmax}}~p(z)$$ If the distribution is multimodal, this may not be unique: * That's why $\mathbf{z}^*$ is in **bold**, to denote that in general it is a vector that is retrieved! * However, if the distribution is unimodal (one maximum), like the univariate Gaussian distribution, then it retrieves a scalar $z^*$ Note that even if there is a unique mode, this point may not be a good summary of the distribution. ## Mean vs mode for a non-symmetric distribution ``` # 1. Create a gamma pdf with parameter a = 2.0 from scipy.stats import gamma # import from scipy.stats the Gamma distribution a = 2.0 # this is the only input parameter needed for this distribution # Define the support of the distribution (its domain) by using the # inverse of the cdf (called ppf) to get the lowest z of the plot that # corresponds to Pr = 0.01 and the highest z of the plot that corresponds # to Pr = 0.99: zrange = np.linspace(gamma.ppf(0.01, a), gamma.ppf(0.99, a), 200) mu_z, var_z = gamma.stats(2.0, moments='mv') # This computes the mean and variance of the pdf fig_gamma_pdf, ax = plt.subplots() # a trick to save the figure for later use ax.plot(zrange, gamma.pdf(zrange, a), label=r"$\Gamma(z|a=2.0)$") ax.set_xlabel("z", fontsize=20) ax.set_ylabel("probability density", fontsize=20) ax.legend(fontsize=15) ax.set_title("Gamma pdf for $a=2.0$", fontsize=20) plt.close(fig_gamma_pdf) # do not plot the figure now. We will show it in a later cell # 2. Plot the expected value (mean) for this pdf ax.plot(mu_z, gamma.pdf(mu_z, a), 'r*', markersize=15, linewidth=2, label=u'$\mu_z = \mathbb{E}[z]$') # 3. 
Calculate the mode and plot it from scipy.optimize import minimize # import minimizer # Finding the maximum of the gamma pdf can be done by minimizing # the negative gamma pdf. So, we create a function that outputs # the negative of the gamma pdf given the parameter a=2.0: def neg_gamma_given_a(z): return -gamma.pdf(z,a) # Use the default optimizer of scipy (L-BFGS) to find the # maximum (by minimizing the negative gamma pdf). Note # that we need to give an initial guess for the value of z, # so we can use, for example, z=mu_z: mode_z = minimize(neg_gamma_given_a,mu_z).x ax.plot(mode_z, np.max(gamma.pdf(mode_z, a)),'g^', markersize=15, linewidth=2,label=u'mode $\mathbf{z}^*=\mathrm{argmax}~p(z)$') ax.legend() # show legend # Code to generate this Gamma distribution hidden during presentation (it's shown as notes) print('The mean is ',mu_z) # print the mean calculated for this gamma pdf print('The mode is approximately ',mode_z) # print the mode fig_gamma_pdf # show figure of this gamma pdf ``` ## The amazing Bayes' rule <font color='red'>Bayesian</font> <font color='blue'>inference</font> definition: * <font color='blue'>Inference</font> means “the act of passing from sample data to generalizations, usually with calculated degrees of certainty”. * <font color='red'>Bayesian</font> is used to refer to inference methods that represent “degrees of certainty” using probability theory, and which leverage Bayes’ rule to update the degree of certainty given data. **Bayes’ rule** is a formula for computing the probability distribution over possible values of an unknown (or hidden) quantity $z$ given some observed data $y$: $$ p(z|y) = \frac{p(y|z) p(z)}{p(y)} $$ Bayes' rule follows automatically from the identity: $p(z|y) p(y) = p(y|z) p(z) = p(y,z) = p(z,y)$ ## The amazing Bayes' rule * I know... You don't find it very amazing (yet!). 
* Wait until you realize that almost all ML methods can be derived from this simple formula $$ p(z|y) = \frac{p(y|z) p(z)}{p(y)} $$ ### See you next class Have fun!
<a href="https://colab.research.google.com/github/CrucifierBladex/cifar10_convnet/blob/main/convnet_cifar10.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` from keras import layers,models from tensorflow.python.client import device_lib device_lib.list_local_devices() model=models.Sequential() model.add(layers.Conv2D(32,(3,3),activation='relu',padding='same',input_shape=(32,32,3))) model.add(layers.MaxPooling2D((2,2),padding='same')) model.add(layers.Conv2D(64,(3,3),activation='relu',padding='same')) model.add(layers.MaxPooling2D((2,2),padding='same')) model.add(layers.Conv2D(128,(3,3),activation='relu',padding='same')) model.add(layers.MaxPooling2D((2,2),padding='same')) model.add(layers.Conv2D(128,(3,3),activation='relu',padding='same')) model.add(layers.MaxPooling2D((2,2),padding='same')) model.add(layers.GlobalAveragePooling2D()) model.add(layers.Flatten()) model.add(layers.Dense(64,activation='relu')) model.add(layers.BatchNormalization()) model.add(layers.Dense(10,activation='softmax')) import keras dot_img_file = '/content/sample_data/model/model_1.png' keras.utils.plot_model(model, to_file=dot_img_file, show_shapes=True) from keras.datasets import cifar10 from keras.utils import to_categorical (train_images,train_labels),(test_images,test_labels)=cifar10.load_data() train_images=train_images.reshape((50000,32,32,3)) train_images=train_images.astype('float32')/255 test_images=test_images.reshape((10000,32,32,3)) test_images=test_images.astype('float32')/255 train_labels=to_categorical(train_labels) test_labels=to_categorical(test_labels) model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy']) history=model.fit(train_images,train_labels,epochs=30,batch_size=32,validation_data=(test_images,test_labels)) history.history.keys() import pandas as pd his=pd.DataFrame(history.history) import matplotlib.pyplot as plt import seaborn as sns sns.set(style='darkgrid') 
his.plot(y='loss') his # summarize history for accuracy plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() # summarize history for loss plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() import imageio,cv2 import matplotlib.pyplot as plt img=imageio.imread('/content/horse.jpeg') img=cv2.resize(img,(32,32)) plt.imshow(img) img=img.reshape(1,32,32,3) img=img/255 model.predict(img) import numpy as np np.argmax(model.predict(img)) model.save('cifar10class.h5') ```
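For reference, the preprocessing applied to the horse image before `model.predict` — scale to [0, 1] and add a batch dimension — can be sketched on a synthetic image; the crop below stands in for `cv2.resize` only to keep the sketch dependency-free:

```python
import numpy as np

# Synthetic stand-in for an RGB photo (any HxWx3 uint8 array)
img = np.random.randint(0, 256, size=(64, 48, 3), dtype=np.uint8)

# The notebook resizes with cv2.resize(img, (32, 32)); here we just crop
img = img[:32, :32, :]

# Scale to [0, 1] and add the batch dimension the model expects
x = img.astype("float32") / 255.0
x = x.reshape(1, 32, 32, 3)

assert x.shape == (1, 32, 32, 3)
assert 0.0 <= x.min() and x.max() <= 1.0
```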
#### Information About the Data

u.data -- The full u data set, 100000 ratings by 943 users on 1682 items. Each user has rated at least 20 movies. Users and items are numbered consecutively from 1. The data is randomly ordered. This is a tab separated list of user id | item id | rating | timestamp. The time stamps are unix seconds since 1/1/1970 UTC.

u.info -- The number of users, items, and ratings in the u data set.

u.item -- Information about the items (movies); this is a tab separated list of movie id | movie title | release date | video release date | IMDb URL | unknown | Action | Adventure | Animation | Children's | Comedy | Crime | Documentary | Drama | Fantasy | Film-Noir | Horror | Musical | Mystery | Romance | Sci-Fi | Thriller | War | Western. The last 19 fields are the genres; a 1 indicates the movie is of that genre, a 0 indicates it is not; movies can be in several genres at once. The movie ids are the ones used in the u.data data set.

u.genre -- A list of the genres.

u.user -- Demographic information about the users; this is a tab separated list of user id | age | gender | occupation | zip code. The user ids are the ones used in the u.data data set.

u.occupation -- A list of the occupations.

u1.base, u1.test through u5.base, u5.test -- The data sets u1.base and u1.test through u5.base and u5.test are 80%/20% splits of the u data into training and test data. Each of u1, …, u5 has a disjoint test set; this is for 5-fold cross validation (where you repeat your experiment with each training and test set and average the results). These data sets can be generated from u.data by mku.sh.

ua.base, ua.test, ub.base, ub.test -- The data sets ua.base, ua.test, ub.base, and ub.test split the u data into a training set and a test set with exactly 10 ratings per user in the test set. The sets ua.test and ub.test are disjoint. These data sets can be generated from u.data by mku.sh.
allbut.pl -- The script that generates training and test sets where all but n of a users ratings are in the training data. mku.sh -- A shell script to generate all the u data sets from u.data. ``` import pandas as pd import sklearn as sk import matplotlib.pyplot as plt import numpy as np import seaborn as sns # info about the dataset info_path = "data/u.info" pd.read_csv(info_path, sep=' ', names=['Number','Info']) # load ratings data data_path = "data/u.data" # This is a tab separated list of user id | item id | rating | timestamp. The time stamps are unix seconds since 1/1/1970 UTC. columns_name=['user_id','item_id','rating','timestamp'] data=pd.read_csv(data_path,sep="\t",names=columns_name) print(data.head()) print(data.shape) # load the movies movie_path = "data/u.item" columns = "item_id | movie title | release date | video release date | IMDb URL | unknown | Action | Adventure | Animation | Children's | Comedy | Crime | Documentary | Drama | Fantasy | Film-Noir | Horror | Musical | Mystery | Romance | Sci-Fi | Thriller | War | Western" list_cols = columns.split(" | ") movies = pd.read_csv(movie_path, sep="\|", names=list_cols) movies # load the users users_path = "data/u.user" columns = "user_id | age | gender | occupation | zip code" list_cols = columns.split(" | ") users = pd.read_csv(users_path, sep="\|", names=list_cols) users data = pd.merge(data,movies,on="item_id") data data = pd.merge(data,users, on="user_id") data np.sort(data["rating"].unique()) data = data.drop(["video release date","IMDb URL"], axis=1) data data.isna().sum() data[data.isnull().any(axis=1)] data = data.dropna() data data.groupby("movie title").count()['rating'].sort_values(ascending=False) ratings = pd.DataFrame(data.groupby("movie title").mean()['rating']) ratings['number of ratings'] = pd.DataFrame(data.groupby("movie title").count()['rating']) ratings plt.figure(figsize=(12,8)) plt.axes(xlabel = "Number of Ratings", ylabel= "Number of Movies") plt.hist(ratings['number of 
ratings'], bins=70) plt.show() plt.figure(figsize=(12,8)) plt.axes(xlabel = "Average Rating", ylabel= "Number of Movies") plt.hist(ratings['rating'], bins=70) plt.show() sns.distplot(ratings['rating'], hist=True, kde=True, bins=70) sns.jointplot(x='rating',y='number of ratings',data=ratings,alpha=0.5) moviematrix = data.pivot_table(index="user_id", columns="movie title", values="rating") moviematrix starwars_user_ratings = moviematrix["Star Wars (1977)"] starwars_user_ratings sim_to_starwars = moviematrix.corrwith(starwars_user_ratings) sim_to_starwars.sort_values(ascending=False) corr_to_starwars = pd.DataFrame(sim_to_starwars, columns=['Correlation']) corr_to_starwars.sort_values(by=['Correlation']) # drop NaN values corr_to_starwars.dropna(inplace=True) corr_to_starwars.sort_values(by='Correlation',ascending=False) corr_to_starwars = corr_to_starwars.join(ratings['number of ratings']) corr_to_starwars corr_to_starwars[corr_to_starwars['number of ratings']>100].sort_values(by='Correlation', ascending=False) def MovieRecommendations(movie): user_ratings = moviematrix[movie] sim_to_movie = moviematrix.corrwith(user_ratings) corr_to_movie = pd.DataFrame(sim_to_movie, columns=['Correlation']) corr_to_movie.dropna(inplace=True) corr_to_movie = corr_to_movie.join(ratings['number of ratings']) return corr_to_movie[corr_to_movie['number of ratings']>100].sort_values(by='Correlation', ascending=False)[1:11] MovieRecommendations("Monty Python's Life of Brian (1979)") data['release date'] = pd.to_datetime(data['release date']) data[data['release date'] == data['release date'].min()]["movie title"].unique() ```
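The item-correlation idea behind `MovieRecommendations` — pivot to a user-by-movie matrix, then correlate every column with the target movie — can be illustrated on a tiny synthetic ratings table (the users, titles, and ratings below are made up):

```python
import numpy as np
import pandas as pd

# A tiny synthetic ratings table in the same shape as the merged data above
toy = pd.DataFrame({
    "user_id":     [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "movie title": ["A", "B", "C"] * 3,
    "rating":      [5, 4, 1, 4, 5, 2, 2, 1, 5],
})

# Same pivot as the notebook: users as rows, movies as columns
matrix = toy.pivot_table(index="user_id", columns="movie title", values="rating")

# Correlate every movie's ratings with movie "A"
sims = matrix.corrwith(matrix["A"]).sort_values(ascending=False)

assert np.isclose(sims["A"], 1.0)  # a movie correlates perfectly with itself
assert sims["B"] > sims["C"]       # B's ratings track A's more closely than C's
```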
# Purpose The purpose of this notebook is to train and export the model configuration selected from previous hyperparameter analysis. The following are the optimal parameters. Other parameter alignments are also stored in order to be able to compare different iterations of the model **Parameters Selected**: * **Embedding**: Glove * **Stopwords**: False * **Lemmatization**: False * **LSTM Stack**: True * **Hidden Dimension - Layer 1**: 32 * **Hidden Dimension - Layer 2**: 128 * **Dropout** : True * **Dropout Rate**: 0.5 * **Sample Weights**: False * **Trainable**: True * **Time Distributed Output Layer**: True * **Optimizer**: rmsprop ## Import ### Packages ``` # General import codecs, io, os, re, sys, time from collections import OrderedDict from scipy.stats import uniform from tqdm import tqdm # Analysis import numpy as np import pandas as pd from sklearn.metrics import \ accuracy_score, classification_report, confusion_matrix, \ precision_recall_fscore_support from sklearn.model_selection import \ ParameterGrid, RandomizedSearchCV, RepeatedStratifiedKFold from sklearn.utils.class_weight import compute_class_weight # Visual import matplotlib.pyplot as plt import seaborn as sn # Deep Learning import tensorflow as tf from keras.wrappers.scikit_learn import KerasClassifier from keras.callbacks import EarlyStopping from keras.layers.experimental.preprocessing import TextVectorization ``` ### Custom Functions ``` sys.path.append('*') from source_entity_extraction import * ``` ### Data The training data is imported and the necessary columns are converted to lists. 
``` #import data path_dir_data ="./../data/input/" file_training_data = 'training_data_dir_multiclass.xlsx' path_training_data = os.path.join(path_dir_data, file_training_data) dataset = pd.read_excel(path_training_data, engine='openpyxl') #convert into lists df = pd.DataFrame({ 'text': dataset.sentence, 'node1': dataset.node_1, 'node2': dataset.node_2 }) df.dropna(inplace = True) ``` ## Randomness To better control and compare results of the Entity Extraction model between the environments where the model is trained (Python) and where it will be implemented (R/Shiny), we will attempt to control any random actions by the process to maintain consistent results. ``` random_state = 5590 np.random.seed(random_state) tf.random.set_seed(random_state) ``` # Model Settings ## Define All Configurations ``` dct_alignments = { "initial": { "EMBEDDING": "text_vectorization", "MAX_LENGTH": 70, "LEMMATIZE": False, "STOP_WORDS": False, "LSTM_STACK": False, "HIDDEN_DIM_1": "", "HIDDEN_DIM_2": 3, "DROPOUT": True, "DROPOUT_RATE": 0.1, "SAMPLE_WEIGHTS": False, "TRAINABLE": "", "TIME_DISTRIBUTED": True, "OPTIMIZER": "rmsprop", "EPOCHS": 60, }, "optimal": { "EMBEDDING": "glove", "MAX_LENGTH": 50, "LEMMATIZE": False, "STOP_WORDS": False, "LSTM_STACK": True, "HIDDEN_DIM_1": 32, "HIDDEN_DIM_2": 128, "DROPOUT": True, "DROPOUT_RATE": 0.5, "SAMPLE_WEIGHTS": False, "TRAINABLE": True, "TIME_DISTRIBUTED": True, "OPTIMIZER": "rmsprop", "EPOCHS": 250, } } ``` ## Select Configuration ``` # Label CONFIGURATION_LABEL = "optimal" # Extract Values EMBEDDING = dct_alignments[CONFIGURATION_LABEL]["EMBEDDING"] MAX_LENGTH = dct_alignments[CONFIGURATION_LABEL]["MAX_LENGTH"] LEMMATIZE = dct_alignments[CONFIGURATION_LABEL]["LEMMATIZE"] STOP_WORDS = dct_alignments[CONFIGURATION_LABEL]["STOP_WORDS"] LSTM_STACK = dct_alignments[CONFIGURATION_LABEL]["LSTM_STACK"] HIDDEN_DIM_1 = dct_alignments[CONFIGURATION_LABEL]["HIDDEN_DIM_1"] HIDDEN_DIM_2 = dct_alignments[CONFIGURATION_LABEL]["HIDDEN_DIM_2"] DROPOUT =
dct_alignments[CONFIGURATION_LABEL]["DROPOUT"] DROPOUT_RATE = dct_alignments[CONFIGURATION_LABEL]["DROPOUT_RATE"] SAMPLE_WEIGHTS = dct_alignments[CONFIGURATION_LABEL]["SAMPLE_WEIGHTS"] TRAINABLE = dct_alignments[CONFIGURATION_LABEL]["TRAINABLE"] TIME_DISTRIBUTED = dct_alignments[CONFIGURATION_LABEL]["TIME_DISTRIBUTED"] OPTIMIZER = dct_alignments[CONFIGURATION_LABEL]["OPTIMIZER"] EPOCHS = dct_alignments[CONFIGURATION_LABEL]["EPOCHS"] ``` # Global Actions The following section defines global settings or performs actions that are consistent across the entirety of this notebook. ## Variables Variables that are used across multiple calls should be defined here. ``` # Global Settings MAX_FEATURES = 1000 BATCH_SIZE = 32 TIME_STAMP = time.strftime("%Y%m%d-%H%M%S") ``` ## Pre-Processing ### Text Processing ``` df = process_text( df, stopwords = STOP_WORDS, lemmatize = LEMMATIZE ) ``` ### Target Generation With the text processing complete we will now create two versions of the target set. The first will have the feature tokens converted into numerical representations for each class: 0, 1, 2. Then we will also create a target set that is a one-hot-encoded representation of the numerical classes. ``` df = target_gen_wrapper( df, max_length=MAX_LENGTH ) ``` ## Split Data into Training / Test Sets ``` from sklearn.model_selection import train_test_split df_train, df_test = train_test_split( df, test_size=0.25, random_state = random_state ) ``` Due to the different nature of random actions between R and Python, it is easier to export the test set than to duplicate the train/test split.
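Since the same `random_state` is passed to `train_test_split`, the split is reproducible, which is what makes exporting the test set safe. A minimal check on a toy frame (not the real dataset):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

toy = pd.DataFrame({"text": [f"sentence {i}" for i in range(20)]})

# Two splits with the same seed must select identical rows
tr1, te1 = train_test_split(toy, test_size=0.25, random_state=5590)
tr2, te2 = train_test_split(toy, test_size=0.25, random_state=5590)

assert list(te1.index) == list(te2.index)
assert len(te1) == 5  # 25% of 20 rows
```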
``` subfolder_a = "outputs" subfolder_entity = "entity_extraction" file_name = 'entity_extraction_test_set.csv' path = os.path.join(path_dir_data, subfolder_a, subfolder_entity, file_name) df_test.to_csv(path, index_label=False) ``` # Feature/Target Definition We need to define a target variable and perform preprocessing steps on the features before inputting into the model ``` # Features X_train = df_train['text'].tolist() X_test = df_test['text'].tolist() # Targets y_train_val = df_train['target_labels'].tolist() y_test_val = df_test['target_labels'].tolist() ``` ### Export Target Classes To simplify eventual work in R, we will generate the our target classes the test dataset and export it. ``` # Export Test Target Values # Set File Location file_name = 'entity_extraction_test_target_classes_cause_effect.csv' path = os.path.join(path_dir_data, subfolder_a, subfolder_entity, file_name) # Convert Lists to Dataframe df_targets_test = pd.DataFrame(y_test_val) # Export Target Values df_targets_test.to_csv(path, index=False, header=False) ``` # Build Model ## Vectorization Layer ``` vectorization_layer = TextVectorization( max_tokens=MAX_FEATURES, output_mode='int', output_sequence_length=MAX_LENGTH ) vectorization_layer.adapt(X_train) vocab = vectorization_layer.get_vocabulary() vocab_len = len(vocab) print(f"Vocabulary Size: {vocab_len}") # Inspect vocabulary word_index = dict(zip(vocab, range(len(vocab)))) # word_index # # Inspect vectorization output # m = 5 # test_string = X_train[m] # print(f"Test String - Raw:\n{test_string}") # print() # test_string_vec = vectorization_layer([test_string]) # print(f"Test String - Vectorized:\n{test_string_vec[0]}") ``` ## Embedding Layer ### Initialize ``` dct_embedding_index = {} # Initialize None for text vectorizaiton dct_embedding_index["text_vectorization"] = { "index": None, "dimension": None } dct_embedding_matrix = { "text_vectorization": None } ``` ### Embedding Matrix ### Glove ##### Import/Load Embeddings ``` 
embed_label = "glove" embedding_dim = 100 # Define file path subfolder_embed = "pre_trained" subfolder_embed_glove = "glove.6B" file_name = "glove.6B.100d.txt" path = os.path.join(path_dir_data, subfolder_embed, subfolder_embed_glove, file_name) print("Preparing embedding index...") embeddings_index = {} with open(path, encoding="utf8") as f: for line in tqdm(f): word, coefs = line.split(maxsplit=1) coefs = np.fromstring(coefs, "f", sep=" ") embeddings_index[word] = coefs dct_embedding_index[embed_label] = { "index": embeddings_index, "dimension": embedding_dim } print(f"Found {len(embeddings_index)} word vectors.") ``` ##### Create Matrix ``` embedding_matrix = gen_embedding_matrix( dct_embedding_index = dct_embedding_index, embed_label = embed_label, vocabulary_length = vocab_len, word_index = word_index ) dct_embedding_matrix[embed_label] = embedding_matrix ``` #### FastText ##### Import/Load Embeddings ``` embed_label = "fasttext" embedding_dim = 300 # Define file path subfolder_embed = "pre_trained" subfolder_embed_fasttext = "wiki-news-300d-1M.vec" file_name = "wiki-news-300d-1M.vec" path = os.path.join(path_dir_data, subfolder_embed, subfolder_embed_fasttext, file_name) f = codecs.open(path, encoding='utf-8') print("Preparing embedding index...") embeddings_index = {} for line in tqdm(f): values = line.rstrip().rsplit(' ') # print(values) word = values[0] # print(word) coefs = np.asarray(values[1:], dtype='float32') # print(coefs) embeddings_index[word] = coefs f.close() dct_embedding_index[embed_label] = { "index": embeddings_index, "dimension": embedding_dim, } print(f'Found {len(embeddings_index)} word vectors.') ``` ##### Create Matrix ``` embedding_matrix = gen_embedding_matrix( dct_embedding_index = dct_embedding_index, embed_label = embed_label, vocabulary_length = vocab_len, word_index = word_index ) dct_embedding_matrix[embed_label] = embedding_matrix ``` ## Imbalance Class Management There is an imbalance in the classes predicted by the model. 
Roughly 90% of tokens are class 0 (not Cause or Entity node, filler words). The remaining classes (1: Cause, 2: Entity) account for about 5% of the tokens each. In order to better predict all classes in the dataset, and not create a model which simply predicts the default class, we need to weight each of these classes differently. Unfortunately, we cannot simply use the **class weights** input for training a Keras model. That is because we are predicting a 3D array as an output, and Keras will not allow the use of **class weights** in such a case. There is a workaround, as discussed in Keras Github issue 3653. We can use **sample weights**, with the sample weights mode set to *temporal*. https://github.com/keras-team/keras/issues/3653 To apply sample weights to our model, we need a matrix of sample weights to account for all input values. This matrix will be the same size as the y_train output array (n samples X sample length). ### Sample Weights ``` # Initialize Sample weight matrix sample_weight_matrix = np.array(y_train_val).copy() # Flatten matrix sample_weight_matrix_fl = flatten_list(sample_weight_matrix) # Determine number of classes n_classes = np.unique(sample_weight_matrix_fl).shape[0] # Determine class weights class_weights = compute_class_weight( class_weight="balanced", classes=np.unique(sample_weight_matrix_fl), y=np.array(sample_weight_matrix_fl) ) # Replace class label with class weight for i in range(0, len(class_weights)): sample_weight_matrix = np.where( sample_weight_matrix==i, class_weights[i], sample_weight_matrix ) ``` # Train Model ## Define Embedding Layer ``` embedding_matrix = gen_embedding_matrix( dct_embedding_index = dct_embedding_index, embed_label = EMBEDDING, vocabulary_length = vocab_len, word_index = word_index ) # Embedding Layer embedding_layer = gen_embedding_layer( label = EMBEDDING, input_dimension = vocab_len, output_dimension_wo_init = 64, max_length = MAX_LENGTH, embedding_matrix = embedding_matrix, trainable = TRAINABLE ) ``` ##
Compile Model

```
model = compile_model(
    vectorization_layer = vectorization_layer,
    embedding_layer = embedding_layer,
    dropout = DROPOUT,
    dropout_rate = DROPOUT_RATE,
    lstm_stack = LSTM_STACK,
    hidden_dimension_1 = HIDDEN_DIM_1,
    hidden_dimension_2 = HIDDEN_DIM_2,
    sample_weights = SAMPLE_WEIGHTS,
    time_distributed = TIME_DISTRIBUTED,
    optimizer = OPTIMIZER
)

model.summary()
```

## Define Inputs

```
X_train = np.array(X_train)

# Encode targets
y_train = encode_target(y_train_val)

# Convert target to numpy array for model input
y_train = np.array(y_train)
```

## Train Model

```
if SAMPLE_WEIGHTS:
    history = model.fit(
        X_train,
        y_train,
        batch_size=BATCH_SIZE,
        epochs=EPOCHS,
        validation_split=0.2,
        sample_weight=sample_weight_matrix,
        verbose=2
    )
else:
    history = model.fit(
        X_train,
        y_train,
        batch_size=BATCH_SIZE,
        epochs=EPOCHS,
        validation_split=0.2,
        verbose=2
    )
```

# Evaluation

## Generate Predictions

```
# Generate predictions token-by-token for each test sample
y_pred = []
for i in range(len(X_test)):
    y_pred_prob = model.predict(np.array([X_test[i]]))
    y_pred_class = np.argmax(y_pred_prob, axis=-1)[0].tolist()
    y_pred.append(y_pred_class)

# Flatten lists
y_pred = flatten_list(y_pred)
y_test = flatten_list(y_test_val)
```

## Classification Report

```
print(classification_report(y_test, y_pred))

df_classification_report = get_classification_report(y_test, y_pred)

file_name = f"entity_extraction_configuration_report_{CONFIGURATION_LABEL}_epochs_{EPOCHS}_{TIME_STAMP}.csv"
path_classification_report = os.path.join(
    path_dir_data, subfolder_a, subfolder_entity, file_name
)

df_classification_report.to_csv(path_classification_report, index=False)
```

## Charts

### Accuracy

```
hist = pd.DataFrame(history.history)

# Plot training and validation accuracy
f = plt.figure(figsize=(5,5))
plt.plot(hist["accuracy"], label='Training Accuracy')
plt.plot(hist["val_accuracy"], label='Validation Accuracy')
plt.legend(loc="lower right")
plt.show()

# Save and output
file_name =
f"entity_extraction_training_validation_accuracy_{CONFIGURATION_LABEL}_epochs_{EPOCHS}_{TIME_STAMP}.png"
path = os.path.join(path_dir_data, subfolder_a, subfolder_entity, file_name)
f.savefig(path)
# f.download(path)
```

### Loss

```
# Plot training and validation loss
f = plt.figure(figsize=(5,5))
plt.plot(hist["loss"], label='Training Loss')
plt.plot(hist["val_loss"], label='Validation Loss')
plt.legend(loc="upper right")
plt.show()

# Save and output
file_name = f"entity_extraction_training_validation_loss_{CONFIGURATION_LABEL}_epochs_{EPOCHS}_{TIME_STAMP}.png"
path = os.path.join(path_dir_data, subfolder_a, subfolder_entity, file_name)
f.savefig(path)
# f.download(path)
```

## Confusion Matrix

```
# Create confusion matrix with a shared label order
# (rows are actual classes, columns are predicted classes)
labels = np.unique(np.concatenate([y_test, y_pred]))
cm = confusion_matrix(y_test, y_pred, labels=labels)
df_cm = pd.DataFrame(cm, index=labels, columns=labels)
df_cm.index.name = 'Actual'
df_cm.columns.name = 'Predicted'

# Visualize
plt.figure(figsize=(10,7))
sn.set(font_scale=1.4)  # label size
sn.heatmap(df_cm, cmap="Blues", annot=True, fmt='d', annot_kws={"size": 16})  # font size
```

# Save and Restore Model

The following section tests saving and restoring the model.

## Save

```
# Export Model
folder_keras_model = f"entity_extraction_epoch_{CONFIGURATION_LABEL}_epochs_{EPOCHS}_{TIME_STAMP}"
path_keras_model = os.path.join(
    path_dir_data, subfolder_a, subfolder_entity, folder_keras_model
)

# SavedModel
model.save(path_keras_model)
```

# Restore

```
from tensorflow import keras

model = keras.models.load_model(path_keras_model)
model.summary()
```
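The confusion-matrix axes only line up when rows (actual) and columns (predicted) are ordered by one shared label list. This plain-Python sketch, not part of the notebook, mirrors the bookkeeping `sklearn.metrics.confusion_matrix` performs:

```python
def confusion(y_true, y_pred):
    # Shared, sorted label list so rows (actual) and columns (predicted) align
    labels = sorted(set(y_true) | set(y_pred))
    idx = {label: i for i, label in enumerate(labels)}
    matrix = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        matrix[idx[t]][idx[p]] += 1  # row = actual, column = predicted
    return labels, matrix

y_true = [0, 0, 1, 2, 2, 2]
y_pred = [0, 1, 1, 2, 2, 0]
labels, cm = confusion(y_true, y_pred)
print(labels)  # [0, 1, 2]
print(cm)      # [[1, 1, 0], [0, 1, 0], [1, 0, 2]]
```

Passing one explicit `labels` list to both the matrix and the DataFrame axes avoids the silent row/column mismatch that occurs when actual and predicted label sets differ.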
```
import pandas as pd

train = pd.read_csv("./datasets/labeledTrainData.tsv", header=0, delimiter='\t', quoting=3)
train.head()
train.shape
train.columns.values
train["review"][0]

from bs4 import BeautifulSoup
example1 = BeautifulSoup(train["review"][0])
example1.get_text()

import re
letters_only = re.sub("[^a-zA-Z]", " ", example1.get_text())  # replace non-letter characters with spaces
letters_only

lower_case = letters_only.lower()
words = lower_case.split()

# import nltk
# nltk.download()
from nltk.corpus import stopwords
stopwords.words("english")

words = [w for w in words if not w in stopwords.words("english")]
words

def review_to_words(raw_review):
    review_text = BeautifulSoup(raw_review).get_text()
    letters_only = re.sub("[^a-zA-Z]", " ", review_text)
    words = letters_only.lower().split()
    stops = set(stopwords.words('english'))
    meaningful_words = [w for w in words if not w in stops]
    return(" ".join(meaningful_words))

clean_review = review_to_words(train["review"][0])
clean_review

print("Cleaning and parsing the training set movie reviews...\n")
number = 10000
num_reviews = train["review"][:number].size
clean_train_reviews = []
for i in range(0, num_reviews):
    if (i + 1) % 1000 == 0:
        print("Review {} of {}".format(i + 1, num_reviews))
    clean_train_reviews.append(review_to_words(train["review"][i]))

print("creating the bag of words...")
from sklearn.feature_extraction.text import CountVectorizer
vertorizer = CountVectorizer(analyzer='word', tokenizer=None, preprocessor=None, stop_words=None, max_features=5000)
train_data_features = vertorizer.fit_transform(clean_train_reviews)
train_data_features = train_data_features.toarray()
train_data_features.shape  # each review is represented by a 5000-dimensional count vector

vocab = vertorizer.get_feature_names()
vocab  # 5000 words

import numpy as np
dist = np.sum(train_data_features, axis=0)  # axis=0 operates on each column
for tag, count in zip(vocab, dist):
    print(str(count) + " : " + tag)  # number of occurrences of each word

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test =
train_test_split(train_data_features, train["sentiment"][:10000], test_size=0.1)

print("Training the random forest...")
from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier(n_estimators=50)  # n_estimators=100 works best
forest = rf_clf.fit(X_train, y_train)

# evaluate
from sklearn.metrics import accuracy_score
y_pred = forest.predict(X_test)
accuracy_score(y_pred, y_test)

test = pd.read_csv("./datasets/testData.tsv", header=0, delimiter="\t", quoting=3)
test.shape

num_reviews = len(test["review"])
clean_test_reviews = []
print("Cleaning and parsing the test set movie reviews...")
for i in range(num_reviews):
    if (i + 1) % 1000 == 0:
        print("Review {} of {}".format(i + 1, num_reviews))
    clean_test_reviews.append(review_to_words(test["review"][i]))

test_data_features = vertorizer.transform(clean_test_reviews)
test_data_features = test_data_features.toarray()

result = rf_clf.predict(test_data_features)
output = pd.DataFrame({ "id":test["id"], "sentiment":result })
output.to_csv("result.csv", index=False, quoting=3)
```
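To make concrete what `CountVectorizer.fit_transform` produces above, here is a minimal pure-Python sketch of the bag-of-words idea (the `bag_of_words` helper and toy corpus are illustrative, not part of the notebook):

```python
def bag_of_words(documents):
    # Vocabulary: sorted unique words across all documents
    vocab = sorted({word for doc in documents for word in doc.split()})
    index = {word: i for i, word in enumerate(vocab)}
    # One count vector per document, same length as the vocabulary
    vectors = []
    for doc in documents:
        counts = [0] * len(vocab)
        for word in doc.split():
            counts[index[word]] += 1
        vectors.append(counts)
    return vocab, vectors

docs = ["good movie", "bad movie", "good good plot"]
vocab, vectors = bag_of_words(docs)
print(vocab)    # ['bad', 'good', 'movie', 'plot']
print(vectors)  # [[0, 1, 1, 0], [1, 0, 1, 0], [0, 2, 0, 1]]
```

`CountVectorizer` does the same thing at scale, with tokenization options and a `max_features` cap on the vocabulary.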
## Creating schools.csv

1. Install packages
2. Create cities.csv with full state name/city column to use in getting school information
3. For persistence, create a schools csv using selenium to get school information from greatschools.org
4. Clean csv for use in schools endpoint

### 1. Import necessary libraries

```
from bs4 import BeautifulSoup
import json
import pandas as pd
from state_abbr import us_state_abbrev as abbr
from selenium import webdriver
import urllib.parse
import re
```

### 2. Create cities.csv with full state name/city column to use in getting school information

```
# create city state list
cities = pd.read_excel('notebooks/datasets/data/schools/csv/List of Cities.xlsx')

# just get the second and third columns
cities = cities[['Unnamed: 1','Unnamed: 2']]

# create new dictionary with reversed key, value pairs
full = dict(map(reversed, abbr.items()))

# map state abbreviations to full name
cities['states'] = cities['Unnamed: 2'].map(full)

# making sure state/city combos conform to the url format of "-" for " "
cities['states'] = cities['states'].str.strip()
cities['states'] = cities['states'].str.replace(" ", "-")
cities['Unnamed: 1'] = cities['Unnamed: 1'].str.replace(" ", "-")

# remove extraneous header rows
cities = cities.iloc[2:]

cities['city'] = (cities['states'] + '/'+ cities['Unnamed: 1']).str.lower()
print(cities.head())

# persist by creating new csv
cities.to_csv('notebooks/datasets/data/schools/csv/cities.csv')
```

### 3.
For persistence, create a schools csv using selenium and Beautiful Soup to get school information from greatschools.org

```
# Looping through each city in the file
cities = pd.read_csv('csv/cities.csv')

records = []
total_schools = []

# selenium driver
driver = webdriver.Chrome()

# url for greatschools pre_url and post_url (with state/city in between)
url_pre = 'http://www.greatschools.org/'

for i in cities['city']:
    fetching = True
    page = 0
    while fetching:
        page += 1
        url = url_pre + urllib.parse.quote(i) + '/schools/?page={}&tableView=Overview&view=table'.format(page)
        print("Fetching ", url)
        driver.get(url)
        html = driver.page_source
        soup = BeautifulSoup(html, 'html.parser')

        # check if last page
        page_status = soup.find('div', {'class': 'pagination-summary'})
        page_status_text = page_status.text.strip()
        print(page_status_text)
        page_status_regex = re.search(r".* (\d+) to (\d+) of (\d+)", page_status_text)
        beginning, ending, total = page_status_regex.groups()
        total_schools.append(total)
        if int(ending) >= int(total):
            fetching = False

        table = soup.find("table", { "class" : "" })
        for row in table.find_all("tr"):
            cell = row.find_all("td")
            if len(cell) == 7:
                school = row.find('a', {'class':'name'}).text.strip()
                try:
                    score = row.find('div', {'class': 'circle-rating--small'}).text.strip()
                except AttributeError:
                    score = '0/10'
                rating = row.find('div', {'class': 'scale'}).text.strip()
                try:
                    address = row.find('div', {'class': 'address'}).text.strip()
                except AttributeError:
                    address = "Unavailable"
                school_type = cell[1].find(text=True)
                grade = cell[2].find(text=True)
                students = cell[3].find(text=True)
                student_teacher_ratio = cell[4].find(text=True)
                try:
                    district = cell[6].find(text=True)
                except AttributeError:
                    district = 'Unavailable'
                records.append({
                    'School': school,
                    'Score': score,
                    'Rating': rating,
                    'Address': address,
                    'Type': school_type,
                    'Grades' : grade,
                    'Total Students Enrolled': students,
                    'Students per teacher' : student_teacher_ratio,
                    'District': district
                })

driver.close()
df = pd.DataFrame.from_dict(records) print(df.shape) df.head() df.to_csv('files/schools.csv') df = pd.read_csv('files/schools.csv') print(df.shape) df.head() ``` ### Creating new csv - for cities that were truncated during scraping - retrieved first 25 rather than all the records ``` from selenium import webdriver from bs4 import BeautifulSoup import urllib.parse import json import pandas as pd import re # Looping through each city in the file # These cities were not fully scraped, returned a truncated list cities = ['illinois/chicago', 'texas/houston', 'california/los-angeles', 'florida/miami', 'new-york/new-york', 'texas/san-antonio'] records = [] total_schools = [] # selenium driver driver = webdriver.Chrome() # url for greatschools pre_url and post_url (with state/city inbetween) url_pre = 'http://www.greatschools.org/' for i in cities: fetching = True page = 0 while fetching: page += 1 url = url_pre + urllib.parse.quote(i) + '/schools/?page={}&tableView=Overview&view=table'.format(page) print("Fetching ", url) driver.get(url) html = driver.page_source soup = BeautifulSoup(html, 'html.parser') # check if last page page_status = soup.find('div', {'class': 'pagination-summary'}) print(page_status.text.strip()) page_status_list = page_status.text.strip().split() ending = (page_status_list[3]).replace(',', '') total = (page_status_list[5]).replace(',' , '') total_schools.append(total) if int(ending) >= int(total): fetching = False table = soup.find("table", { "class" : "" }) for row in table.find_all("tr"): cell = row.find_all("td") if len(cell) == 7: school = row.find('a', {'class':'name'}).text.strip() try: score = row.find('div', {'class': 'circle-rating--small'}).text.strip() except AttributeError: score = '0/10' rating = row.find('div', {'class': 'scale'}).text.strip() try: address = row.find('div', {'class': 'address'}).text.strip() except AttributeError: address = "Unavailable" school_type = cell[1].find(text=True) grade = cell[2].find(text=True) students = 
cell[3].find(text=True) student_teacher_ratio = cell[4].find(text=True) try: district = cell[6].find(text=True) except AttributeError: district = 'Unavailable' records.append({ 'School': school, 'Score': score, 'Rating': rating, 'Address': address, 'Type': school_type, 'Grades' : grade, 'Total Students Enrolled': students, 'Students per teacher' : student_teacher_ratio, 'District': district }) driver.close() df_missing = pd.DataFrame.from_dict(records) df_missing.to_csv('files/missing_schools.csv') print(df_missing.shape) df_missing.head() ```
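Both scrapers stop when the `pagination-summary` text shows the last school has been reached; the second pass exists because totals above 999 contain thousands separators that break `int()`. A comma-tolerant version of that parsing could look like this (the sample summary string is hypothetical):

```python
import re

def parse_pagination(summary):
    """Extract (beginning, ending, total) from a pagination summary string,
    tolerating thousands separators like '1,347'."""
    match = re.search(r"(\d[\d,]*) to (\d[\d,]*) of (\d[\d,]*)", summary)
    # Strip thousands separators before converting to int
    return tuple(int(group.replace(",", "")) for group in match.groups())

beginning, ending, total = parse_pagination("Showing 26 to 50 of 1,347 schools")
print(beginning, ending, total)  # 26 50 1347
print(ending >= total)           # False -> keep fetching
```

Using one regex for both the plain and comma-separated cases would remove the need for the separate re-scraping pass over the six truncated cities.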
<a href="https://colab.research.google.com/github/stephenbeckr/numerical-analysis-class/blob/master/Demos/Ch4_integration.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Numerical Integration (quadrature) - See also Prof. Brown's [integration notebook](https://github.com/cu-numcomp/numcomp-class/blob/master/Integration.ipynb) for CSCI-3656 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/cu-numcomp/numcomp-class/blob/master/Integration.ipynb) - Bengt Fornberg's talk [Gregory formulas and improving on the Trapezoidal rule](https://www.colorado.edu/amath/sites/default/files/attached-files/2019_unm_0.pdf) ``` import numpy as np import matplotlib.pyplot as plt from scipy.interpolate import BarycentricInterpolator as interp # From Table 9.2 in Quarteroni, Sacco and Saleri "Numerical Mathematics" (Springer, 2000) ClosedNewtonCotesWeights = { 1:[1/2,1/2], 2:[1/3,4/3,1/3], 3:[3/8,9/8,9/8,3/8], 4:[14/45, 64/45, 24/45, 64/45, 14/45], 5:[95/288, 375/288,250/288, 250/288, 375/288, 95/288], 6:[41/140,216/140,27/140,272/140,27/140,216/140,41/140]} ClosedNewtonCotesNames = {1:"n=1, Trapezoid", 2:"n=2, Simpson's", 3:"n=3, Simpson's 3/8", 4:"n=4, Boole's", 5:"n=5", 6:"n=6"} f = lambda x : np.cos(x) F = lambda x : np.sin(x) # dF/dx = f a,b = -1,2 # Other examples to try # f = lambda x : x**(3/2) # F = lambda x : 2/5*x**(5/2) # a,b = 0,1 # f = lambda x : 1/(1+x**2) # aka Runge's function # F = lambda x : np.arctan(x) # a,b = -5,5 I = F(b) - F(a) print("Integral I is {:.3f}".format(I)) x = np.linspace(a,b) plt.fill_between( x, f(x), alpha=0.5); plt.axvline(color='k'); plt.axhline(color='k'); ``` ### Try the Trapezoidal rule, n = 1 ``` n = 1 print("Using the rule: ", ClosedNewtonCotesNames[n] ) weights = ClosedNewtonCotesWeights[n] (nodes,h) = np.linspace(a,b,n+1,retstep=True) # retstep tells it to return the spacing h I_estimate = h*np.dot( 
weights, f(nodes) ) p = interp(nodes,f(nodes)) x = np.linspace(a,b) plt.fill_between( x, f(x), alpha=0.5); plt.axvline(color='k'); plt.axhline(color='k'); plt.plot( x, p(x), 'r-', label="Interpolating polynomial" ) plt.legend() print("True integral: {:.3f}, Estimate: {:.3f}, Abs. Error: {:.5f}".format(I,I_estimate,abs(I-I_estimate))) nodes.tolist(),h,weights ``` ### And Simpson's rule, n=2 ``` n = 2 print("Using the rule: ", ClosedNewtonCotesNames[n] ) weights = ClosedNewtonCotesWeights[n] (nodes,h) = np.linspace(a,b,n+1,retstep=True) # retstep tells it to return the spacing h I_estimate = h*np.dot( weights, f(nodes) ) p = interp(nodes,f(nodes)) x = np.linspace(a,b) plt.fill_between( x, f(x), alpha=0.5); plt.axvline(color='k'); plt.axhline(color='k'); plt.plot( x, p(x), 'r-', label="Interpolating polynomial" ) plt.legend() print("True integral: {:.3f}, Estimate: {:.3f}, Abs. Error: {:.5f}".format(I,I_estimate,abs(I-I_estimate))) nodes.tolist(),h,weights ``` ### n=3 ``` n = 3 print("Using the rule: ", ClosedNewtonCotesNames[n] ) weights = ClosedNewtonCotesWeights[n] (nodes,h) = np.linspace(a,b,n+1,retstep=True) # retstep tells it to return the spacing h I_estimate = h*np.dot( weights, f(nodes) ) p = interp(nodes,f(nodes)) x = np.linspace(a,b) plt.fill_between( x, f(x), alpha=0.5); plt.axvline(color='k'); plt.axhline(color='k'); plt.plot( x, p(x), 'r-', label="Interpolating polynomial" ) plt.legend() print("True integral: {:.3f}, Estimate: {:.3f}, Abs. 
Error: {:.5f}".format(I,I_estimate,abs(I-I_estimate))) nodes.tolist(),h,weights ``` ### n=4 ``` n = 4 print("Using the rule: ", ClosedNewtonCotesNames[n] ) weights = ClosedNewtonCotesWeights[n] (nodes,h) = np.linspace(a,b,n+1,retstep=True) # retstep tells it to return the spacing h I_estimate = h*np.dot( weights, f(nodes) ) p = interp(nodes,f(nodes)) x = np.linspace(a,b) plt.fill_between( x, f(x), alpha=0.5); plt.axvline(color='k'); plt.axhline(color='k'); plt.plot( x, p(x), 'r-', label="Interpolating polynomial" ) plt.legend() print("True integral: {:.3f}, Estimate: {:.3f}, Abs. Error: {:.5f}".format(I,I_estimate,abs(I-I_estimate))) nodes.tolist(),h,weights ``` ### n=5 ``` n = 5 print("Using the rule: ", ClosedNewtonCotesNames[n] ) weights = ClosedNewtonCotesWeights[n] (nodes,h) = np.linspace(a,b,n+1,retstep=True) # retstep tells it to return the spacing h I_estimate = h*np.dot( weights, f(nodes) ) p = interp(nodes,f(nodes)) x = np.linspace(a,b) plt.fill_between( x, f(x), alpha=0.5); plt.axvline(color='k'); plt.axhline(color='k'); plt.plot( x, p(x), 'r-', label="Interpolating polynomial" ) plt.legend() print("True integral: {:.3f}, Estimate: {:.3f}, Abs. Error: {:.5f}".format(I,I_estimate,abs(I-I_estimate))) nodes.tolist(),h,weights ``` ### n=6 ``` n = 6 print("Using the rule: ", ClosedNewtonCotesNames[n] ) weights = ClosedNewtonCotesWeights[n] (nodes,h) = np.linspace(a,b,n+1,retstep=True) # retstep tells it to return the spacing h I_estimate = h*np.dot( weights, f(nodes) ) p = interp(nodes,f(nodes)) x = np.linspace(a,b) plt.fill_between( x, f(x), alpha=0.5); plt.axvline(color='k'); plt.axhline(color='k'); plt.plot( x, p(x), 'r-', label="Interpolating polynomial" ) plt.legend() print("True integral: {:.3f}, Estimate: {:.3f}, Abs. 
Error: {:.5f}".format(I,I_estimate,abs(I-I_estimate)))
nodes.tolist(),h,weights
```

## Let's try different kinds of functions

```
def tryAllRules( f, F, a, b):
    err = []
    for n in range(1,6+1):
        weights = ClosedNewtonCotesWeights[n]
        (nodes,h) = np.linspace(a,b,n+1,retstep=True)
        I_estimate = h*np.dot( weights, f(nodes) )
        I = F(b) - F(a)  # True answer
        err.append( abs(I_estimate - I))
    return np.array( err )

f = lambda x : np.cos(x)
F = lambda x : np.sin(x)  # dF/dx = f
a,b = -1,2
err1 = tryAllRules( f, F, a, b)

# Other examples to try
f = lambda x : x**(3/2)
F = lambda x : 2/5*x**(5/2)
a,b = 0,1
err2 = tryAllRules( f, F, a, b)

f = lambda x : x**(11/2)
F = lambda x : 2/13*x**(13/2)
a,b = 0,1
err3 = tryAllRules( f, F, a, b)

# Runge's function
f = lambda x : 1/(1+x**2)
F = lambda x : np.arctan(x)
a,b = -5,5
err4 = tryAllRules( f, F, a, b)

print("Rows are different n, columns are different functions")
print(np.array2string( np.array([err1,err2,err3,err4]).T, precision=2))
```

### Let's examine Runge's function more closely

$$f(x) = \frac{1}{1+x^2}$$

Our error wasn't going down, but the function is $C^\infty(\mathbb{R})$. Did we make a mistake? No, our formula was correct; the issue is that the $f'(\xi)$ term (and $f''(\xi)$, etc.) are very large. One way to think of this issue is that the function has a **singularity** (though it is on the imaginary axis, at $\pm i$).

(Btw, how do you pronounce Runge?
It's German, and you can listen to native speakers say it [at Forvo](https://forvo.com/search/Runge/)) ``` import sympy from sympy.abc import x from sympy import init_printing from sympy.utilities.lambdify import lambdify init_printing() import matplotlib as mpl mpl.rcParams['mathtext.fontset'] = 'cm' mpl.rcParams.update({'font.size': 20}) f = lambda x : 1/(1+x**2) F = lambda x : np.arctan(x) a,b = -5,5 g = 1/(1+x**2) # symbolic version gNumerical = lambdify(x,g) # avoid sympy plotting xGrid = np.linspace(a,b,150) plt.figure(figsize=(10,8)) plt.plot( xGrid, gNumerical(xGrid),label='$f(x)$' ) #k = 3 # order of derivative for k in range(1,6): dg = lambdify(x,sympy.diff(g,x,k)) plt.plot( xGrid, dg(xGrid), label="$f^{("+str(k)+")}(x)$"); plt.axvline(color='k'); plt.axhline(color='k'); #plt.legend(prop={'size': 20}); plt.legend() plt.title("Runge's function"); #sympy.plot(g); # sympy plots are not so nice # sympy.plot(sympy.diff(g,x,k)); ```
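Raising the order of a single Newton-Cotes rule is not the only route: the composite trapezoidal rule subdivides $[a,b]$ instead, and it does converge for Runge's function. A quick sketch (not from the original notebook) to check this:

```python
import numpy as np

def composite_trapezoid(f, a, b, m):
    """Composite trapezoidal rule with m subintervals."""
    x, h = np.linspace(a, b, m + 1, retstep=True)
    y = f(x)
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

f = lambda x: 1 / (1 + x**2)      # Runge's function
a, b = -5, 5
I = np.arctan(b) - np.arctan(a)   # exact integral

# Quadrupling m quarters h, so the O(h^2) error drops by roughly 16x each step
errors = [abs(composite_trapezoid(f, a, b, m) - I) for m in (10, 40, 160)]
print(errors)
```

Unlike the high-order single-interval rules above, refinement here shrinks $h$ rather than raising the polynomial degree, so the large higher derivatives of Runge's function no longer prevent convergence.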
``` %matplotlib inline import pandas as pd import geopandas import matplotlib.pyplot as plt ``` # Case study - Conflict mapping: mining sites in eastern DR Congo In this case study, we will explore a dataset on artisanal mining sites located in eastern DR Congo. **Note**: this tutorial is meant as a hands-on session, and most code examples are provided as exercises to be filled in. I highly recommend actually trying to do this yourself, but if you want to follow the solved tutorial, you can find this in the `_solved` directory. --- #### Background [IPIS](http://ipisresearch.be/), the International Peace Information Service, manages a database on mining site visits in eastern DR Congo: http://ipisresearch.be/home/conflict-mapping/maps/open-data/ Since 2009, IPIS has visited artisanal mining sites in the region during various data collection campaigns. As part of these campaigns, surveyor teams visit mining sites in the field, meet with miners and complete predefined questionnaires. These contain questions about the mining site, the minerals mined at the site and the armed groups possibly present at the site. Some additional links: * Tutorial on the same data using R from IPIS (but without geospatial aspect): http://ipisresearch.be/home/conflict-mapping/maps/open-data/open-data-tutorial/ * Interactive web app using the same data: http://www.ipisresearch.be/mapping/webmapping/drcongo/v5/ ## 1. Importing and exploring the data ### The mining site visit data IPIS provides a WFS server to access the data. 
We can send a query to this server to download the data, and load the result into a geopandas GeoDataFrame: ``` import requests import json wfs_url = "http://geo.ipisresearch.be/geoserver/public/ows" params = dict(service='WFS', version='1.0.0', request='GetFeature', typeName='public:cod_mines_curated_all_opendata_p_ipis', outputFormat='json') r = requests.get(wfs_url, params=params) data_features = json.loads(r.content.decode('UTF-8')) data_visits = geopandas.GeoDataFrame.from_features(data_features) data_visits ``` However, the data is also provided in the class folder as a GeoJSON file, so it is certainly available during the tutorial. <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Read the GeoJSON file `data/cod_mines_curated_all_opendata_p_ipis.geojson` using geopandas, and call the result `data_visits`.</li> <li>Inspect the first 5 rows, and check the number of observations</li> </ul> </div> ``` # %load _solved/solutions/case-conflict-mapping3.py data_visits = geopandas.read_file("./cod_mines_curated_all_opendata_p_ipis.geojson") # %load _solved/solutions/case-conflict-mapping4.py data_visits.head() # %load _solved/solutions/case-conflict-mapping5.py len(data_visits) ``` The provided dataset contains a lot of information, much more than we are going to use in this tutorial. 
Therefore, we will select a subset of the columns:

```
data_visits = data_visits[['vid', 'project', 'visit_date', 'name', 'pcode', 'workers_numb', 'interference', 'armed_group1', 'mineral1', 'geometry']]
data_visits.head()
```

Before starting the actual geospatial tutorial, we will use some more advanced pandas queries to construct a subset of the data that we will use further on:

```
# Take only the data of visits by IPIS
data_ipis = data_visits[data_visits['project'].str.contains('IPIS') & (data_visits['workers_numb'] > 0)]

# For those mining sites that were visited multiple times, take only the last visit
data_ipis_lastvisit = data_ipis.sort_values('visit_date').groupby('pcode', as_index=False).last()
data = geopandas.GeoDataFrame(data_ipis_lastvisit, crs=data_visits.crs)
```

### Data on protected areas in the same region

Next to the mining site data, we are also going to use a dataset on protected areas (national parks) in Congo. This dataset was downloaded from http://www.wri.org/our-work/project/congo-basin-forests/democratic-republic-congo#project-tabs and included in the tutorial repository: `data/cod_conservation.zip`.

<div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Extract the `data/cod_conservation.zip` archive, and read the shapefile contained in it. Assign the resulting GeoDataFrame to a variable named `protected_areas`.</li> <li>Quickly plot the GeoDataFrame.</li> </ul> </div>

```
# %load _solved/solutions/case-conflict-mapping10.py
# or to read it directly from the zip file:
# protected_areas = geopandas.read_file("/Conservation", vfs="zip://./data/cod_conservation.zip")

# %load _solved/solutions/case-conflict-mapping11.py
```

### Conversion to a common Coordinate Reference System

We will see that both datasets use a different Coordinate Reference System (CRS). For many operations, however, it is important that we use a consistent CRS, and therefore we will convert both to a common CRS.
But first, we explore problems we can encounter related to CRSs.

---

[Goma](https://en.wikipedia.org/wiki/Goma) is the capital city of North Kivu province of Congo, close to the border with Rwanda. Its coordinates are 1.66°S 29.22°E.

<div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Create a single Point object representing the location of Goma. Call this `goma`.</li> <li>Calculate the distances of all mines to Goma, and show the 5 smallest distances (mines closest to Goma).</li> </ul> </div>

```
# %load _solved/solutions/case-conflict-mapping12.py
# %load _solved/solutions/case-conflict-mapping13.py
# %load _solved/solutions/case-conflict-mapping14.py
# %load _solved/solutions/case-conflict-mapping15.py
```

The distances we see here are in degrees, which is not helpful for interpreting those distances. That is a reason we will convert the data to another coordinate reference system (CRS) for the remainder of this tutorial.

<div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Make a visualization of the national parks and the mining sites on a single plot.</li> </ul> </div>

```
# %load _solved/solutions/case-conflict-mapping16.py
ax = protected_areas.plot()
data.plot(ax=ax, color='C1')
```

You will notice that the protected areas and mining sites do not map to the same area on the plot. This is because the Coordinate Reference Systems (CRS) differ for both datasets. Another reason we will need to convert the CRS!

Let's check the Coordinate Reference System (CRS) for both datasets.
The mining sites data uses the [WGS 84 lat/lon (EPSG 4326)](http://spatialreference.org/ref/epsg/4326/) CRS: ``` data.crs ``` The protected areas dataset, on the other hand, uses a [WGS 84 / World Mercator (EPSG 3395)](http://spatialreference.org/ref/epsg/wgs-84-world-mercator/) projection (with meters as unit): ``` protected_areas.crs ``` We will convert both datasets to a local UTM zone, so we can plot them together and that distance-based calculations give sensible results. To find the appropriate UTM zone, you can check http://www.dmap.co.uk/utmworld.htm or https://www.latlong.net/lat-long-utm.html, and in this case we will use UTM zone 35, which gives use EPSG 32735: https://epsg.io/32735 <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Convert both datasets (`data` and `protected_areas`) to EPSG 32735. Name the results `data_utm` and `protected_areas_utm`.</li> <li>Try again to visualize both datasets on a single map.</li> </ul> </div> ``` # %load _solved/solutions/case-conflict-mapping19.py # %load _solved/solutions/case-conflict-mapping20.py ``` ### More advanced visualizations <p>For the following exercises, check the first section of the [04-more-on-visualization.ipynb](04-more-on-visualization.ipynb) notebook for tips and tricks to plot with GeoPandas.</p> <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Make a visualization of the national parks and the mining sites on a single plot.</li> <li>Pay attention to the following details: <ul> <li>Make the figure a bit bigger.</li> <li>The protected areas should be plotted in green</li> <li>For plotting the mining sites, adjust the markersize and use an `alpha=0.5`.</li> <li>Remove the figure border and x and y labels (coordinates)</li> </ul> </li> </ul> </div> ``` # %load _solved/solutions/case-conflict-mapping21.py # %load _solved/solutions/case-conflict-mapping22.py ``` <div class="alert alert-success"> <b>EXERCISE</b>: In addition to the previous figure: <ul> <li>Give the mining sites 
a distinct color based on the `'interference'` column, indicating whether an armed group is present at the mining site or not.</li> </ul> </div> ``` # %load _solved/solutions/case-conflict-mapping23.py ``` <div class="alert alert-success"> <b>EXERCISE</b>: In addition to the previous figure: <ul> <li>Give the mining sites a distinct color based on the `'mineral1'` column, indicating which mineral is the primary mined mineral.</li> </ul> </div> ``` # %load _solved/solutions/case-conflict-mapping24.py ``` ## 2. Spatial operations <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Access the geometry of the "Kahuzi-Biega National park".</li> <li>Filter the mining sites to select those that are located in this national park.</li> </ul> </div> ``` # %load _solved/solutions/case-conflict-mapping25.py # %load _solved/solutions/case-conflict-mapping26.py # %load _solved/solutions/case-conflict-mapping27.py ``` <div class="alert alert-success"> <b>EXERCISE</b>: Determine for each mining site the "closest" protected area: <ul> <li> PART 1 - do this for a single mining site: <ul> <li>Get a single mining site, e.g. 
the first of the dataset.</li> <li>Calculate the distance (in km) to all protected areas for this mining site</li> <li>Get the index of the minimum distance (tip: `idxmin()`) and get the name of the protected area corresponding to this index.</li> </ul> </li> <li> PART 2 - apply this procedure on each geometry: <ul> <li>Write the above procedure as a function that gets a single site and the protected areas dataframe as input and returns the name of the closest protected area as output.</li> <li>Apply this function to all sites using the `.apply()` method on `data_utm.geometry`.</li> </ul> </li> </ul> </div>

```
# %load _solved/solutions/case-conflict-mapping28.py
# %load _solved/solutions/case-conflict-mapping29.py
# %load _solved/solutions/case-conflict-mapping30.py
# %load _solved/solutions/case-conflict-mapping31.py
# %load _solved/solutions/case-conflict-mapping32.py
```

## 3. Using spatial join to determine mining sites in the protected areas

Based on the analysis and visualizations above, we can already see that there are mining sites inside the protected areas. Let's now do an actual spatial join to determine which sites are within the protected areas.

### Mining sites in protected areas

<div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Add information about the protected areas to the mining sites dataset, using a spatial join: <ul> <li>Call the result `data_within_protected`</li> <li>If the result is empty, this is an indication that the coordinate reference system is not matching.
Make sure to re-project the data (see above).</li> </ul> </li> <li>How many mining sites are located within a national park?</li> <li>Count the number of mining sites per national park (pandas tip: check `value_counts()`)</li> </ul> </div> ``` # %load _solved/solutions/case-conflict-mapping33.py # %load _solved/solutions/case-conflict-mapping34.py # %load _solved/solutions/case-conflict-mapping35.py # %load _solved/solutions/case-conflict-mapping36.py ``` ### Mining sites in the borders of protected areas And what about the borders of the protected areas? (just outside the park) <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Create a new dataset, `protected_areas_borders`, that contains the border area (10 km wide) of each protected area: <ul> <li>Tip: one way of doing this is with the `buffer` and `difference` function.</li> <li>Plot the resulting borders as a visual check of correctness.</li> </ul> </li> <li>Count the number of mining sites per national park that are located within its borders</li> </ul> </div> ``` # %load _solved/solutions/case-conflict-mapping37.py # %load _solved/solutions/case-conflict-mapping38.py # %load _solved/solutions/case-conflict-mapping39.py # %load _solved/solutions/case-conflict-mapping40.py # %load _solved/solutions/case-conflict-mapping41.py ```
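The "closest protected area" procedure from Part 2 above reduces to a distance argmin. Stripped of geopandas, the core lookup looks like this (the site and park coordinates below are made up purely for illustration; in the tutorial they would come from geometries in a projected CRS with meters as unit):

```python
from math import hypot

# Hypothetical coordinates in a projected CRS (meters), for illustration only
site = (715000.0, 9780000.0)
parks = {
    "Kahuzi-Biega": (700000.0, 9760000.0),
    "Virunga": (745000.0, 9920000.0),
    "Maiko": (640000.0, 9890000.0),
}

def closest_park(site, parks):
    # Distance from the site to each park, then take the name with the minimum
    distances = {name: hypot(site[0] - xy[0], site[1] - xy[1]) for name, xy in parks.items()}
    return min(distances, key=distances.get)

print(closest_park(site, parks))  # -> Kahuzi-Biega
```

With geopandas, `protected_areas_utm.distance(site_geometry)` returns the same distances as a Series, and `.idxmin()` plays the role of `min(..., key=...)` here.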
github_jupyter
``` %load_ext autoreload %autoreload 2 BASE_PATH="/mnt/Archivos/dataset-xray" from pathlib import Path from covidframe.tools.load import load_database base_dir = Path(BASE_PATH) DEFAULT_DATABASE_NAME_TRAIN = "database_clean_balanced_train.metadata.csv" DEFAULT_DATABASE_NAME_TEST = "database_clean_balanced_test.metadata.csv" df_train = load_database(filename= base_dir / DEFAULT_DATABASE_NAME_TRAIN) df_test = load_database(filename= base_dir / DEFAULT_DATABASE_NAME_TEST) from covidframe.tools.image import load_image df_train["vector"]= df_train["image_path"].apply(lambda x: load_image(x)) df_train["vector"] import numpy as np df_train["size"] = df_train["vector"].apply(lambda x: x.shape) df_train["size"] df_train["size"].mode df_train["is_squared"] = df_train["size"].apply(lambda x: x[0]==x[1]) df_train['is_squared'] = df_train['is_squared'].astype('category') import matplotlib.pyplot as plt import seaborn as sns plt.imshow(df_train["vector"].iloc[0],cmap="Greys_r") fig = plt.figure() ax = fig.add_subplot(1,1,1) sns.histplot(data=df_train, ax=ax,y="is_squared", hue="category",multiple="stack") df_train['aspect_ratio'] = df_train["size"].apply(lambda x: x[1]/x[0]) fig = plt.figure() ax = fig.add_subplot(1,1,1) sns.histplot(data=df_train[df_train["aspect_ratio"]!=1], ax=ax,x="aspect_ratio", hue="category",multiple="stack") test_image = df_train["vector"].iloc[0] test_image.shape[0] from covidframe.tools.image import crop_image n_image = crop_image(test_image, (200,200)) n_image.shape plt.imshow(n_image,cmap="Greys_r") from covidframe.tools.image import to_equal_aspect_ratio eq_aspect = to_equal_aspect_ratio(test_image) eq_aspect.shape plt.imshow(eq_aspect,cmap="Greys_r") element = df_train[df_train["aspect_ratio"]>1.2].iloc[0] nq_image = element["vector"] plt.imshow(nq_image,cmap="Greys_r") nq_image.shape n_image = to_equal_aspect_ratio(nq_image) n_image.shape plt.imshow(n_image,cmap="Greys_r") from covidframe.tools.image import resize_image r_image = 
resize_image(n_image,(299,299)) plt.imshow(r_image,cmap="Greys_r") r_image.shape interpolations = ["linear", "area", "nearest", "cubic"] images = [resize_image(n_image,(299,299), interpolation) for interpolation in interpolations] fig = plt.figure(figsize=(12,12)) axes = fig.subplots(ncols=2,nrows=2) for ax, image, interpolation in zip(axes.ravel(), images, interpolations): ax.imshow(image,cmap="Greys_r") ax.set_title(f"{interpolation} interpolation") from covidframe.plot.image import plot_histogram fig = plt.figure(figsize=(12,12)) axes = fig.subplots(ncols=2,nrows=2) for ax, image, interpolation in zip(axes.ravel(), images, interpolations): plot_histogram(image, ax=ax) ax.set_title(f"{interpolation} interpolation") NEW_SIZE = (299,299) df_train["resized"] = df_train["vector"].apply(lambda x: resize_image(to_equal_aspect_ratio(x), NEW_SIZE)) df_train["new_size"] = df_train["resized"].apply(lambda x: x.shape) df_train["new_size"].unique() np.stack(df_train["resized"]).shape from covidframe.integrate import process_images_in_df, load_images_in_df, describe_images_in_df n_df_test = process_images_in_df(df_test, NEW_SIZE) df_train["is_squared"] = df_train["is_squared"].astype("bool") n_df_test["is_squared"] = n_df_test["is_squared"].astype("bool") IMAGE_DF_TRAIN_NAME = "database_balanced_train.h5" IMAGE_DF_TEST_NAME = "database_balanced_test.h5" from covidframe.tools.save import save_database_to_hdf save_database_to_hdf(df_train.drop(columns="vector"), base_dir / IMAGE_DF_TRAIN_NAME) save_database_to_hdf(n_df_test.drop(columns="vector"), base_dir / IMAGE_DF_TEST_NAME) ```
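`to_equal_aspect_ratio` comes from the project's own `covidframe` package, so its exact behaviour isn't shown here; a centre-crop to the smaller dimension is one plausible sketch of how such a helper can equalise the aspect ratio (an assumption, not the package's actual code):

```python
import numpy as np

def center_crop_to_square(img):
    """Centre-crop a 2-D image array to its smaller dimension."""
    h, w = img.shape[:2]
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    return img[top:top + side, left:left + side]

img = np.zeros((400, 640))               # landscape dummy "X-ray"
print(center_crop_to_square(img).shape)  # (400, 400)
```

Whatever the real implementation, squaring first and resizing afterwards (as done above with `NEW_SIZE`) avoids stretching anatomy along one axis.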
# Milne
```
# All necessary libraries:
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10, 6)
import matplotlib.pyplot as plt
import numpy as np
from math import pi, sin, cos
from copy import deepcopy
from mutils2 import *
import time
# import seaborn
from milne import *

# PARAMETERS:
nlinea = 3                       # Line number in the file
x = np.arange(-2.8, 2.8, 20e-3)  # Wavelength array
B = 992.                         # Magnetic field
gamma = np.deg2rad(134.)         # Inclination
xi = np.deg2rad(145.)            # Azimuth angle
vlos = 0.0                       # Velocity, km/s
eta0 = 73.                       # Line-to-continuum absorption ratio
a = 0.2                          # Damping parameter
ddop = 0.02                      # Doppler width
# Sc = 4.0                       # Ratio of source-function gradient to intercept
S_0 = 0.5                        # Source-function intercept
S_1 = 0.5                        # Source-function gradient

param = paramLine(nlinea)
stokes = stokesSyn(param, x, B, gamma, xi, vlos, eta0, a, ddop, S_0, S_1)

for i in range(4):
    plt.subplot(2, 2, i+1)
    if i == 0:
        plt.ylim(0, 1.1)
    plt.plot(x, stokes[i])
    plt.plot([0, 0], [min(stokes[i]), max(stokes[i])], 'k--')
```

# LM-Milne
```
from lmfit import minimize, Parameters, fit_report
from LMmilne import *

# PARAMETERS:
nlinea = 3                   # Line number in the file
x = arange(-0.3, 0.3, 1e-2)  # Wavelength array
B = 600.                     # Magnetic field
gamma = rad(30.)             # Inclination
xi = rad(160.)               # Azimuth angle
vlos = 1.1
eta0 = 3.                    # Line-to-continuum absorption ratio
a = 0.2                      # Damping parameter
ddop = 0.05                  # Doppler width
S_0 = 0.3                    # Source-function intercept
S_1 = 0.6                    # Source-function gradient
Chitol = 1e-6
Maxifev = 280
pesoI = 1.
pesoQ = 4.
pesoU = 4.
pesoV = 2.

param = paramLine(nlinea)

# Initial-values array
p = [B, gamma, xi, vlos, eta0, a, ddop, S_0, S_1]

# Load the data:
y2 = np.load('Profiles/stoke2.npy')
x = np.arange(-2.8, 2.8, 20e-3)
yc = list(y2[0]) + list(y2[1]) + list(y2[2]) + list(y2[3])

time0 = time.time()

# Initial-conditions module:
iB, igamma, ixi = initialConditions(y2, nlinea, x, param)
ixi = rad((grad(ixi) + 180.) % 180.)
igamma = rad((grad(igamma) + 180.) % 180.)

# Initial-values array
p = [iB, igamma, ixi, vlos, eta0, a, ddop, S_0, S_1]

ps = max(y2[0]) / max(list(y2[1]) + list(y2[2]))
# print('Suggested Q,U weight:', ps)
pesoV = 1./max(y2[3])
pesoQ = 1./max(y2[1])
pesoU = 1./max(y2[2])
print('----------------------------------------------------')
print('V weight: {0:2.3f}'.format(pesoV))
print('Q,U weights: {0:2.3f}, {1:2.3f}'.format(pesoQ, pesoU))

# Set the weights (one quarter of yc per Stokes parameter)
peso = ones(len(yc))
peso[0:int(len(yc)/4)] = pesoI
peso[int(len(yc)/4):int(2*len(yc)/4)] = pesoQ
peso[int(2*len(yc)/4):int(3*len(yc)/4)] = pesoU
peso[int(3*len(yc)/4):] = pesoV
print('--------------------------------------------------------------------')

from math import pi
p0 = Parameters()
p0.add('B', value=p[0], min=50.0, max=2000.)
p0.add('gamma', value=p[1], min=0., max=pi)
p0.add('xi', value=p[2], min=0., max=pi)
p0.add('vlos', value=p[3], min=-20., max=+20.)
p0.add('eta0', value=p[4], min=0., max=6.)
p0.add('a', value=p[5], min=0., max=0.5)
p0.add('ddop', value=p[6], min=0.0, max=0.5)
p0.add('S_0', value=p[7], min=0.0, max=1.5)
p0.add('S_1', value=p[8], min=0.0, max=1.5)

stokes0 = stokesSyn(param, x, B, gamma, xi, vlos, eta0, a, ddop, S_0, S_1)
[ysync, out] = inversionStokes(p0, x, yc, param, Chitol, Maxifev, peso)
print('Time: {0:2.4f} s'.format(time.time()-time0))
print(fit_report(out, show_correl=False))

# Plot section:
import matplotlib.pyplot as plt
stokes = list(split(yc, 4))
synthetic = list(split(ysync, 4))
for i in range(4):
    plt.subplot(2, 2, i+1)
    if i == 0:
        plt.ylim(0, 1.1)
    plt.plot(x, stokes0[i], 'g-', alpha=0.5)
    plt.plot(x, stokes[i], 'k-', alpha=0.8)
    plt.plot(x, synthetic[i], 'r-')
```
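The weight vector built above assigns one weight per Stokes parameter by slicing the concatenated [I, Q, U, V] profile into quarters; the same pattern in miniature, as a standalone numpy sketch:

```python
import numpy as np

n = 5                    # samples per Stokes parameter (toy size)
peso = np.ones(4 * n)    # weights for the concatenated [I, Q, U, V] vector
peso[1*n:2*n] = 4.0      # Q
peso[2*n:3*n] = 4.0      # U
peso[3*n:4*n] = 2.0      # V

# One weight per quarter: I, Q, U, V
print(peso.reshape(4, n)[:, 0])  # [1. 4. 4. 2.]
```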
``` import math from tensorflow.python.keras.datasets import imdb from tensorflow.python.keras.preprocessing import sequence from tensorflow.python.keras import layers from tensorflow.python.keras.models import Sequential import numpy as np from sklearn.calibration import calibration_curve from sklearn import metrics def ece(predictions, confidences, labels, n_bins=10, max_ece=False): # Get the different bins bins = np.linspace(0, 1, n_bins + 1) low_bins = bins[:-1] up_bins = bins[1:] all_bins = zip(low_bins, up_bins) num_samples = predictions.shape[0] ece_bin_values = [] # For each bin work out the weighted difference between # confidence and accuracy for low_bin, up_bin in all_bins: bin_conf_indcies = np.nonzero((confidences > low_bin) & (confidences <= up_bin)) bin_confs = confidences[bin_conf_indcies] bin_preds = predictions[bin_conf_indcies] bin_labels = labels[bin_conf_indcies] num_samples_in_bin = bin_confs.shape[0] if num_samples_in_bin == 0: ece_bin_values.append(0) continue bin_weight = num_samples_in_bin / num_samples bin_acc = (bin_labels == bin_preds).mean() bin_mean_conf = bin_confs.mean() bin_acc_conf_diff = abs(bin_acc - bin_mean_conf) weighted_diff = bin_weight * bin_acc_conf_diff ece_bin_values.append(weighted_diff) # Return the max ece or the weighted average print(' '.join([f'{ece_value:.2f}'for ece_value in ece_bin_values])) if max_ece: return max(ece_bin_values) else: total_weighted_ece = sum(ece_bin_values) return total_weighted_ece def adapt_ece(confidences, labels, samples_per_bin=250, root_error=False): label_conf = list(zip(labels, confidences)) sorted_label_conf = sorted(label_conf, key=lambda x: x[1]) num_samples = confidences.shape[0] bin_indexs = list(range(0, num_samples, samples_per_bin)) # Merge the last bin with the second to last bin bin_indexs.append((num_samples + 1)) low_bins = bin_indexs[:-1] up_bins = bin_indexs[1:] all_bins = list(zip(low_bins, up_bins)) number_bins = len(all_bins) ece_bin_values = [] for low_bin, up_bin in 
all_bins: bin_label_conf = sorted_label_conf[low_bin : up_bin] bin_label = np.array([label for label, conf in bin_label_conf]) bin_conf = np.array([conf for label, conf in bin_label_conf]) bin_size = bin_conf.shape[0] bin_mean_conf = bin_conf.mean() bin_mean_label = bin_label.mean() bin_mse = math.pow((bin_mean_conf - bin_mean_label), 2) bin_mse = bin_mse * bin_size ece_bin_values.append(bin_mse) print(' '.join([f'{ece_value:.2f}'for ece_value in ece_bin_values])) ece_error = sum(ece_bin_values) / num_samples if root_error: return math.sqrt(ece_error) else: return ece_error ngram_range = 1 max_features = 20000 maxlen = 400 batch_size = 32 embedding_dims = 50 epochs = 5 print('Loading data...') (x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features, seed=113) print(len(x_train), 'train sequences') print(len(x_test), 'test sequences') print(f'Average train sequence length: {np.mean(list(map(len, x_train)))}') print(f'Average test sequence length: {np.mean(list(map(len, x_test)))}') # This makes all of the sequences the same size print('Pad sequences (samples x time)') x_train = sequence.pad_sequences(x_train, maxlen=maxlen) x_test = sequence.pad_sequences(x_test, maxlen=maxlen) print('x_train shape:', x_train.shape) print('x_test shape:', x_test.shape) model = Sequential() # we start off with an efficient embedding layer which maps # our vocab indices into embedding_dims dimensions model.add(layers.Embedding(max_features, embedding_dims, input_length=maxlen)) # we add a GlobalAveragePooling1D, which will average the embeddings # of all words in the document model.add(layers.GlobalAveragePooling1D()) # We project onto a single unit output layer, and squash it with a sigmoid: model.add(layers.Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) train_eces = [] train_max_eces = [] train_accs = [] train_briers = [] train_cross_entropys = [] train_abs_calibration = [] train_mse_calibration 
= [] train_adapt_eces = [] train_adapt_rmse_eces = [] val_eces = [] val_max_eces = [] val_accs = [] valid_briers = [] val_cross_entropys = [] valid_abs_calibration = [] valid_mse_calibration = [] val_adapt_eces = [] val_adapt_rmse_eces = [] for j in range(1, 50): print(f'epoch {j}') model.fit(x_train, y_train, batch_size=batch_size, epochs=1, validation_data=(x_test, y_test)) preds_train = model.predict(x_train) preds_val = model.predict(x_test) conf_train = preds_train.max(axis=1) conf_val = preds_val.max(axis=1) preds_train_labels = np.round(preds_train).reshape(preds_train.shape[0],) preds_val_labels = np.round(preds_val).reshape(preds_val.shape[0],) train_ece = ece(preds_train_labels, conf_train, y_train, n_bins=15) train_eces.append(train_ece) train_max_ece = ece(preds_train_labels, conf_train, y_train, max_ece=True, n_bins=15) train_max_eces.append(train_max_ece) train_adapt_ece = adapt_ece(conf_train, y_train) train_adapt_eces.append(train_adapt_ece) train_adapt_rmse_ece = adapt_ece(conf_train, y_train, root_error=True) train_adapt_rmse_eces.append(train_adapt_rmse_ece) train_cat_acc = metrics.accuracy_score(y_train, preds_train_labels) train_accs.append(train_cat_acc) train_cross_entropy = metrics.log_loss(y_train, preds_train) train_cross_entropys.append(train_cross_entropy) train_brier = metrics.brier_score_loss(y_train, preds_train) train_briers.append(train_brier) train_empirical_probs, train_predicted_probs= calibration_curve(y_train , conf_train,n_bins=15) train_abs_cal = abs(train_empirical_probs - train_predicted_probs).mean() train_mse_cal = ((train_empirical_probs - train_predicted_probs)**2).mean() train_abs_calibration.append(train_abs_cal) train_mse_calibration.append(train_mse_cal) print(f'Train: ece {train_ece:.2f} max ece {train_max_ece:.2f} acc ' f'{train_cat_acc:.2f} entropy {train_cross_entropy:.2f} ' f'brier {train_brier:.2f} adapt {train_adapt_ece:.2f} ' f'cal abs {train_abs_cal:.2f} cal mse {train_mse_cal:.2f}') val_ece = 
ece(preds_val_labels, conf_val, y_test, n_bins=15) val_eces.append(val_ece) val_max_ece = ece(preds_val_labels, conf_val, y_test, max_ece=True, n_bins=15) val_max_eces.append(val_max_ece) val_adapt_ece = adapt_ece(conf_val, y_test) val_adapt_eces.append(val_adapt_ece) val_adapt_rmse_ece = adapt_ece(conf_val, y_test, root_error=True) val_adapt_rmse_eces.append(val_adapt_rmse_ece) val_cat_acc = metrics.accuracy_score(y_test, preds_val_labels) val_accs.append(val_cat_acc) val_cross_entropy = metrics.log_loss(y_test, preds_val) val_cross_entropys.append(val_cross_entropy) valid_brier = metrics.brier_score_loss(y_test, preds_val) valid_briers.append(valid_brier) valid_empirical_probs, valid_predicted_probs= calibration_curve(y_test , conf_val,n_bins=15) val_abs_cal = abs(valid_empirical_probs - valid_predicted_probs).mean() val_mse_cal = ((valid_empirical_probs-valid_predicted_probs)**2).mean() valid_abs_calibration.append(val_abs_cal) valid_mse_calibration.append(val_mse_cal) print(f'Val: ece {val_ece:.2f} max ece {val_max_ece:.2f} ' f'acc {val_cat_acc:.2f} entropy {val_cross_entropy:.2f} ' f'brier {valid_brier:.2f} adapt {val_adapt_ece:.2f} ' f'cal abs {val_abs_cal:.2f} cal mse {val_mse_cal:.2f}') %matplotlib inline import matplotlib.pyplot as plt plt.style.use('seaborn-whitegrid') ``` # Accuracy ``` fig=plt.figure(figsize=(10,10)) plt.plot(train_accs, 'b') plt.plot(val_accs, 'r') plt.savefig('accuracy.png', dpi = 300) np.array(val_accs).argmax() val_accs[18] val_accs[47] val_accs[21] ``` # Brier score ``` fig=plt.figure(figsize=(10,10)) plt.plot(train_briers, 'b') plt.plot(valid_briers, 'r') plt.savefig('Briers.png', dpi = 300) np.argmin(train_briers) np.argmin(valid_briers) ``` # Cross Entropy Got NAN values after 10 epochs ``` fig=plt.figure(figsize=(10,10)) plt.plot(train_cross_entropys, 'b') plt.plot(val_cross_entropys, 'r') plt.savefig('cross entropy.png', dpi = 300) ``` # Adapted ECE ``` fig=plt.figure(figsize=(10,10)) plt.plot(train_adapt_eces, 'b') 
plt.plot(val_adapt_eces, 'r') plt.savefig('Adapted ECE.png', dpi = 300) np.argmin(train_adapt_eces) np.argmin(val_adapt_eces) ``` # RMSE adapted ECE ``` fig=plt.figure(figsize=(10,10)) plt.plot(train_adapt_rmse_eces, 'b') plt.plot(val_adapt_rmse_eces, 'r') plt.savefig('RMSE Adapted ECE.png', dpi = 300) np.argmin(train_adapt_rmse_eces) np.argmin(val_adapt_rmse_eces) ``` # ECE ``` fig=plt.figure(figsize=(10,10)) plt.plot(train_eces, 'b') plt.plot(val_eces, 'r') plt.savefig('ece.png', dpi = 300) ``` # Absolute Calibration ``` fig=plt.figure(figsize=(10,10)) plt.plot(train_abs_calibration, 'b') plt.plot(valid_abs_calibration, 'r') plt.savefig('Absolute Calibration.png', dpi = 300) np.array(valid_abs_calibration).argmin() np.array(train_abs_calibration).argmin() ``` # MCE ``` fig=plt.figure(figsize=(10,10)) plt.plot(train_max_eces, 'b') plt.plot(val_max_eces, 'r') plt.savefig('mce.png', dpi = 300) ``` # MSE Calibration ``` fig=plt.figure(figsize=(10,10)) plt.plot(train_mse_calibration, 'b') plt.plot(valid_mse_calibration, 'r') plt.savefig('MSE Calibration.png', dpi = 300) np.array(valid_abs_calibration).argmin() np.array(train_abs_calibration).argmin() ```
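The weighted-gap logic inside `ece()` can be checked by hand on a toy batch. The confidences and labels below are made up; two bins, each holding half the samples:

```python
import numpy as np

conf = np.array([0.55, 0.65, 0.95, 0.90])  # predicted confidences
correct = np.array([1, 0, 1, 1])           # 1 where the prediction was right

ece = 0.0
for lo, hi in [(0.5, 0.7), (0.7, 1.0)]:
    mask = (conf > lo) & (conf <= hi)
    if mask.any():
        gap = abs(correct[mask].mean() - conf[mask].mean())
        ece += mask.sum() / len(conf) * gap  # weight by bin occupancy
print(ece)  # 0.5*|0.5 - 0.6| + 0.5*|1.0 - 0.925| = 0.0875
```

A well-calibrated model has small per-bin gaps, so its ECE is close to zero regardless of accuracy.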
``` import gtsam import numpy as np from gtsam.gtsam import (Cal3_S2, DoglegOptimizer, GenericProjectionFactorCal3_S2, NonlinearFactorGraph, Point3, Pose3, Point2, PriorFactorPoint3, PriorFactorPose3, Rot3, SimpleCamera, Values) from utils import get_matches_and_e, load_image def symbol(name: str, index: int) -> int: """ helper for creating a symbol without explicitly casting 'name' from str to int """ return gtsam.symbol(ord(name), index) def get_camera_calibration(fx, fy, s, cx, cy): return Cal3_S2(fx, fy, s, cx, cy) # Define the camera observation noise model measurement_noise = gtsam.noiseModel_Isotropic.Sigma(2, 1.0) # one pixel in u and v img1 = load_image('img56.jpg', path='../data/lettuce_home/set6/') img2 = load_image('img58.jpg', path='../data/lettuce_home/set6/') points_1, points_2, e_estimate, r, t = get_matches_and_e(img1, img2) print(e_estimate) print(r) t = +(t/t[0])*0.05 print(t) # Create a factor graph graph = NonlinearFactorGraph() K = get_camera_calibration(644, 644, 0, 213, 387) # add all the image points to the factor graph for (i, point) in enumerate(points_1): # wrap the point in a measurement #print('adding point for camera1') factor = GenericProjectionFactorCal3_S2( Point2(point), measurement_noise, symbol('x', 0), symbol('l', i), K) graph.push_back(factor) for (i, point) in enumerate(points_2): #print('adding point for camera2') factor = GenericProjectionFactorCal3_S2( Point2(point), measurement_noise, symbol('x', 1), symbol('l', i), K) graph.push_back(factor) # Add a prior on pose of camera 1. 
# 0.3 rad std on roll,pitch,yaw and 0.1m on x,y,z pose_noise = gtsam.noiseModel_Diagonal.Sigmas(np.array([0.3, 0.3, 0.3, 0.1, 0.1, 0.1])) factor = PriorFactorPose3(symbol('x', 0), Pose3(Rot3.Rodrigues(0, 0, 0), Point3(0, 0, 0)), pose_noise) graph.push_back(factor) # Add a prior on pose of camera 2 # 0.3 rad std on roll,pitch,yaw and 0.1m on x,y,z pose_noise = gtsam.noiseModel_Diagonal.Sigmas(np.array([0.3, 0.3, 0.3, 0.1, 0.1, 0.1])) factor = PriorFactorPose3(symbol('x', 1), Pose3(Rot3(r), Point3(t[0], t[1], t[2])), pose_noise) graph.push_back(factor) # point_noise = gtsam.noiseModel_Isotropic.Sigma(3, 0.1) # factor = PriorFactorPoint3(symbol('l', 0), Point3(1,0,0), point_noise) # graph.push_back(factor) graph.print_('Factor Graph:\n') # Create the data structure to hold the initial estimate to the solution initial_estimate = Values() r_init = Rot3.Rodrigues(0, 0, 0) t_init = Point3(0, 0, 0) transformed_pose = Pose3(r_init, t_init) initial_estimate.insert(symbol('x', 0), transformed_pose) r_init = Rot3(r) t_init = Point3(t[0], t[1], t[2]) transformed_pose = Pose3(r_init, t_init) initial_estimate.insert(symbol('x', 1), transformed_pose) for j, point in enumerate(points_1): initial_estimate.insert(symbol('l', j), Point3(0.05*point[0]/640, 0.05*point[1]/640,0.05)) initial_estimate.print_('Initial Estimates:\n') # Optimize the graph and print results params = gtsam.DoglegParams() params.setVerbosity('VALUES') optimizer = DoglegOptimizer(graph, initial_estimate, params) print('Optimizing:') result = optimizer.optimize() result.print_('Final results:\n') print('initial error = {}'.format(graph.error(initial_estimate))) print('final error = {}'.format(graph.error(result))) ```
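`Cal3_S2(fx, fy, s, cx, cy)` packs the standard pinhole intrinsics. As a plain-numpy sanity check (using the same numbers passed to `get_camera_calibration` above), projecting a camera-frame 3-D point with that matrix gives the pixel coordinates the projection factors compare against:

```python
import numpy as np

fx, fy, s, cx, cy = 644.0, 644.0, 0.0, 213.0, 387.0
K = np.array([[fx,  s, cx],
              [0., fy, cy],
              [0., 0., 1.]])

def project(K, X):
    """Pinhole projection of a camera-frame 3-D point to pixels."""
    x = K @ X
    return x[:2] / x[2]  # divide out depth

print(project(K, np.array([0.05, 0.02, 1.0])))  # ~ (245.2, 399.88)
```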
``` # default_exp resimulation ``` # Match resimulation > Simulating match outcomes based on the xG of individual shots ``` #hide from nbdev.showdoc import * #export import collections import itertools import numpy as np ``` Use Poisson-Binomial distribution calculation from https://github.com/tsakim/poibin It looks like [there are plans to package the code](https://github.com/tsakim/poibin/pull/8), but for now, just copy+paste the requisite class in here (original code is provided with MIT License). ``` #export class PoiBin(object): """Poisson Binomial distribution for random variables. This class implements the Poisson Binomial distribution for Bernoulli trials with different success probabilities. The distribution describes thus a random variable that is the sum of independent and not identically distributed single Bernoulli random variables. The class offers methods for calculating the probability mass function, the cumulative distribution function, and p-values for right-sided testing. """ def __init__(self, probabilities): """Initialize the class and calculate the ``pmf`` and ``cdf``. :param probabilities: sequence of success probabilities :math:`p_i \\in [0, 1] \\forall i \\in [0, N]` for :math:`N` independent but not identically distributed Bernoulli random variables :type probabilities: numpy.array """ self.success_probabilities = np.array(probabilities) self.number_trials = self.success_probabilities.size self.check_input_prob() self.omega = 2 * np.pi / (self.number_trials + 1) self.pmf_list = self.get_pmf_xi() self.cdf_list = self.get_cdf(self.pmf_list) # ------------------------------------------------------------------------------ # Methods for the Poisson Binomial Distribution # ------------------------------------------------------------------------------ def pmf(self, number_successes): """Calculate the probability mass function ``pmf`` for the input values. The ``pmf`` is defined as .. math:: pmf(k) = Pr(X = k), k = 0, 1, ..., n. 
:param number_successes: number of successful trials for which the probability mass function is calculated :type number_successes: int or list of integers """ self.check_rv_input(number_successes) return self.pmf_list[number_successes] def cdf(self, number_successes): """Calculate the cumulative distribution function for the input values. The cumulative distribution function ``cdf`` for a number ``k`` of successes is defined as .. math:: cdf(k) = Pr(X \\leq k), k = 0, 1, ..., n. :param number_successes: number of successful trials for which the cumulative distribution function is calculated :type number_successes: int or list of integers """ self.check_rv_input(number_successes) return self.cdf_list[number_successes] def pval(self, number_successes): """Return the p-values corresponding to the input numbers of successes. The p-values for right-sided testing are defined as .. math:: pval(k) = Pr(X \\geq k ), k = 0, 1, ..., n. .. note:: Since :math:`cdf(k) = Pr(X <= k)`, the function returns .. math:: 1 - cdf(X < k) & = 1 - cdf(X <= k - 1) & = 1 - cdf(X <= k) + pmf(X = k), k = 0, 1, .., n. :param number_successes: number of successful trials for which the p-value is calculated :type number_successes: int, numpy.array, or list of integers """ self.check_rv_input(number_successes) i = 0 try: isinstance(number_successes, collections.abc.Iterable) pvalues = np.array(number_successes, dtype='float') # if input is iterable (list, numpy.array): for k in number_successes: pvalues[i] = 1. - self.cdf(k) + self.pmf(k) i += 1 return pvalues except TypeError: # if input is an integer: if number_successes == 0: return 1 else: return 1 - self.cdf(number_successes - 1) # ------------------------------------------------------------------------------ # Methods to obtain pmf and cdf # ------------------------------------------------------------------------------ def get_cdf(self, event_probabilities): """Return the values of the cumulative density function.
Return a list which contains all the values of the cumulative density function for :math:`i = 0, 1, ..., n`. :param event_probabilities: array of single event probabilities :type event_probabilities: numpy.array """ cdf = np.empty(self.number_trials + 1) cdf[0] = event_probabilities[0] for i in range(1, self.number_trials + 1): cdf[i] = cdf[i - 1] + event_probabilities[i] return cdf def get_pmf_xi(self): """Return the values of the variable ``xi``. The components ``xi`` make up the probability mass function, i.e. :math:`\\xi(k) = pmf(k) = Pr(X = k)`. """ chi = np.empty(self.number_trials + 1, dtype=complex) chi[0] = 1 half_number_trials = int( self.number_trials / 2 + self.number_trials % 2) # set first half of chis: chi[1:half_number_trials + 1] = self.get_chi( np.arange(1, half_number_trials + 1)) # set second half of chis: chi[half_number_trials + 1:self.number_trials + 1] = np.conjugate( chi[1:self.number_trials - half_number_trials + 1] [::-1]) chi /= self.number_trials + 1 xi = np.fft.fft(chi) if self.check_xi_are_real(xi): xi = xi.real else: raise TypeError("pmf / xi values have to be real.") xi += np.finfo(type(xi[0])).eps return xi def get_chi(self, idx_array): """Return the values of ``chi`` for the specified indices. 
:param idx_array: array of indices for which the ``chi`` values should be calculated :type idx_array: numpy.array """ # get_z: exp_value = np.exp(self.omega * idx_array * 1j) xy = 1 - self.success_probabilities + \ self.success_probabilities * exp_value[:, np.newaxis] # sum over the principal values of the arguments of z: argz_sum = np.arctan2(xy.imag, xy.real).sum(axis=1) # get d value: exparg = np.log(np.abs(xy)).sum(axis=1) d_value = np.exp(exparg) # get chi values: chi = d_value * np.exp(argz_sum * 1j) return chi # ------------------------------------------------------------------------------ # Auxiliary functions # ------------------------------------------------------------------------------ def check_rv_input(self, number_successes): """Assert that the input values ``number_successes`` are OK. The input values ``number_successes`` for the random variable have to be integers, greater or equal to 0, and smaller or equal to the total number of trials ``self.number_trials``. :param number_successes: number of successful trials :type number_successes: int or list of integers """ try: for k in number_successes: assert (type(k) == int or type(k) == np.int64), \ "Values in input list must be integers" assert k >= 0, 'Values in input list cannot be negative.' assert k <= self.number_trials, \ 'Values in input list must be smaller or equal to the ' \ 'number of input probabilities "n"' except TypeError: assert (type(number_successes) == int or \ type(number_successes) == np.int64), \ 'Input value must be an integer.' assert number_successes >= 0, "Input value cannot be negative." assert number_successes <= self.number_trials, \ 'Input value cannot be greater than ' + str(self.number_trials) return True @staticmethod def check_xi_are_real(xi_values): """Check whether all the ``xi``s have imaginary part equal to 0. The probabilities :math:`\\xi(k) = pmf(k) = Pr(X = k)` have to be positive and must have imaginary part equal to zero. 
:param xi_values: single event probabilities :type xi_values: complex """ return np.all(xi_values.imag <= np.finfo(float).eps) def check_input_prob(self): """Check that all the input probabilities are in the interval [0, 1].""" if self.success_probabilities.shape != (self.number_trials,): raise ValueError( "Input must be an one-dimensional array or a list.") if not np.all(self.success_probabilities >= 0): raise ValueError("Input probabilities have to be non negative.") if not np.all(self.success_probabilities <= 1): raise ValueError("Input probabilities have to be smaller than 1.") #export def poisson_binomial_pmf(probs, xs): return PoiBin(probs).pmf(xs) def resimulate_match(shots, up_to=26, min_xg=0.0001, **kwargs): """ 'Resimulate' a match based on xG. Takes a list of maps, where each map represents a shot has and has 'is_home' (bool) and 'xg' (float) keys. """ # Prevent potential underflow home_xgs = [max(s['xg'], min_xg) for s in shots if s['is_home']] away_xgs = [max(s['xg'], min_xg) for s in shots if not s['is_home']] home_scores = list(range(min(len(home_xgs) + 1, up_to))) away_scores = list(range(min(len(away_xgs) + 1, up_to))) home_probs = dict(zip(home_scores, poisson_binomial_pmf(home_xgs, home_scores))) away_probs = dict(zip(away_scores, poisson_binomial_pmf(away_xgs, away_scores))) scores = [] for h, a in itertools.product(range(up_to), repeat=2): home_prob = home_probs.get(h, 0) away_prob = away_probs.get(a, 0) scores.append({ 'home_goals': h, 'away_goals': a, 'home_probability': home_prob, 'away_probability': away_prob, 'probability': home_prob*away_prob, **kwargs }) # Keep everything up to 4-4; filter out P == 0 results above that return [ s for s in scores if s['probability'] > 0 or (s['home_goals'] < 5 and s['away_goals'] < 5) ] def extract_prob(probs, home_goals, away_goals): filtered = [p for p in probs if p['home_goals'] == home_goals and p['away_goals'] == away_goals] if len(filtered) == 0: return 0 return filtered[0]['probability'] probs = 
resimulate_match([ {'is_home': True, 'xg': 0.1} ]) assert np.isclose(extract_prob(probs, 1, 0), 0.1) shots = [ {"is_home": False, "xg": 0.030929630622267723}, {"is_home": False, "xg": 0.021505167707800865}, {"is_home": False, "xg": 0.013733051717281342}, {"is_home": False, "xg": 0.06314441561698914}, ] probs = resimulate_match(shots) assert np.isclose( extract_prob(probs, 0, 4), np.prod([s['xg'] for s in shots]) ) ```
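As an independent cross-check of the FFT-based `PoiBin.pmf`, the same distribution can be built with a short O(n²) dynamic program, adding one Bernoulli trial (one shot) at a time. This is a standalone sketch, not part of the original module:

```python
import numpy as np

def poibin_pmf_dp(probs):
    """Poisson-binomial pmf via dynamic programming over the trials."""
    pmf = np.array([1.0])  # zero trials: P(0 successes) = 1
    for p in probs:
        # either this shot misses (keep goal count) or scores (shift by one)
        pmf = np.append(pmf, 0.0) * (1 - p) + np.append(0.0, pmf) * p
    return pmf

# two shots with xG 0.1 and 0.5 -> P(0), P(1), P(2) goals = 0.45, 0.5, 0.05
print(poibin_pmf_dp([0.1, 0.5]))
```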
<div align="center"> <font size="6">Solving the Mystery of Chai Time Data Science</font> </div> <br> <div align="center"> <font size="4">A Data Science podcast series by Sanyam Bhutani</font> </div> --- <img src="https://miro.medium.com/max/1400/0*ovcHbNV5470zvsH5.jpeg" alt="drawing"/> --- <div> <font size="5">Mr. RsTaK, Where are we?</font> </div> <br> Hello my dear Kagglers. As you all know I love Kaggle and its community. I spend most of my time surfing my Kaggle feed, scrolling over discussion forums, appreciating the efforts put in by various Kagglers via their unique / interesting way of Storytelling. <br><br> So, this morning when I was following my usual routine on Kaggle, I came across this Dataset named [Chai Time Data Science | CTDS.Show](https://www.kaggle.com/rohanrao/chai-time-data-science) provided by Mr. [Vopani](https://www.kaggle.com/rohanrao) and Mr. [Sanyam Bhutani](https://www.kaggle.com/init27). At first glance, I was like what's this? How do they know I'm having a tea break? Oh no buddy! I was wrong. It's [CTDS.Show](https://chaitimedatascience.com) :) --- <div> <font size="5">Chai Time Data Science (CTDS.Show)</font> </div> <img align="right" src="https://api.breaker.audio/shows/681460/episodes/55282147/image?v=0987aece49022c8863600c4cf5b67eb8&width=400&height=400" height=600 width=400> <br> Chai Time Data Science show is a [Podcast](https://anchor.fm/chaitimedatascience) + [Video](https://www.youtube.com/playlist?list=PLLvvXm0q8zUbiNdoIazGzlENMXvZ9bd3x) + [Blog](https://sanyambhutani.com/tag/chaitimedatascience/) based show for interviews with Practitioners, Kagglers & Researchers and all things Data Science. [CTDS.Show](https://chaitimedatascience.com), driven by the community under the supervision of Mr.
[Sanyam Bhutani](https://www.kaggle.com/init27), is gonna complete its 1-year anniversary on **21st June, 2020**, and to celebrate this achievement they decided to run a **Kaggle contest** around the dataset with all of the **75+ ML Hero interviews** on the series. According to our Host, the competition is aimed at articulating insights from the Interviews with ML Heroes. Provided a dataset consisting of detailed stats and transcripts of CTDS.Show, the goal is to use these and come up with interesting insights or stories based on the 100+ interviews with ML Heroes. We have our Dataset containing : * **Description.csv** : This file consists of the **description texts** from YouTube and Audio * **Episodes.csv** : This file contains the **statistics of all the Episodes** of the Chai Time Data Science show. * **YouTube Thumbnail Types.csv** : This file consists of the **description of the YouTube thumbnails of the episodes** * **Anchor Thumbnail Types.csv** : This file contains the **statistics of the Anchor/Audio thumbnail** * **Raw Subtitles** : Directory containing **74 text files** having raw subtitles of all the episodes * **Cleaned Subtitles** : Directory containing cleaned subtitles (in CSV format) Hmm.. Seems we have some stories to talk about.. Congratulating [CTDS.Show](https://chaitimedatascience.com) on their **1-year anniversary**, let's get it started :) --- **<font id="toc">Btw this is gonna be a long kernel. So, Hey! Looking for a guide :) ?</font>** <br><br> &emsp;&emsp;<b><a href="#0">0. Importing Necessary Libraries</a><br></b> &emsp;&emsp;<b><a href="#1">1. A Closer look at our Dataset</a><br></b> &emsp;&emsp;&emsp;&emsp;<b><a href="#1.1">1.1. Exploring YouTube Thumbnail Types.csv</a><br></b> &emsp;&emsp;&emsp;&emsp;<b><a href="#1.2">1.2. Exploring Anchor Thumbnail Types.csv</a><br></b> &emsp;&emsp;&emsp;&emsp;<b><a href="#1.3">1.3. Exploring Description.csv</a><br></b> &emsp;&emsp;&emsp;&emsp;<b><a href="#1.4">1.4.
Exploring Episodes.csv</a><br></b>
&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;<b><a href="#1.4.1">1.4.1. Missing Values ?</a><br></b>
&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;<b><a href="#1.4.2">1.4.2. M0-M8 Episodes</a><br></b>
&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;<b><a href="#1.4.3">1.4.3. Solving the Mystery of Missing Values</a><br></b>
&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;<b><a href="#1.4.9">1.4.4. Is it a Gender Biased Show?</a><br></b>
&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;<b><a href="#1.4.4">1.4.5. Time for a Chai Break</a><br></b>
&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;<b><a href="#1.4.5">1.4.6. How to get More Audience?</a><br></b>
&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;<b><a href="#1.4.6">1.4.7. Youtube Favors CTDS?</a><br></b>
&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;<b><a href="#1.4.7">1.4.8. Do Thumbnails really matter?</a><br></b>
&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;<b><a href="#1.4.11">1.4.9. How much Viewers wanna watch?</a><br></b>
&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;<b><a href="#1.4.8">1.4.10. Performance on Other Platforms</a><br></b>
&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;<b><a href="#1.4.10">1.4.11. Distribution of Heroes by Country and Nationality</a><br></b>
&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;<b><a href="#1.4.12">1.4.12. Any Relation between Release Dates of Episodes?</a><br></b>
&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;<b><a href="#1.4.13">1.4.13. Do I know about the Release of the anniversary interview episode?</a><br></b>
&emsp;&emsp;&emsp;&emsp;<b><a href="#1.5">1.5. Exploring Raw / Cleaned Subtitles</a><br></b>
&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;<b><a href="#1.5.1">1.5.1. A Small Shoutout to Ramshankar Yadhunath</a><br></b>
&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;<b><a href="#1.5.2">1.5.2. Intro is Bad for CTDS ?</a><br></b>
&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;<b><a href="#1.5.3">1.5.3. Who Speaks More ?</a><br></b>
&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;<b><a href="#1.5.4">1.5.4.
Frequency of Questions Per Episode </a><br></b>
&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;<b><a href="#1.5.5">1.5.5. Favourite Text ? </a><br></b>
&emsp;&emsp;<b><a href="#2">2. End Notes</a><br></b>
&emsp;&emsp;<b><a href="#3">3. Credits</a><br></b>

<div class="alert alert-block alert-danger">
<b>Note :</b> Sometimes, Plotly graphs fail to render with the Kernel. Please reload the page in that case
</div>

<div> <b><font id="0" size="5">Importing Necessary Libraries</font></b> </div>
<a href="#toc"><span class="label label-info">Go back to our Guide</span></a>

```
import os
import warnings
warnings.simplefilter("ignore")
warnings.filterwarnings("ignore", category=DeprecationWarning)

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import missingno as msno

import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots

!pip install pywaffle
from pywaffle import Waffle

from bokeh.layouts import column, row
from bokeh.models.tools import HoverTool
from bokeh.models import ColumnDataSource, Whisker
from bokeh.plotting import figure, output_notebook, show
output_notebook()

from IPython.display import IFrame

pd.set_option('display.max_columns', None)
```

<div> <b><font id="1" size="5">A Closer Look at our Dataset</font></b> </div>
<a href="#toc"><span class="label label-info">Go back to our Guide</span></a>

Let's dive into each and every aspect of our dataset, step by step, in order to get every inch out of it...

<div> <b><font id="1.1" size="4">Exploring YouTube Thumbnail Types.csv</font></b> </div>
<a href="#toc"><span class="label label-info">Go back to our Guide</span></a>

As per our knowledge, this file describes the types of YouTube thumbnails used for the episodes. Let's explore more about it...
```
YouTube_df=pd.read_csv("../input/chai-time-data-science/YouTube Thumbnail Types.csv")
print("No of Datapoints : {}\nNo of Features : {}".format(YouTube_df.shape[0], YouTube_df.shape[1]))
YouTube_df.head()
```

So, basically [CTDS](https://chaitimedatascience.com) uses **4 types of Thumbnails** in their YouTube videos. It's 2020 and people still use the YouTube default image as a thumbnail! Hmm... a smart decision or just a blind arrow? We'll figure it out in our further analysis...

<div> <b><font id="1.2" size="4">Exploring Anchor Thumbnail Types.csv</font></b> </div>
<a href="#toc"><span class="label label-info">Go back to our Guide</span></a>

So, this file describes the types of Anchor/Audio thumbnails used for the episodes

```
Anchor_df=pd.read_csv("../input/chai-time-data-science/Anchor Thumbnail Types.csv")
print("No of Datapoints : {}\nNo of Features : {}".format(Anchor_df.shape[0], Anchor_df.shape[1]))
Anchor_df.head()
```

It's quite similar to YouTube Thumbnail Types. If you're wondering what Anchor is: it's a free platform for podcast creation

```
IFrame('https://anchor.fm/chaitimedatascience', width=800, height=450)
```

<div> <b><font id="1.3" size="4">Exploring Description.csv</font></b> </div>
<a href="#toc"><span class="label label-info">Go back to our Guide</span></a>

This file consists of the description texts from YouTube and Audio

```
Des_df=pd.read_csv("../input/chai-time-data-science/Description.csv")
print("No of Datapoints : {}\nNo of Features : {}".format(Des_df.shape[0], Des_df.shape[1]))
Des_df.head()
```

So, we have a description for every episode.
Let's have a close look at what we have here

```
def show_description(specific_id=None, top_des=None):
    if specific_id is not None:
        print(Des_df[Des_df.episode_id==specific_id].description.tolist()[0])
    if top_des is not None:
        for each_des in range(top_des):
            print(Des_df.description.tolist()[each_des])
            print("-"*100)
```

<b>⚒️ About the Function :</b> In order to explore our Descriptions, I just wrote a small script. It has two options:

* Either you provide a specific episode id (specific_id) to have a look at that particular description
* Or you provide a number (top_des) and the script will display the descriptions of the top "x" episodes, where "x" is the number you passed as top_des

```
show_description("E1")
show_description(top_des=3)
```

<div class="alert alert-block alert-warning">
<b>Advice :</b> Feel free to play with the function "show_description()" to have a look over various descriptions in one go
</div>
<br>

<b>🧠 My Conclusion: </b>

* Although I went through some descriptions, I realized they just contain **URLs**, **necessary links** for social media sites, a **little description** of the current show, and some **announcements regarding future releases**
* I'm **not going to put stress** on this area because **I don't think there's much to scrape from them**. Right now, let's move ahead.

<div> <b><font id="1.4" size="4">Exploring Episodes.csv</font></b> </div>
<a href="#toc"><span class="label label-info">Go back to our Guide</span></a>

This file contains the statistics of all the Episodes of the Chai Time Data Science show. Okay! So, it's the big boy itself..

```
Episode_df=pd.read_csv("../input/chai-time-data-science/Episodes.csv")
print("No of Datapoints : {}\nNo of Features : {}".format(Episode_df.shape[0], Episode_df.shape[1]))
Episode_df.head()
```

Wew! That's a lot of features. I'm sure we'll dig out some interesting insights from this metadata. If you've reached till here, then please bear with me for a couple of minutes more..
<br><br>
Now, we'll have a big sip of our "Chai" :)

<img align="center" src="https://external-preview.redd.it/VVHgy7UiRHOUfs6v91tRSDgvvUIXlJvyiD822RG4Fhg.png?auto=webp&s=d6fb054c1f57ec09b5b45bcd7ee0e821b53449c9" height=400 width=600>

<div> <b><font id="1.4.1" size="3">Missing Values</font></b> </div>
<br>
Before diving into our analysis, let's check for Missing Values in our CSV.. For this purpose, I'm going to use the library [missingno](https://github.com/ResidentMario/missingno). Just use this line :

>import missingno as msno

[missingno](https://github.com/ResidentMario/missingno) helps us deal with missing values in a dataset with the help of visualisations. With over 2k stars on GitHub, this library is already very popular.

```
msno.matrix(Episode_df)
```

Aah shit! Here we go again..

<img src="https://memegenerator.net/img/instances/64277502.jpg" height=224 width=224>

<b>📌 Observations :</b>

* We can clearly see that **heroes_kaggle_username** and **heroes_twitter_handle** have **lots of missing values**
* We can observe a **bunch of data missing** from the columns **heroes to heroes_twitter_handle** in a **continuous way (that big block region)**, which suggests a **specific reason** for the data missing at those points
* A **few datapoints** are also **missing** in the **anchor**, **spotify** and **apple** sections, i.e. missing data in the **podcasts**

There is also a chart on the right side of the plot. It summarizes the general shape of the data completeness and points out the rows with the maximum and minimum nullity in the dataset. Well, before making any false claims, let's explore more about it..
```
temp=Episode_df.isnull().sum().reset_index().rename(columns={"index": "Name", 0: "Count"})
temp=temp[temp.Count!=0]

Source=ColumnDataSource(temp)
tooltips = [
    ("Feature Name", "@Name"),
    ("No of Missing entities", "@Count")
]

fig1 = figure(background_fill_color="#ebf4f6", plot_width = 600, plot_height = 400,tooltips=tooltips, x_range = temp["Name"].values, title = "Count of Missing Values")
fig1.vbar("Name", top = "Count", source = Source, width = 0.4, color = "#76b4bd", alpha=.8)
fig1.xaxis.major_label_orientation = np.pi / 8
fig1.xaxis.axis_label = "Features"
fig1.yaxis.axis_label = "Count"
fig1.grid.grid_line_color="#feffff"
show(fig1)
```

<b>📌 Observations :</b>

* Columns from **heroes** to **heroes_nationality** have the same amount of missing data. Seems we can find a **reasonable relation** between them
* About **45.88% (39 of 85)** and **22.35% (19 of 85)** of the data is missing in the columns **heroes_kaggle_username** and **heroes_twitter_handle** respectively
* We have just **1 missing value** in **anchor** and **spotify**, and **2 missing values** in the **apple** section, which is quite easy to handle
<br>

<b>🧠 My Conclusion: </b>

* Come on, I don't understand. The Chai Time Data Science show is about interviews with our Heroes. So, how do we have **11 missing values in the feature "heroes"**?

<img src="https://eatrealamerica.com/wp-content/uploads/2018/05/Fishy-Smell.png">
<br>
Let's find out..

```
Episode_df[Episode_df.heroes.isnull()]
```

<b>💭 Interesting..</b>

* episode_id "E0" was all about the Chai Time Data Science Launch Announcement
* episode_id "E69" was a Birthday Special

It makes sense why there's no hero for those episodes. But what are these M0-M8 episodes..?

<div> <b><font id="1.4.2" size="3">M0-M8 Episodes</font></b> </div>
<br>

* Looking around for a while, I realized M0-M8 was a small mini-series based on fast.ai summaries and the things Jeremy says to do, released on the same date.
<b>🧠 My Conclusion: </b>

* Well, for the sake of our analysis, I'll treat them as outliers for the current CSV and will analyse them separately. So, I'm going to remove them from this CSV, storing them separately for later analysis

```
# collect the M0-M8 mini-series episode ids and split them off
temp=[eid for eid in Episode_df.episode_id if eid.startswith('M')]
fastai_df=Episode_df[Episode_df.episode_id.isin(temp)]
Episode_df=Episode_df[~Episode_df.episode_id.isin(temp)]
```

Also, ignoring "E0" and "E69" for right now ...

```
dummy_df=Episode_df[(Episode_df.episode_id!="E0") & (Episode_df.episode_id!="E69")]
msno.matrix(dummy_df)

temp=dummy_df.isnull().sum().reset_index().rename(columns={"index": "Name", 0: "Count"})
temp=temp[temp.Count!=0]

Source=ColumnDataSource(temp)
tooltips = [
    ("Feature Name", "@Name"),
    ("No of Missing entities", "@Count")
]

fig1 = figure(background_fill_color="#ebf4f6", plot_width = 600, plot_height = 400,tooltips=tooltips, x_range = temp["Name"].values, title = "Count of Missing Values")
fig1.vbar("Name", top = "Count", source = Source, width = 0.4, color = "#76b4bd", alpha=.8)
fig1.xaxis.major_label_orientation = np.pi / 4
fig1.xaxis.axis_label = "Features"
fig1.yaxis.axis_label = "Count"
fig1.grid.grid_line_color="#feffff"
show(fig1)
```

Now, that's much better..
But we still have a lot of Missing Values

<div> <b><font id="1.4.3" size="3">Solving the Mystery of Missing Values</font></b> </div>

```
parent=[]
names =[]
values=[]

temp=dummy_df.groupby(["category"]).heroes_gender.value_counts()
for k in temp.index:
    parent.append(k[0])
    names.append(k[1])
    values.append(temp.loc[k])

df1 = pd.DataFrame( dict(names=names, parents=parent,values=values))

parent=[]
names =[]
values=[]

temp=dummy_df.groupby(["category","heroes_gender"]).heroes_kaggle_username.count()
for k in temp.index:
    parent.append(k[0])
    names.append(k[1])
    values.append(temp.loc[k])

df2 = pd.DataFrame( dict(names=names, parents=parent,values=values))

fig = px.sunburst(df1, path=['names', 'parents'], values='values', color='parents',hover_data=["names"], title="Heroes associated with Categories")
fig.update_traces( textinfo='percent entry+label', hovertemplate = "Industry:%{label}: <br>Count: %{value}" )
fig.show()

fig = px.sunburst(df2, path=['names', 'parents'], values='values', color='parents', title="Heroes associated with Categories having Kaggle Account")
fig.update_traces( textinfo='percent entry+label', hovertemplate = "Industry:%{label}: <br>Count: %{value}" )
fig.show()
```

<b>📌 Observations :</b>

* Heroes associated with the **Kaggle** category are expected to have a **Kaggle account**
* Ignoring the counts from the Kaggle category (74-31=43), **out of 43 only 15 Heroes have a Kaggle account**
* This **explains all 28 missing values** in our CSV
* Similarly, we have **8 Heroes who don't have a Twitter handle**. It's okay. Even I don't have a Twitter handle :D

<div> <b><font size="3">Wanna know a fun fact ?</font></b> </div>
<br>

> Because of this Kaggle platform, I now have approx. a **42%** chance of becoming a CTDS Hero :) ...

<img src="https://i.imgflip.com/2so1le.jpg" width=224 height=224>
<br>
Ahem ahem... Focus [RsTaK](https://www.kaggle.com/rahulgulia), focus.. Let's get back to our work. Wait? Guess I missed something.. What's that gender ratio?
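The sunburst counts above can be cross-checked without any plotting: a groupby on `category` plus a null count on `heroes_kaggle_username` yields the same numbers. A minimal sketch on a few synthetic rows (the column names follow Episodes.csv, but the data here is illustrative, not the real dataset):

```python
import pandas as pd

# Synthetic stand-in for a few Episode rows -- run the same groupby on the real
# Episodes.csv to reproduce the counts behind the sunburst charts.
df = pd.DataFrame({
    "category": ["Kaggle", "Kaggle", "Industry", "Research", "Industry"],
    "heroes_kaggle_username": ["rohanrao", "init27", None, "some_user", None],
})

# "size" counts all heroes per category, "count" skips the NaN usernames,
# so the difference is exactly the number of missing Kaggle accounts.
per_category = df.groupby("category")["heroes_kaggle_username"].agg(
    total="size", with_account="count"
)
per_category["missing"] = per_category["total"] - per_category["with_account"]
print(per_category)
```

On the real CSV, `per_category["missing"].sum()` should come out to the 28 missing values discussed above.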
<div> <b><font id="1.4.9" size="3">Is it a Gender Biased Show?</font></b> </div>

```
gender = Episode_df.heroes_gender.value_counts()

fig = plt.figure(
    FigureClass=Waffle,
    rows=5,
    columns=12,
    values=gender,
    colors = ('#20639B', '#ED553B'),
    title={'label': 'Gender Distribution', 'loc': 'left'},
    labels=["{}({})".format(a, b) for a, b in zip(gender.index, gender) ],
    legend={'loc': 'lower left', 'bbox_to_anchor': (0, -0.4), 'ncol': len(Episode_df), 'framealpha': 0},
    font_size=30,
    icons = 'child',
    figsize=(12, 5),
    icon_legend=True
)
```

Jokes apart, we can't make any strong statement over this. But yeah, I'm hoping for more Female Heroes :D

<b>🧠 My Conclusion: </b>

I won't talk much about the relation of gender with other features because :

* The gender feature is highly biased towards one category, so we cannot conclude any relation with other features.
* If we had a good gender ratio, then we could have talked about the impact of gender

Even if we somehow observed a positive conclusion for the Female gender, I would say it would be just a coincidence. There are other factors apart from gender that may have produced a positive conclusion for the Female gender. With such a biased and small sample size for Females, we cannot make any strong statement on that

```
dummy_df[dummy_df.apple_listeners.isnull()]
```

<b>📌 Observations :</b>

Following our analysis, we realized :

* CTDS had an episode with **Tuatini Godard, episode_id : "E12", exclusively for YouTube**. Although it was an **Audio Only video** (if that makes sense :D ) released on YouTube
* If it was an Audio Only version, then **why wasn't it released on the other platforms**? Hmmm... interesting. Well, I think Mr. [Sanyam Bhutani](https://www.kaggle.com/init27) can answer this well.
* Similarly, CTDS had an **episode with Rachel Thomas released on every platform except for Apple**

With this, we have solved all the mysteries related to the Missing Data. Now we can finally explore other aspects of this CSV. But before that..
<div> <b><font id="1.4.4" size="3">Time for a Chai Break</font></b> </div>

<img src="https://encrypted-tbn0.gstatic.com/images?q=tbn%3AANd9GcQf6ziN-7WH50MIZZQtJbO0Czsll5wTud7E3Q&usqp=CAU" width=400 height=400>
<br>
While having a sip of my Chai (Tea), I'm just curious: **why is this show named "Chai Time Data Science"?**
<br><br>
Well, I **don't have a solid answer** for this, but maybe it's just **because our Host loves Chai?** Hmmm.. so you wanna say our Host is a more hardcore Chai lover than me? Hey! Hold my Chai..

```
fig = go.Figure([go.Pie(labels=Episode_df.flavour_of_tea.value_counts().index.to_list(),values=Episode_df.flavour_of_tea.value_counts().values,hovertemplate = '<br>Type: %{label}</br>Count: %{value}<br>Popularity: %{percent}</br>', name = '')])
fig.update_layout(title_text="What does our Host drink every time ?", template="plotly_white", title_x=0.45, title_y = 1)
fig.data[0].marker.line.color = 'rgb(255, 255, 255)'
fig.data[0].marker.line.width = 2
fig.update_traces(hole=.4,)
fig.show()
```

<b>📌 Observations :</b>

* **Masala Chai (count=16)** and **Ginger Chai (count=16)** seem to be the **favourite Chais** of our Host, **followed by Herbal Tea (count=11) and Sulemani Chai (count=11)**
* Also, our Host seems to be **quite experimental with Chai**. He has tried a variety of flavours

Oh man! This time you win. You're a real Chai lover

<b>Now, one question arises..❓</b>

<b>Does the Host drinking a specific Chai at a specific time have any relation with other factors or the success of CTDS?</b>

<b>🧠 My Conclusion: </b>

* Thinking practically, **I don't think** drinking Chai at any specific time can have **any real impact** on the show.
* Believing in such things is an **example of superstition**.
* No doubt, as per the data it **may have some relation** with other factors. But to support any statement here, I would like to quote a **famous sentence used in statistics**, i.e.
<img src="https://miro.medium.com/max/420/1*lYw_nshU1qg3dqbqgpWoDA.png" width=400 height=400>
<br>

<div class="alert-block alert-info">
<b>Correlation</b> does not imply <b>Causation</b>
</div>

<div> <b><font id="1.4.5" size="3">How to get More Audience?</font></b> </div>
<br>
Well, as a reward for your victory in that Chai Lover Challenge, I'll try to assist CTDS on how to get more Audience 😄

* Of course, CTDS episodes are a kind of gem: fully informative, covering interviews with some successful people
* But talking statistically here, we'll define the success of an Episode by the amount of Audience it gathered

```
Episode_df.release_date = pd.to_datetime(Episode_df.release_date)
Source = ColumnDataSource(Episode_df)

fastai_df.release_date = pd.to_datetime(fastai_df.release_date)
Source2 = ColumnDataSource(fastai_df)

tooltips = [
    ("Episode Id", "@episode_id"),
    ("Episode Title", "@episode_name"),
    ("Hero Present", "@heroes"),
    ("CTR", "@youtube_ctr"),
    ("Category", "@category"),
    ("Date", "@release_date{%F}"),
]

tooltips2 = [
    ("Episode Id", "@episode_id"),
    ("Episode Title", "@episode_name"),
    ("Hero Present", "@heroes"),
    ("Subscriber Gain", "@youtube_subscribers"),
    ("Category", "@category"),
    ("Date", "@release_date{%F}"),
]

fig1 = figure(background_fill_color="#ebf4f6",plot_width = 600, plot_height = 400, x_axis_type = "datetime", title = "CTR
Per Episode")
fig1.line("release_date", "youtube_ctr", source = Source, color = "#03c2fc", alpha = 0.8, legend_label="youtube_ctr")
fig1.varea(source=Source, x="release_date", y1=0, y2="youtube_ctr", alpha=0.2, fill_color='#55FF88', legend_label="youtube_ctr")
fig1.line("release_date", Episode_df.youtube_ctr.mean(), source = Source, color = "#f2a652", alpha = 0.8,line_dash="dashed", legend_label="Youtube CTR Mean : {:.3f}".format(Episode_df.youtube_ctr.mean()))
fig1.circle(x="release_date", y="youtube_ctr", source = Source2, color = "#5bab37", alpha = 0.8, legend_label="M0-M8 Series")
fig1.add_tools(HoverTool(tooltips=tooltips,formatters={"@release_date": "datetime"}))
fig1.xaxis.axis_label = "Release Date"
fig1.yaxis.axis_label = "Click Per Impression"
fig1.grid.grid_line_color="#feffff"

fig2 = figure(background_fill_color="#ebf4f6", plot_width = 600, plot_height = 400, x_axis_type = "datetime", title = "Subscriber Gain Per Episode")
fig2.line("release_date", "youtube_subscribers", source = Source, color = "#03c2fc", alpha = 0.8, legend_label="Subscribers")
fig2.varea(source=Source, x="release_date", y1=0, y2="youtube_subscribers", alpha=0.2, fill_color='#55FF88', legend_label="Subscribers")
fig2.circle(x="release_date", y="youtube_subscribers", source = Source2, color = "#5bab37", alpha = 0.8, legend_label="M0-M8 Series")
fig2.add_tools(HoverTool(tooltips=tooltips2,formatters={"@release_date": "datetime"}))
fig2.xaxis.axis_label = "Release Date"
fig2.yaxis.axis_label = "Subscriber Count"
fig2.grid.grid_line_color="#feffff"

show(column(fig1, fig2))
```

<b>📌 Observations :</b>

From the graphs, we can see :

* On average, CTDS episodes had a **CTR of 2.702**
* **38 out of 76 (exactly 50%)** have CTRs above the average
* Episodes **E1, E27 and E49 seem to be lucky** for CTDS in terms of **Subscriber Count**
* Episode E19 had the **best CTR (8.460)**, which is self-explanatory from the Episode Title.
Everyone loves to know about MOOC(s) and ML interviews in industry.

<b>🤖 M0-M8 Series :</b>

* Despite being related to fast.ai, M0-M7 **don't perform** that well compared to other videos related to fast.ai
* **M0 and M1 received a good amount of CTR**, but the other M Series episodes are quite below the average CTR
* **M0 and M1 have a better impact on subscriber gain** compared to the other M Series episodes, but overall the series doesn't perform well on subscriber gain
* All M0-M8 episodes were released on the **same day**, which can be the reason for this incident. M0-M1, despite having good CTR, **fail to hold the viewers' interest in the M series**

<b>💭 Interestingly..</b>,

* Episode **E19, despite having the best CTR** till now (8.460), **didn't contribute much** to the Subscriber Count (only 7 Subscribers gained)

But why ❓

* CTR **doesn't mean that a person likes the content**, or that **he/she will be watching that video** for long or will be **subscribing to the channel**
* Maybe that video was **recommended on his/her newsfeed and he/she clicked on it** just to check the video
* Maybe he/she **didn't like the content**
* Maybe he/she **accidentally clicked** on the video

There's a huge possibility of such cases. But in conclusion, we can say a high CTR reflects cases like :

* People clicked on the video maybe because the **Thumbnail or Title was soothing**. Maybe he/she clicked because of the **hero mentioned in the Title/Thumbnail**

📃 I don't know how the YouTube algorithm works. But for the sake of answering the exceptional case of E19, my hypothetical answer would be:

* The Title contains the word **"MOOC"**. Since nowadays everyone wants to **break into Data Science**, the YouTube algorithm may have **suggested** that video to people looking for MOOCs
* Most of the other episodes have similar kinds of Titles stating **"Interview with"**, or have some terms that **aren't that beginner-friendly.**
Resulting in **low CTR**

* Supporting my hypothesis, observe **E27** (having fast.ai in the Title, which is a famous MOOC), **E38** (a Title with "Youngest Grandmaster" may have attracted people to click), **E49** (Getting started in Data Science), **E60** (terms like NLP and Open Source Projects) and **E75** (again fast.ai)
* You can **argue for E12**, which has the word "Freelancing" in the Title. Well, **exceptions** will be there

Okay, what about the organic reach of the channel, or reach via Heroes?

```
Source = ColumnDataSource(Episode_df)
Source2 = ColumnDataSource(fastai_df)

tooltips = [
    ("Episode Id", "@episode_id"),
    ("Hero Present", "@heroes"),
    ("Impression Views", "@youtube_impression_views"),
    ("Non Impression Views", "@youtube_nonimpression_views"),
    ("Category", "@category"),
    ("Date", "@release_date{%F}"),
]

tooltips2 = [
    ("Episode Id", "@episode_id"),
    ("Hero Present", "@heroes"),
    ("Subscriber Gain", "@youtube_subscribers"),
    ("Category", "@category"),
    ("Date", "@release_date{%F}"),
]

fig1 = figure(background_fill_color="#ebf4f6", plot_width = 600, plot_height = 400, x_axis_type = "datetime", title = "Impression-Non Impression Views Per Episode")
fig1.line("release_date", "youtube_impression_views", source = Source, color = "#03c2fc", alpha = 0.8, legend_label="Impression Views")
fig1.line("release_date", "youtube_nonimpression_views", source = Source, color = "#f2a652", alpha = 0.8, legend_label="Non Impression Views")
fig1.varea(source=Source, x="release_date", y1=0, y2="youtube_impression_views", alpha=0.2, fill_color='#55FF88', legend_label="Impression Views")
fig1.varea(source=Source, x="release_date", y1=0, y2="youtube_nonimpression_views", alpha=0.2, fill_color='#e09d53', legend_label="Non Impression Views")
fig1.circle(x="release_date", y="youtube_impression_views", source = Source2, color = "#5bab37", alpha = 0.8, legend_label="M0-M8 Series Impression Views")
fig1.circle(x="release_date", y="youtube_nonimpression_views", source = Source2, color = "#2d3328", alpha = 0.8,
legend_label="M0-M8 Series Non Impression Views")
fig1.add_tools(HoverTool(tooltips=tooltips,formatters={"@release_date": "datetime"}))
fig1.xaxis.axis_label = "Release Date"
fig1.yaxis.axis_label = "Total Views"

fig2 = figure(background_fill_color="#ebf4f6", plot_width = 600, plot_height = 400, x_axis_type = "datetime", title = "Subscriber Gain Per Episode")
fig2.line("release_date", "youtube_subscribers", source = Source, color = "#03c2fc", alpha = 0.8, legend_label="Subscribers")
fig2.varea(source=Source, x="release_date", y1=0, y2="youtube_subscribers", alpha=0.2, fill_color='#55FF88', legend_label="Subscribers")
fig2.circle(x="release_date", y="youtube_subscribers", source = Source2, color = "#5bab37", alpha = 0.8, legend_label="M0-M8 Series")
fig2.add_tools(HoverTool(tooltips=tooltips2,formatters={"@release_date": "datetime"}))
fig2.xaxis.axis_label = "Release Date"
fig2.yaxis.axis_label = "Subscriber Count"

show(column(fig1, fig2))
```

<b>📌 Observations :</b>

* Mostly, Non Impression Views are **greater than Impression Views**. CTDS seems to have a **loyal fan base** that shares its videos, producing more Non Impression Views
* In some cases, there's a sharp increase in views and a **big difference** between Impression and Non Impression Views.
* People **love to see specific Heroes**.
Hero choice does matter

* Total Views (especially Non Impression Views) definitely play a role in Subscriber Gain
* Though the M series doesn't perform well, if you look carefully you'll realise the **M series has better Impression Views than Non Impression Views**

<div> <b><font id="1.4.6" size="3">Youtube Favors CTDS?</font></b> </div>

```
data1={
    "Youtube Impressions":Episode_df.youtube_impressions.sum(),
    "Youtube Impression Views": Episode_df.youtube_impression_views.sum(),
    "Youtube NonImpression Views" : Episode_df.youtube_nonimpression_views.sum()
}

text=("Youtube Impressions","Youtube Impression Views","Youtube NonImpression Views")

fig = go.Figure(go.Funnelarea(
    textinfo= "text+value", text =list(data1.keys()), values = list(data1.values()),
    title = {"position": "top center", "text": "Youtube and Views"},
    name = '', showlegend=False,customdata=['Video Thumbnail shown to Someone', 'Views From Youtube Impressions', 'Views without Youtube Impressions'],
    hovertemplate = '%{customdata} <br>Count: %{value}</br>'
))
fig.show()
```

<b>📌 Observations :</b>

A few things to note here :

* Well, I haven't cracked the YouTube algorithm, but it seems YouTube has its blessings on CTDS
* CTDS Episodes are only able to convert **2.84%** of **YouTube Impressions into Viewers**
* **65.12%** of CTDS views are **Non Impression views**

It seems clear that the YouTube Thumbnail and Video Title are the important factors deciding whether a person will click on the video or not. Wait, you want some figures?
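The two funnel percentages quoted above fall straight out of the three column sums. A quick sketch with illustrative totals (on the real data these would be `Episode_df.youtube_impressions.sum()` and friends, which is where the 2.84% and 65.12% figures come from):

```python
# Illustrative totals -- substitute the real sums from Episodes.csv.
impressions = 1_000_000        # thumbnails YouTube showed to someone
impression_views = 28_400      # views that came from those impressions
nonimpression_views = 53_000   # views from shares, searches, external links, ...

# Share of impressions converted into views (the "2.84%"-style figure)
conversion = impression_views / impressions * 100

# Share of all views that did NOT come from an impression (the "65.12%"-style figure)
nonimpression_share = nonimpression_views / (impression_views + nonimpression_views) * 100

print(f"Impressions converted : {conversion:.2f}%")
print(f"Non-impression share  : {nonimpression_share:.2f}%")
```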
<div> <b><font id="1.4.7" size="3">Do Thumbnails really matter ?</font></b> </div>

```
colors = ["red", "olive", "darkred", "goldenrod"]
index={
    0:"YouTube default image",
    1:"YouTube default image with custom annotation",
    2:"Mini Series: Custom Image with annotations",
    3:"Custom image with CTDS branding, Title and Tags"
}

p = figure(background_fill_color="#ebf4f6", plot_width=600, plot_height=300, title="Thumbnail Type VS CTR")

base, lower, upper = [], [], []

for each_thumbnail_ref in index:
    if each_thumbnail_ref==2:
        temp = fastai_df[fastai_df.youtube_thumbnail_type==each_thumbnail_ref].youtube_ctr
    else:
        temp = Episode_df[Episode_df.youtube_thumbnail_type==each_thumbnail_ref].youtube_ctr
    mpgs_mean = temp.mean()
    mpgs_std = temp.std()
    lower.append(mpgs_mean - mpgs_std)
    upper.append(mpgs_mean + mpgs_std)
    base.append(each_thumbnail_ref)

    source_error = ColumnDataSource(data=dict(base=base, lower=lower, upper=upper))
    p.add_layout( Whisker(source=source_error, base="base", lower="lower", upper="upper") )

    tooltips = [
        ("Episode Id", "@episode_id"),
        ("Hero Present", "@heroes"),
    ]

    color = colors[each_thumbnail_ref % len(colors)]
    p.circle(y=temp, x=each_thumbnail_ref, color=color, legend_label=index[each_thumbnail_ref])
    print("Mean CTR for Thumbnail Type {} : {:.3f} ".format(index[each_thumbnail_ref], temp.mean()))

show(p)
```

<b>📌 Observations :</b>

From the above plot :

* It seems the Type of Thumbnail does have **some impact on CTR**
* Despite the **YouTube default image being used most of the time**, its average CTR is the **lowest** compared to the CTR of the other YouTube Thumbnails
* Since the **counts of the other YouTube thumbnails are low**, we can't say what's the best Thumbnail
* CTR depends on other factors too, like the Title, the Hero featured in the Episode, etc. Still, we can **confidently** say that **Thumbnails other than the YouTube default image attract** more Users to click on the Video
* As we discussed for the M series, M0-M1 failed to keep people's interest in the series.
* Although their Mean CTR is the lowest, we can observe that M0-M1 have a **better CTR compared to the majority of Episodes with YouTube default thumbnails**

In short, don't use the **default YouTube image for the Thumbnail**

<div> <b><font id="1.4.11" size="3">How much Viewers wanna watch?</font></b> </div>
<br>
Episodes have different durations. In order to get a meaningful insight, I'll calculate the **percentage of each Episode watched**..

```
a=Episode_df[["episode_id", "episode_duration", "youtube_avg_watch_duration"]]
a["percentage"]=(a.youtube_avg_watch_duration/a.episode_duration)*100

b=fastai_df[["episode_id", "episode_duration", "youtube_avg_watch_duration"]]
b["percentage"]=(b.youtube_avg_watch_duration/b.episode_duration)*100

temp=a.append(b).reset_index().drop(["index"], axis=1)

Source = ColumnDataSource(temp)
tooltips = [
    ("Episode Id", "@episode_id"),
    ("Episode Duration", "@episode_duration"),
    ("Youtube Avg Watch_duration Views", "@youtube_avg_watch_duration"),
    ("Percentage of video watched", "@percentage"),
]

fig1 = figure(background_fill_color="#ebf4f6", plot_width = 1000, plot_height = 400, x_range = temp["episode_id"].values, title = "Percentage of Episode Watched")
fig1.line("episode_id", "percentage", source = Source, color = "#03c2fc", alpha = 0.8)
fig1.line("episode_id", temp.percentage.mean(), source = Source, color = "#f2a652", alpha = 0.8,line_dash="dashed", legend_label="Mean : {:.3f}".format(temp.percentage.mean()))
fig1.add_tools(HoverTool(tooltips=tooltips))
fig1.xaxis.axis_label = "Episode Id"
fig1.yaxis.axis_label = "Percentage"
fig1.xaxis.major_label_orientation = np.pi / 3
show(column(fig1))
```

<b>📌 Observations :</b>

* On average, **13.065%** of an Episode is watched by Viewers
* But **most Episodes** have a watched percentage **less** than this threshold.

<b> How does that make sense ❓</b>

* That's because we have **some outliers** like E0 and the M series that have a watched percentage over **20%**.
<b> But why did such outliers occur ❓</b> * That's because they have a **low Episode Duration**. In this fast-moving world, humans get **bored** of things very **easily**. E0 and the M Series, having a **low Episode Duration**, made viewers **watch more**. Whether or not they'll subscribe to the channel is a different matter; that depends on the content. <b>To give more to Viewers and the Community, shorter Episodes can be a big step </b> <div> <b><font id="1.4.8" size="3">Performance on Other Platforms</font></b> </div> ``` colors = ["red", "olive", "darkred", "goldenrod"] index={ 0:"YouTube default playlist image", 1:"CTDS Branding", 2:"Mini Series: Custom Image with annotations", 3:"Custom image with CTDS branding, Title and Tags" } p = figure(background_fill_color="#ebf4f6", plot_width=600, plot_height=300, title="Thumbnail Type VS Anchor Plays") base, lower, upper = [], [], [] for each_thumbnail_ref in index: if each_thumbnail_ref==2: temp = fastai_df[fastai_df.youtube_thumbnail_type==each_thumbnail_ref].anchor_plays else: temp = Episode_df[Episode_df.youtube_thumbnail_type==each_thumbnail_ref].anchor_plays mpgs_mean = temp.mean() mpgs_std = temp.std() lower.append(mpgs_mean - mpgs_std) upper.append(mpgs_mean + mpgs_std) base.append(each_thumbnail_ref) source_error = ColumnDataSource(data=dict(base=base, lower=lower, upper=upper)) p.add_layout( Whisker(source=source_error, base="base", lower="lower", upper="upper") ) tooltips = [ ("Episode Id", "@episode_id"), ("Hero Present", "@heroes"), ] color = colors[each_thumbnail_ref % len(colors)] p.circle(y=temp, x=each_thumbnail_ref, color=color, legend_label=index[each_thumbnail_ref]) print("Mean Anchor Plays for Thumbnail Type {} : {:.3f} ".format(index[each_thumbnail_ref], temp.mean())) show(p) ``` <b>📌 Observations :</b> * **55.40%** of the Anchor thumbnails have **CTDS Branding** * But on average, **podcasts with the YouTube default playlist image** perform **better** in terms of Anchor Plays ``` 
Episode_df.release_date = pd.to_datetime(Episode_df.release_date) Source = ColumnDataSource(Episode_df) tooltips = [ ("Episode Id", "@episode_id"), ("Episode Title", "@episode_name"), ("Hero Present", "@heroes"), ("Anchor Plays", "@anchor_plays"), ("Category", "@category"), ("Date", "@release_date{%F}"), ] tooltips2 = [ ("Episode Id", "@episode_id"), ("Episode Title", "@episode_name"), ("Hero Present", "@heroes"), ("Spotify Starts Plays", "@spotify_starts"), ("Spotify Streams", "@spotify_streams"), ("Spotify Listeners", "@spotify_listeners"), ("Category", "@category"), ("Date", "@release_date{%F}"), ] fig1 = figure(background_fill_color="#ebf4f6", plot_width = 600, plot_height = 400, x_axis_type = "datetime", title = "Anchor Plays Per Episode") fig1.line("release_date", "anchor_plays", source = Source, color = "#03c2fc", alpha = 0.8, legend_label="Anchor Plays") fig1.line("release_date", Episode_df.anchor_plays.mean(), source = Source, color = "#f2a652", alpha = 0.8, line_dash="dashed", legend_label="Anchor Plays Mean : {:.3f}".format(Episode_df.anchor_plays.mean())) fig1.add_tools(HoverTool(tooltips=tooltips,formatters={"@release_date": "datetime"})) fig1.xaxis.axis_label = "Release Date" fig1.yaxis.axis_label = "Anchor Plays" fig2 = figure(background_fill_color="#ebf4f6", plot_width = 600, plot_height = 400, x_axis_type = "datetime", title = "Performance on Spotify Per Episode") fig2.line("release_date", "spotify_starts", source = Source, color = "#03c2fc", alpha = 0.8, legend_label="Spotify Starts Plays") fig2.line("release_date", "spotify_streams", source = Source, color = "#f2a652", alpha = 0.8, legend_label="Spotify Streams") fig2.line("release_date", "spotify_listeners", source = Source, color = "#03fc5a", alpha = 0.8, legend_label="Spotify Listeners") fig2.add_tools(HoverTool(tooltips=tooltips2,formatters={"@release_date": "datetime"})) fig2.xaxis.axis_label = "Release Date" fig2.yaxis.axis_label = "Total Plays" show(column(fig1,fig2)) ``` * It's 2020 and 
it seems people aren't much into podcasts and audio these days <div> <b><font id="1.4.10" size="3">Distribution of Heroes by Country and Nationality</font></b> </div> ``` temp=Episode_df.groupby(["heroes_location", "heroes"])["heroes_nationality"].value_counts() parent=[] names =[] values=[] heroes=[] for k in temp.index: parent.append(k[0]) heroes.append(k[1]) names.append(k[2]) values.append(temp.loc[k]) df = pd.DataFrame( dict(names=names, parents=parent,values=values, heroes=heroes)) df["World"] = "World" fig = px.treemap( df, path=['World', 'parents','names','heroes'], values='values',color='parents') fig.update_layout( width=1000, height=700, title_text="Distribution of Heroes by Country and Nationality") fig.show() ``` * Most of our Heroes live in the USA, but there's quite a range of diversity in Heroes' nationalities within each country, which is good to know <div> <b><font id="1.4.12" size="3">Any Relation between Release Dates of Episodes?</font></b> </div> ``` a=Episode_df.release_date b=(a-a.shift(periods=1, fill_value='2019-07-21')).astype('timedelta64[D]') d = {'episode_id':Episode_df.episode_id, 'heroes':Episode_df.heroes, 'release_date': Episode_df.release_date, 'day_difference': b} temp = pd.DataFrame(d) Source = ColumnDataSource(temp) tooltips = [ ("Episode Id", "@episode_id"), ("Hero Present", "@heroes"), ("Day Difference", "@day_difference"), ("Date", "@release_date{%F}"), ] fig1 = figure(background_fill_color="#ebf4f6", plot_width = 1000, plot_height = 400, x_axis_type = "datetime", title = "Day difference between Each Release Date") fig1.line("release_date", "day_difference", source = Source, color = "#03c2fc", alpha = 0.8) fig1.add_tools(HoverTool(tooltips=tooltips,formatters={"@release_date": "datetime"})) fig1.xaxis.axis_label = "Date" fig1.yaxis.axis_label = "No of Days" fig1.xaxis.major_label_orientation = np.pi / 3 show(column(fig1)) ``` <b>📌 Observations :</b> * It seems 2020 made Sanyam a bit more consistent with his release dates, with a difference of 3 
or 4 days between each release till 18th July <div> <b><font id="1.4.13" size="3">Do I know about the Release of the anniversary interview episode?</font></b> </div> <br> Because of time constraints, I didn't scrape new data myself; instead, I visited his YouTube channel and manually examined his release patterns | Episode Id | Release | Day Difference | |----------|:-------------:|:-------------:| | E75 | 2020-06-18 | 4 | | E76 | 2020-06-21 | 3 | | E77 | 2020-06-28 | 7 | | E78 | 2020-07-02 | 4 | | E79 | 2020-07-09 | 7 | | E80 | 2020-07-12 | 3 | Maybe he's experimenting with a new pattern <b> Can we pin-point when the 1-year anniversary interview episode will be released❓</b> **Actually, no!** Though a small pattern can be observed in the release dates, he has a somewhat **odd recording pattern** : * Who knows, he may have 3-4 videos already recorded and ready to be released. * Even if he records the anniversary interview episode today, we cannot say when he'll release that Episode. As per his release pattern, he's been releasing Episodes **every 3 or 4 days**. * Considering **E77 and E79 as exceptions**, he'll most probably release **E81** on **2020-07-16** or **2020-07-15** * If he's experimenting with a new pattern (**a 7-day difference after one video**), then E81 will be released on **2020-07-19**, followed by **E82** on **2020-07-22** or **2020-07-23** If I'm correct then Mr. 
[Sanyam Bhutani](https://www.kaggle.com/init27), please don't forget to give a small shoutout to me 😄 <div> <b><font id="1.5" size="4"> Exploring Raw / Cleaned Subtitles</font></b> </div> <a href="#toc"><span class="label label-info">Go back to our Guide</span></a> <br> So, we have 2 directories here : * Raw Subtitles : Transcript in text format * Cleaned Subtitles : Transcript in CSV format with timestamps ``` def show_script(id): return pd.read_csv("../input/chai-time-data-science/Cleaned Subtitles/{}.csv".format(id)) df = show_script("E1") df ``` <div> <b><font id="1.5.1" size="3">A Small Shoutout to Ramshankar Yadhunath</font></b> </div> <br> I would like to give a small shoutout to [Ramshankar Yadhunath](https://www.kaggle.com/thedatabeast) for providing a feature-engineering script in his [Kernel](https://www.kaggle.com/thedatabeast/making-perfect-chai-and-other-tales). Hey guys, if you followed me till here, then don't forget to check out his [Kernel](https://www.kaggle.com/thedatabeast/making-perfect-chai-and-other-tales) too. ``` # feature engineer the transcript features def conv_to_sec(x): """ Time to seconds """ t_list = x.split(":") if len(t_list) == 2: m = t_list[0] s = t_list[1] time = int(m) * 60 + int(s) else: h = t_list[0] m = t_list[1] s = t_list[2] time = int(h) * 60 * 60 + int(m) * 60 + int(s) return time def get_durations(nums, size): """ Get durations i.e the time for which each speaker spoke continuously """ diffs = [] for i in range(size - 1): diffs.append(nums[i + 1] - nums[i]) diffs.append(30) # standard value for all end of the episode CFA by Sanyam return diffs def transform_transcript(sub, episode_id): """ Transform the transcript of the given episode """ # create the time second feature that converts the time into the unified qty. 
of seconds sub["Time_sec"] = sub["Time"].apply(conv_to_sec) # get durations sub["Duration"] = get_durations(sub["Time_sec"], sub.shape[0]) # providing an identity to each transcript sub["Episode_ID"] = episode_id sub = sub[["Episode_ID", "Time", "Time_sec", "Duration", "Speaker", "Text"]] return sub def combine_transcripts(sub_dir): """ Combine all the 75 transcripts of the ML Heroes Interviews together as one dataframe """ episodes = [] for i in range(1, 76): file = "E" + str(i) + ".csv" try: sub_epi = pd.read_csv(os.path.join(sub_dir, file)) sub_epi = transform_transcript(sub_epi, ("E" + str(i))) episodes.append(sub_epi) except: continue return pd.concat(episodes, ignore_index=True) # create the combined transcript dataset sub_dir = "../input/chai-time-data-science/Cleaned Subtitles" transcripts = combine_transcripts(sub_dir) transcripts.head() ``` Now we have some data to work with. Thanking [Ramshankar Yadhunath](https://www.kaggle.com/thedatabeast) once again, let's get started... <div class="alert alert-block alert-warning"> <b>Note :</b> Transcripts for E0 and E4 are missing </div> <div> <b><font id="1.5.2" size="3">Is the Intro Bad for CTDS?</font></b> </div> <br> From our previous analysis, we realised the majority of Episodes have quite low watch time, i.e., less than 13.065% of the total duration. In that case, how much does the intro itself hurt CTDS in terms of its duration? Let's find out... 
``` temp = Episode_df[["episode_id","youtube_avg_watch_duration"]] temp=temp[(temp.episode_id!="E0") & (temp.episode_id!="E4")] intro=[] for i in transcripts.Episode_ID.unique(): intro.append(transcripts[transcripts.Episode_ID==i].iloc[0].Duration) temp["Intro_Duration"]=intro temp["diff"]=temp.youtube_avg_watch_duration-temp.Intro_Duration Source = ColumnDataSource(temp) tooltips = [ ("Episode Id", "@episode_id"), ("Youtube Avg Watch_duration Views", "@youtube_avg_watch_duration"), ("Intro Duration", "@Intro_Duration"), ("Avg Duration of Content Watched", "@diff"), ] fig1 = figure(background_fill_color="#ebf4f6", plot_width = 1000, plot_height = 600, x_range = temp["episode_id"].values, title = "Impact of Intro Durations") fig1.line("episode_id", "youtube_avg_watch_duration", source = Source, color = "#03c2fc", alpha = 0.8, legend_label="Youtube Avg Watch_duration Views") fig1.line("episode_id", "Intro_Duration", source = Source, color = "#f2a652", alpha = 0.8, legend_label="Intro Duration") fig1.line("episode_id", "diff", source = Source, color = "#03fc5a", alpha = 0.8, legend_label="Avg Duration of Content Watched") fig1.add_tools(HoverTool(tooltips=tooltips)) fig1.xaxis.axis_label = "Episode Id" fig1.yaxis.axis_label = "Percentage" fig1.xaxis.major_label_orientation = np.pi / 3 show(column(fig1)) print("{:.2f} % of Episodes have Avg Duration of Content Watched less than 5 minutes".format(len(temp[temp["diff"]<300])/len(temp)*100)) print("{:.2f} % of Episodes have Avg Duration of Content Watched less than 4 minutes".format(len(temp[temp["diff"]<240])/len(temp)*100)) print("{:.2f} % of Episodes have Avg Duration of Content Watched less than 3 minutes".format(len(temp[temp["diff"]<180])/len(temp)*100)) print("{:.2f} % of Episodes have Avg Duration of Content Watched less than 2 minutes".format(len(temp[temp["diff"]<120])/len(temp)*100)) print("In {} case, Viewer left in the Intro Duration".format(len(temp[temp["diff"]<0]))) ``` <b>🧠 My Conclusion: </b> * Observing 
the graph and the stats, it's clearly high time for a change * We don't have transcripts for the M Series, where the percentage of each Episode watched was high, i.e., the Episodes had a short duration * Concluding from the analysis, we can now strongly say that Episodes with a shorter length will definitely help. There's a lot to improve: * Shorter videos can be delivered highlighting the important aspects of the show * Short summaries can be provided in the description; maybe after reading them, viewers would devote time to a longer show (depending on their interest in the topic reflected in the summary) * The full-length show can be provided as a podcast on Apple, Spotify, and Anchor. If, after the shorter videos and summaries, a viewer wishes to watch the full show, they can get it from there With 45.95% of Episodes having an average duration of content watched of less than 3 minutes, we can hardly gain any useful insight or comment on the quality of the content delivered. But okay! We can have some fun though 😄 <div> <b><font id="1.5.3" size="3">Who Speaks More ?</font></b> </div> <br> ``` host_text = [] hero_text = [] for i in transcripts.Episode_ID.unique(): host_text.append([i, transcripts[(transcripts.Episode_ID==i) & (transcripts.Speaker=="Sanyam Bhutani")].Text]) hero_text.append([i, transcripts[(transcripts.Episode_ID==i) & (transcripts.Speaker!="Sanyam Bhutani")].Text]) temp_host={} temp_hero={} for i in range(len(transcripts.Episode_ID.unique())): host_text_count = len(host_text[i][1]) hero_text_count = len(hero_text[i][1]) temp_host[hero_text[i][0]]=host_text_count temp_hero[hero_text[i][0]]=hero_text_count def getkey(dict): list = [] for key in dict.keys(): list.append(key) return list def getvalue(dict): list = [] for key in dict.values(): list.append(key) return list Source = ColumnDataSource(data=dict( x=getkey(temp_host), y=getvalue(temp_host), a=getkey(temp_hero), b=getvalue(temp_hero), )) tooltips = [ ("Episode Id", "@x"), ("No of Times Host Speaks", "@y"), ("No of Times Hero Speaks", "@b"), ] fig1 = 
figure(background_fill_color="#ebf4f6",plot_width = 1000, tooltips=tooltips,plot_height = 400, x_range = getkey(temp_host), title = "Who Speaks More ?") fig1.vbar("x", top = "y", source = Source, width = 0.4, color = "#76b4bd", alpha=.8, legend_label="No of Times Host Speaks") fig1.vbar("a", top = "b", source = Source, width = 0.4, color = "#e7f207", alpha=.8, legend_label="No of Times Hero Speaks") fig1.xaxis.axis_label = "Episode" fig1.yaxis.axis_label = "Count" fig1.grid.grid_line_color="#feffff" fig1.xaxis.major_label_orientation = np.pi / 4 show(fig1) ``` * Excluding a few Episodes, the ratio between how often each one speaks is quite well maintained * E69 was an AMA episode; that's why there is no Hero <div> <b><font id="1.5.4" size="3">Frequency of Questions Per Episode </font></b> </div> <br> ``` ques=0 total_ques={} for episode in range(len(transcripts.Episode_ID.unique())): for each_text in range(len(host_text[episode][1])): ques += host_text[episode][1].reset_index().iloc[each_text].Text.count("?") total_ques[hero_text[episode][0]]= ques ques=0 from statistics import mean Source = ColumnDataSource(data=dict( x=getkey(total_ques), y=getvalue(total_ques), )) tooltips = [ ("Episode Id", "@x"), ("No of Questions", "@y"), ] fig1 = figure(background_fill_color="#ebf4f6",plot_width = 1000, plot_height = 400,tooltips=tooltips, x_range = getkey(temp_host), title = "Questions asked Per Episode") fig1.vbar("x", top = "y", source = Source, width = 0.4, color = "#76b4bd", alpha=.8, legend_label="No of Questions asked Per Episode") fig1.line("x", mean(getvalue(total_ques)), source = Source, color = "#f2a652", alpha = 0.8,line_dash="dashed", legend_label="Average Questions : {:.3f}".format(mean(getvalue(total_ques)))) fig1.xaxis.axis_label = "Episode" fig1.yaxis.axis_label = "No of Questions" fig1.legend.location = "top_left" fig1.grid.grid_line_color="#feffff" fig1.xaxis.major_label_orientation = np.pi / 4 show(fig1) ``` * On average, around 30 questions are asked by the Host * E69 
being AMA Episode justifies the reason of having such high no of Questions <div> <b><font id="1.5.5" size="3">Favourite Text ?</font></b> </div> <br> <b>⚒️ About the Function :</b> Well, I'm gonna write a small function. You can pass a Hero Name and it will create a graph about 7 most common words spoken by that person But before that I would like to give a small Shoutout to [Parul Pandey](https://www.kaggle.com/parulpandey) for providing a text cleaning script in her [Kernel](https://www.kaggle.com/parulpandey/how-to-explore-the-ctds-show-data). ``` import re import nltk from statistics import mean from collections import Counter import string def clean_text(text): '''Make text lowercase, remove text in square brackets,remove links,remove punctuation and remove words containing numbers.''' text = text.lower() text = re.sub('\[.*?\]', '', text) text = re.sub('https?://\S+|www\.\S+', '', text) text = re.sub('<.*?>+', '', text) text = re.sub('[%s]' % re.escape(string.punctuation), '', text) text = re.sub('\n', '', text) text = re.sub('\w*\d\w*', '', text) return text def text_preprocessing(text): """ Cleaning and parsing the text. 
""" tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+') nopunc = clean_text(text) tokenized_text = tokenizer.tokenize(nopunc) #remove_stopwords = [w for w in tokenized_text if w not in stopwords.words('english')] combined_text = ' '.join(tokenized_text) return combined_text transcripts['Text'] = transcripts['Text'].apply(str).apply(lambda x: text_preprocessing(x)) def get_data(speakername=None): label=[] value=[] text_data=transcripts[(transcripts.Speaker==speakername)].Text.tolist() temp=list(filter(lambda x: x.count(" ")<10 , text_data)) freq=nltk.FreqDist(temp).most_common(7) for each in freq: label.append(each[0]) value.append(each[1]) Source = ColumnDataSource(data=dict( x=label, y=value, )) tooltips = [ ("Favourite Text", "@x"), ("Frequency", "@y"), ] fig1 = figure(background_fill_color="#ebf4f6",plot_width = 600, tooltips=tooltips, plot_height = 400, x_range = label, title = "Favourite Text") fig1.vbar("x", top = "y", source = Source, width = 0.4, color = "#76b4bd", alpha=.8) fig1.xaxis.axis_label = "Text" fig1.yaxis.axis_label = "Frequency" fig1.grid.grid_line_color="#feffff" fig1.xaxis.major_label_orientation = np.pi / 4 show(fig1) get_data(speakername="Sanyam Bhutani") ``` <b>📌 Observations :</b> * Okay, Yeah seems to be favourite words of [Sanyam Bhutani](https://www.kaggle.com/init27) * Well He has some different laughs for different scenario I guess 😄 * We have all the Transcript where [Sanyam Bhutani](https://www.kaggle.com/init27) speaks. So, It's common that you'll find words with good frequency for [Sanyam Bhutani](https://www.kaggle.com/init27) only. * But you can still try. 
Who knows, I might be missing something interesting 😄 <div class="alert-block alert-info"> <b>Tip:</b> Pass your favourite hero's name to the function get_data() and you're good to go </div> <div> <b><font id="2" size="5">End Notes</font></b> </div> <br> With this, I end my analysis of the dataset [Chai Time Data Science | CTDS.Show](https://www.kaggle.com/rohanrao/chai-time-data-science) provided by Mr. [Vopani](https://www.kaggle.com/rohanrao) and Mr. [Sanyam Bhutani](https://www.kaggle.com/init27).<br><br> It was a wonderful experience for me<br><br> If my analysis or way of storytelling has somehow hurt any sentiments, I apologize for that <br><br> And yes, congratulations to [Chai Time Data Science | CTDS.Show](https://www.kaggle.com/rohanrao/chai-time-data-science) on completing a successful 1-year journey.<br><br> Now I can finally enjoy my Chai break in peace <img src="https://imgmediagumlet.lbb.in/media/2018/07/5b5712e41d12b6235f1385a4_1532433124061.jpeg?w=750&h=500&dpr=1" width=400 height=400> <div> <b><font id="3" size="5">Credits</font></b> </div> <br> Thanks everyone for these amazing photos. A small shoutout to all of you * https://miro.medium.com/max/1400/0*ovcHbNV5470zvsH5.jpeg <br> * https://api.breaker.audio/shows/681460/episodes/55282147/image?v=0987aece49022c8863600c4cf5b67eb8&width=400&height=400 <br> * https://external-preview.redd.it/VVHgy7UiRHOUfs6v91tRSDgvvUIXlJvyiD822RG4Fhg.png?auto=webp&s=d6fb054c1f57ec09b5b45bcd7ee0e821b53449c9 <br> * https://memegenerator.net/img/instances/64277502.jpg <br> * https://miro.medium.com/max/420/1*lYw_nshU1qg3dqbqgpWoDA.png <br> * https://eatrealamerica.com/wp-content/uploads/2018/05/Fishy-Smell.png <br> * https://i.imgflip.com/2so1le.jpg <br> * https://encrypted-tbn0.gstatic.com/images?q=tbn%3AANd9GcQf6ziN-7WH50MIZZQtJbO0Czsll5wTud7E3Q&usqp=CAU <br> * https://imgmediagumlet.lbb.in/media/2018/07/5b5712e41d12b6235f1385a4_1532433124061.jpeg?w=750&h=500&dpr=1 <br>
``` import os import folium print(folium.__version__) ``` # How to create Popups ## Simple popups You can define your popup at the feature creation, but you can also overwrite them afterwards: ``` m = folium.Map([45, 0], zoom_start=4) folium.Marker([45, -30], popup='inline implicit popup').add_to(m) folium.CircleMarker( location=[45, -10], radius=25, fill=True, popup=folium.Popup('inline explicit Popup') ).add_to(m) ls = folium.PolyLine( locations=[[43, 7], [43, 13], [47, 13], [47, 7], [43, 7]], color='red' ) ls.add_child(folium.Popup('outline Popup on Polyline')) ls.add_to(m) gj = folium.GeoJson( data={ 'type': 'Polygon', 'coordinates': [[[27, 43], [33, 43], [33, 47], [27, 47]]] } ) gj.add_child(folium.Popup('outline Popup on GeoJSON')) gj.add_to(m) m.save(os.path.join('results', 'simple_popups.html')) m m = folium.Map([45, 0], zoom_start=2) folium.Marker( location=[45, -10], popup=folium.Popup("Let's try quotes", parse_html=True) ).add_to(m) folium.Marker( location=[45, -30], popup=folium.Popup(u"Ça c'est chouette", parse_html=True) ).add_to(m) m ``` ## Vega Popup You may know that it's possible to create awesome Vega charts with (or without) `vincent`. If you're willing to put one inside a popup, it's possible thanks to `folium.Vega`. ``` import json import numpy as np import vincent scatter_points = { 'x': np.random.uniform(size=(100,)), 'y': np.random.uniform(size=(100,)), } # Let's create the vincent chart. scatter_chart = vincent.Scatter(scatter_points, iter_idx='x', width=600, height=300) # Let's convert it to JSON. scatter_json = scatter_chart.to_json() # Let's convert it to dict. scatter_dict = json.loads(scatter_json) m = folium.Map([43, -100], zoom_start=4) popup = folium.Popup() folium.Vega(scatter_chart, height=350, width=650).add_to(popup) folium.Marker([30, -120], popup=popup).add_to(m) # Let's create a Vega popup based on scatter_json. 
popup = folium.Popup(max_width=0) folium.Vega(scatter_json, height=350, width=650).add_to(popup) folium.Marker([30, -100], popup=popup).add_to(m) # Let's create a Vega popup based on scatter_dict. popup = folium.Popup(max_width=650) folium.Vega(scatter_dict, height=350, width=650).add_to(popup) folium.Marker([30, -80], popup=popup).add_to(m) m.save(os.path.join('results', 'vega_popups.html')) m ``` ## Fancy HTML popup ``` import branca m = folium.Map([43, -100], zoom_start=4) html = """ <h1> This is a big popup</h1><br> With a few lines of code... <p> <code> from numpy import *<br> exp(-2*pi) </code> </p> """ folium.Marker([30, -100], popup=html).add_to(m) m.save(os.path.join('results', 'html_popups.html')) m ``` You can also put any HTML code inside of a Popup, thanks to the `IFrame` object. ``` m = folium.Map([43, -100], zoom_start=4) html = """ <h1> This popup is an Iframe</h1><br> With a few lines of code... <p> <code> from numpy import *<br> exp(-2*pi) </code> </p> """ iframe = branca.element.IFrame(html=html, width=500, height=300) popup = folium.Popup(iframe, max_width=500) folium.Marker([30, -100], popup=popup).add_to(m) m.save(os.path.join('results', 'html_popups.html')) m import pandas as pd df = pd.DataFrame(data=[['apple', 'oranges'], ['other', 'stuff']], columns=['cats', 'dogs']) m = folium.Map([43, -100], zoom_start=4) html = df.to_html(classes='table table-striped table-hover table-condensed table-responsive') popup = folium.Popup(html) folium.Marker([30, -100], popup=popup).add_to(m) m.save(os.path.join('results', 'html_popups.html')) m ``` Note that you can put another `Figure` into an `IFrame`; this should let you do strange things... ``` # Let's create a Figure, with a map inside. f = branca.element.Figure() folium.Map([-25, 150], zoom_start=3).add_to(f) # Let's put the figure into an IFrame. 
iframe = branca.element.IFrame(width=500, height=300) f.add_to(iframe) # Let's put the IFrame in a Popup popup = folium.Popup(iframe, max_width=2650) # Let's create another map. m = folium.Map([43, -100], zoom_start=4) # Let's put the Popup on a marker, in the second map. folium.Marker([30, -100], popup=popup).add_to(m) # We get a map in a Popup. Not really useful, but powerful. m.save(os.path.join('results', 'map_popups.html')) m ```
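One detail the examples above gloss over: when popup text comes from data (episode titles, table cells, user input), it should be HTML-escaped before being embedded, or a stray `<` or `&` will break the popup's markup. Below is a minimal sketch under that assumption; `safe_popup_html` is a hypothetical helper of my own, not part of folium, and the folium/branca calls mirror the `IFrame` pattern already used above, guarded so the helper itself stays dependency-free:

```
from html import escape

def safe_popup_html(title, rows):
    # Build a small HTML snippet for a popup, escaping all user-supplied text
    items = "".join("<li>{}: {}</li>".format(escape(str(k)), escape(str(v))) for k, v in rows)
    return "<b>{}</b><ul>{}</ul>".format(escape(str(title)), items)

popup_html = safe_popup_html("E1", [("Hero", "A & B"), ("CTR", 7.1)])

try:  # hypothetical usage; same calls as in the IFrame example above
    import branca
    import folium
    m = folium.Map([43, -100], zoom_start=4)
    iframe = branca.element.IFrame(html=popup_html, width=250, height=120)
    folium.Marker([30, -100], popup=folium.Popup(iframe, max_width=250)).add_to(m)
except ImportError:
    pass  # folium/branca not installed; the HTML helper still works on its own
```

Here `escape` turns the `&` in "A & B" into `&amp;`, so the popup renders the literal text instead of malformed HTML.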
# Laboratory Work 9. OOP. ``` import numpy as np import matplotlib.pyplot as plt ``` # 1. Creating Classes and Objects In Python, classes are created with the `class` statement, followed by an arbitrary class name and a colon; the class body is then written on the following lines with indentation: ``` class A: # class <class name>: pass # <class body> ``` Creating an instance of the class: ``` a = A() # variable_name = ClassName() print(a, 'is an object of class', type(a)) ``` # 2. A Class as a Module (Library) A class can be thought of as similar to a module (library): - it may have its own variables with values and its own functions - a class has its own namespace, whose members are accessed through the class name: ``` class CLASS: const = 5 # class attribute def adder(v): # function-method return v + CLASS.const CLASS.const CLASS.adder(4) ``` # 3. A Class as a Creator of Objects ``` Object = CLASS() Object.const Object.adder(100) ``` The point is that classes and objects are not just modules. A class creates objects that are, in a certain sense, its heirs (copies). This means that if an object has no `const` field of its own, the interpreter looks for it one level up, that is, in the class. Thus, if we assign the object a field with the same name as in the class, it shadows, i.e. redefines, the class field: ``` Object.const Object.const = 10 Object.const CLASS.const ``` We can see that `Object.const` and `CLASS.const` are different variables. `Object.const` lives in the namespace of the object `Object`, while `CLASS.const` lives in the namespace of the class `CLASS`. If the `const` field is not set on the object `Object`, the interpreter walks up the inheritance tree to the class, where it finds this field. Methods are also inherited by the objects of a class. In this case the object `Object` has no `adder` method of its own, so it is looked up in the class `CLASS`. However, a class may produce many objects. 
And methods are meant to process objects. Thus, when a method is called, a specific object must be passed into it, namely the object it will process. The expression Object.adder(100) is executed by the interpreter as follows: - Look for the attribute `adder()` on the object `Object`. Not found. - Then look in the class `CLASS`, since it created the object `Object`. - The method is found there. Pass it the object the method should be applied to, along with the argument given in parentheses. In other words, the expression `Object.adder(100)` is transformed into the expression `CLASS.adder(Object, 100)`. So the interpreter tried to pass two parameters into the `adder()` method of class `CLASS`: the object `Object` and the number `100`. But we programmed the `adder()` method to take only one parameter. However: ``` Object.adder() ``` This leads to an odd situation: `adder()` is called not only through the class but also through the objects created from it, yet in the latter case an error is always raised. Sometimes a method with parameters is needed that does not have to receive an instance of the class. Static methods are intended for such situations. They can be called through objects of the class, but the object itself is not passed to them as an argument. In Python there is no pressing need for static methods, since code can live outside a class and a program does not have to start executing from a class. If we simply need a function, we can define it in the main branch of the program. Nevertheless, static methods can also be implemented in Python, with the `@staticmethod` decorator: ``` class CLASS: const = 5 # class attribute @staticmethod def adder(v): # function-method return v + CLASS.const Object = CLASS() Object.adder(5) ``` Static methods in Python are essentially ordinary functions placed inside a class for convenience and living in that class's namespace. This can be some kind of helper code. 
In general, if the body of a method makes no reference to a specific object (most often denoted `self`), it makes sense to make the method static. # 4. Modifying an Object's Fields In Python, an object can not only override fields and methods inherited from its class, it can also be given new ones that the class does not have: ``` Object1 = CLASS() Object2 = CLASS() Object2.str = 'abcd' Object2.str Object1.str CLASS.str ``` However, this is not common practice in programming, because objects of the same class would then differ from one another in their sets of attributes. That would complicate their automated processing and bring chaos into the program. It is therefore customary to assign values to fields, and to read them, through method calls: setters (`set`) and getters (`get`): ``` class CLASS: def setName(self, n): self.name = n def getName(self): try: return self.name except: return "No name" first = CLASS() second = CLASS() first.setName("Bob") first.getName() print(second.getName()) ``` # 5. Special Methods # 5.1. The Class Constructor (the `__init__()` Method) In object-oriented programming, a class constructor is a method that is called automatically when objects are created. It can also be called the constructor of the class's objects. The name of this method is usually dictated by the syntax of the particular programming language. In Python, the role of the constructor is played by the `__init__()` method. In Python, a pair of leading and trailing double underscores in a method name indicates that the method belongs to the group of operator-overloading methods. If such methods are defined in a class, its objects can take part in operations such as addition and subtraction, be called like functions, and so on. Operator-overloading methods do not have to be called by name: the very fact that the object participates in a particular operation is the call. For the class constructor, that operation is the creation of an object. 
Since an object is created at the moment the class is called by name, the `__init__()` method, if defined in the class, is invoked at that moment. Constructors are needed because objects often must have their own attributes right away. Suppose there is a class `Person` whose objects must always have a first and a last name: ``` class Person: def setName(self, n, s): self.name = n self.surname = s p1 = Person() p1.setName("Bill", "Ross") ``` Or: ``` class Person: def __init__(self, n, s): self.name = n self.surname = s ``` In turn, the class constructor will not allow an object to be created without the required fields: ``` p2 = Person() p2 = Person("Sam", "Baker") print(p2.name, p2.surname) ``` Sometimes, though, creating an object must be allowed even when no data is passed to the constructor. In that case the constructor parameters are given default values: ``` class Rectangle: def __init__(self, w = 0.5, h = 1): self.width = w self.height = h def square(self): return self.width * self.height rec1 = Rectangle(5, 2) rec2 = Rectangle() rec3 = Rectangle(3) rec4 = Rectangle(h = 4) print(rec1.square()) print(rec2.square()) print(rec3.square()) print(rec4.square()) ``` # 5.2. Constructor and Destructor Besides the object constructor, programming languages have its counterpart, the destructor. It is called to destroy an object. In Python, an object is destroyed when all variables bound to it disappear or are assigned other values, so that the link to the old object is lost. A variable can be deleted with the `del` statement. In Python classes, the role of the destructor is played by the `__del__()` method. 
```
class Student:
    def __init__(self, name, surname, position=1):
        self.name = name
        self.surname = surname
        self.position = position

    def display(self):
        return self.name, self.surname, self.position

    def __del__(self):
        print("Goodbye %s %s" % (self.name, self.surname))

p1 = Student('big', 'dude', 3)
p2 = Student('small', 'croon', 4)
p3 = Student('neutral', 'guy', 5)
print(p1.display())
print(p2.display())
print(p3.display())
del p2                 # the destructor prints "Goodbye small croon"
print(p2.display())    # NameError: p2 no longer exists
``` # 5.3. Special methods Python reserves a number of method names in user-defined classes: the special (or standard) methods. More details can be found in the [Python data model documentation](https://docs.python.org/3/reference/datamodel.html). For example: `__bool__()` returns True or False. `__call__()` lets an object be used as a function, i.e. it becomes callable. `__len__()` is most often implemented in collections and collection-like types that store sets of data. For a list (`list`), `__len__()` returns the number of elements; for a string, the number of characters. It is invoked by Python's built-in `len()` function. # The `__setattr__()` method In Python, attributes can be attached to an object from outside its class: ```
class A:
    def __init__(self, v):
        self.field1 = v

a = A(10)
a.field2 = 20
print(a.field1, a.field2)
``` If this behaviour is undesirable, it can be forbidden by overloading attribute assignment with the `__setattr__()` method: ```
class A:
    def __init__(self, v):
        self.field1 = v

    def __setattr__(self, attr, value):
        if attr == 'field1':
            self.__dict__[attr] = value
        else:
            raise AttributeError('Attempt to set a non-existent attribute!')

a = A(15)
a.field1
a.field2 = 30      # raises AttributeError
a.field2
a.__dict__
``` If `__setattr__()` is present in a class, it is called every time any attribute of the object is assigned to. Note that assigning to a non-existent attribute also means adding it to the object. 
When object `a` is created, the number `15` is passed to the constructor, and the attribute `field1` is set up for the object. The attempted assignment immediately sends the interpreter into `__setattr__()`, which checks whether the attribute name equals the string `'field1'`. If so, the attribute and its value are added to the object's attribute dictionary. Inside `__setattr__()` you cannot simply write `self.field1 = value`: that would trigger a new, recursive call of `__setattr__()`. This is why the field is assigned through the `__dict__` dictionary, which every object has and which stores its attributes with their values. If the `attr` parameter does not match an allowed field, an `AttributeError` is raised explicitly; we see this when the main code tries to acquire a `field2` attribute. # Example 1. Fibonacci numbers The Fibonacci sequence is defined by the recurrence $$ F_n = \begin{cases} 0, & n = 0, \\ 1, & n = 1, \\ F_{n-1}+F_{n-2}, & n > 1, \end{cases} $$ which yields the sequence {0, 1, 1, 2, 3, 5, 8, 13, 21, 34, …}. One solution that may look natural and efficient is recursion: ```
def Fibonacci_Recursion(n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    return Fibonacci_Recursion(n-1) + Fibonacci_Recursion(n-2)
``` With this function we solve the problem "from the end": we decrease n step by step until we reach the known base values. But, as we saw earlier, this implementation solves the same subproblems over and over: the same intermediate values are computed many times, and the number of operations grows at the same rate as the Fibonacci numbers themselves, that is, exponentially. One way out is to store the intermediate results that have already been found so that they can be reused (caching), with the cache kept in external storage. 
```
def Fibonacci_Recursion_cache(n, cache):
    if n == 0:
        return 0
    if n == 1:
        return 1
    if cache[n] > 0:
        return cache[n]
    cache[n] = Fibonacci_Recursion_cache(n-1, cache) + Fibonacci_Recursion_cache(n-2, cache)
    return cache[n]
``` This solution is quite efficient (apart from the overhead of the function calls). But it can be done even more simply: ```
def Fibonacci(n):
    # Note: this version is 1-based, i.e. Fibonacci(1) == Fibonacci(2) == 1,
    # unlike the 0-based recursive version above.
    fib = [0] * max(2, n)
    fib[0] = 1
    fib[1] = 1
    for i in range(2, n):
        fib[i] = fib[i - 1] + fib[i - 2]
    return fib[n-1]
``` This can be called solving "from the beginning": first we fill in the known values, then compute the first unknown value, then the next, and so on until we reach the one we need. That is exactly how dynamic programming works: first solve all the subproblems (find every `F[i]` for `i < n`), then, knowing the solutions of the subproblems, obtain the solution of the original problem. # Exercise 1 Create a class for computing Fibonacci numbers. Each Fibonacci number is an object of this class with two attributes: its value and its index. Use functions external to this class to initialise (compute) the Fibonacci numbers. ```
class Fiber:
    n = 1

    def __init__(self, n):
        self.n = n

    def calculate(self):
        return Fibonacci(self.n)

k = Fiber(int(input('Enter the desired index: ')))
print(k.calculate())
``` # Exercise 2 Put the functions for computing Fibonacci numbers inside the created class as static functions. ```
class Fiber2:
    @staticmethod
    def calculate(n):
        # same 1-based iterative computation, now a static method
        fib = [0] * max(2, n)
        fib[0] = 1
        fib[1] = 1
        for i in range(2, n):
            fib[i] = fib[i - 1] + fib[i - 2]
        return fib[n-1]

# usage example:
print(Fiber2.calculate(int(input('Enter the desired index: '))))
``` # Exercise 3 Overload the addition, subtraction, multiplication and division operations for the created class as operations on the indices of the Fibonacci numbers. 
```
class FiberSuper:
    def __init__(self, n):
        self.setNumber(n)

    def setNumber(self, n):
        self.n = n
        self.fib = Fiber2.calculate(n)

    def getNumber(self):
        return self.n

    def getFibonacci(self):
        return self.fib

    def __add__(self, other):
        return FiberSuper(self.n + other.n)

    def __mul__(self, other):
        return FiberSuper(self.n * other.n)

    def __sub__(self, other):
        # absolute difference keeps the index positive
        return FiberSuper(abs(self.n - other.n))

    def __truediv__(self, other):
        # integer division: an index must stay a whole number
        return FiberSuper(self.n // other.n)

k1 = FiberSuper(16)
k2 = FiberSuper(8)
print('k1: ', k1.getNumber(), ' - ', k1.getFibonacci())
print('k2: ', k2.getNumber(), ' - ', k2.getFibonacci())
print('k1 + k2: ', (k1 + k2).getNumber(), ' - ', (k1 + k2).getFibonacci())
print('k1 * k2: ', (k1 * k2).getNumber(), ' - ', (k1 * k2).getFibonacci())
print('k1 - k2: ', (k1 - k2).getNumber(), ' - ', (k1 - k2).getFibonacci())
print('k1 / k2: ', (k1 / k2).getNumber(), ' - ', (k1 / k2).getFibonacci())
``` # Homework (basic): # Task 1. Create a class with two variables. Add a function that prints them and a function that changes them. Add a function that returns the sum of the two variables and a function that returns the larger of the two. ```
class Couple:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def setFirst(self, x):
        self.x = x

    def getFirst(self):
        return self.x

    def setSecond(self, y):
        self.y = y

    def getSecond(self):
        return self.y

    def out(self):
        print('First: ', self.x)
        print('Second: ', self.y)

    def getSum(self):
        return self.x + self.y

    def getMax(self):
        return max(self.x, self.y)

Beta = Couple(12, 8)
Beta.out()
print()
print('Sum: ', Beta.getSum())
print('Max: ', Beta.getMax())
``` # Task 2. Describe a class of polynomials in one variable, defined by the polynomial's degree and an array of coefficients. 
Provide methods to evaluate the polynomial for a given argument; addition, subtraction and multiplication of polynomials producing a new polynomial object; and printing a description of the polynomial to the screen. ```
class Polynom:
    '''
    A polynomial with non-negative integer powers only
    (this restriction keeps the interactive input simple).
    '''
    def __init__(self, polynom = None):
        if polynom is not None:
            self.__dict__.update(polynom)
            return
        power = int(input('Enter the degree of the polynomial: '))
        for each in range(power, -1, -1):
            try:
                self.__dict__.update({'power' + str(each): float(input(f'Enter the coefficient of the term of degree {each}: '))})
            except ValueError:
                self.__dict__.update({'power' + str(each): 0})

    def count(self, x):
        value = 0
        for each in self.__dict__.keys():
            value += self.__dict__[each] * (x ** int(each[5:]))
        return value

    def form(self):
        form = ''
        keys = list(self.__dict__.keys())
        # sort numerically by degree (a plain string sort would misplace degrees >= 10)
        keys.sort(key=lambda k: int(k[5:]), reverse=True)
        for each in keys:
            if self.__dict__[each] == 0:
                continue
            if form != '':
                form += ' + '
            form += '(' + str(self.__dict__[each]) + ')' + ('*x**(' + each[5:] + ')') * int(bool(int(each[5:])))
        return form

    def __add__(self1, self2):
        coefficients = {}
        for obj in [self1, self2]:
            for key in obj.__dict__.keys():
                if key not in coefficients.keys():
                    coefficients[key] = obj.__dict__[key]
                else:
                    coefficients[key] += obj.__dict__[key]
        return Polynom(coefficients)

    def __sub__(self1, self2):
        coefficients = self1.__dict__.copy()
        for key in self2.__dict__.keys():
            if key not in coefficients.keys():
                coefficients[key] = 0 - (self2.__dict__[key])
            else:
                coefficients[key] -= self2.__dict__[key]
        return Polynom(coefficients)

    def __mul__(self1, self2):
        coefficients = {}
        for key1 in self1.__dict__.keys():
            for key2 in self2.__dict__.keys():
                # degrees ADD when monomials are multiplied
                # (the original multiplied them and truncated key2)
                new_key = 'power' + str(int(key1[5:]) + int(key2[5:]))
                if new_key not in coefficients.keys():
                    coefficients[new_key] = self1.__dict__[key1] * self2.__dict__[key2]
                else:
                    coefficients[new_key] += self1.__dict__[key1] * \
                        self2.__dict__[key2]
        return Polynom(coefficients)

parabole = Polynom()
print('Value of the function:', parabole.count(float(input('Enter the argument value: '))))
polynom1 = Polynom()
print(polynom1.form())
polynom2 = Polynom()
print(polynom2.form())
print((polynom1 + polynom2).form())
print('The value of the sum of the functions at the point is:', (polynom1 + polynom2).count(float(input('Enter the argument value: '))))
print('The value of the difference of the functions at the point is:', (polynom2 - polynom1).count(float(input('Enter the argument value: '))))
print('The product of the functions has the form y =', (polynom1 * polynom2).form())
print((polynom1 * polynom2).count(12))
``` # Task 3. Describe a class for a vector given by the coordinates of its endpoints in three-dimensional space. Provide operations of vector addition and subtraction yielding a new vector (the sum or difference), the dot product of two vectors, the vector's length, and the cosine of the angle between two vectors. ```
class Vector:
    def __init__(self, dot1, dot2):
        self.begin = dot1
        self.end = dot2
        self.entity = [
            self.end[0] - self.begin[0],
            self.end[1] - self.begin[1],
            self.end[2] - self.begin[2]
        ]
        self.length = (
            (self.entity[0]) ** 2 +
            (self.entity[1]) ** 2 +
            (self.entity[2]) ** 2
        ) ** 0.5

    def __add__(self1, self2):
        return Vector([self1.begin[0], self1.begin[1], self1.begin[2]],
                      [(self1.end[0] + self2.entity[0]),
                       (self1.end[1] + self2.entity[1]),
                       (self1.end[2] + self2.entity[2])])

    def __sub__(self1, self2):
        return Vector([self1.begin[0], self1.begin[1], self1.begin[2]],
                      [(self1.end[0] - self2.entity[0]),
                       (self1.end[1] - self2.entity[1]),
                       (self1.end[2] - self2.entity[2])])

    def __mul__(self1, self2):
        return self1.entity[0] * self2.entity[0] + self1.entity[1] * self2.entity[1] + self1.entity[2] * self2.entity[2]

    def getLength(self):
        return self.length

    def getCos(self1, self2):
        return self1 * self2 / (self1.getLength() * self2.getLength())

    def about(self):
        print('Vector #%i:' % id(self))
        print('\tVector coordinates:',
              self.entity)
        print('\tStart point coordinates:', self.begin)
        print('\tEnd point coordinates:', self.end)
        print('\tVector length:', self.length)

vectors = []
for i in range(2):
    print('Defining vector %i.' % i)
    x1, y1, z1 = map(float, input('Enter the coordinates of the first point, separated by spaces: ').split())
    x2, y2, z2 = map(float, input('Enter the coordinates of the second point, separated by spaces: ').split())
    vectors.append(Vector([x1, y1, z1], [x2, y2, z2]))

v1 = vectors[0]
v1.about()
v2 = vectors[1]
v2.about()
print('Adding the vectors.')
(v1 + v2).about()
print('Subtracting the vectors from each other.')
(v1 - v2).about()
(v2 - v1).about()
print('The vector lengths coincide.' * int((v1 - v2).getLength() == (v2 - v1).getLength()))
print('Computing the dot product of the vectors.')
print('v1 * v2 =', v1 * v2)
print('Computing the cosine of the angle between the vectors.')
print('cos(v1, v2) =', Vector.getCos(v1, v2))
``` # Task 4. Trains. Create a structure named `train` containing the fields: - names of the departure and destination stations; - departure and arrival times. Overload the addition operation: two trains can be added if the destination of the first coincides with the departure station of the second, and the first train arrives earlier than the second one departs. 
```
from time import mktime, gmtime, strptime, strftime

class Train:
    def __init__(self, times = None, stations = None, united = False):
        if times is None and stations is None:
            self.buyTicket()
            return
        self.departure_time = times[0]
        self.arrival_time = times[1]
        self.departure_station = stations[0]
        self.arrival_station = stations[1]
        self.road_time = self.arrival_time - self.departure_time
        self.united = united

    def buyTicket(self):
        self.departure_station = input('You are buying a train ticket.\n\tEnter the departure station: ')
        date = input('\tWhen does the train depart?\n\t\tEnter the date (DD.MM.YYYY): ')
        time = input('\t\tEnter the time (HH:MM): ')
        # parse the date and time together; summing two mktime() results,
        # as the original did, adds the epoch offset twice
        self.departure_time = mktime(strptime(date + ' ' + time, '%d.%m.%Y %H:%M'))
        self.arrival_station = input('\tEnter the arrival station: ')
        date = input('\tWhen does the train arrive?\n\t\tEnter the date (DD.MM.YYYY): ')
        time = input('\t\tEnter the time (HH:MM): ')
        self.arrival_time = mktime(strptime(date + ' ' + time, '%d.%m.%Y %H:%M'))
        self.united = False
        print('Thank you for your purchase! Your ticket number is %i.'
              % id(self))

    def about(self):
        print('Train %s - %s%s' % (self.departure_station, self.arrival_station,
                                   ' (UNITED)' * int(self.united)))
        print('\tDeparture time: %s' % strftime('%a, %d %b %Y %H:%M', gmtime(self.departure_time)))
        print('\tArrival time: %s' % strftime('%a, %d %b %Y %H:%M', gmtime(self.arrival_time)))
        print('\tTicket number: %i' % id(self))
        print('\tTime en route: %i hours %i minutes' % ((self.arrival_time - self.departure_time) // 3600,
                                                        (self.arrival_time - self.departure_time) % 3600 // 60))

    def __add__(self1, self2):
        if self1.arrival_station == self2.departure_station and self1.arrival_time < self2.departure_time:
            return Train(times = [self1.departure_time, self2.arrival_time],
                         stations = [self1.departure_station, self2.arrival_station],
                         united = True)
        # the original silently returned None here, which made .about() crash
        raise ValueError('These trains cannot be added.')

MSK_SPB = Train([mktime(strptime('26.12.2019 18:30', '%d.%m.%Y %H:%M')),
                 mktime(strptime('27.12.2019 5:39', '%d.%m.%Y %H:%M'))],
                ['Moscow', 'Saint Petersburg'], False)
SPB_HSK = Train([mktime(strptime('27.12.2019 12:00', '%d.%m.%Y %H:%M')),
                 mktime(strptime('01.01.2020 15:26', '%d.%m.%Y %H:%M'))],
                ['Saint Petersburg', 'Helsinki'], False)
# interactive purchase; these replace the two predefined trains above
MSK_SPB = Train()
SPB_HSK = Train()
MSK_SPB.about()
SPB_HSK.about()
(MSK_SPB + SPB_HSK).about()
``` # Homework (extra): # Library. Describe a "library" class. Provide the ability to work with an arbitrary number of books, to search for a book by some attribute (for example, by author or by year of publication), to add books to the library, to remove books from it, and to sort the books by different fields. 
```
class Book:
    def __init__(self, title = None, authors = None, link = None, description = None,
                 language = None, yearOfPublishing = None, publishingHouse = None,
                 ISBN = None, volume = None, cost = None, ageLimit = None):
        self.title = title
        self.authors = authors
        self.link = link  # a link to the book on the Internet
        try:
            self.mainAuthor = self.authors.pop(0)
        except (AttributeError, IndexError):
            self.mainAuthor = None
        self.description = description
        self.language = language
        self.yearOfPublishing = yearOfPublishing
        self.publishingHouse = publishingHouse
        self.ISBN = ISBN
        self.volume = volume
        self.cost = cost
        self.ageLimit = ageLimit

    @staticmethod
    def split_str(string, length):
        for i in range(0, len(string), length):
            yield string[i : i + length].strip()

    @staticmethod
    def new():
        print('Have you written a book? Congratulations! Let us fill in its details and publish it!')
        self = Book()
        try:
            self.title = input('\tEnter the title of the book: ')
            self.mainAuthor = input('\tEnter your full name or initials: ')
            self.authors = list(map(str, input('\tDid your book have co-authors? ' +
                'List them separated by commas, or leave the field empty: ').split(', ')))
            self.description = input('\tEnter a description of your book: ')
            self.language = input('\tEnter the language the book is written in: ')
            self.ageLimit = int(input('\tEnter the minimum reader age: '))
            self.volume = int(input('\tEnter the length of the printed text in A5 pages: '))
            self.link = input('\tFinally, if your book is published, enter a link to it, or leave the field empty: ')
            if self.link == '':
                self.link = None
            if input('\tBy the way, would you like to publish it? Enter "Yes" to proceed to publication: ') == 'Yes':
                self.publish()
            print('\tThe book information has been filled in successfully.')
        except Exception:
            print('Oh... Unfortunately, the book information was entered incorrectly, and the creation of the ' +
                  'electronic version cannot continue.')
            self = None
        return self

    def publish(self):
        print()
        print('### Article "Publishing books", author - Klimentiy Titov.')
        print('"""')
        print('The Litres self-publishing platform lets you publish your own book: https://selfpub.ru/. ' +
              'Follow these steps:')
        print('1. Register on the site')
        print('2. Save the text of your work as a DOCX document or an FB2 book')
        print('3. Fill in all the required information about the book')
        print('4. Choose a distribution plan. For example, to be able to distribute a printed version, ' +
              'choose the Basic or Unlimited plan')
        print('5. Create an attractive cover')
        print('6. Finally, submit the book for moderation.')
        print('After successful moderation your book will be published automatically. Do not forget to fill in the book data ' +
              'here: you will need to set the required values with the methods setISBN(ISBN), setYearOfPublishing' +
              '(yearOfPublishing), setPublishingHouse(publishingHouse), setCost(cost) and setLink(link). ' +
              'And, of course, enjoy the result!')
        print('"""')
        print()

    def setISBN(self, ISBN):
        self.ISBN = ISBN

    def setYearOfPublishing(self, yearOfPublishing):
        self.yearOfPublishing = yearOfPublishing

    def setPublishingHouse(self, publishingHouse):
        self.publishingHouse = publishingHouse

    def setCost(self, cost):
        self.cost = cost

    def setLink(self, link):
        self.link = link

    def about(self):
        print(f'Book "{self.title}"')
        print(f'\tAuthor - {self.mainAuthor}')
        if self.authors != []:
            print('\tCo-authors:')
            for author in self.authors:
                print(f'\t\t{author}')
        if self.description:
            print('\tDescription:')
            print('\t\t"""\n\t\t' + '\n\t\t'.join(Book.split_str(self.description, 80)) + '\n\t\t"""')
        if self.language:
            print(f'\tLanguage: {self.language}')
        if self.yearOfPublishing:
            print(f'\tYear of publication - {self.yearOfPublishing}')
        if self.publishingHouse:
            print(f'\tPublisher: {self.publishingHouse}')
        if self.ISBN:
            print(f'\tISBN: {self.ISBN}')
        if self.volume:
            print(f'\tBook length: {self.volume} pages')
        if self.cost:
            print(f'\tBook price: {self.cost} RUB')
        if self.ageLimit:
            print(f'\tAge limit: {self.ageLimit}+')
        if self.link:
            print(f'\tLink to the book: {self.link}')

    @staticmethod
    def properties():
        # 'yearOfPublushing' in the original list was a typo
        return ['mainAuthor', 'authors', 'description', 'language', 'yearOfPublishing',
                'publishingHouse', 'ISBN', 'volume', 'cost', 'ageLimit', 'link']

_1984 = Book(title = '1984',
             authors = ['George Orwell'],
             link = 'https://www.litres.ru/dzhordzh-oruell/1984/',
             description = 'A peculiar antipode of the other great dystopia of the 20th century, ' +
                           'Aldous Huxley\'s "Brave New World". What, in essence, is more frightening: a "consumer society" ' +
                           'taken to absurdity, or a "society of the idea" taken to the absolute? ' +
                           'According to Orwell, there is and can be nothing more terrible than total unfreedom...',
             language = 'Russian',
             yearOfPublishing = 2014,
             publishingHouse = 'AST Publishing',
             ISBN = '978-5-17-080115-2',
             volume = 320,
             cost = 119,
             ageLimit = 16)
_1984.about()
Satan = Book.new()
Property = Book.new()
Seveina = Book(title = 'Seveina', authors = ['Klimentiy Titov', 'Viktoria Snezhskaya'], yearOfPublishing = 2019)
TheOldManandtheSea = Book(title = 'The Old Man and the Sea', authors = ['Ernest Hemingway'])
TheGreatGatsby = Book(title = 'The Great Gatsby', authors = ['Francis Scott Fitzgerald'])

class Library:
    def __init__(self, name = None, address = None, owner = None, workers = None, contacts = None):
        # per-instance state (the original used shared class-level dicts)
        self.storage = {}   # data format {ID: Book}
        self.readers = {}   # data format {full name: borrowed books [ID1, ID2, ...]}
        self.name = name
        self.address = address
        self.owner = owner
        self.workers = workers
        self.contacts = contacts

    def printWorkers(self):
        print('Employees:')
        for workerIndex in range(len(self.workers)):
            name = self.workers[workerIndex]
            print(f'\tID {workerIndex}\tName {name}')

    def printBooks(self, sortingKey = 'order'):
        print('Books:')
        if sortingKey == 'order':
            for bookIndex in self.storage.keys():
                title = self.storage[bookIndex].title
                print(f'\tID {bookIndex}\tTitle "{title}"')
        else:
            try:
                books = list(self.storage.items())
                # sort by the named Book attribute
                books.sort(key=lambda i: getattr(i[1], sortingKey))
                for book in books:
                    prop = getattr(book[1], sortingKey)
                    ID = book[0]
                    print(f'\tID {ID}\tProperty "{sortingKey}": {prop}')
            except Exception:
                print('Could not print the sorted list of books.')

    def printReaders(self):
        print('Readers:')
        for reader in self.readers.keys():
            books = self.readers[reader]
            print(f'\tName {reader}\tBooks: {books}')

    def isInProcess(self, ID):
        for reader in self.readers.keys():
            if ID in self.readers[reader]:
                return True
        return False

    def shell(self):
        print(f'Shell of the library "{self.name}":')
        print('\tOrganisation management')
        print('\t000. Add an employee')
        print('\t001. Remove an employee')
        print('\tBook management')
        print('\t100. Add a book')
        print('\t101. Remove a book')
        print('\t102. Print the list of books')
        print('\t103. Force the return of a book')
        print('\t104. Edit the properties of a book')
        print('\t105. Search the library')
        print('\t106. View the properties of a book')
        print('\tReader management')
        print('\t200. Add a new reader')
        print('\t201. Remove a reader (if the list of debts is empty)')
        print('\t202. Borrow a book')
        print('\t203. Return a book')
        print('\tEmergencies')
        print('\t300. A book is lost')
        print('\t301. A new book has been written')
        print('\t302. Liquidate the enterprise')
        print('\t-1. Exit the shell')
        while True:
            action = input('Enter the action number: ')
            if action == '000':    # add an employee
                self.workers.append(input('Enter the name of the new employee: '))
                print('The employee has been added successfully.')
            elif action == '001':  # remove an employee
                self.printWorkers()
                ID = input('Enter the ID of the employee you want to dismiss, or leave the field empty: ')
                if ID == '':
                    continue
                try:
                    ID = int(ID)
                    del self.workers[ID]
                except Exception:
                    print('The dismissal attempt failed. Perhaps your employee has risen against you?..')
            elif action == '100':  # add a book
                corners = [name for name in input('List the names of Book objects separated by "; " if they are defined, ' +
                                                  'or leave the field empty: ').split('; ') if name]
                self.append(corners)
            elif action == '101':  # remove a book
                self.printBooks()
                try:
                    ID = int(input('Enter the book id: '))
                    self.remove(ID)
                except Exception:
                    print('Removing the book failed.')
            elif action == '102':  # print the list of books
                self.printBooks()
            elif action == '103':  # force the return of a book
                self.printBooks()
                try:
                    ID = int(input('Enter the book id: '))
                    self.back(ID)
                except Exception:
                    print('Returning the book failed.')
            elif action == '104':  # edit the properties of a book
                self.printBooks()
                try:
                    ID = int(input('Enter the book id: '))
                    print('Editable properties:', Book.properties())
                    key = input('Enter the book property you want to edit (mind the spelling): ')
                    value = input('Enter the value (strings in quotes, numbers without, lists are supported): ')
                    book = self.storage[ID]
                    setattr(book, key, eval(value))
                except Exception:
                    print('Editing the book failed.')
            elif action == '105':  # search the library
                print('Searchable properties:', Book.properties())
                key = input('Enter the book property to search by (mind the spelling): ')
                value = input('Enter the value (strings in quotes, numbers without, lists are supported): ')
                try:
                    for bookIndex in self.storage.keys():
                        if getattr(self.storage[bookIndex], key) == eval(value):
                            title = self.storage[bookIndex].title
                            print(f'\tID {bookIndex}\tTitle {title}')
                except Exception:
                    print('The search failed.')
            elif action == '106':  # view the properties of a book
                self.printBooks()
                try:
                    ID = int(input('Enter the book id: '))
                    self.storage[ID].about()
                except Exception:
                    print('Viewing the properties failed.')
            elif action == '200':  # add a reader
                name = input('Enter the full name: ')
                if name not in self.readers.keys():
                    self.readers[name] = []
                    print('The reader has been added successfully.')
                else:
                    print('Such a reader already exists.')
            elif action == '201':  # remove a reader
                self.printReaders()
                name = input('Enter the full name: ')
                if name not in self.readers.keys():
                    print('No such reader exists.')
                    continue
                elif self.readers[name] != []:
                    print('The reader has not returned all the books.')
                    continue
                else:
                    del self.readers[name]
                    print('The removal was successful.')
            elif action == '202':  # borrow a book
                self.printBooks()
                try:
                    ID = int(input('Enter the book id: '))
                    if not self.isInProcess(ID):
                        self.printReaders()
                        name = input('Enter the full name: ')
                        self.readers[name].append(ID)
                        print('The book has been borrowed.')
                    else:
                        print('The book is currently with a reader and cannot be borrowed.')
                except Exception:
                    print('This book cannot be borrowed.')
            elif action == '203':  # return a book
                self.printReaders()
                try:
                    name = input('Enter the full name: ')
                    books = self.readers[name]
                    for book in books:
                        title = self.storage[book].title
                        print(f'\tID {book}\tTitle "{title}"')
                    ID = int(input('Enter the book id: '))
                    self.readers[name].remove(ID)
                except Exception:
                    print('The book could not be returned.')
            elif action == '300':  # a book is lost
                self.printBooks()
                try:
                    ID = int(input('Enter the book id: '))
                    self.bookWasLost(ID)
                except Exception:
                    print('The loss was not registered.')
            elif action == '301':  # a new book has been written
                book = Book.new()
                if book is not None:
                    self.append([book])
                    print('The book has been added to the library.')
            elif action == '302':  # liquidate the enterprise
                really = (input('Are you sure? Enter "Yes", or leave the field empty: ') == 'Yes')
                if really:
                    # the original called self.__del__(), which Library never defines
                    print('The enterprise has been liquidated.')
                    return
            elif action == '-1':
                return

    @staticmethod
    def new():
        print('You have decided to create your own library?! Long live the free market!')
        self = Library()
        self.owner = input('\tFirst of all, enter the full name of the person who will own the library: ')
        self.workers = [name for name in input('\tHave you already hired employees? If so, list them separated by commas, ' +
                                               'or leave the field empty: ').split(', ') if name]
        self.name = input('\tWhat will you call your enterprise? ')
        self.address = input('\tEnter the legal address: ')
        self.contacts = list(map(str, input('\tEnter the organisation\'s contacts (phone number, e-mail, links) separated by spaces: ').split()))
        print('Congratulations, you have just created your own library! You can now file the registration papers!')
        return self

    def append(self, books):
        if books == []:
            self.setupCard()
        for book in books:
            try:
                ID = self.getNewID()
                if isinstance(book, Book):
                    self.storage[ID] = book
                else:
                    # 'book' is the textual name of a Book variable
                    self.storage[ID] = eval(book)
            except Exception:
                print(f'The object {book} does not exist.')

    def setupCard(self):
        print('Filling in the information about a new library book.')
        try:
            book = Book()
            book.title = input('\tBook title: ')
            book.mainAuthor = input('\tAuthor: ')
            book.authors = list(map(str, input('\tCo-authors (comma-separated): ').split(', ')))
            book.description = input('\tAnnotation: ')
            book.language = input('\tLanguage of the text: ')
            book.ageLimit = int(input('\tMinimum reader age: '))
            book.volume = int(input('\tLength of the printed text in A5 pages: '))
            book.ISBN = input('\tISBN: ')
            book.yearOfPublishing = int(input('\tYear of publication: '))
            book.publishingHouse = input('\tPublisher: ')
            book.cost = int(input('\tPrice: '))
            book.link = input('\tLink to the book on the Internet: ')
            ID = self.getNewID()
            self.storage[ID] = book
            print('The book has been added to the library successfully.')
        except Exception:
            print('Filling in was interrupted because of invalid data. Be careful and try again.')

    def getNewID(self):
        if len(self.storage) != 0:
            # the original referred to a bare 'storage' here, a NameError
            return max(self.storage.keys()) + 1
        else:
            return 1

    def bookWasLost(self, ID):
        print('The price of the book must be reimbursed.')

    def remove(self, ID):
        if input('Enter "Yes" to confirm the removal of the book: ') == 'Yes':
            if ID in self.storage.keys():
                del self.storage[ID]
                for readerIndex in self.readers.keys():
                    if ID in self.readers[readerIndex]:
                        self.readers[readerIndex].remove(ID)
                print('The removal was completed successfully.')

    def back(self, ID):
        for readerIndex in self.readers.keys():
            if ID in self.readers[readerIndex]:
                self.readers[readerIndex].remove(ID)
                print('The return was completed successfully.')
                break

AGATA = Library.new()
Beta = Library(name = 'Beta', address = '', owner = 'Mark CDA', workers = [], contacts = [])
AGATA.shell()
Seveina = Book('Seveina')
``` # Generalized number. Create a class generalizing complex, split-complex (double) and dual numbers. These numbers share one notation: $$ c = a + ib, $$ where `c` is the generalized number (complex, split-complex or dual), `a` and `b` are real numbers, and `i` is a non-commuting symbol. It is exactly because of the symbol `i` that the number `c` is not simply the sum of `a` and `b`. Such numbers can be pictured as a vector in the plane `(a, b)`. The symbol `i` has the following property: - for complex numbers $$ i^2 = -1 $$ - for split-complex (double) numbers $$ i^2 = 1 $$ - for dual numbers $$ i^2 = 0 $$ Overload the basic operations for them: addition, subtraction, multiplication and division. For example, multiplication of such numbers has the form $$ (a_1+b_1i)\cdot (a_2+b_2i)=a_1a_2+b_1a_2i+a_1b_2i+b_1b_2i^{2}=(a_1a_2+b_1b_2i^{2})+(b_1a_2+a_1b_2)i. $$ Status: `the task is not solved`.
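The task is listed as unsolved; below is one possible hedged sketch, not an official solution. The class name `GeneralNumber` and the `sq` parameter (holding the value of $i^2$) are our own choices. Division multiplies by the conjugate $a_2 - b_2 i$, so the denominator becomes $a_2^2 - b_2^2\,i^2$.

```python
class GeneralNumber:
    """c = a + i*b, where i*i == sq: -1 (complex), 1 (split-complex), 0 (dual)."""
    def __init__(self, a, b, sq=-1):
        self.a, self.b, self.sq = a, b, sq

    def __add__(self, other):
        return GeneralNumber(self.a + other.a, self.b + other.b, self.sq)

    def __sub__(self, other):
        return GeneralNumber(self.a - other.a, self.b - other.b, self.sq)

    def __mul__(self, other):
        # (a1 + b1 i)(a2 + b2 i) = (a1 a2 + b1 b2 i^2) + (b1 a2 + a1 b2) i
        return GeneralNumber(self.a * other.a + self.sq * self.b * other.b,
                             self.b * other.a + self.a * other.b, self.sq)

    def __truediv__(self, other):
        # multiply by the conjugate a2 - b2 i; the denominator is a2^2 - b2^2 i^2
        d = other.a ** 2 - self.sq * other.b ** 2
        return GeneralNumber((self.a * other.a - self.sq * self.b * other.b) / d,
                             (self.b * other.a - self.a * other.b) / d, self.sq)

# complex case: (1 + 2i)(3 + 4i) = -5 + 10i
p = GeneralNumber(1, 2, sq=-1) * GeneralNumber(3, 4, sq=-1)
print(p.a, p.b)  # -5 10
```

With `sq=0` the same class gives dual-number arithmetic, e.g. `(3 + 10i)(2 + 5i) = 6 + 35i`, because every `i**2` term vanishes.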
# Qcodes example with InstrumentGroup driver This notebook explains how to use the `InstrumentGroup` driver. ## About The goal of the `InstrumentGroup` driver is to combine several instruments as submodules into one instrument. Typically, it is meant to be used together with the `DelegateInstrument` driver. An example use is to create an abstraction for the devices on a chip. ## Usage It is used mainly by specifying an entry in the station YAML, for instance to create a Chip with one or more Devices on it that point to different source parameters. The example below shows three devices, each of which is initialised in one of the supported ways: device1 has only DelegateParameters, while device2 and device3 have both DelegateParameters and channels added. device3 adds its channels using a custom channel wrapper class. ```
%%writefile example.yaml
instruments:
  dac:
    type: qcodes.tests.instrument_mocks.MockDAC
    init:
      num_channels: 3
  lockin1:
    type: qcodes.tests.instrument_mocks.MockLockin
  lockin2:
    type: qcodes.tests.instrument_mocks.MockLockin
  MockChip_123:
    type: qcodes.instrument.delegate.InstrumentGroup
    init:
      submodules_type: qcodes.instrument.delegate.DelegateInstrument
      submodules:
        device1:
          parameters:
            gate:
              - dac.ch01.voltage
            source:
              - lockin1.frequency
              - lockin1.amplitude
              - lockin1.phase
              - lockin1.time_constant
            drain:
              - lockin1.X
              - lockin1.Y
        device2:
          parameters:
            readout:
              - lockin1.phase
          channels:
            gate_1: dac.ch01
        device3:
          parameters:
            readout:
              - lockin1.phase
          channels:
            type: qcodes.tests.instrument_mocks.MockCustomChannel
            gate_1:
              channel: dac.ch02
              current_valid_range: [-0.5, 0]
            gate_2:
              channel: dac.ch03
              current_valid_range: [-1, 0]
      set_initial_values_on_load: true
      initial_values:
        device1:
          gate.step: 5e-4
          gate.inter_delay: 12.5e-4
        device2:
          gate_1.voltage.post_delay: 0.01
        device3:
          gate_2.voltage.post_delay: 0.03

import qcodes as qc

station = qc.Station(config_file="example.yaml")
lockin1 = station.load_lockin1()
lockin2 = station.load_lockin2()
dac = \
station.load_dac() chip = station.load_MockChip_123(station=station) chip.device1.gate() dac.ch01.voltage() chip.device1.gate(1.0) chip.device1.gate() dac.ch01.voltage() chip.device1.source() chip.device1.drain() ``` Device with channels/gates: ``` chip.device2.gate_1 ``` Setting voltages to a channel/gate of device2: ``` print(chip.device2.gate_1.voltage()) chip.device2.gate_1.voltage(-0.74) print(chip.device2.gate_1.voltage()) ``` Check initial values of device3, from which only gate_2.voltage.post_delay was set. ``` chip.device3.gate_1.voltage.post_delay chip.device3.gate_2.voltage.post_delay ```
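The delegation idea behind `DelegateInstrument` can be sketched in plain Python. This is an illustrative toy, not the actual qcodes implementation; the `Source` and `Delegate` classes and all names below are made up for the sketch:

```python
class Source:
    """Stands in for a real instrument parameter holding a value."""
    def __init__(self, value=0.0):
        self._value = value

    def __call__(self, value=None):
        # qcodes-style getter/setter: call with no args to get, one arg to set
        if value is None:
            return self._value
        self._value = value


class Delegate:
    """Forwards named attributes to parameters owned by other objects."""
    def __init__(self, **targets):
        self._targets = targets

    def __getattr__(self, name):
        # only called when normal attribute lookup fails
        try:
            return self._targets[name]
        except KeyError:
            raise AttributeError(name)


dac_ch01_voltage = Source(0.0)
device1 = Delegate(gate=dac_ch01_voltage)

device1.gate(1.0)          # setting through the delegate...
print(dac_ch01_voltage())  # ...changes the underlying source parameter
```

The point is that `device1.gate` and `dac_ch01_voltage` are the same object seen under two names, which is why setting `chip.device1.gate(1.0)` above also changes `dac.ch01.voltage()`.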
```
from __future__ import print_function
import sisl
import numpy as np
import matplotlib.pyplot as plt
from functools import partial
%matplotlib inline
```

TBtrans is capable of calculating transport in $N\ge 1$ electrode systems. In this example we will explore a 4-terminal graphene GNR cross-bar (one zGNR, the other aGNR) system.

```
graphene = sisl.geom.graphene(orthogonal=True)
R = [0.1, 1.43]
hop = [0., -2.7]
```

Create the two electrodes in $x$ and $y$ directions. We will force the systems to be nano-ribbons, i.e. only periodic along the ribbon. In `sisl` there are two ways of accomplishing this:

1. Explicitly set the number of auxiliary supercells
2. Add vacuum beyond the orbital interaction ranges

The below code uses the first method. Please see if you can change the creation of `elec_x` by adding vacuum.

**HINT**: Look at the documentation for the `sisl.Geometry` class and search for vacuum. To know the orbital distance, look up `maxR` in the geometry class as well.

```
elec_y = graphene.tile(3, axis=0)
elec_y.set_nsc([1, 3, 1])
elec_y.write('elec_y.xyz')
elec_x = graphene.tile(5, axis=1)
elec_x.set_nsc([3, 1, 1])
elec_x.write('elec_x.xyz')
```

Subsequently we create the electronic structure.

```
H_y = sisl.Hamiltonian(elec_y)
H_y.construct((R, hop))
H_y.write('ELEC_Y.nc')
H_x = sisl.Hamiltonian(elec_x)
H_x.construct((R, hop))
H_x.write('ELEC_X.nc')
```

Now we have created the electronic structure for the electrodes. All that is needed is the electronic structure of the device region, i.e. the crossing nano-ribbons.

```
dev_y = elec_y.tile(30, axis=1)
dev_y = dev_y.translate( -dev_y.center(what='xyz') )
dev_x = elec_x.tile(18, axis=0)
dev_x = dev_x.translate( -dev_x.center(what='xyz') )
```

Remove any atoms that are *duplicated*, i.e. when we overlay these two geometries some atoms are the same.
```
device = dev_y.add(dev_x)
device.set_nsc([1, 1, 1])
duplicates = []
for ia in dev_y:
    idx = device.close(ia, 0.1)
    if len(idx) > 1:
        duplicates.append(idx[1])
device = device.remove(duplicates)
```

Can you explain why `set_nsc([1, 1, 1])` is called? Is it necessary to do this step?

---

Ensure the lattice vectors are big enough for plotting. Try and convince yourself that the lattice vectors are unimportant for tbtrans in this example.

*HINT*: what is the periodicity?

```
device = device.add_vacuum(70, 0).add_vacuum(20, 1)
device = device.translate( device.center(what='cell') - device.center(what='xyz') )
device.write('device.xyz')
```

Since this system has 4 electrodes we need to tell tbtrans where the 4 electrodes are in the device. The following lines print out the fdf-lines that are appropriate for each of the electrodes (`RUN.fdf` is already filled correctly):

```
print('elec-Y-1: semi-inf -A2: {}'.format(1))
print('elec-Y-2: semi-inf +A2: end {}'.format(len(dev_y)))
print('elec-X-1: semi-inf -A1: {}'.format(len(dev_y) + 1))
print('elec-X-2: semi-inf +A1: end {}'.format(-1))

H = sisl.Hamiltonian(device)
H.construct([R, hop])
H.write('DEVICE.nc')
```

# Exercises

In this example we have more than one transmission path. Before you run the code below, which plots all relevant transmissions ($T_{ij}$ for $j>i$), consider whether there are any symmetries and, if so, how many distinct transmission spectra you should expect. Please plot the geometry using your favourite geometry viewer (`molden`, `Jmol`, ...). The answer is not so trivial.
```
tbt = sisl.get_sile('siesta.TBT.nc')
```

Define shorthand functions for plotting energy-resolved quantities:

```
E = tbt.E
Eplot = partial(plt.plot, E)  # make a shorthand version of the function (simplifies the lines below)
T = tbt.transmission
t12, t13, t14, t23, t24, t34 = T(0, 1), T(0, 2), T(0, 3), T(1, 2), T(1, 3), T(2, 3)

Eplot(t12, label=r'$T_{12}$');
Eplot(t13, label=r'$T_{13}$');
Eplot(t14, label=r'$T_{14}$');
Eplot(t23, label=r'$T_{23}$');
Eplot(t24, label=r'$T_{24}$');
Eplot(t34, label=r'$T_{34}$');
plt.ylabel('Transmission');
plt.xlabel('Energy [eV]');
plt.ylim([0, None]);
plt.legend();
```

- In `RUN.fdf` we have added the flag `TBT.T.All` which tells tbtrans to calculate *all* transmissions, i.e. $i\to j$ for all $i,j \in \{1,2,3,4\}$. This flag is `False` by default; why?
- Create 3 plots, each with $T_{1j}$ and $T_{j1}$ for all $j\neq 1$.

```
# Insert plot of T12 and T21

# Insert plot of T13 and T31

# Insert plot of T14 and T41
```

- Considering symmetries, try to figure out which transmissions ($T_{ij}$) are unique.
- Plot the bulk DOS for the 2 differing electrodes.
- Plot the spectral DOS injected by all 4 electrodes.

```
# Helper routines, this makes BDOS(...) == tbt.BDOS(..., norm='atom')
BDOS = partial(tbt.BDOS, norm='atom')
ADOS = partial(tbt.ADOS, norm='atom')
```

Bulk density of states:

```
Eplot(..., label=r'$BDOS_1$');
Eplot(..., label=r'$BDOS_2$');
plt.ylabel('DOS [1/eV/N]');
plt.xlabel('Energy [eV]');
plt.ylim([0, None]);
plt.legend();
```

Spectral density of states for all electrodes:

- As a final exercise you can explore the details of the density of states for single atoms. Take for instance atom 205 (204 in Python indexing) which is in *both* GNRs at the crossing. Feel free to play around with different atoms, subsets of atoms (pass a `list`), etc.

```
Eplot(..., label=r'$ADOS_1$');
...
plt.ylabel('DOS [1/eV/N]');
plt.xlabel('Energy [eV]');
plt.ylim([0, None]);
plt.legend();
```

- For 2D structures one can easily plot the DOS per atom via a scatter plot in `matplotlib`. Here is the skeleton code for that; you should select an energy point and figure out how to extract the atom-resolved DOS (you will need to look up the documentation for the `ADOS` method to figure out which flag to use).

```
Eidx = tbt.Eindex(...)
ADOS = [tbt.ADOS(i, ....) for i in range(4)]
f, axs = plt.subplots(2, 2, figsize=(10, 10))
a_xy = tbt.geometry.xyz[tbt.a_dev, :2]
for i in range(4):
    A = ADOS[i]
    A *= 100 / A.max()  # normalize to maximum 100 (simply for plotting)
    axs[i // 2][i % 2].scatter(a_xy[:, 0], a_xy[:, 1], A, c="bgrk"[i], alpha=.5);
plt.xlabel('x [Ang]');
plt.ylabel('y [Ang]');
plt.axis('equal');
```
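The duplicate-atom removal performed earlier in this notebook with `Geometry.close` can be illustrated without sisl. Here is a plain NumPy sketch of the same idea; the coordinates and the 0.1 Å tolerance below are made up for illustration:

```python
import numpy as np

def remove_duplicates(xyz, tol=0.1):
    """Keep the first of any group of points closer than `tol` to each other."""
    keep = []
    for i, p in enumerate(xyz):
        # compare against already-kept points only
        if all(np.linalg.norm(p - xyz[j]) > tol for j in keep):
            keep.append(i)
    return xyz[keep]

pts = np.array([[0.0,  0.0, 0.0],
                [1.42, 0.0, 0.0],
                [0.0,  0.0, 0.05],   # within 0.1 of the first point
                [2.84, 0.0, 0.0]])
print(len(remove_duplicates(pts)))
```

The sisl version is preferable in practice because `Geometry.close` uses a spatial search rather than this O(N²) loop, but the logic is the same.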
```
#pip install xlwt openpyxl xlsxwriter xlrd
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
```

# Loading in Calibration datasets

```
#CO2 only
df_Eguchi_CO2= pd.read_excel('Solubility_Datasets_V1.xlsx', sheet_name='Eguchi_CO2', index_col=0)
df_Allison_CO2= pd.read_excel('Solubility_Datasets_V1.xlsx', sheet_name='Allison_CO2', index_col=0)
df_Dixon_CO2= pd.read_excel('Solubility_Datasets_V1.xlsx', sheet_name='Dixon_CO2', index_col=0)
df_MagmaSat_CO2= pd.read_excel('Solubility_Datasets_V1.xlsx', sheet_name='MagmaSat_CO2', index_col=0)
df_Shishkina_CO2=pd.read_excel('Solubility_Datasets_V1.xlsx', sheet_name='Shishkina_CO2', index_col=0)

#H2O Only
df_Iacono_H2O= pd.read_excel('Solubility_Datasets_V1.xlsx', sheet_name='Iacono_H2O', index_col=0)
df_Shishkina_H2O=pd.read_excel('Solubility_Datasets_V1.xlsx', sheet_name='Shishkina_H2O', index_col=0)
df_MagmaSat_H2O= pd.read_excel('Solubility_Datasets_V1.xlsx', sheet_name='MagmSat_H2OExt', index_col=0)
df_Dixon_H2O=pd.read_excel('Solubility_Datasets_V1.xlsx', sheet_name='Dixon_H2O', index_col=0)
df_Moore_H2O=pd.read_excel('Solubility_Datasets_V1.xlsx', sheet_name='Moore_H2O', index_col=0)

#Mixed CO2-H2O
df_Iacono_CO2H2O= pd.read_excel('Solubility_Datasets_V1.xlsx', sheet_name='Iacono_H2O-CO2', index_col=0)
df_MagmaSat_CO2H2O= pd.read_excel('Solubility_Datasets_V1.xlsx', sheet_name='MagmaSat_CO2H2O', index_col=0)
```

# Subdividing up the Allison dataset by the different systems

```
#San Francisco Volcanic Field
df_Allison_CO2_SFVF=df_Allison_CO2.loc[df_Allison_CO2['Location']=='SFVF']
#Sunset Crater
df_Allison_CO2_SunsetCrater=df_Allison_CO2.loc[df_Allison_CO2['Location']=='SunsetCrater']
#Erebus
df_Allison_CO2_Erebus=df_Allison_CO2.loc[df_Allison_CO2['Location']=='Erebus']
#Vesuvius
df_Allison_CO2_Vesuvius=df_Allison_CO2.loc[df_Allison_CO2['Location']=='Vesuvius']
#Etna
df_Allison_CO2_Etna=df_Allison_CO2.loc[df_Allison_CO2['Location']=='Etna']
#Stromboli
df_Allison_CO2_Stromboli=df_Allison_CO2.loc[df_Allison_CO2['Location']=='Stromboli']
```

# Calculating min and max P and T for each model

```
# Calculating limits - Magmasat read off their graph
minDixonP_H2O=df_Dixon_H2O["P (bars)"].min()
maxDixonP_H2O=df_Dixon_H2O["P (bars)"].max()
minDixonT_H2O=1200
maxDixonT_H2O=1200

minDixonP_CO2=df_Dixon_CO2["P (bars)"].min()
maxDixonP_CO2=df_Dixon_CO2["P (bars)"].max()
minDixonT_CO2=1200
maxDixonT_CO2=1200

minDixonP_CO2H2O=df_Dixon_CO2["P (bars)"].min()
maxDixonP_CO2H2O=df_Dixon_CO2["P (bars)"].max()
minDixonT_CO2H2O=1200
maxDixonT_CO2H2O=1200

minMooreP_H2O=df_Moore_H2O["P (bars)"].min()
maxMooreP_H2O=df_Moore_H2O["P (bars)"].max()
maxMooreP_H2O_Pub=3000
minMooreT_H2O=df_Moore_H2O["T (C)"].min()
maxMooreT_H2O=df_Moore_H2O["T (C)"].max()

minIaconoP_H2O=df_Iacono_H2O["P (bar)"].min()
maxIaconoP_H2O=df_Iacono_H2O["P (bar)"].max()
minIaconoT_H2O=df_Iacono_H2O["T (K)"].min()-273.15
maxIaconoT_H2O=df_Iacono_H2O["T (K)"].max()-273.15

minIaconoP_CO2H2O=df_Iacono_CO2H2O["P (bar)"].min()
maxIaconoP_CO2H2O=df_Iacono_CO2H2O["P (bar)"].max()
minIaconoT_CO2H2O=df_Iacono_CO2H2O["T (K)"].min()-273.15
maxIaconoT_CO2H2O=df_Iacono_CO2H2O["T (K)"].max()-273.15

minEguchiP_CO2=10000*df_Eguchi_CO2["P(GPa)"].min()
maxEguchiP_CO2=10000*df_Eguchi_CO2["P(GPa)"].max()
minEguchiT_CO2=df_Eguchi_CO2["T(°C)"].min()
maxEguchiT_CO2=df_Eguchi_CO2["T(°C)"].max()

minAllisonP_CO2=df_Allison_CO2["Pressure (bars)"].min()
maxAllisonP_CO2=df_Allison_CO2["Pressure (bars)"].max()
minAllisonP_CO2_SFVF=df_Allison_CO2_SFVF["Pressure (bars)"].min()
maxAllisonP_CO2_SFVF=df_Allison_CO2_SFVF["Pressure (bars)"].max()
minAllisonP_CO2_SunsetCrater=df_Allison_CO2_SunsetCrater["Pressure (bars)"].min()
maxAllisonP_CO2_SunsetCrater=df_Allison_CO2_SunsetCrater["Pressure (bars)"].max()
minAllisonP_CO2_Erebus=df_Allison_CO2_Erebus["Pressure (bars)"].min()
maxAllisonP_CO2_Erebus=df_Allison_CO2_Erebus["Pressure (bars)"].max()
minAllisonP_CO2_Vesuvius=df_Allison_CO2_Vesuvius["Pressure (bars)"].min()
maxAllisonP_CO2_Vesuvius=df_Allison_CO2_Vesuvius["Pressure (bars)"].max()
minAllisonP_CO2_Etna=df_Allison_CO2_Etna["Pressure (bars)"].min()
maxAllisonP_CO2_Etna=df_Allison_CO2_Etna["Pressure (bars)"].max()
minAllisonP_CO2_Stromboli=df_Allison_CO2_Stromboli["Pressure (bars)"].min()
maxAllisonP_CO2_Stromboli=df_Allison_CO2_Stromboli["Pressure (bars)"].max()
minAllisonT_CO2=1200
maxAllisonT_CO2=1200

minShishkinaP_H2O=10*df_Shishkina_H2O["P (MPa)"].min()
maxShishkinaP_H2O=10*df_Shishkina_H2O["P (MPa)"].max()
minShishkinaT_H2O=df_Shishkina_H2O["T (°C)"].min()
maxShishkinaT_H2O=df_Shishkina_H2O["T (°C)"].max()

minShishkinaP_CO2=10*df_Shishkina_CO2["P (MPa)"].min()
maxShishkinaP_CO2=10*df_Shishkina_CO2["P (MPa)"].max()
minShishkinaT_CO2=df_Shishkina_CO2["T (°C)"].min()
maxShishkinaT_CO2=df_Shishkina_CO2["T (°C)"].max()

# Measured off Magmasat graph
minMagmasatP_CO2=10*0
maxMagmasatP_CO2=10*3000
minMagmasatT_CO2=1139
maxMagmasatT_CO2=1730

minMagmasatP_H2O=10*0
maxMagmasatP_H2O=10*2000
minMagmasatT_H2O=550
maxMagmasatT_H2O=1418
```

# Table of calibration limits

```
columns=['Publication', 'Species', 'Min P (bars)', 'Max P (bars)', 'Min T (C)', 'Max T (C)', 'notes']
df_PT2=pd.DataFrame([['Dixon 1997', 'H2O', minDixonP_H2O, maxDixonP_H2O, minDixonT_H2O, maxDixonT_H2O, '-'],
    ['Dixon 1997', 'CO2', minDixonP_CO2, maxDixonP_CO2, minDixonT_CO2, maxDixonT_CO2, '-'],
    ['Moore et al. 1998 (cal dataset)', 'H2O', minMooreP_H2O, maxMooreP_H2O, minMooreT_H2O, maxMooreT_H2O, '2 samples in dataset with >3kbar P'],
    ['Moore et al. 1998 (author range)', 'H2O', 0, 3000, 700, 1200, 'Paper says reliable up to 3kbar'],
    ['Iacono-Marziano et al., 2012', 'H2O', minIaconoP_H2O, maxIaconoP_H2O, minIaconoT_H2O, maxIaconoT_H2O, '-'],
    ['Iacono-Marziano et al., 2012', 'CO2-H2O', minIaconoP_CO2H2O, maxIaconoP_CO2H2O, minIaconoT_CO2H2O, maxIaconoT_CO2H2O, '-'],
    ['Shishkina et al., 2014', 'H2O', minShishkinaP_H2O, maxShishkinaP_H2O, minShishkinaT_H2O, maxShishkinaT_H2O, '-'],
    ['Shishkina et al., 2014', 'CO2', minShishkinaP_CO2, maxShishkinaP_CO2, minShishkinaT_CO2, maxShishkinaT_CO2, '-'],
    ['Ghiorso and Gualda, 2015 (MagmaSat)', 'H2O', minMagmasatP_H2O, maxMagmasatP_H2O, minMagmasatT_H2O, maxMagmasatT_H2O, '-'],
    ['Ghiorso and Gualda, 2015 (MagmaSat)', 'CO2', minMagmasatP_CO2, maxMagmasatP_CO2, minMagmasatT_CO2, maxMagmasatT_CO2, '-'],
    ['Eguchi and Dasgupta, 2018', 'CO2', minEguchiP_CO2, maxEguchiP_CO2, minEguchiT_CO2, maxEguchiT_CO2, '-'],
    ['Allison et al. 2019 (All Data)', 'CO2', minAllisonP_CO2, maxAllisonP_CO2, minAllisonT_CO2, maxAllisonT_CO2, '-'],
    ['Allison et al. 2019 (SFVF)', 'CO2', minAllisonP_CO2_SFVF, maxAllisonP_CO2_SFVF, minAllisonT_CO2, maxAllisonT_CO2, '-'],
    ['Allison et al. 2019 (Sunset Crater)', 'CO2', minAllisonP_CO2_SunsetCrater, maxAllisonP_CO2_SunsetCrater, minAllisonT_CO2, maxAllisonT_CO2, '-'],
    ['Allison et al. 2019 (Erebus)', 'CO2', minAllisonP_CO2_Erebus, maxAllisonP_CO2_Erebus, minAllisonT_CO2, maxAllisonT_CO2, '-'],
    ['Allison et al. 2019 (Vesuvius)', 'CO2', minAllisonP_CO2_Vesuvius, maxAllisonP_CO2_Vesuvius, minAllisonT_CO2, maxAllisonT_CO2, '-'],
    ['Allison et al. 2019 (Etna)', 'CO2', minAllisonP_CO2_Etna, maxAllisonP_CO2_Etna, minAllisonT_CO2, maxAllisonT_CO2, '-'],
    ['Allison et al. 2019 (Stromboli)', 'CO2', minAllisonP_CO2_Stromboli, maxAllisonP_CO2_Stromboli, minAllisonT_CO2, maxAllisonT_CO2, '-'],
    ], columns=columns).set_index('Publication')

#save to excel file for easy import into manuscript
with pd.ExcelWriter("Table_of_Calibration_Limits.xlsx") as writer:
    df_PT2.to_excel(writer, 'Table')

df_PT2
```

# Things to include for Dixon CO2 model

Caution:
1. This CO2 model is only valid where C dissolves as carbonate ions, and is not applicable for intermediate-silicic compositions where C is also present as molecular CO2.
2. T is assumed to be constant at 1200C in the equations of Dixon. There is some temperature dependence in the implementation of this model through the fugacity and <font color='red'>{insert other terms where this is true}. </font>
3. The compositional dependence of CO2 in the Dixon, 1997 model is incorporated empirically through the parameter Pi. Dixon (1997) provides an equation for Pi in terms of oxide fractions at 1200C, 1 kbar. However, they also show that in the North Arch Volcanic Field there is a strong correlation between Pi and SiO2, allowing a simplification of the compositional dependence in terms of SiO2. This was implemented in VolatileCalc, and is used in this model.

Note:

[Part A](#pA)<br>Equation 1 will only be valid if your samples have similar major element systematics to the calibration dataset of Dixon, 1997. We provide a plot in notebook X to assess this. Crucially, if the full Pi term in your sample suite does not follow the same trajectory with SiO2 as the North Arch dataset, this simplification will lead to inaccurate results.

[Part B](#pA)<br> Equation 1 is only valid for 40-49 wt% SiO2. In VolatileCalc, you cannot enter a SiO2 value >49 wt%. In the literature, for samples with >49 wt% SiO2, the SiO2 content has been set to 49 wt% to allow the Dixon model to be used (e.g., Tucker et al., 2019; Coombs et al. 2005).
Newman and Lowenstern suggest that calculating the result with SiO2=49 wt% would generally be valid for basalts with ~52 wt% SiO2. In our code, samples with SiO2>49 are calculated assuming SiO2=49. Here's how this is implemented in our code:

```
if sample['SiO2'] > 48.9:
    return 3.817e-7
else:
    return 8.7e-6 - 1.7e-7*sample['SiO2']
```

[Part C](#pA)<br> It is unclear whether the Pi dependence, and by extension equation 1, are valid at pressures >1000 bars. Lesne et al. (2011) suggest it may hold to 2000 bars; in VolatileCalc, the limit is placed at 5000 bars.

2) The correlation between Pi and SiO2 was only parameterized between 40-49 wt% SiO2. In our code, if the SiO2 of the sample is >49 wt%, it is capped at 49 wt% rather than extrapolated beyond this range.

Specific errors to spit out if you do exceed our parameters:
1. Your SiO2>49 wt%, please see caution statement
2. This pressure is above the limit of 2000 bars that Lesne et al. 2011 suggest the Dixon compositional dependence may be valid to.
3. This pressure is above the upper limit of 5,000 bars as suggested by VolatileCalc (Newman and Lowenstern, 2001)
4. This pressure is above the maximum experimentally calibrated pressure reported by Dixon (1997). Extrapolation should be stable to 20,000 bars.
5. This pressure is above the maximum extrapolated pressure reported by Dixon et al. (1995)
6. In the Dixon model, T is assumed to be constant at 1200C. <font color='red'>Simon, not sure the best way to explain that although it is out of range, Temp isn't super sensitive for basalts </font>

# Things to include for the Moore model

Caution:
1. This is an empirical model, so care should be taken extrapolating the results beyond the calibration range of the dataset.

Specific errors
1. PSat>3000: Your P is >3000 bars; the authors warn against extrapolating beyond this point due to limitations of the calibration dataset, as well as the fact that high P may be problematic due to critical behavior
2.
Temp>1200 or Temp<700: You are outside the calibration range for temperature defined by the authors; caution is needed interpreting these results.

# Things to include for Iacono-Marziano model

Caution
1. This semi-empirical model has a limited compositional range. In particular, the authors warn that the effects of MgO, FeO and Na2O on solubility are poorly constrained due to limited variation in their dataset. In particular, they emphasize that they only have a single pressure for Na-rich melts, so high-Na2O melts are not well calibrated at various pressures
2. The temperature range is limited to 1200-1300 C, with combined H2O-CO2 between 1100-1400.
3. This model ignores the effect of ferric/ferrous iron, although Papale has shown this has a big effect.

Specific errors -
1. Your temperature is out of range for X
2. Your pressure is out of range of X.
3. Your MgO is out of range of their dataset - the authors specifically warn the effect of MgO on solubility is poorly constrained due to limited variability in the calibration dataset. <font color='red'>Simon, not sure if we want errors like only N samples in database have MgO contents equal to yours, as there are a few at higher ones </font>
4. Your FeO is out of range - the authors specifically warn the effect of FeO on solubility is poorly constrained due to limited variability in the calibration dataset. <font color='red'>Simon, not sure if we want errors like only N samples in database have FeO contents equal to yours, as there are a few at higher ones </font>
5. Your Na2O is out of range - the authors specifically warn that high-Na2O melts are not handled well, as the database only contains these at 1 pressure.
<font color='red'>Simon, not sure if we want errors like only N samples in database have Na2O contents equal to yours, as there are a few at higher ones </font>

```
# As few outliers, might be better to say only 1 composition in database has high enough FeO or something
plt.hist(df_Iacono_CO2H2O["MgO"], bins = [0,1,2,3,4,5,6,7,8,9,10,11])
plt.xlabel("MgO, wt%")
plt.ylabel("Number of samples")
plt.title("histogram MgO Iacono")
plt.show()

plt.hist(df_Iacono_CO2H2O["FeOT"], bins = [0,1,2,3,4,5,6,7,8,9,10,11])
plt.xlabel("FeO*, wt%")
plt.ylabel("Number of samples")
plt.title("histogram FeOT Iacono")
plt.show()

plt.hist(df_Iacono_CO2H2O["Na2O"], bins = [0,1,2,3,4,5,6,7,8,9,10,11])
plt.xlabel("Na$_2$O, wt%")
plt.ylabel("Number of samples")
plt.title("histogram Na2O Iacono")
plt.show()
```

# Things to include for Shishkina model

Caution:
1. This is an empirical model, so care should be taken extrapolating the results beyond the calibration range of the dataset.
2. This CO2 model is only valid where C dissolves as carbonate ions, and is not applicable for intermediate-silicic compositions where C is also present as molecular CO2.
3. This model only provides H2O and CO2 models separately. They are combined in this study in <font color='red'>Simon what is best way to explain how its done here </font>
4. Note that the H2O equation can be used for mafic-intermediate melts at relatively oxidised conditions (NNO+1 to NNO+4). Pure CO2 fluid experiments are more reduced, Fe2+/T>0.8 where reported. Warn against use of CO2 for highly alkali compositions. <font color='red'>Simon, not sure whether we want to implement warnings for this, or just state it in the info/caution section </font>

Specific errors
1. Your pressure is out of range
2. Your temp is out of range
3. Your SiO2 is >54 wt%, which is the limit of the calibration range of the CO2 dataset; the authors specifically warn that the model isn't applicable for intermediate-silicic compositions where C is also present as molecular CO2.
The H2O model extends up to 69 wt%.
4. Your K2O+Na2O is outside the range for the CO2 model of X <font color='red'>Simon this is based on caution they say to take for alkali compositions </font>

```
# Simon - do we want to code in things like this?
df_Shishkina_CO2["K2O+Na2O"]=df_Shishkina_CO2["K2O"]+df_Shishkina_CO2["Na2O"]
df_Shishkina_H2O["K2O+Na2O"]=df_Shishkina_H2O["K2O"]+df_Shishkina_H2O["Na2O"]
columns=['Model', 'Oxide', 'Min', 'Max']
Shish_Lim=pd.DataFrame([['Shishkina-H2O', 'SiO2', df_Shishkina_H2O["SiO2"].min(), df_Shishkina_H2O["SiO2"].max()],
    ['Shishkina-CO2', 'SiO2', df_Shishkina_CO2["SiO2"].min(), df_Shishkina_CO2["SiO2"].max()],
    ['Shishkina-CO2', 'K2O+Na2O', df_Shishkina_CO2["K2O+Na2O"].min(), df_Shishkina_CO2["K2O+Na2O"].max()],
    ['Shishkina-H2O', 'K2O+Na2O', df_Shishkina_H2O["K2O+Na2O"].min(), df_Shishkina_H2O["K2O+Na2O"].max()],
    #['Shishkina-H2O', 'SiO2', df_Shishkina_H2O["SiO2"].min(), df_Shishkina_H2O["SiO2"].max()],
    #['Shishkina-CO2', 'SiO2', df_Shishkina_CO2["SiO2"].min()],df_Shishkina_CO2["SiO2"].min()
    ], columns=columns)
Shish_Lim
```

# Magmasat

Specific errors
1. Your Temp is <800C. The authors warn that below this, the calibration dataset only has H2O experiments, and very few, so there is concern with extrapolation to lower temperatures.

# Things to include for Eguchi and Dasgupta, 2018

Caution
1. This model is for CO2 only, and was only calibrated on H2O-poor compositions (~0.2-1 wt%). The authors suggest that for hydrous melts, a mixed CO2-H2O fluid saturation model must be used. The authors show comparisons in their paper; the difference between MagmaSat and this model is <30% for up to 4.5 wt% H2O, concluding that this model does a reasonable job of predicting CO2 solubility up to 2-4 wt% <font color='red'>This is surprising to me, given the differences I found with Allison... Would it work/not work to combine this model with a different H2O model as Allison do? </font>

Specific errors
1. Your H2O>1 wt%.
The model is only calibrated on water-poor compositions. The authors suggest the model works reasonably well up to H2O contents of 2-3 wt%.
2. Your P is >50000 bars; the authors warn the model works poorly at higher pressures, possibly due to a structural change in silicate melt at pressures above 5 GPa
3. Your P is <503 bars, which is the minimum pressure in the calibration dataset.

# Things to include for Allison model

Caution:
1. For the Allison model, please select which of their 6 systems (SFVF, Sunset Crater, Erebus, Vesuvius, Etna and Stromboli) is most suitable for your system using the element diagrams in the "calibration" notebook. <font color='red'>Simon should we try clustering in SiO2-Na2O-K2O space </font>
2. Note that the pressure calibration range for SFVF, Sunset Crater, Erebus and Vesuvius only incorporates pressures from ~4000-6000 bars. For Etna the authors added data from Iacono-Marziano et al. 2012 and Lesne et al. 2011, extending the calibration dataset down to ~485 bars, while for Stromboli they include data from Lesne, extending it down to ~269 bars.
3. Temperature is assumed to be fixed at 1200C in the Allison model.
4. Although this model is technically a CO2-only model, in all their calculations they combine their CO2 model with the H2O model of Lesne et al. 2011. You should implement a water model as well to get reliable answers.

Specific errors -
1. Your P>7000 bars; the spreadsheet provided by Allison et al. (2019) would not have given an answer.
2. Your P<50 bars; the spreadsheet provided by Allison et al. (2019) would not have given an answer.
3. <font color='red'>Include pressure range for each model? and build errors out of that </font>
4. You have not chosen a water model. Your results may be unreliable.
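The "specific errors" listed for the models above all amount to range checks on pressure, temperature, and composition. A minimal sketch of how such checks could be coded is below; the function name, the dict-based interface, and the illustrative limits are assumptions for this sketch, not the final implementation:

```python
def check_calibration_range(sample, limits):
    """Return a list of warning strings for values outside the given limits.

    `sample` maps quantity names to values; `limits` maps the same names
    to (min, max) tuples. Quantities absent from `sample` are skipped.
    """
    warnings = []
    for key, (lo, hi) in limits.items():
        value = sample.get(key)
        if value is None:
            continue
        if value < lo or value > hi:
            warnings.append(
                f"{key}={value} is outside the calibrated range [{lo}, {hi}]")
    return warnings


# Illustrative limits loosely based on the Allison et al. (2019) notes above
allison_limits = {"P_bars": (50, 7000), "T_C": (1200, 1200)}
msgs = check_calibration_range({"P_bars": 8000, "T_C": 1200}, allison_limits)
print(msgs)
```

Keeping the limits in a data structure rather than in hard-coded `if` blocks makes it easy to generate the per-model error tables discussed above from one function.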
# Feedback in the control loop

The controllers considered in the previous lectures require an estimate of the plant's state. To build such estimates, feedback must be implemented in the control loop. In practice, dedicated devices are used for this purpose: sensors.

# Random variables

A random variable is a variable whose value is determined by the outcome of an experiment influenced by random factors. Random variables are characterized by a probability density function

\begin{equation} p(a \leq \xi \leq b) = \int_{a}^{b} p(\xi) \,d\xi \end{equation}

which gives the probability that the value of $\xi$ falls in the interval $[a \quad b]$.

The expected value of a random variable is

\begin{equation} \mathbb{E}[\xi] = \int_{-\infty}^{\infty} \xi \cdot p(\xi) \,d\xi \end{equation}

The variance of a random variable:

\begin{equation} \mathbb{D}[\xi] = \mathbb{E}[\left(\xi - \mathbb{E}[\xi]\right)^2] \end{equation}

The covariance of two random variables:

\begin{equation} \Sigma[\xi_1, \xi_2] = \mathbb{E}[(\xi_1 - \mathbb{E}[\xi_1]) (\xi_2 - \mathbb{E}[\xi_2])] \end{equation}

```
# [EXAMPLE 1] Sampling a random variable
import numpy as np

xi = np.random.random()
print(xi)

# [EXAMPLE 2] Distribution of a random variable
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(200)
N = 999
xi = np.random.random(N)

# plot xi
fig1 = plt.figure(figsize=(10,5))
ax1 = fig1.add_subplot(1,1,1)
ax1.set_title("Random variable xi")
ax1.plot(range(N), xi, color = 'b')
ax1.set_ylabel(r'xi')
ax1.set_xlabel(r'n')
ax1.grid(True)

# [EXAMPLE 3] The normal distribution
import numpy as np
import scipy.stats as st
import matplotlib.pyplot as plt

x = np.random.normal(3, 1, 100000)
_, bins, _ = plt.hist(x, 50, density = True, alpha = 0.5)
mu, sigma = st.norm.fit(x)
best_fit_line = st.norm.pdf(bins, mu, sigma)
plt.plot(bins, best_fit_line)

# [EXAMPLE 4] Expected value and variance
import numpy as np
import matplotlib.pyplot as plt

#np.random.seed(200)
N = 999
xi = np.random.normal(0, 1, N)
e = np.mean(xi)
print("Expected value: ", (e))
d = np.mean((xi - e)**2)
print("Variance: ", (d))

# plot xi
fig1 = plt.figure(figsize=(10,5))
ax1 = fig1.add_subplot(1,1,1)
ax1.set_title("Random variable xi")
ax1.plot(range(N), xi, color = 'b')
ax1.set_ylabel(r'xi')
ax1.set_xlabel(r'n')
ax1.set_ylim([-4 * d, 4 * d])
ax1.grid(True)

mean = [0, 0]
covariance_mat = [[1., -0.5], [-0.5, 1.]]
x, y = np.random.multivariate_normal(mean, covariance_mat, 10000000).T
plt.figure(figsize = (3, 3))
plt.hist2d(x, y, bins=(1000, 1000), cmap = plt.cm.jet)
plt.subplots_adjust(bottom = 0, top = 1, left = 0, right = 1)
plt.xlim(-5, 5)
plt.ylim(-5, 5)
plt.show()
```

# The Kalman filter (linear systems)

System (process) model:
$$x_k = A_k \cdot x_{k-1} + B_k \cdot u_k + w_k,$$
where $w_k \sim N(0, Q_k)$ is a normally distributed random process with zero mean and covariance matrix $Q_k$.

Observation (measurement) model:
$$y_k = H_k \cdot x_k + v_k,$$
where $v_k$ is a normally distributed random process with zero mean and covariance matrix $R_k$.

The task is to obtain an estimate of the system state vector $\hat{x}_k$, given the corresponding estimate from the previous step ($\hat{x}_{k-1}$), the system output (measurement vector) $y_k$, and the vector of control inputs $u_k$.
## Prediction

Prediction (extrapolation) of the state vector using the process model:
$$\overline{x}_k = A_k \cdot \hat{x}_{k-1} + B_k \cdot u_k,$$

Prediction of the error covariance matrix:
$$\overline{P}_k = A_k\cdot \hat{P}_{k-1}\cdot A_k^T + Q_k$$

## Correction

Computation of the Kalman gain:
$$K_k = \overline{P}_k \cdot H_{k}^T \cdot \left(H_k\cdot \overline{P}_k\cdot H_k^T + R_k\right)^{-1}$$

Estimate of the error covariance matrix:
$$\hat{P}_k = \left(I - K_k\cdot H_k\right)\cdot \overline{P}_k$$

Estimate of the state vector:
$$\hat{x}_k = \overline{x}_k + K_k\cdot\left(y_k - H_k\cdot\overline{x}_k\right)$$

# Example

Consider a point moving along a straight line under random accelerations (constant within each control step). The state vector of the system contains the position $x$ and the velocity $v$, i.e. $x_k = [x \quad v]^T$.

Equations of motion:
$$x_k = A_k \cdot x_{k-1} + G_k \cdot a_k,$$
where
$$A_k = \begin{pmatrix} 1 & \Delta t \\ 0 & 1 \end{pmatrix}, \quad G_k = \begin{pmatrix} 0.5 \Delta t^2 \\ \Delta t \end{pmatrix}. $$

The process noise covariance matrix is
$$Q = G\cdot G^T \cdot \sigma_a^2,$$
where $\sigma_a$ characterizes the random distribution of accelerations.

At each step of the process the position of the point is measured. The observation matrix is therefore
$$H = \begin{pmatrix} 1 & 0 \end{pmatrix},$$
and the observation model is
$$y_k = H \cdot x_k + v_k,$$
where $v_k$ is normally distributed measurement noise ($\sigma_m$).
The measurement-noise covariance matrix is $R = \left[ \sigma_m^2 \right].$

```
import numpy as np
import matplotlib.pyplot as plt
from random import normalvariate

class RealWorld:
    def __init__(self, sigma_acc, sigma_meas, dt):
        self.time = 0.0
        self.time_step = dt
        self.position = 0.0
        self.sigma_acc = sigma_acc
        self.velocity = 0.1
        self.measurement = None
        # measurement noise
        self.sigma_meas = sigma_meas

    def measure(self):
        if self.measurement == None:
            self.measurement = self.position + normalvariate(0, self.sigma_meas)
        return self.measurement

    def step(self):
        self.time += self.time_step
        acceleration = normalvariate(0, self.sigma_acc)
        self.position += self.velocity * self.time_step + 0.5 * acceleration * self.time_step**2
        self.velocity += acceleration * self.time_step
        self.measurement = None
```

```
sigma_a = 0.01  # normally distributed accelerations
# standard deviation of the measurement noise
sigma_measurement = 10.
world = RealWorld(sigma_a, sigma_measurement, 0.5)

# state-transition (evolution) operator
A = np.array([[1., world.time_step],[0., 1.]])
G = np.array([0.5 * world.time_step**2, world.time_step])

# process noise
Q = np.outer(G, G) * sigma_a**2

# error covariance matrix
position_uncertainty = 1.
velocity_uncertainty = 1.
P = np.array([[position_uncertainty, 0.],[0., velocity_uncertainty]])

# observation model: only the position is measured
H = np.array([1., 0.])

# measurement-noise covariance
R = np.array([sigma_measurement**2])

episode_len = 1000
data = np.zeros((6, episode_len))

for i in range(episode_len):
    world.step()
    measurement = world.measure()
    if i == 0:
        # first measurement
        x_est = np.array([measurement, 0.])
    elif i == 1:
        # second measurement
        x_est = np.array([measurement, ( measurement - data[4, i-1] ) / world.time_step])
    else:
        # for i >= 2, start applying the model
        ##################################################################
        # prediction
        vel_est = data[5, i-1]
        pos_est = data[4, i-1] + vel_est * world.time_step
        x_pred = np.array([pos_est, vel_est])

        # prediction of the error covariance matrix
        P_pred = A.dot(P).dot(A.T) + Q

        ##################################################################
        # correction
        K = P_pred.dot(H.T) / (H.dot(P_pred).dot(H.T) + R)
        P = (np.eye(2) - K.dot(H)).dot(P_pred)

        x_est = x_pred + K.dot(measurement - H.dot(x_pred))

    data[:, i] = np.array([world.time, world.position, world.velocity, measurement, x_est[0], x_est[1]])

# plot
fig1 = plt.figure(figsize=(16,8))
ax1 = fig1.add_subplot(1,2,1)
ax2 = fig1.add_subplot(1,2,2)

# r
ax1.set_title("position")
ax1.plot(data[0, :], data[3, :], 'k', label = 'pos_mes')
ax1.plot(data[0, :], data[1, :], 'r', label = 'pos_world')
#ax1.plot(data[0, :], data[4, :]-data[1, :], 'g', label = 'pos_est')
ax1.set_ylabel(r'r')
ax1.set_xlabel(r't, [s]')
ax1.grid(True)
ax1.legend()

# v
ax2.set_title("velocity")
ax2.plot(data[0, :], data[2, :], 'r', label = 'v')
#ax2.plot(data[0, :], data[5, :], 'g', label = 'v_est')
ax2.set_ylabel(r'v')
ax2.set_xlabel(r't, [s]')
ax2.grid(True)
ax2.legend()

fig2 = plt.figure(figsize=(16, 8))
ax3 = fig2.add_subplot(1, 2, 1)
ax4 = fig2.add_subplot(1, 2, 2)

# r
ax3.set_title("position")
ax3.plot(data[0, :], data[1, :], 'r', label='pos_world')
ax3.plot(data[0, :], data[4, :], 'g', label='pos_est')
ax3.set_ylabel(r'r')
ax3.set_xlabel(r't, [s]')
ax3.grid(True)
ax3.legend()

# v
ax4.set_title("velocity")
ax4.plot(data[0, :], data[2, :], 'r', label='v')
ax4.plot(data[0, :], data[5, :], 'g', label='v_est')
ax4.set_ylabel(r'v')
ax4.set_xlabel(r't, [s]')
ax4.grid(True)
ax4.legend()
```

# Measurement Model

Consider the motion of a system

\begin{equation} \dot{x} = f(x). \end{equation}

The measurement vector $z$ depends on the state of the system and also contains a random component:

\begin{equation} z(x) = h(x) + \xi. \end{equation}

The function $h(x)$ relates the state of the system to the sensor measurement. For example, if in the one-dimensional problem of a cart moving on rails the GNSS receiver is offset from the center of the cart $x$ by a distance $r$, we can write

\begin{equation} h(x) = x + r. \end{equation}

# Extended Kalman Filter

The extended Kalman filter generally assumes a nonlinear model of the system (process):

\begin{equation} \dot{x} = f(x) + w(t). \end{equation}

The noise $w$ is normally distributed with zero mean and covariance matrix $Q$. The observation model may also be described by a nonlinear equation

\begin{equation} y = h(x) + v(t), \end{equation}

where $v$ is normally distributed with zero mean and covariance matrix $R$. As a rule, however, the filter is assumed to process measurements periodically, once per control cycle, so the model is written as a relation between the state vector at the measurement time, $x_k = x(t_k)$, and the set of measurements $y_k = y(t_k)$:

\begin{equation} y_k = h(x_k) + v_k. \end{equation}

The algorithm again runs in two stages: prediction and correction.
## Prediction

Prediction (extrapolation) of the state vector using the nonlinear process model:

$$\overline{x}_k = \hat{x}_{k-1} + \int_{t_{k-1}}^{t_k} f(x)\,dt.$$

Prediction of the error covariance matrix:

\begin{equation} \overline{P}_k = \Phi_k \cdot \hat{P}_{k-1} \cdot \Phi_k^T + Q, \end{equation}

where

\begin{equation} \Phi_k = I + F\cdot \Delta t = I + \frac{\partial f(x)}{\partial x}\cdot(t_k - t_{k-1}). \end{equation}

## Correction

The difference from the linear algorithm is that the observation model must be linearized to obtain the matrix $H$:

\begin{equation} H_k = \frac{\partial h(x)}{\partial x}. \end{equation}

Computation of the Kalman gain:

$$K_k = \overline{P}_k \cdot H_{k}^T \cdot \left(H_k\cdot \overline{P}_k\cdot H_k^T + R_k\right)^{-1}$$

Estimate of the error covariance matrix:

$$\hat{P}_k = \left(I - K_k\cdot H_k\right)\cdot \overline{P}_k$$

Estimate of the state vector:

$$\hat{x}_k = \overline{x}_k + K_k\cdot\left(y_k - h(\overline{x}_k)\right)$$

# Example: Rotation of a Rigid Body with a Fixed Center of Mass

The orientation of a rigid body is described by a quaternion $q$ that specifies the attitude of the body-fixed coordinate frame relative to some fixed frame. A quaternion $q^{\mathrm{BI}}$ is said to define the orientation of a frame (B) relative to another frame (I) if the representations of any vector $\mathbf{v}$ in these frames are related by

$$\mathbf{v}^{\mathrm{B}} = q^{\mathrm{BI}}\circ\mathbf{v}^{\mathrm{I}}\circ \tilde{q}^{\mathrm{BI}}$$

The kinematic equations of the rigid body are

\begin{equation}\label{eq:quat}\tag{1} \dot{q} = \frac{1}{2}q\circ \boldsymbol{\omega}, \end{equation}

where $q$ is the attitude quaternion of the body and $\boldsymbol{\omega}$ is the angular velocity of the body expressed in the body-fixed axes.
The model of motion of a rigid body about a fixed point is completed by Euler's dynamic equations

\begin{equation}\label{eq:euler}\tag{2} \mathbf{J}\cdot \dot{\boldsymbol{\omega}} + \boldsymbol{\omega} \times \mathbf{J}\cdot \boldsymbol{\omega} = \mathbf{T}, \end{equation}

where $\mathbf{J}$ is the inertia tensor of the body and $\mathbf{T}$ is the total torque acting on it. Thus the state vector consists of the 4 components of the attitude quaternion and the 3 components of the angular velocity vector, and the process model consists of equations \eqref{eq:quat} and \eqref{eq:euler}.
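As a sketch (not part of the original notebook), equations \eqref{eq:quat} and \eqref{eq:euler} can be propagated numerically, e.g. with an explicit Euler scheme. The scalar-first quaternion convention `[w, x, y, z]` and all helper names below are assumptions for illustration:

```python
import numpy as np

def quat_mult(p, q):
    """Hamilton product p∘q for quaternions stored as [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def quat_kinematics(q, omega):
    """Equation (1): dq/dt = 0.5 * q ∘ [0, omega]."""
    return 0.5 * quat_mult(q, np.concatenate(([0.0], omega)))

def euler_omega_dot(J, omega, torque):
    """Euler's equations (2): J * domega/dt = T - omega x (J * omega)."""
    return np.linalg.solve(J, torque - np.cross(omega, J @ omega))

def propagate(q, omega, J, torque, dt, steps):
    """Explicit-Euler propagation of the 7-component state (q, omega)."""
    for _ in range(steps):
        q = q + quat_kinematics(q, omega) * dt
        omega = omega + euler_omega_dot(J, omega, torque) * dt
        q = q / np.linalg.norm(q)  # renormalize to keep |q| = 1
    return q, omega
```

For torque-free rotation about a principal axis of inertia, the angular velocity stays constant and the quaternion simply spins about that axis, which makes a convenient sanity check for the integrator.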
```
import spotipy
from spotipy.oauth2 import SpotifyOAuth
import pandas as pd
import time

scope = 'user-top-read user-library-read'
sp = spotipy.Spotify(client_credentials_manager=SpotifyOAuth(scope=scope))
sp.user_playlists(sp.current_user()['id'])

results = sp.current_user_top_artists(time_range='short_term', limit=50)
all_genres = [genre for r in results['items'] for genre in r['genres']]
all_genres

from collections import Counter
top_genres = Counter(all_genres)
top_genres = {key: value for key, value in sorted(top_genres.items(), key=lambda k: k[1], reverse=True)}
top_genres
results

top_genres_and_artists = [[r['name'], r['id'], r['genres']]
                          if len(r['genres']) > 0
                          else [r['name'], r['id'], ['unknown genre']]
                          for r in results['items']]
top_genres_and_artists

artists = []
for artist_name, artist_id, genres in top_genres_and_artists:
    if 'indie soul' in genres:
        artists.append([artist_name, artist_id])
artists

def get_top_genres():
    results = sp.current_user_top_artists(time_range='short_term', limit=50)
    all_genres = [genre for r in results['items'] for genre in r['genres']]
    top_genres = Counter(all_genres)
    top_genres = {key: value for key, value in sorted(top_genres.items(), key=lambda k: k[1], reverse=True)}
    return top_genres

def get_top_artists(top_genres):
    # TO-DO: Let user select from top genres
    top_genre = list(top_genres.keys())[0]  # Get the only one top genre for now
    print("Selected genre: %s" % (top_genre))
    artists = []
    for artist_name, artist_id, genres in top_genres_and_artists:
        if top_genre in genres:
            artists.append([artist_name, artist_id])
    print("Selected artists belonging to this genre: ", artists)
    return artists

def get_discography(artist_id, min_track_duration=30000):
    tracks = []
    album_ids = [album['id'] for album in sp.artist_albums(artist_id)['items']
                 if album['album_type'] != 'compilation']
    for album_id in album_ids:
        # track_ids = [track['id'] for track in sp.album_tracks(album_id)['items'] if track['duration_ms'] > min_track_duration]
        for track in sp.album_tracks(album_id)['items']:
            # There are unexpected results while retrieving the discography of an artist
            # Only get the albums that the artist owns
            flag = False
            for artist in track['artists']:
                if artist['id'] == artist_id:
                    flag = True
                    break
            if flag and track['duration_ms'] > min_track_duration:
                tracks.append(track['id'])
        if len(tracks) >= 100:
            break
    return tracks

def get_all_features(artists):
    t = time.time()
    df = pd.DataFrame()
    for artist_name, artist_id in artists:
        try:
            tracks = get_discography(artist_id)
        except Exception:
            time.sleep(2)
            tracks = get_discography(artist_id)
        while len(tracks) > 0:
            if len(df) == 0:
                df = pd.DataFrame(sp.audio_features(tracks=tracks[:100]))
                df['artist_name'] = artist_name
                df['artist_id'] = artist_id
                # Could not add track names in here
                # API does not return audio features of all tracks
                # There might be a restriction on different markets
                # df['track_name'] = track_names[:100]
            else:
                df_feats = pd.DataFrame(sp.audio_features(tracks=tracks[:100]))
                df_feats['artist_name'] = artist_name
                df_feats['artist_id'] = artist_id
                # df_feats['track_name'] = track_names[:100]
                df = pd.concat([df, df_feats])  # df.append was removed in pandas 2.0
            tracks = tracks[100:]
    print(time.time() - t)
    return df

top_genres = get_top_genres()
artists = get_top_artists(top_genres)
df = get_all_features(artists)

df.tempo.median()
df.tempo.mean()
(df.tempo.min() + df.tempo.max()) / 2
df.sample(800)

def return_playlist(**kwargs):
    """
    danceability='default', energy='default', speechiness='default',
    acousticness='default', instrumentalness='default', liveness='default',
    valence='default', tempo='default'
    """
    # Select tracks based on the provided ranges
    top_genres = get_top_genres()
    artists = get_top_artists(top_genres)
    df = get_all_features(artists)
    print(len(df))
    # Sort dataframe based on provided features
    # Randomly return tracks based on sorted
    # TO-DO: Select tracks based on user market
    for feature, value in kwargs.items():
        avg = (df[feature].min() + df[feature].max()) / 2
        if value == 'high':
            # df.sort_values(feature, ascending=False, inplace=True)
            df = df[df[feature] > avg]
        elif value == 'low':
            # df.sort_values(feature, ascending=True, inplace=True)
            df = df[df[feature] < avg]
        print(len(df))
    # df = df.head(len(df)//3)
    try:
        return df.sample(25)
    except ValueError:  # fewer than 25 tracks left after filtering
        return df

playlist = return_playlist(danceability='high', instrumentalness='low', valence="low", tempo="low", energy="low")
playlist

def get_playlist_tracks(playlist):
    track_uris = playlist['uri'].to_list()
    artist_names = playlist['artist_name'].to_list()
    track_names = [track['name'] for track in sp.tracks(track_uris)['tracks']]
    tracks = ["{} by {}".format(track, artist) for track, artist in zip(track_names, artist_names)]
    return tracks, track_uris

_, ids = get_playlist_tracks(playlist)
ids

string = ""
for i in ids:
    string += i
    string += " "
string
string[:-1].split(" ")
```
# Graphics
```
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
```
## Heat Kernel
```
alpha = 1 / 2
d = 1
K = lambda t, x, y: (4 * np.pi * alpha * t) ** (-d / 2) * np.exp(-(np.abs(x - y) ** 2) / (4 * alpha * t))

t = 1
x = 0
y = np.linspace(-5, 5, 100)

plt.figure(figsize=(10, 2.5))
#plt.title('Heat kernel')
plt.plot(y, K(t, x, y))
plt.xlabel('$y$')
plt.ylabel('$K(1, 0, y)$')
plt.tight_layout()
plt.savefig('heatKernel.png', transparent=True)
```
## Cover image
```
piCreature = plt.imread('piCreature.png')
fire = plt.imread('fire.jpg')  # http://designbeep.com/2012/09/04/32-free-high-resolution-fire-textures-for-designers/
ice = plt.imread('ice.jpg')  # http://www.antarcticglaciers.org/antarctica-2/photographs/ice-textures-and-patterns/

px, py, _ = piCreature.shape
fx, fy, _ = fire.shape
ix, iy, _ = ice.shape

fxx = np.arange(0, fx - 1, fx // px)
fyy = np.arange(0, fy - 1, fy // py)
fireResize = fire[fxx, :, :]
fireResize = fireResize[:, fyy, :]

ixx = np.arange(0, ix - 1, ix // px)
iyy = np.arange(0, iy - 1, iy // py)
iceResize = ice[ixx, :, :]
iceResize = iceResize[:, iyy, :]

fireCut = fireResize[:px, :py, :]
iceCut = iceResize[:px, :py, :]  # use the resized texture (the original sliced `ice` directly, leaving iceResize unused)

plt.imshow(piCreature)
plt.imshow(fireCut)
plt.imshow(iceCut)

newPi = piCreature.copy()[..., :3]
newPi2 = piCreature.copy()[..., :3]
newPi[piCreature[..., 3] != 0] = (piCreature[..., :3] * fireCut)[piCreature[..., 3] != 0] / 255
newPi2[piCreature[..., 3] == 0] = (piCreature[..., :3] * iceCut)[piCreature[..., 3] == 0] / 255

# end = np.maximum(np.minimum(newPi + newPi2 - 1, 1), 0)
end = (newPi + newPi2) / 2

layer = np.ones((px, py))

# https://towardsdatascience.com/image-processing-with-python-5b35320a4f3c
# Gaussian formula
def gaus(std, mean, x):
    return (1/(std))*np.e**(-(x-mean)**2/(2*std**2))

# function used to normalize values to between 0-1
def norm(vals):
    return [(v-min(vals))/(max(vals)-min(vals)) for v in vals]

# function to build x/y Gaussian function from width (x) and height (y)
def build_gaus(width, height):
    # get a uniform range of floats in the range 0-1 for the x/y axes
    x_vals = np.arange(0, width, 1)
    y_vals = np.arange(0, height, 1)
    # calculate standard deviation/mean - meaningless in this case
    # but required to produce Gaussian
    x_std, y_std = np.std(x_vals), np.std(y_vals)
    x_m, y_m = np.mean(x_vals), np.mean(y_vals)
    # create Gaussians for both x/y axes
    x_gaussian = [gaus(x_std, x_m, x) for x in x_vals]
    y_gaussian = [gaus(y_std, y_m, y) for y in y_vals]
    # normalize the Gaussian to 0-1
    x_gaussian = np.array(norm(x_gaussian))
    y_gaussian = np.array(norm(y_gaussian))
    return x_gaussian, y_gaussian

# first we build our x/y Gaussian functions
x_gaus, y_gaus = build_gaus(layer.shape[0], layer.shape[1])

factor = 0.5
# apply the Gaussian functions to our image array
layer = layer * (x_gaus**factor)[:, None]
layer = layer * (y_gaus**factor)[None, :]
# factor changes vignette strength
#layer = layer.T  # transpose back to original shape

layer[piCreature[..., 0] < 0.5] = 1
layer[layer < 0.2] = 0.2
layer = (layer - 0.2) / 0.8
plt.imshow(layer)

final = np.concatenate((end, layer[:, :, None]), axis=2)
plt.imshow(final)

im = Image.fromarray((final * 255).astype(np.uint8))
im.save("cover.png")

orange = np.array([179, 3, 38])
blue = np.array([58, 76, 192])

piCreature2 = piCreature.copy()
piCreature2[piCreature2[..., 3] != 0] = 1

newPi3 = piCreature2.copy()[..., :3]
newPi4 = piCreature2.copy()[..., :3]
newPi3[piCreature2[..., 3] != 0] = (piCreature2[..., :3] * orange)[piCreature2[..., 3] != 0] / 255
newPi4[piCreature2[..., 3] == 0] = (piCreature2[..., :3] * blue)[piCreature2[..., 3] == 0] / 255

end2 = np.maximum(np.minimum(newPi3 + newPi4 - 1, 1), 0)
im = Image.fromarray((end2 * 255).astype(np.uint8))
im.save("piHeat.png")
```
# Latitude, Longitude for any pixel in a GeoTiff File
How to generate the latitude and longitude for a pixel at any given position in a GeoTiff file.
```
from osgeo import ogr, osr, gdal

# opening the geotiff file
ds = gdal.Open('G:\BTP\Satellite\Data\Test2\LE07_L1GT_147040_20050506_20170116_01_T2\LE07_L1GT_147040_20050506_20170116_01_T2_B1.TIF')

col, row, band = ds.RasterXSize, ds.RasterYSize, ds.RasterCount
print(col, row, band)

xoff, a, b, yoff, d, e = ds.GetGeoTransform()
print(xoff, a, b, yoff, d, e)
# details about the params: GDAL affine transform parameters
# xoff,yoff = left corner
# a,e = width,height of pixels
# b,d = rotation of the image (zero if image is north up)

def pixel2coord(x, y):
    """Returns global coordinates from coordinates x,y of the pixel"""
    xp = a * x + b * y + xoff
    yp = d * x + e * y + yoff
    return (xp, yp)

x, y = pixel2coord(col/2, row/2)
print(x, y)
```
#### These global coordinates are in a *projected coordinate system*, which is a representation of the spheroidal earth's surface, but flattened and distorted onto a plane.
#### To convert these into latitude and longitude, we need to convert these coordinates into a *geographic coordinate system*.
```
# get the existing coordinate system
old_cs = osr.SpatialReference()
old_cs.ImportFromWkt(ds.GetProjectionRef())

# create the new coordinate system
wgs84_wkt = """
GEOGCS["WGS 84",
    DATUM["WGS_1984",
        SPHEROID["WGS 84",6378137,298.257223563,
            AUTHORITY["EPSG","7030"]],
        AUTHORITY["EPSG","6326"]],
    PRIMEM["Greenwich",0,
        AUTHORITY["EPSG","8901"]],
    UNIT["degree",0.01745329251994328,
        AUTHORITY["EPSG","9122"]],
    AUTHORITY["EPSG","4326"]]"""
new_cs = osr.SpatialReference()
new_cs.ImportFromWkt(wgs84_wkt)

# create a transform object to convert between coordinate systems
transform = osr.CoordinateTransformation(old_cs, new_cs)

# converting into geographic coordinate system
lonx, latx, z = transform.TransformPoint(x, y)
print(latx, lonx, z)

# rb = ds.GetRasterBand(1)
px, py = col // 2, row // 2  # the pixel location
pix = ds.ReadAsArray(px, py, 1, 1)
print(pix[0][0])  # pixel value
```
# Reverse Geocoding
Converting a lat/long to a physical address or location. We want the name of the DISTRICT.
## --------------------------------------------------------------------------------
### API 1: Not so accurate
## --------------------------------------------------------------------------------
```
import reverse_geocoder as rg  # this import was missing in the original cell

coordinates = (latx, lonx)
results = rg.search(coordinates)
print(results)
print(type(results))
print(type(results[0]))
results[0]

k = 4  # If we want k*k pixels in total from the image
for i in range(0, col, col // k):
    for j in range(0, row, row // k):
        # fetching the lat and lon coordinates
        x, y = pixel2coord(i, j)
        lonx, latx, z = transform.TransformPoint(x, y)
        # fetching the name of district
        coordinates = (latx, lonx)
        results = rg.search(coordinates)
        # The pixel value for that location
        px, py = i, j
        pix = ds.ReadAsArray(px, py, 1, 1)
        pix = pix[0][0]
        # printing
        s = "The pixel value for the location Lat: {0:5.1f}, Long: {1:5.1f} ({2:15}) is {3:7}".format(latx, lonx, results[0]["name"], pix)
        print(s)
```
## --------------------------------------------------------------------------------
### API 2
## --------------------------------------------------------------------------------
```
import geocoder  # this import was missing in the original cell

g = geocoder.google([latx, lonx], method='reverse')
print(type(g))
print(g)
print(g.city)
print(g.state)
print(g.state_long)
print(g.country)
print(g.country_long)
print(g.address)
```
###### The above wrapper for the Google API is not good enough for us. It's not providing us with the district.
##### Let's try another python library available for the Google Geo API
```
from pygeocoder import Geocoder  # this import was missing in the original cell

results = Geocoder.reverse_geocode(latx, lonx)
print(results.city)
print(results.country)
print(results.street_address)
print(results.administrative_area_level_1)
print(results.administrative_area_level_2)  ## THIS GIVES THE DISTRICT !! <----------------
print(results.administrative_area_level_3)
```
##### This is what we need, we are getting the district name for given lat,lon coordinates
```
## Converting the unicode string to ascii string
v = results.country
print(type(v))
v = v.encode("ascii")
print(type(v))
print(v)
```
##### Now let's check for an image from Rajasthan
```
k = 4  # If we want k*k pixels in total from the image
for i in range(0, col, col // k):
    for j in range(0, row, row // k):
        # fetching the lat and lon coordinates
        x, y = pixel2coord(i, j)
        lonx, latx, z = transform.TransformPoint(x, y)
        # fetching the name of district
        results = Geocoder.reverse_geocode(latx, lonx)
        # The pixel value for that location
        px, py = i, j
        pix = ds.ReadAsArray(px, py, 1, 1)
        pix = pix[0][0]
        # printing
        if results.country == 'India':
            s = "Lat: {0:5.1f}, Long: {1:5.1f}, District: {2:12}, Pixel Val: {3:7}".format(latx, lonx, results.administrative_area_level_2, pix)
            print(s)
```
# Bing Maps REST API
```
import requests  # To make the REST API Call
import json

(latx, lonx)
url = "http://dev.virtualearth.net/REST/v1/Locations/"
point = str(latx) + "," + str(lonx)
key = "Aktjg1X8bLQ_KhLQbVueYMhXDEMo7OaTweIkBvFojInYE4tVxoTp1bGKWbtU_OPJ"
response = requests.get(url + point + "?key=" + key)
print(response.status_code)
data = response.json()
print(type(data))
data

s = data["resourceSets"][0]["resources"][0]["address"]["adminDistrict2"]
s

url = "http://dev.virtualearth.net/REST/v1/Locations/"
key = "Aktjg1X8bLQ_KhLQbVueYMhXDEMo7OaTweIkBvFojInYE4tVxoTp1bGKWbtU_OPJ"
```
## Bing API Test
#### For 100 pixel locations
```
k = 10  # If we want k*k pixels in total from the image
for i in range(0, col, col // k):
    for j in range(0, row, row // k):
        ############### fetching the lat and lon coordinates #######################################
        x, y = pixel2coord(i, j)
        lonx, latx, z = transform.TransformPoint(x, y)
        ############### fetching the name of district ##############################################
        point = str(latx) + "," + str(lonx)
        response = requests.get(url + point + "?key=" + key)
        data = response.json()
        s = data["resourceSets"][0]["resources"][0]["address"]
        if s["countryRegion"] != "India":
            print("Outside Indian Territory")
            continue
        district = s["adminDistrict2"]
        ############### The pixel value for that location ##########################################
        px, py = i, j
        pix = ds.ReadAsArray(px, py, 1, 1)
        pix = pix[0][0]
        # printing
        s = "Lat: {0:5.1f}, Long: {1:5.1f}, District: {2:12}, Pixel Val: {3:7}".format(latx, lonx, district, pix)
        print(s)
```
# We have another player in the ground!
We can reverse geocode by using the python libraries `shapely` and `fiona` with a shapefile of all the district boundaries of India.
```
import fiona
from shapely.geometry import Point, shape

# Change this for Win7
base = "/Users/macbook/Documents/BTP/Satellite/Data/Maps/Districts/Census_2011"
fc = fiona.open(base + "/2011_Dist.shp")
features = list(fc)  # materialize once so we can scan the features repeatedly

def reverse_geocode(pt):
    for feature in features:
        if shape(feature['geometry']).contains(pt):
            return feature['properties']['DISTRICT']
    return "NRI"

k = 10  # If we want k*k pixels in total from the image
for i in range(0, col, col // k):
    for j in range(0, row, row // k):
        ############### fetching the lat and lon coordinates #######################################
        x, y = pixel2coord(i, j)
        lonx, latx, z = transform.TransformPoint(x, y)
        ############### fetching the name of district ##############################################
        point = Point(lonx, latx)
        district = reverse_geocode(point)
        if district == "NRI":
            print("Outside Indian Territory")
            continue
        ############### The pixel value for that location ##########################################
        px, py = i, j
        pix = ds.ReadAsArray(px, py, 1, 1)
        pix = pix[0][0]
        # printing
        s = "Lat: {0:5.1f}, Long: {1:5.1f}, District: {2:12}, Pixel Val: {3:7}".format(latx, lonx, district, pix)
        print(s)
```
# Now we can proceed to GenFeatures Notebook
```
import os, sys
module_path = os.path.abspath(os.path.join('..'))
sys.path.append(module_path)

import random

from src.loader import *
from src.metrics import Metrics, avg_dicts

from tqdm import tqdm

class Random:
    """ Random baseline: probability of 1/(Avg seg length)
    that a sentence ends a seg """

    evalu = Metrics()

    def __init__(self):
        pass

    def __call__(self, *args):
        return self.validate(*args)

    def validate(self, dirname):
        """ Sample N floats in range [0,1]. If a float is less than
        the inverse of the average segment length, then say that is
        a predicted segmentation """
        if 'probability' not in self.__dict__:
            self.probability, self.labels, self.counts = self.parametrize(dirname)

        samples = [random.random() for _ in self.labels]
        preds = [1 if s <= self.probability else 0 for s in samples]

        batch = PseudoBatch(self.counts, self.labels)
        metrics_dict = self.evalu(batch, preds)

        return batch, preds, metrics_dict

    def parametrize(self, dirname):
        """ Return 1 / average segment as random probability pred,
        test_dir's labels """
        counts = flatten([self.parse_files(f) for f in crawl_directory(dirname)])
        labels = counts_to_labels(counts)
        avg_segs = sum(counts) / len(counts)
        probability = 1 / avg_segs
        return probability, labels, counts

    def parse_files(self, filename, minlen=1):
        """ Count number of segments in each subsection of a document """
        counts, subsection = [], ''
        with open(filename, encoding='utf-8', errors='strict') as f:
            # For each line in the file, skipping initial break
            for line in f.readlines()[1:]:
                # This '========' indicates a new subsection
                if line.startswith('========'):
                    counts.append(len(sent_tokenizer.tokenize(subsection.strip())))
                    subsection = ''
                else:
                    subsection += ' ' + line
        # Edge case of last subsection needs to be appended
        counts.append(len(sent_tokenizer.tokenize(subsection.strip())))
        return [c for c in counts if c >= minlen]

    def cross_validate(self, *args, trials=100):
        """ Average validation metrics over `trials` random seeds """
        dictionaries = []
        for seed in tqdm(range(trials)):
            random.seed(seed)
            batch, preds, metrics_dict = self.validate(*args)
            dictionaries.append(metrics_dict)
        merged = avg_dicts(dictionaries)
        return merged

random_baseline = Random()
# _, _, metrics_dict = random_baseline.validate('../data/wiki_50/test')
metrics_dict = random_baseline.cross_validate('../data/wiki_50/test', trials=100)
for k, v in metrics_dict.items():
    print(k, ':', v)
```
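`avg_dicts` is imported from `src.metrics` and its source is not shown here; judging from how it is used above (averaging a list of per-seed metric dictionaries that share the same keys), a minimal stand-in might look like:

```python
def avg_dicts(dictionaries):
    """Average a list of dicts sharing the same numeric keys, key by key."""
    keys = dictionaries[0].keys()
    n = len(dictionaries)
    return {k: sum(d[k] for d in dictionaries) / n for k in keys}
```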
# TV Script Generation
In this project, you'll generate your own [Simpsons](https://en.wikipedia.org/wiki/The_Simpsons) TV scripts using RNNs. You'll be using part of the [Simpsons dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data) of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at [Moe's Tavern](https://simpsonswiki.com/wiki/Moe's_Tavern).
## Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
```
""" DON'T MODIFY ANYTHING IN THIS CELL """
import helper

data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
```
## Explore the Data
Play around with `view_sentence_range` to view different parts of the data.
```
view_sentence_range = (0, 10)

""" DON'T MODIFY ANYTHING IN THIS CELL """
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))

scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))

sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))

print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
```
## Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation

### Lookup Table
To create a word embedding, you first need to transform the words to ids.
In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call `vocab_to_int`
- Dictionary to go from the id to word, we'll call `int_to_vocab`

Return these dictionaries in the following tuple `(vocab_to_int, int_to_vocab)`
```
import numpy as np
import problem_unittests as tests
from collections import Counter

def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    # TODO: Implement Function
    words = ' '.join([c for c in text])
    count = Counter(words.split())
    vocab = sorted(count, key=count.get, reverse=True)
    vocab_to_int = {word: index for index, word in enumerate(vocab, 1)}
    int_to_vocab = {index: word for index, word in enumerate(vocab, 1)}
    return vocab_to_int, int_to_vocab

""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_create_lookup_tables(create_lookup_tables)
```
### Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".

Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )

This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
```
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    # TODO: Implement Function
    punctuations = {
        '.': '||period||',
        ',': '||comma||',
        '"': '||quotation_mark||',
        ';': '||semicolon||',
        '!': '||exclamation_mark||',
        '?': '||question_mark||',
        '(': '||left_parentheses||',
        ')': '||right_parentheses||',
        '--': '||dash||',
        '\n': '||return||'
    }
    return punctuations

""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_tokenize(token_lookup)
```
## Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
```
""" DON'T MODIFY ANYTHING IN THIS CELL """
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
```
# Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
```
""" DON'T MODIFY ANYTHING IN THIS CELL """
import helper
import numpy as np
import problem_unittests as tests

int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
```
## Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches

### Check the Version of TensorFlow and Access to GPU
```
""" DON'T MODIFY ANYTHING IN THIS CELL """
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
```
### Input
Implement the `get_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) `name` parameter.
- Targets placeholder
- Learning Rate placeholder

Return the placeholders in the following tuple `(Input, Targets, LearningRate)`
```
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    # TODO: Implement Function
    inputs = tf.placeholder(tf.int32, [None, None], name="input")
    targets = tf.placeholder(tf.int32, [None, None], name="targets")
    lr = tf.placeholder(tf.float32, name='learning_rate')
    return inputs, targets, lr

""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_get_inputs(get_inputs)
```
### Build RNN Cell and Initialize
Stack one or more [`BasicLSTMCells`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/BasicLSTMCell) in a [`MultiRNNCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCell).
- The RNN size should be set using `rnn_size`
- Initialize Cell State using the MultiRNNCell's [`zero_state()`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCell#zero_state) function
- Apply the name "initial_state" to the initial state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity)

Return the cell and initial state in the following tuple `(Cell, InitialState)`
```
def get_init_cell(batch_size, rnn_size):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initialize state)
    """
    # TODO: Implement Function
    # Build a separate cell object for each layer; reusing one instance
    # would make the layers share weights.
    cells = [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(2)]
    cell = tf.contrib.rnn.MultiRNNCell(cells)
    initialize_state = tf.identity(cell.zero_state(batch_size, tf.float32), name='initial_state')
    return cell, initialize_state

""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_get_init_cell(get_init_cell)
```
### Word Embedding
Apply embedding to `input_data` using TensorFlow. Return the embedded sequence.
```
def get_embed(input_data, vocab_size, embed_dim):
    """
    Create embedding for <input_data>.
    :param input_data: TF placeholder for text input.
    :param vocab_size: Number of words in vocabulary.
    :param embed_dim: Number of embedding dimensions
    :return: Embedded input.
    """
    # TODO: Implement Function
    embeddings = tf.Variable(tf.random_normal([vocab_size, embed_dim], stddev=0.1), name='embeddings')
    embed = tf.nn.embedding_lookup(embeddings, input_data, name='embed')
    return embed

""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_get_embed(get_embed)
```
### Build RNN
You created a RNN Cell in the `get_init_cell()` function. Time to use the cell to create a RNN.
- Build the RNN using the [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) - Apply the name "final_state" to the final state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity) Return the outputs and final state in the following tuple `(Outputs, FinalState)` ``` def build_rnn(cell, inputs): """ Create an RNN using an RNN Cell :param cell: RNN Cell :param inputs: Input text data :return: Tuple (Outputs, Final State) """ # TODO: Implement Function outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32) final_state = tf.identity(final_state, name='final_state') return outputs, final_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_build_rnn(build_rnn) ``` ### Build the Neural Network Apply the functions you implemented above to: - Apply embedding to `input_data` using your `get_embed(input_data, vocab_size, embed_dim)` function. - Build RNN using `cell` and your `build_rnn(cell, inputs)` function. - Apply a fully connected layer with a linear activation and `vocab_size` as the number of outputs. 
Return the logits and final state in the following tuple (Logits, FinalState) ``` def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim): """ Build part of the neural network :param cell: RNN cell :param rnn_size: Size of rnns :param input_data: Input data :param vocab_size: Vocabulary size :param embed_dim: Number of embedding dimensions :return: Tuple (Logits, FinalState) """ # TODO: Implement Function embedded_inputs = get_embed(input_data, vocab_size, embed_dim) outputs, final_state = build_rnn(cell, embedded_inputs) logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None, weights_initializer=tf.truncated_normal_initializer(stddev=0.1), biases_initializer=tf.zeros_initializer()) return logits, final_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_build_nn(build_nn) ``` ### Batches Implement `get_batches` to create batches of input and targets using `int_text`. The batches should be a Numpy array with the shape `(number of batches, 2, batch size, sequence length)`. Each batch contains two elements: - The first element is a single batch of **input** with the shape `[batch size, sequence length]` - The second element is a single batch of **targets** with the shape `[batch size, sequence length]` If you can't fill the last batch with enough data, drop the last batch. For example, `get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2)` would return a Numpy array of the following: ``` [ # First Batch [ # Batch of Input [[ 1 2], [ 7 8], [13 14]] # Batch of targets [[ 2 3], [ 8 9], [14 15]] ] # Second Batch [ # Batch of Input [[ 3 4], [ 9 10], [15 16]] # Batch of targets [[ 4 5], [10 11], [16 17]] ] # Third Batch [ # Batch of Input [[ 5 6], [11 12], [17 18]] # Batch of targets [[ 6 7], [12 13], [18 1]] ] ] ``` Notice that the last target value in the last batch is the first input value of the first batch. In this case, `1`. 
This is a common technique used when creating sequence batches, although it is rather unintuitive. ``` def get_batches(int_text, batch_size, seq_length): """ Return batches of input and target :param int_text: Text with the words replaced by their ids :param batch_size: The size of batch :param seq_length: The length of sequence :return: Batches as a Numpy array """ # Calculate the number of batches num_batches = len(int_text) // (batch_size * seq_length) # Drop leftover words that don't fill a complete batch. Transform into a numpy array and reshape it for our purposes np_text = np.array(int_text[:num_batches * (batch_size * seq_length)]) # Reshape the data to give us the inputs sequence. in_text = np_text.reshape(-1, seq_length) # Roll (shift) and reshape to get target sequences (maybe not optimal) tar_text = np.roll(np_text, -1).reshape(-1, seq_length) output = np.zeros(shape=(num_batches, 2, batch_size, seq_length), dtype=np.int) # Prepare the output for idx in range(0, in_text.shape[0]): jj = idx % num_batches ii = idx // num_batches output[jj,0,ii,:] = in_text[idx,:] output[jj,1,ii,:] = tar_text[idx,:] return output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_batches(get_batches) ``` ## Neural Network Training ### Hyperparameters Tune the following parameters: - Set `num_epochs` to the number of epochs. - Set `batch_size` to the batch size. - Set `rnn_size` to the size of the RNNs. - Set `embed_dim` to the size of the embedding. - Set `seq_length` to the length of sequence. - Set `learning_rate` to the learning rate. - Set `show_every_n_batches` to the number of batches between progress printouts. 
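Before moving on to training, the batching scheme described above is easy to get wrong, so it helps to sanity-check it independently. Below is a pure-NumPy sketch (`make_batches` is a hypothetical stand-alone name, not the project's `get_batches`) that reproduces the worked example for `get_batches([1, ..., 20], 3, 2)`:

```python
import numpy as np

def make_batches(int_text, batch_size, seq_length):
    # Number of complete batches that fit in the data
    n_batches = len(int_text) // (batch_size * seq_length)
    # Keep only the words that fill complete batches
    data = np.array(int_text[:n_batches * batch_size * seq_length])
    # Targets are the inputs shifted left by one, wrapping to the first word
    targets = np.roll(data, -1)
    # One row per batch-slot, then split along the time axis into batches
    x = np.split(data.reshape(batch_size, -1), n_batches, axis=1)
    y = np.split(targets.reshape(batch_size, -1), n_batches, axis=1)
    return np.array(list(zip(x, y)))

batches = make_batches(list(range(1, 21)), 3, 2)
print(batches.shape)      # (3, 2, 3, 2): 3 batches of (input, target) pairs
print(batches[0][0])      # first input batch: rows [1 2], [7 8], [13 14]
print(batches[-1][1])     # last target batch ends with [18 1], wrapping around
```

The final target row wrapping back to `1` matches the note above about the last target value.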
``` # Number of Epochs num_epochs = 50 # Batch Size batch_size = 128 # RNN Size rnn_size = 512 # Embedding Dimension Size embed_dim = 256 # Sequence Length seq_length = 16 # Learning Rate learning_rate = 0.007 # Show stats for every n number of batches show_every_n_batches = 100 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ save_dir = './save' ``` ### Build the Graph Build the graph using the neural network you implemented. ``` """ DON'T MODIFY ANYTHING IN THIS CELL """ from tensorflow.contrib import seq2seq train_graph = tf.Graph() with train_graph.as_default(): vocab_size = len(int_to_vocab) input_text, targets, lr = get_inputs() input_data_shape = tf.shape(input_text) cell, initial_state = get_init_cell(input_data_shape[0], rnn_size) logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim) # Probabilities for generating words probs = tf.nn.softmax(logits, name='probs') # Loss function cost = seq2seq.sequence_loss( logits, targets, tf.ones([input_data_shape[0], input_data_shape[1]])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ``` ## Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the [forums](https://discussions.udacity.com/) to see if anyone is having the same problem. 
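The cost in the graph above comes from `seq2seq.sequence_loss` with a weight matrix of ones. What that computes can be sketched in plain NumPy (an illustrative stand-in, not the TensorFlow implementation): the softmax cross-entropy between the logits and the target word id at every position, averaged over the whole batch.

```python
import numpy as np

def sequence_loss(logits, targets):
    # logits: (batch, time, vocab) unnormalized scores
    # targets: (batch, time) integer word ids
    # Subtract the max for numerical stability before the log-softmax
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    b, t = targets.shape
    # Negative log-probability of the correct word at each position
    nll = -log_probs[np.arange(b)[:, None], np.arange(t)[None, :], targets]
    # Weights of one everywhere, so the loss is a plain mean
    return nll.mean()

# With uniform logits over a 4-word vocabulary the loss is ln(4) ~ 1.386
print(sequence_loss(np.zeros((2, 3, 4)), np.zeros((2, 3), dtype=int)))
```

This also makes clear why a freshly initialized model should start near `ln(vocab_size)`: every word is equally likely.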
``` """ DON'T MODIFY ANYTHING IN THIS CELL """ batches = get_batches(int_text, batch_size, seq_length) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(num_epochs): state = sess.run(initial_state, {input_text: batches[0][0]}) for batch_i, (x, y) in enumerate(batches): feed = { input_text: x, targets: y, initial_state: state, lr: learning_rate} train_loss, state, _ = sess.run([cost, final_state, train_op], feed) # Show every <show_every_n_batches> batches if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0: print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format( epoch_i, batch_i, len(batches), train_loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_dir) print('Model Trained and Saved') ``` ## Save Parameters Save `seq_length` and `save_dir` for generating a new TV script. ``` """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params((seq_length, save_dir)) ``` # Checkpoint ``` """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() seq_length, load_dir = helper.load_params() ``` ## Implement Generate Functions ### Get Tensors Get tensors from `loaded_graph` using the function [`get_tensor_by_name()`](https://www.tensorflow.org/api_docs/python/tf/Graph#get_tensor_by_name). 
Get the tensors using the following names: - "input:0" - "initial_state:0" - "final_state:0" - "probs:0" Return the tensors in the following tuple `(InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)` ``` def get_tensors(loaded_graph): """ Get input, initial state, final state, and probabilities tensor from <loaded_graph> :param loaded_graph: TensorFlow graph loaded from file :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) """ # TODO: Implement Function input_tensor = loaded_graph.get_tensor_by_name('input:0') initial_state_tensor = loaded_graph.get_tensor_by_name('initial_state:0') final_state_tensor = loaded_graph.get_tensor_by_name('final_state:0') probs_tensor = loaded_graph.get_tensor_by_name('probs:0') return (input_tensor, initial_state_tensor, final_state_tensor, probs_tensor) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_tensors(get_tensors) ``` ### Choose Word Implement the `pick_word()` function to select the next word using `probabilities`. ``` def pick_word(probabilities, int_to_vocab): """ Pick the next word in the generated text :param probabilities: Probabilities of the next word :param int_to_vocab: Dictionary of word ids as the keys and words as the values :return: String of the predicted word """ # TODO: Implement Function return int_to_vocab[np.argmax(probabilities)] """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_pick_word(pick_word) ``` ## Generate TV Script This will generate the TV script for you. Set `gen_length` to the length of TV script you want to generate. 
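A note on `pick_word()` above: taking `np.argmax` makes generation deterministic, which tends to produce repetitive scripts. A common alternative, sketched below under the hypothetical name `pick_word_sampled`, draws the next word id in proportion to its predicted probability instead:

```python
import numpy as np

def pick_word_sampled(probabilities, int_to_vocab, rng=None):
    # Sample the next word id from the predicted distribution,
    # instead of always taking the argmax as pick_word() does.
    if rng is None:
        rng = np.random
    word_id = rng.choice(len(probabilities), p=probabilities)
    return int_to_vocab[word_id]

int_to_vocab = {0: 'homer_simpson', 1: 'moe_szyslak', 2: 'barney_gumble'}
probs = np.array([0.1, 0.7, 0.2])
print(pick_word_sampled(probs, int_to_vocab))  # usually 'moe_szyslak', but not always
```

Both choices pass a loose enough unit test; sampling simply trades a little per-step accuracy for more varied output.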
``` gen_length = 200 # homer_simpson, moe_szyslak, or Barney_Gumble prime_word = 'moe_szyslak' """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_dir + '.meta') loader.restore(sess, load_dir) # Get Tensors from loaded model input_text, initial_state, final_state, probs = get_tensors(loaded_graph) # Sentences generation setup gen_sentences = [prime_word + ':'] prev_state = sess.run(initial_state, {input_text: np.array([[1]])}) # Generate sentences for n in range(gen_length): # Dynamic Input dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]] dyn_seq_length = len(dyn_input[0]) # Get Prediction probabilities, prev_state = sess.run( [probs, final_state], {input_text: dyn_input, initial_state: prev_state}) pred_word = pick_word(probabilities[0][dyn_seq_length-1], int_to_vocab) #pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab) gen_sentences.append(pred_word) # Remove tokens tv_script = ' '.join(gen_sentences) for key, token in token_dict.items(): ending = ' ' if key in ['\n', '(', '"'] else '' tv_script = tv_script.replace(' ' + token.lower(), key) tv_script = tv_script.replace('\n ', '\n') tv_script = tv_script.replace('( ', '(') print(tv_script) ``` # The TV Script is Nonsensical It's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckily there's more data! As we mentioned in the beginning of this project, this is a subset of [another dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data). We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course. 
# Submitting This Project When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
<a href="https://colab.research.google.com/github/tvml/ml2021/blob/main/codici/ae.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` IS_COLAB = ('google.colab' in str(get_ipython())) if IS_COLAB: %tensorflow_version 2.x import tensorflow as tf from tensorflow.keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D, Flatten, Reshape from tensorflow.keras.models import Model, Sequential, model_from_json from tensorflow.keras import regularizers from tensorflow.keras.datasets import mnist import matplotlib.pyplot as plt import numpy as np print(tf.__version__) from platform import python_version print(python_version()) if IS_COLAB: from google.colab import drive drive.mount('/gdrive') filepath = "/gdrive/My Drive/colab_data/" else: filepath = "../ml_store/" def save_model(m,filename): model_json = m.to_json() with open(filepath+filename+".json", "w") as json_file: json_file.write(model_json) # serialize weights to HDF5 m.save_weights(filepath+filename+".h5") print("Saved model to disk") def load_model_weights(filename, model): model.load_weights(filepath+filename+".h5") print("Loaded weights from disk") return model def load_model(filename): json_file = open(filepath+filename+'.json', 'r') loaded_model_json = json_file.read() json_file.close() m = model_from_json(loaded_model_json) # load weights into new model m.load_weights(filepath+filename+".h5") print("Loaded model from disk") return m # this is the size of our encoded representations encoding_dim = 32 input_size = 784 ae = Sequential() # Encoder Layers ae.add(Dense(encoding_dim, input_shape=(input_size,), activation='relu')) # Decoder Layers ae.add(Dense(input_size, activation='sigmoid')) ae.summary() ae.compile(optimizer='adadelta', loss='binary_crossentropy') (x_train, _), (x_test, _) = mnist.load_data() x_train = x_train.astype('float32') / 255. x_test = x_test.astype('float32') / 255. 
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:]))) x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:]))) print(x_train.shape) print(x_test.shape) #ae = load_model_weights('ae', ae) history = ae.fit(x_train, x_train, epochs=50, batch_size=256, shuffle=True, validation_data=(x_test, x_test)) save_model(ae,'ae') ae.layers[0].get_weights()[0][780,:] input_img = Input(shape=(input_size,)) encoder_layer1 = ae.layers[0] encoder = Model(input_img, encoder_layer1(input_img)) encoder.summary() num_images = 10 np.random.seed(42) random_test_images = np.random.randint(x_test.shape[0], size=num_images) encoded_imgs = encoder.predict(x_test) decoded_imgs = ae.predict(x_test) plt.figure(figsize=(18, 4)) for i, image_idx in enumerate(random_test_images): # plot original image ax = plt.subplot(3, num_images, i + 1) plt.imshow(x_test[image_idx].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # plot encoded image ax = plt.subplot(3, num_images, num_images + i + 1) plt.imshow(encoded_imgs[image_idx].reshape(8, 4)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # plot reconstructed image ax = plt.subplot(3, num_images, 2*num_images + i + 1) plt.imshow(decoded_imgs[image_idx].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show() ae1 = Sequential() # Encoder Layers ae1.add(Dense(encoding_dim, input_shape=(input_size,), activation='relu', activity_regularizer=regularizers.l1(10e-6))) # Decoder Layers ae1.add(Dense(input_size, activation='sigmoid')) ae1.summary() ae1.compile(optimizer='adadelta', loss='binary_crossentropy') #ae1 = load_model_weights('ae1', ae1) ae1.fit(x_train, x_train, epochs=150, batch_size=256, shuffle=True, validation_data=(x_test, x_test)) save_model(ae1,'ae1') input_img = Input(shape=(input_size,)) encoder_layer1 = ae1.layers[0] encoder1 = Model(input_img, encoder_layer1(input_img)) encoder1.summary() 
num_images = 10 np.random.seed(42) random_test_images = np.random.randint(x_test.shape[0], size=num_images) encoded_imgs = encoder1.predict(x_test) decoded_imgs = ae1.predict(x_test) plt.figure(figsize=(18, 4)) for i, image_idx in enumerate(random_test_images): # plot original image ax = plt.subplot(3, num_images, i + 1) plt.imshow(x_test[image_idx].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # plot encoded image ax = plt.subplot(3, num_images, num_images + i + 1) plt.imshow(encoded_imgs[image_idx].reshape(8, 4)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # plot reconstructed image ax = plt.subplot(3, num_images, 2*num_images + i + 1) plt.imshow(decoded_imgs[image_idx].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show() ae2 = Sequential() # Encoder Layers ae2.add(Dense(4 * encoding_dim, input_shape=(784,), activation='relu')) ae2.add(Dense(2 * encoding_dim, activation='relu')) ae2.add(Dense(encoding_dim, activation='relu')) # Decoder Layers ae2.add(Dense(2 * encoding_dim, activation='relu')) ae2.add(Dense(4 * encoding_dim, activation='relu')) ae2.add(Dense(784, activation='sigmoid')) ae2.summary() ae2.compile(optimizer='adam', loss='binary_crossentropy') ae2 = load_model_weights('ae2', ae2) ae2.fit(x_train, x_train, epochs=50, batch_size=256, validation_data=(x_test, x_test)) save_model(ae2,'ae2') input_img = Input(shape=(input_size,)) encoder_layer1 = ae2.layers[0] encoder_layer2 = ae2.layers[1] encoder_layer3 = ae2.layers[2] encoder2 = Model(input_img, encoder_layer3(encoder_layer2(encoder_layer1(input_img)))) encoder2.summary() num_images = 10 np.random.seed(42) random_test_images = np.random.randint(x_test.shape[0], size=num_images) encoded_imgs = encoder2.predict(x_test) decoded_imgs = ae2.predict(x_test) plt.figure(figsize=(18, 4)) for i, image_idx in enumerate(random_test_images): # plot original image ax = 
plt.subplot(3, num_images, i + 1) plt.imshow(x_test[image_idx].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # plot encoded image ax = plt.subplot(3, num_images, num_images + i + 1) plt.imshow(encoded_imgs[image_idx].reshape(8, 4)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # plot reconstructed image ax = plt.subplot(3, num_images, 2*num_images + i + 1) plt.imshow(decoded_imgs[image_idx].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show() x_train_r = x_train.reshape((len(x_train), 28, 28, 1)) x_test_r = x_test.reshape((len(x_test), 28, 28, 1)) autoencoder = Sequential() # Encoder Layers autoencoder.add(Conv2D(16, (3, 3), activation='relu', padding='same', input_shape=x_train_r.shape[1:])) autoencoder.add(MaxPooling2D((2, 2), padding='same')) autoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same')) autoencoder.add(MaxPooling2D((2, 2), padding='same')) autoencoder.add(Conv2D(8, (3, 3), strides=(2,2), activation='relu', padding='same')) # Flatten encoding for visualization autoencoder.add(Flatten()) autoencoder.add(Reshape((4, 4, 8))) # Decoder Layers autoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same')) autoencoder.add(UpSampling2D((2, 2))) autoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same')) autoencoder.add(UpSampling2D((2, 2))) autoencoder.add(Conv2D(16, (3, 3), activation='relu')) autoencoder.add(UpSampling2D((2, 2))) autoencoder.add(Conv2D(1, (3, 3), activation='sigmoid', padding='same')) autoencoder.summary() encoder = Model(inputs=autoencoder.input, outputs=autoencoder.get_layer('flatten_1').output) encoder.summary() autoencoder.compile(optimizer='adam', loss='binary_crossentropy') autoencoder = load_model_weights('ae3',autoencoder) autoencoder.fit(x_train_r, x_train_r, epochs=10, batch_size=128, validation_data=(x_test_r, x_test_r)) save_model(autoencoder, 'ae3') num_images = 
10 np.random.seed(42) random_test_images = np.random.randint(x_test.shape[0], size=num_images) encoded_imgs = encoder.predict(x_test_r) decoded_imgs = autoencoder.predict(x_test_r) plt.figure(figsize=(18, 4)) for i, image_idx in enumerate(random_test_images): # plot original image ax = plt.subplot(3, num_images, i + 1) plt.imshow(x_test[image_idx].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # plot encoded image ax = plt.subplot(3, num_images, num_images + i + 1) plt.imshow(encoded_imgs[image_idx].reshape(16, 8)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # plot reconstructed image ax = plt.subplot(3, num_images, 2*num_images + i + 1) plt.imshow(decoded_imgs[image_idx].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show() ```
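A side note on the bottleneck idea behind the dense autoencoders above: with linear activations, the best a k-unit bottleneck can do on centered data coincides with projecting onto the top-k principal components. None of this depends on Keras; the following NumPy sketch (all names illustrative) shows that a bottleneck matching the true latent dimensionality reconstructs the data almost perfectly, while a narrower one cannot:

```python
import numpy as np

rng = np.random.RandomState(42)
# Toy data: 200 points in 10-D that lie near a 3-D subspace, plus small noise
latent = rng.randn(200, 3)
X = latent @ rng.randn(3, 10) + 0.01 * rng.randn(200, 10)
X = X - X.mean(axis=0)

def linear_ae_reconstruction(X, k):
    # Optimal linear encoder/decoder pair of width k, obtained via SVD:
    # encode = project onto the top-k right singular vectors, decode = map back
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    code = X @ Vt[:k].T
    return code @ Vt[:k]

err1 = np.mean((X - linear_ae_reconstruction(X, 1)) ** 2)
err3 = np.mean((X - linear_ae_reconstruction(X, 3)) ** 2)
print(err1, err3)  # the 3-D bottleneck leaves only the noise unexplained
```

Nonlinear activations (the `relu`/`sigmoid` stacks above) can do better than this linear baseline, which is the point of training them.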
``` import tensorflow as tf import numpy as np from matplotlib import pyplot as plt %matplotlib inline plt.style.use('ggplot') ``` ## Data ``` n_observations = 10000 xs = np.linspace(-3,3,n_observations) ys = np.sin(xs) + np.random.uniform(-0.5,0.5,n_observations) plt.plot(xs,ys, marker='+',alpha=0.4) ``` ## Cost ``` sess = tf.Session() X = tf.placeholder(tf.float32, name='X') Y = tf.placeholder(tf.float32, name='Y') n = tf.random_normal([1000],stddev=0.1).eval(session=sess) plt.hist(n) W = tf.Variable(tf.random_normal([1], dtype=tf.float32, stddev=0.1), name='weight') B = tf.Variable(tf.constant([0], dtype=tf.float32), name='bias') Y_pred = X * W + B cost = tf.abs(Y_pred - Y) # mean over all samples (similar to np.mean) cost = tf.reduce_mean(cost) ``` ## Training ``` sess = tf.InteractiveSession() # Plot the true data distribution fig, ax = plt.subplots(1, 1) ax.scatter(xs, ys, alpha=0.15, marker='+') # with tf.Session() as sess: # we already have an interactive session open # init all the variables in the graph # This will set `W` and `b` to their initial random normal value. 
sess.run(tf.global_variables_initializer()) # We now run a loop over epochs prev_training_cost = 0.0 n_iterations = 500 optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost) for it_i in range(n_iterations): sess.run(optimizer, feed_dict={X: xs, Y: ys}) training_cost = sess.run(cost, feed_dict={X: xs, Y: ys}) # every 10 iterations if it_i % 10 == 0: # let's plot the x versus the predicted y ys_pred = Y_pred.eval(feed_dict={X: xs}, session=sess) # We'll draw points as a scatter plot just like before # Except we'll also scale the alpha value so that it gets # darker as the iterations get closer to the end ax.plot(xs, ys_pred, 'k', alpha=float(it_i) / (n_iterations/2.)) fig.show() plt.draw() # And let's print our training cost: mean of absolute differences # print(training_cost) # Allow the training to quit if we've reached a minimum if np.abs(prev_training_cost - training_cost) < 0.000001: break # Keep track of the training cost prev_training_cost = training_cost ``` ## Stochastic/minibatch gradient descent ``` idxs = np.arange(100) rand_idxs = np.random.permutation(idxs) batch_size = 10 n_batches = len(rand_idxs) // batch_size print('# of batches:', n_batches) def distance(p1, p2): return tf.abs(p1 - p2) def train(X, Y, Y_pred, n_iterations=100, batch_size=200, learning_rate=0.02): cost = tf.reduce_mean(distance(Y_pred, Y)) optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) fig, ax = plt.subplots(1, 1) ax.scatter(xs, ys, alpha=0.15, marker='+') ax.set_xlim([-4, 4]) ax.set_ylim([-2, 2]) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # We now run a loop over epochs prev_training_cost = 0.0 for it_i in range(n_iterations): idxs = np.random.permutation(range(len(xs))) n_batches = len(idxs) // batch_size for batch_i in range(n_batches): idxs_i = idxs[batch_i * batch_size: (batch_i + 1) * batch_size] sess.run(optimizer, feed_dict={X: xs[idxs_i], Y: ys[idxs_i]}) training_cost = sess.run(cost, 
feed_dict={X: xs, Y: ys}) if it_i % 10 == 0: ys_pred = Y_pred.eval(feed_dict={X: xs}, session=sess) ax.plot(xs, ys_pred, 'k', alpha=it_i / float(n_iterations)) print(training_cost) fig.show() plt.draw() Y_pred = tf.Variable(tf.random_normal([1]), name='bias') for pow_i in range(0, 4): W = tf.Variable( tf.random_normal([1], stddev=0.1), name='weight_%d' % pow_i) Y_pred = tf.add(tf.multiply(tf.pow(X, pow_i), W), Y_pred) # And then we'll retrain with our new Y_pred train(X, Y, Y_pred) from tensorflow.python.framework import ops ops.reset_default_graph() g = tf.get_default_graph() sess = tf.InteractiveSession() n_observations = 10000 xs = np.linspace(-3,3,n_observations) ys = np.sin(xs) + np.random.uniform(-0.5,0.5,n_observations) X = tf.placeholder(tf.float32, shape=[1, None], name='X') Y = tf.placeholder(tf.float32, shape=[1, None], name='Y') W = tf.Variable(tf.random_normal([1], dtype=tf.float32, stddev=0.1), name='weight') B = tf.Variable(tf.constant([0], dtype=tf.float32), name='bias') Y_pred = X * W + B def linear(X, n_input, n_output, activation=None, scope=None): with tf.variable_scope(scope or "linear"): W = tf.get_variable( name='W', shape=[n_input, n_output], initializer=tf.random_normal_initializer(mean=0.0, stddev=0.1)) b = tf.get_variable( name='b', shape=[n_output], initializer=tf.constant_initializer()) h = tf.matmul(X, W) + b if activation is not None: h = activation(h) return h # h = linear(X, 2, 10, scope='layer1') # h2 = linear(h, 10, 10, scope='layer2') # h3 = linear(h2, 10, 3, scope='layer3') # [op.name for op in tf.get_default_graph().get_operations()] # train on previous example simple = linear(X,1,1) def distance(p1, p2): return tf.abs(p1 - p2) n_iterations = 100 batch_size = 10 cost = tf.reduce_mean(tf.reduce_sum(distance(Y_pred, Y), 1)) optimizer = tf.train.AdamOptimizer(0.001).minimize(cost) # # optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) sess.run(tf.global_variables_initializer()) # # We now run a loop 
over epochs prev_training_cost = 0.0 for it_i in range(n_iterations): idxs = np.random.permutation(range(len(xs))) n_batches = len(idxs) // batch_size for batch_i in range(n_batches): idxs_i = idxs[batch_i * batch_size: (batch_i + 1) * batch_size] sess.run(optimizer, feed_dict={X: xs[idxs_i], Y: ys[idxs_i]}) # training_cost = sess.run(cost, feed_dict={X: xs, Y: ys}) # print(it_i, training_cost) def train2(X, Y, Y_pred, n_iterations=100, batch_size=50, learning_rate=0.02): # n_iterations = 500 # batch_size = 50 cost = tf.reduce_mean(tf.reduce_sum(distance(Y_pred, Y), 1)) optimizer = tf.train.AdamOptimizer(0.001).minimize(cost) # optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) sess.run(tf.global_variables_initializer()) # We now run a loop over epochs prev_training_cost = 0.0 for it_i in range(n_iterations): idxs = np.random.permutation(range(len(xs))) n_batches = len(idxs) // batch_size for batch_i in range(n_batches): idxs_i = idxs[batch_i * batch_size: (batch_i + 1) * batch_size] sess.run(optimizer, feed_dict={X: xs[idxs_i], Y: ys[idxs_i]}) training_cost = sess.run(cost, feed_dict={X: xs, Y: ys}) print(it_i, training_cost) # if (it_i + 1) % 20 == 0: # ys_pred = Y_pred.eval(feed_dict={X: xs}, session=sess) # fig, ax = plt.subplots(1, 1) # # img = np.clip(ys_pred.reshape(img.shape), 0, 255).astype(np.uint8) # plt.imshow(img) # plt.show() train2(X, Y, Y_pred) ```
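The shuffle-and-slice minibatch loop in `train`/`train2` above does not depend on TensorFlow at all. The same pattern can be written in plain NumPy for a linear model; the sketch below uses squared error (for a clean closed-form gradient) rather than the absolute cost used above, and recovers a known slope and intercept:

```python
import numpy as np

rng = np.random.RandomState(0)
# Synthetic data from y = 2x + 1 plus uniform noise
xs = np.linspace(-3, 3, 1000)
ys = 2.0 * xs + 1.0 + rng.uniform(-0.5, 0.5, 1000)

w, b = 0.0, 0.0
lr, batch_size = 0.05, 50
for epoch in range(100):
    # Shuffle indices, then walk through them one minibatch at a time
    idxs = rng.permutation(len(xs))
    for start in range(0, len(idxs), batch_size):
        batch = idxs[start:start + batch_size]
        x_b, y_b = xs[batch], ys[batch]
        pred = w * x_b + b
        # Gradients of the mean squared error with respect to w and b
        grad_w = 2 * np.mean((pred - y_b) * x_b)
        grad_b = 2 * np.mean(pred - y_b)
        w -= lr * grad_w
        b -= lr * grad_b

print(w, b)  # close to the true slope 2 and intercept 1
```

Using a fresh random permutation every epoch is what makes this stochastic; visiting the data in a fixed order would bias consecutive updates.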
``` from google.colab import drive drive.mount('/content/drive', force_remount=True) cd /content/drive/MyDrive/sop-covid/voice/model_rnn/breath !unzip ../../data_rnn/data_breath.zip import numpy as np import tensorflow as tf import tensorflow.keras as keras import matplotlib.pyplot as plt import pickle import os import sys sys.path.append('..') from utils import * SEED = 1 np.random.seed(SEED) tf.random.set_seed(SEED) train_X = np.load(os.path.join('data_breath', 'train_X.npy')) train_y = np.load(os.path.join('data_breath', 'train_y.npy')) valid_X = np.load(os.path.join('data_breath', 'valid_X.npy')) valid_y = np.load(os.path.join('data_breath', 'valid_y.npy')) test_X = np.load(os.path.join('data_breath', 'test_X.npy')) test_y = np.load(os.path.join('data_breath', 'test_y.npy')) sc = Scaler() sc.fit(train_X, (0, 1)) train_X_n = sc.transform(train_X, 'normalize') train_X_s = sc.transform(train_X, 'standardize') valid_X_n = sc.transform(valid_X, 'normalize') valid_X_s = sc.transform(valid_X, 'standardize') test_X_n = sc.transform(test_X, 'normalize') test_X_s = sc.transform(test_X, 'standardize') # Undersampling the majority class to make the distribution 50:50. 
train_X_n_under = train_X_n[:int(2 * train_y.sum())] train_y_under = train_y[:int(2 * train_y.sum())] valid_X_n_under = valid_X_n[:int(2 * valid_y.sum())] valid_y_under = valid_y[:int(2 * valid_y.sum())] test_X_n_under = test_X_n[:int(2 * test_y.sum())] test_y_under = test_y[:int(2 * test_y.sum())] # Hyperparameters learning_rate = 1e-3 epochs = 100 batch_size = 256 # Callback for early stopping es_callback = keras.callbacks.EarlyStopping( monitor='val_loss', min_delta=0.0001, patience=5, restore_best_weights=True ) # Callback for reducing learning rate on loss plateauing plateau_callback = keras.callbacks.ReduceLROnPlateau( monitor='val_loss', factor=0.1, patience=5, min_delta=0.0001 ) metrics = [ keras.metrics.BinaryAccuracy(name='acc'), keras.metrics.Precision(name='precision'), keras.metrics.Recall(name='recall'), keras.metrics.AUC(name='auc'), keras.metrics.TruePositives(name='tp'), keras.metrics.FalsePositives(name='fp'), keras.metrics.TrueNegatives(name='tn'), keras.metrics.FalseNegatives(name='fn') ] model = keras.Sequential([ keras.layers.LSTM(32, activation='tanh', return_sequences=True, input_shape=train_X.shape[1:]), keras.layers.LSTM(32, activation='tanh', return_sequences=False), keras.layers.Dense(32, activation='relu'), keras.layers.Dense(1, activation='sigmoid') ]) model.compile( optimizer=keras.optimizers.Adam(lr=learning_rate), loss='binary_crossentropy', metrics=metrics ) # %%script echo "Comment line with %%script echo to run this cell." history = model.fit( train_X_n_under, train_y_under, epochs=epochs, batch_size=batch_size, validation_data=(valid_X_n_under, valid_y_under), callbacks=[es_callback, plateau_callback], shuffle=True ) # %%script echo "Comment line with %%script echo to run this cell." model.save('lstm.h5') # %%script echo "Comment line with %%script echo to run this cell." 
with open('lstm_history.pickle', 'wb') as f: pickle.dump(history.history, f) model = keras.models.load_model('lstm.h5') with open('lstm_history.pickle', 'rb') as f: history = pickle.load(f) model.predict(test_X_n_under) model.evaluate(test_X_n_under, test_y_under) plt.plot( np.arange(1, len(history['loss']) + 1), history['loss'], color='b', label='Training' ) plt.plot( np.arange(1, len(history['val_loss']) + 1), history['val_loss'], color='r', label='Validation' ) plt.xlabel('Epoch') plt.ylabel('Loss') plt.xticks(np.arange(0, len(history['loss']) + 1, 5)) plt.legend() plt.plot( np.arange(1, len(history['acc']) + 1), history['acc'], color='b', label='Training' ) plt.plot( np.arange(1, len(history['val_acc']) + 1), history['val_acc'], color='r', label='Validation' ) plt.xlabel('Epoch') plt.ylabel('Accuracy') plt.xticks(np.arange(0, len(history['loss']) + 1, 5)) plt.legend() ```
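On the undersampling step near the top of this notebook: slicing the first `2 * y.sum()` rows only yields a 50:50 split if the arrays happen to be ordered accordingly. A more explicit way to get the balanced split the comment describes is sketched below (the `undersample` helper is hypothetical, not part of the notebook's `utils`):

```python
import numpy as np

def undersample(X, y, seed=1):
    # Keep every positive example and an equal-sized random subset of
    # negatives, so the returned class distribution is exactly 50:50.
    rng = np.random.RandomState(seed)
    pos = np.where(y == 1)[0]
    neg = np.where(y == 0)[0]
    keep = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=False)])
    keep = rng.permutation(keep)  # shuffle so classes are interleaved
    return X[keep], y[keep]

X = np.arange(20).reshape(10, 2)
y = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1])
X_bal, y_bal = undersample(X, y)
print(y_bal.mean())  # 0.5
```

Sampling negatives without replacement keeps every returned row unique, and shuffling at the end avoids feeding the model one class followed by the other.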
ERROR: type should be string, got "https://www.testdome.com/questions/python/two-sum/14289?questionIds=14288,14289&generatorId=92&type=fromtest&testDifficulty=Easy\n\nWrite a function that, given a list and a target sum, returns zero-based indices of any two distinct elements whose sum is equal to the target sum. If there are no such elements, the function should return (-1, -1).\n\nFor example, `find_two_sum([1, 3, 5, 7, 9], 12)` should return a tuple containing any of the following pairs of indices:\n```\n1 and 4 (3 + 9 = 12)\n2 and 3 (5 + 7 = 12)\n3 and 2 (7 + 5 = 12)\n4 and 1 (9 + 3 = 12)\n```\n\n```\n# Это единственный комментарий который имеет смысл\n# I s\ndef find_index(m,a):\n try:\n return a.index(m)\n except :\n return -1\n \n \ndef find_two_sum(a, s):\n '''\n >>> (3, 5) == find_two_sum([1, 3, 5, 7, 9], 12)\n True\n '''\n if len(a)<2: \n return (-1,-1)\n\n idx = dict( (v,i) for i,v in enumerate(a) )\n\n for i in a:\n m = s - i\n k = idx.get(m,-1)\n if k != -1 :\n return (i,k)\n\n return (-1, -1)\n\n\nprint(find_two_sum([1, 3, 5, 7, 9], 12))\n\n\nif __name__ == '__main__':\n import doctest; doctest.testmod()\n```\n\nhttps://stackoverflow.com/questions/28309430/edit-ipython-cell-in-an-external-editor\n\n\nEdit IPython cell in an external editor\n---\n\nThis is what I came up with. 
I added 2 shortcuts:\n\n- 'g' to launch gvim with the content of the current cell (you can replace gvim with whatever text editor you like).\n- 'u' to update the content of the current cell with what was saved by gvim.\nSo, when you want to edit the cell with your preferred editor, hit 'g', make the changes you want to the cell, save the file in your editor (and quit), then hit 'u'.\n\nJust execute this cell to enable these features:\n\n```\n%%javascript\n\nIPython.keyboard_manager.command_shortcuts.add_shortcut('g', {\n handler : function (event) {\n \n var input = IPython.notebook.get_selected_cell().get_text();\n \n var cmd = \"f = open('.toto.py', 'w');f.close()\";\n if (input != \"\") {\n cmd = '%%writefile .toto.py\\n' + input;\n }\n IPython.notebook.kernel.execute(cmd);\n //cmd = \"import os;os.system('open -a /Applications/MacVim.app .toto.py')\";\n //cmd = \"!open -a /Applications/MacVim.app .toto.py\";\n cmd = \"!code .toto.py\";\n\n IPython.notebook.kernel.execute(cmd);\n return false;\n }}\n);\n\nIPython.keyboard_manager.command_shortcuts.add_shortcut('u', {\n handler : function (event) {\n function handle_output(msg) {\n var ret = msg.content.text;\n IPython.notebook.get_selected_cell().set_text(ret);\n }\n var callback = {'output': handle_output};\n var cmd = \"f = open('.toto.py', 'r');print(f.read())\";\n IPython.notebook.kernel.execute(cmd, {iopub: callback}, {silent: false});\n return false;\n }}\n);\n# v=getattr(a, 'pop')(1)\ns='print 4 7 '\ncommands={\n 'print':print,\n 'len':len\n }\n\n\ndef exec_string(s):\n global commands\n chunks=s.split()\n func_name=chunks[0] if len(chunks) else 'blbl'\n func=commands.get(func_name,None)\n \n params=[int(x) for x in chunks[1:]]\n if func:\n func(*params)\n\nexec_string(s)\n```\n\n# Symmetric Difference\n\nhttps://www.hackerrank.com/challenges/symmetric-difference/problem\n\n#### Task \nGiven sets of integers, and , print their symmetric difference in ascending order. 
The term symmetric difference indicates those values that exist in either or but do not exist in both.\n\n#### Input Format\n\nThe first line of input contains an integer, . \nThe second line contains space-separated integers. \nThe third line contains an integer, . \nThe fourth line contains space-separated integers.\n\n##### Output Format\n\nOutput the symmetric difference integers in ascending order, one per line.\n\n#### Sample Input\n````\n4\n2 4 5 9\n4\n2 4 11 12\n````\n##### Sample Output\n````\n5\n9\n11\n12\n````\n\n```\nM = int(input())\nm =set((map(int,input().split())))\nN = int(input())\nn =set((map(int,input().split())))\nm ^ n\nS='add 5 6'\nmethod, *args = S.split()\nprint(method)\nprint(*map(int,args))\nmethod,(*map(int,args))\n\n# methods\n# (*map(int,args))\n\n# command='add'.split()\n# method, args = command[0], list(map(int,command[1:]))\n# method, args\nfor _ in range(2):\n met, *args = input().split()\n print(met, args)\n try:\n pass\n\n# methods[met](*list(map(int,args)))\n except:\n pass\nclass Stack:\n def __init__(self):\n self.data = []\n\n def is_empty(self):\n return self.data == []\n\n def size(self):\n return len(self.data)\n\n def push(self, val):\n self.data.append(val)\n\n def clear(self):\n self.data.clear()\n \n def pop(self):\n return self.data.pop()\n\n def __repr__(self):\n return \"Stack(\"+str(self.data)+\")\"\ndef sum_list(ls):\n if len(ls)==0:\n return 0\n elif len(ls)==1:\n return ls[0]\n else:\n return ls[0] + sum_list(ls[1:])\n\ndef max_list(ls):\n print(ls)\n if len(ls)==0:\n return None\n elif len(ls)==1:\n return ls[0]\n else:\n\n m = max_list(ls[1:])\n return ls[0] if ls[0]>m else m\n \ndef reverse_list(ls):\n if len(ls)<2:\n return ls\n \n return reverse_list(ls[1:])+ls[0:1]\n\n\ndef is_ana(s=''):\n if len(s)<2:\n return True\n return s[0]==s[-1] and is_ana(s[1:len(s)-1])\n \n \n \nprint(is_ana(\"abc\"))\nimport turtle\n\nmyTurtle = turtle.Turtle()\nmyWin = turtle.Screen()\n\ndef drawSpiral(myTurtle, lineLen):\n if 
lineLen > 0:\n myTurtle.forward(lineLen)\n myTurtle.right(90)\n drawSpiral(myTurtle,lineLen-5)\n\ndrawSpiral(myTurtle,100)\n# myWin.exitonclick()\nt.forward(100)\nfrom itertools import combinations_with_replacement\nlist(combinations_with_replacement([1,1,3,3,3],2))\nhash((1,2))\n# 4 \n# a a c d\n# 2\n\n\nfrom itertools import combinations\n\n# N=int(input())\n# s=input().split()\n# k=int(input())\n\ns='a a c d'.split()\nk=2\n\n\ncombs=list(combinations(s,k))\n\n\nprint('{:.4f}'.format(len([x for x in combs if 'a' in x])/len(combs)))\n\n# ------------------------------------------\n\nimport random\n\nnum_trials=10000\nnum_found=0\n\nfor i in range(num_trials):\n if 'a' in random.sample(s,k):\n num_found+=1\n \n\n\nprint('{:.4f}'.format(num_found/num_trials))\ndir(5)\n```\n\n"
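The brute-force count and the Monte Carlo estimate above can be cross-checked analytically; a small sketch using `math.comb` (Python 3.8+) on the same toy data:

```python
from math import comb

# Toy data from the cell above: the letters and how many we draw.
letters = ['a', 'a', 'c', 'd']
k = 2
n_a = letters.count('a')

# P(at least one 'a') = 1 - P(no 'a') = 1 - C(n - n_a, k) / C(n, k)
p = 1 - comb(len(letters) - n_a, k) / comb(len(letters), k)
print('{:.4f}'.format(p))  # -> 0.8333
```

All three approaches (exhaustive enumeration, simulation, and the closed form) should agree on 5/6 ≈ 0.8333, up to sampling noise in the simulation.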
```
!pip install unidecode googletrans
!pip install squarify

import re
import time
import tweepy
import folium
import squarify
import warnings
import collections
import numpy as np
import pandas as pd
from PIL import Image
from folium import plugins
from datetime import datetime
from textblob import TextBlob
import matplotlib.pyplot as plt
from unidecode import unidecode
from googletrans import Translator
from geopy.geocoders import Nominatim
from wordcloud import WordCloud, STOPWORDS

# Add your Twitter API credentials here
CONSUMER_KEY = YOUR_CONSUMER_KEY
CONSUMER_SECRET = YOUR_CONSUMER_SECRET
ACCESS_TOKEN = YOUR_ACCESS_TOKEN
ACCESS_TOKEN_SECRET = YOUR_ACCESS_TOKEN_SECRET
```

# Implementing the class that fetches the tweets

```
class TweetAnalyzer():

    def __init__(self, consumer_key, consumer_secret, access_token, access_token_secret):
        '''
        Connect with tweepy.
        '''
        auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
        auth.set_access_token(access_token, access_token_secret)

        self.conToken = tweepy.API(auth,
                                   wait_on_rate_limit=True,
                                   wait_on_rate_limit_notify=True,
                                   retry_count=5,
                                   retry_delay=10)

    def __clean_tweet(self, tweets_text):
        '''
        Tweet cleansing.
        '''
        clean_text = re.sub(r'RT+', '', tweets_text)
        clean_text = re.sub(r'@\S+', '', clean_text)
        clean_text = re.sub(r'http\S+', '', clean_text)
        clean_text = clean_text.replace("\n", " ")

        return clean_text

    def search_by_keyword(self, keyword, count=10, result_type='mixed', lang='en', tweet_mode='extended'):
        '''
        Search for tweets that mention the keyword subject.
        '''
        tweets_iter = tweepy.Cursor(self.conToken.search,
                                    q=keyword,
                                    tweet_mode=tweet_mode,
                                    rpp=count,
                                    result_type=result_type,
                                    since=datetime(2020, 7, 31, 0, 0, 0).date(),
                                    lang=lang,
                                    include_entities=True).items(count)

        return tweets_iter

    def prepare_tweets_list(self, tweets_iter):
        '''
        Transform the data into a DataFrame.
        '''
        tweets_data_list = []
        for tweet in tweets_iter:
            if not 'retweeted_status' in dir(tweet):
                tweet_text = self.__clean_tweet(tweet.full_text)
                tweets_data = {
                    'len': len(tweet_text),
                    'ID': tweet.id,
                    'User': tweet.user.screen_name,
                    'UserName': tweet.user.name,
                    'UserLocation': tweet.user.location,
                    'TweetText': tweet_text,
                    'Language': tweet.user.lang,
                    'Date': tweet.created_at,
                    'Source': tweet.source,
                    'Likes': tweet.favorite_count,
                    'Retweets': tweet.retweet_count,
                    'Coordinates': tweet.coordinates,
                    'Place': tweet.place
                }
                tweets_data_list.append(tweets_data)

        return tweets_data_list

    def sentiment_polarity(self, tweets_text_list):
        tweets_sentiments_list = []

        for tweet in tweets_text_list:
            polarity = TextBlob(tweet).sentiment.polarity

            if polarity > 0:
                tweets_sentiments_list.append('Positive')
            elif polarity < 0:
                tweets_sentiments_list.append('Negative')
            else:
                tweets_sentiments_list.append('Neutral')

        return tweets_sentiments_list

analyzer = TweetAnalyzer(consumer_key=CONSUMER_KEY, consumer_secret=CONSUMER_SECRET,
                         access_token=ACCESS_TOKEN, access_token_secret=ACCESS_TOKEN_SECRET)

keyword = ("'Black is King' OR 'black is king' OR 'Beyonce' OR 'beyonce' OR #blackisking OR '#BlackIsKing' OR 'black is king beyonce'")
count = 5000

tweets_iter = analyzer.search_by_keyword(keyword, count)
tweets_list = analyzer.prepare_tweets_list(tweets_iter)
tweets_df = pd.DataFrame(tweets_list)
```

# Analyses

## Which tweet was liked and retweeted the most?

```
likes_max = np.max(tweets_df['Likes'])
likes = tweets_df[tweets_df.Likes == likes_max].index[0]

print(f"The most-liked tweet is: {tweets_df['TweetText'][likes]}")
print(f"Number of likes: {likes_max}")

retweet_max = np.max(tweets_df['Retweets'])
retweet = tweets_df[tweets_df.Retweets == retweet_max].index[0]

print(f"The most-retweeted tweet is: {tweets_df['TweetText'][retweet]}")
print(f"Number of retweets: {retweet_max}")
```

## What percentage of each sentiment was captured?

```
tweets_df['Sentiment'] = analyzer.sentiment_polarity(tweets_df['TweetText'])

sentiment_percentage = tweets_df.groupby('Sentiment')['ID'].count().apply(lambda x: 100 * x / count)

sentiment_percentage.plot(kind='bar')
# Save before plt.show(): show() releases the current figure, so saving
# afterwards writes out a blank image.
plt.savefig('sentiments_tweets.png', bbox_inches='tight', pad_inches=0.5)
plt.show()
```

## Which words are used the most?

```
words = ' '.join(tweets_df['TweetText'])
words_clean = " ".join([word for word in words.split()])

warnings.simplefilter('ignore')

mask = np.array(Image.open('crown.png'))
wc = WordCloud(stopwords=STOPWORDS,
               mask=mask, max_words=1000, max_font_size=100,
               min_font_size=10, random_state=42,
               background_color='white', mode="RGB",
               width=mask.shape[1], height=mask.shape[0],
               normalize_plurals=True).generate(words_clean)

plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.savefig('black_is_king_cloud.png', dpi=300)
plt.show()
```

## Which tweet sources are used the most?

```
# Count how many tweets came from each source
source_list = tweets_df['Source'].tolist()
occurrences = collections.Counter(source_list)
source_df = pd.DataFrame({'Total': list(occurrences.values())}, index=occurrences.keys())
sources_sorted = source_df.sort_values('Total', ascending=True)

# Plot the chart
plt.style.use('ggplot')
plt.rcParams['axes.edgecolor'] = '#333F4B'
plt.rcParams['axes.linewidth'] = 0.8
plt.rcParams['xtick.color'] = '#333F4B'
plt.rcParams['ytick.color'] = '#333F4B'

my_range = list(range(1, len(sources_sorted.index) + 1))

ax = sources_sorted.Total.plot(kind='barh', color='#1f77b4', alpha=0.8, linewidth=5, figsize=(15, 15))
ax.get_xaxis().set_major_formatter(plt.FuncFormatter(lambda x, loc: "{:,}".format(int(x))))
plt.savefig('source_tweets.png', bbox_inches='tight', pad_inches=0.5)

# Distribution of the top 5 most used sources
squarify.plot(sizes=sources_sorted['Total'][:5], label=sources_sorted.index, alpha=.5)
plt.axis('off')
plt.show()
```

## Which regions did the tweets come from?

```
geolocator = Nominatim(user_agent="TweeterSentiments")

latitude = []
longitude = []

for user_location in tweets_df['UserLocation']:
    try:
        location = geolocator.geocode(user_location)
        latitude.append(location.latitude)
        longitude.append(location.longitude)
    except:
        continue

coordenadas = np.column_stack((latitude, longitude))

mapa = folium.Map(zoom_start=3.)
mapa.add_child(plugins.HeatMap(coordenadas))
mapa.save('Mapa_calor_tweets.html')
mapa
```

## Temporal analysis of the tweets

```
data = tweets_df

data['Date'] = pd.to_datetime(data['Date']).apply(lambda x: x.date())

tlen = pd.Series(data['Date'].value_counts(), index=data['Date'])
tlen.plot(figsize=(16, 4), color='b')
plt.savefig('timeline_tweets.png', bbox_inches='tight', pad_inches=0.5)
```
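The per-day counts driving the timeline plot can be reproduced without pandas; a minimal sketch with `collections.Counter` over stand-in dates (the dates here are made up for illustration):

```python
from collections import Counter
from datetime import date

# Stand-in for tweets_df['Date'] after the .date() conversion above.
dates = [date(2020, 7, 31), date(2020, 7, 31), date(2020, 8, 1),
         date(2020, 8, 1), date(2020, 8, 1), date(2020, 8, 2)]

counts = Counter(dates)  # tweets per day
for day, n in sorted(counts.items()):
    print(day.isoformat(), n)
# 2020-07-31 2
# 2020-08-01 3
# 2020-08-02 1
```

`pd.Series.value_counts()` does the same tally (sorted by count rather than by date), which is why re-indexing by `data['Date']` is needed before plotting the series chronologically.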
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math

# Define functions

# Scale values
def scale(arr):
    return (np.array(arr) - np.mean(arr)) / np.std(arr)

# Find the slope of the best fitting line
def fit_slope(x, y):
    return (np.mean(x) * np.mean(y) - np.mean(x * y)) / (np.mean(x)**2 - np.mean(x**2))

# Rotate point counterclockwise
def rotate(origin, point, angle):
    """
    Rotate a point counterclockwise by a given angle around a given origin.

    The angle should be given in radians.
    """
    ox, oy = origin
    px, py = point

    qx = ox + math.cos(angle) * (px - ox) - math.sin(angle) * (py - oy)
    qy = oy + math.sin(angle) * (px - ox) + math.cos(angle) * (py - oy)
    return qx, qy

# Calculate variation
def variation(array, mean=0):
    return np.sum((mean - array) ** 2) / (array.shape[0] - 1)

# Find the coordinates of the intersection between 2 lines
def lines_intersection(coefficients1, coefficients2):
    coefficients1, coefficients2 = np.copy(coefficients1), np.copy(coefficients2)
    if coefficients1.shape[0] < coefficients2.shape[0]:
        coefficients1 = np.pad(coefficients1, (0, coefficients2.shape[0] - coefficients1.shape[0]),
                               'constant', constant_values=0)
        bias1 = 0
        bias2 = coefficients2[-1]
    elif coefficients1.shape[0] > coefficients2.shape[0]:
        coefficients2 = np.pad(coefficients2, (0, coefficients1.shape[0] - coefficients2.shape[0]),
                               'constant', constant_values=0)
        bias2 = 0
        bias1 = coefficients1[-1]
    else:
        bias1 = coefficients1[-1]
        bias2 = coefficients2[-1]

    bias_sum = bias2 - bias1
    coefficients1 = coefficients1[:-1]
    coefficients2 = coefficients2[:-1]

    total = 0
    for i in range(coefficients1.shape[0]):
        total += coefficients1[i] + (-1 * coefficients2[i])

    # No intersection
    if total == 0:
        return None

    x = (1 / total) * bias_sum
    y = [x * coefficients1 + bias1][0][0]
    return [x, y]

# Find a perpendicular line (can be shifted to pass through some point (origin))
def find_perpendicular_line(coefficients, origin=None):
    coefficients = np.copy(coefficients)
    bias = coefficients[-1]
    slopes = coefficients[:-1]
    slopes = -1 * np.reciprocal(slopes, dtype='float')
    coefficients[:-1] = slopes

    if origin is None:
        origin = np.zeros(coefficients.shape[0] - 1)

    bias += np.sum(slopes * -1 * origin[:-1]) + origin[-1]
    coefficients[-1] = bias
    return coefficients

def project_points_onto_line(x, y, coefficients):
    if x.shape[0] != y.shape[0]:
        return None

    projections_x = np.zeros(x.shape[0])
    projections_y = np.zeros(x.shape[0])

    for i in range(x.shape[0]):
        # pr is the perpendicular line through the point
        pr_slope, pr_b = find_perpendicular_line(coefficients, np.array([x[i], y[i]]))
        inter_x, inter_y = lines_intersection(coefficients, np.array([pr_slope, pr_b]))
        projections_x[i] = inter_x
        projections_y[i] = inter_y

    return projections_x, projections_y

# Original data
x1 = np.array([10, 11, 8, 3, 2, 1], dtype='float')
x2 = np.array([6, 4, 5, 3, 2.8, 1], dtype='float')

# Visualize
plt.scatter(x1, x2)
plt.axis('equal')
plt.axvline(0)
plt.axhline(0)
plt.title("Original data")

# Shift the data so it is centered on the origin
x1_avg_pt = np.average(x1)
x2_avg_pt = np.average(x2)
x1 -= x1_avg_pt
x2 -= x2_avg_pt

# Visualize shifted data
plt.scatter(x1, x2)
plt.axis('equal')
plt.axvline(0)
plt.axhline(0)
plt.title("Original data shifted to 0 as the new origin")

# Find the best fitting line through the origin
pc1_x = np.arange(np.min(x1), np.max(x1) + 1)
pc1_slope = fit_slope(x1, x2)
pc1_y = pc1_slope * pc1_x + 0

# Visualize best fitting line (PC1)
plt.scatter(x1, x2)
plt.plot(pc1_x, pc1_y)
plt.axis('equal')
plt.axvline(0)
plt.axhline(0)
plt.title("PC1")

# Find PC2 (perpendicular to PC1)
pc2_x = np.arange(np.min(x2), np.max(x2) + 1)
pc2_slope = -1 * (1 / pc1_slope)
pc2_y = pc2_slope * pc2_x

# # Scale values using the Pythagorean theorem
# a = 1/pc1_slope
# b = 1
# c = np.sqrt([a**2 + b**2])
#
# x1, x2, pc1_x, pc1_y, pc2_y, pc2_x = x1/c, x2/c, pc1_x/c, pc1_y/c, pc2_y/c, pc2_x/c

# Visualize PC1 & PC2
plt.scatter(x1, x2)
plt.plot(pc1_x, pc1_y)
plt.plot(pc2_x, pc2_y, color='red')
plt.axis('equal')
plt.axvline(0)
plt.axhline(0)
plt.title("PC1 (blue) & PC2 (red)")

# Project the data points onto PC1
projections_pc1_x, projections_pc1_y = project_points_onto_line(x1, x2, np.array([pc1_slope, 0]))

# Visualize projections of data points on PC1
plt.scatter(x1, x2)
plt.scatter(projections_pc1_x, projections_pc1_y)
plt.plot(pc1_x, pc1_y, color='red')
plt.axis('equal')
plt.axvline(0)
plt.axhline(0)
plt.title("Projection of data points onto PC1")

# Project the data points onto PC2
projections_pc2_x, projections_pc2_y = project_points_onto_line(x1, x2, np.array([pc2_slope, 0]))

# Visualize projections of data points onto PC2
plt.scatter(x1, x2)
plt.scatter(projections_pc2_x, projections_pc2_y)
plt.plot(pc2_x, pc2_y, color='red')
plt.axis('equal')
plt.axvline(0)
plt.axhline(0)
plt.title("Projection of data points onto PC2")

# Visualize original data & projections
plt.scatter(x1, x2)
plt.scatter(projections_pc1_x, projections_pc1_y, color='red')
plt.scatter(projections_pc2_x, projections_pc2_y, color='green')
plt.plot(pc1_x, pc1_y, color='red')
plt.plot(pc2_x, pc2_y, color='green')
plt.axis('equal')
plt.axvline(0)
plt.axhline(0)
plt.title("Original data & their projections on PC1 and PC2")

# Rotate all data points so PC1 & PC2 line up with the X and Y axes
degrees = math.atan(pc1_slope) * -1
origin = [0, 0]

[x1, x2] = rotate(origin, [x1, x2], degrees)
[pc1_x, pc1_y] = rotate(origin, [pc1_x, pc1_y], degrees)
[pc2_x, pc2_y] = rotate(origin, [pc2_x, pc2_y], degrees)
[projections_pc1_x, projections_pc1_y] = rotate(origin, [projections_pc1_x, projections_pc1_y], degrees)
[projections_pc2_x, projections_pc2_y] = rotate(origin, [projections_pc2_x, projections_pc2_y], degrees)

# Visualize projections of data after rotation
plt.scatter(projections_pc1_x, projections_pc1_y)
plt.scatter(projections_pc2_x, projections_pc2_y)
plt.axis('equal')
plt.axvline(0)
plt.axhline(0)
plt.title("Projections of data after rotation")

# Visualize the corresponding projections
plt.scatter(projections_pc1_x, projections_pc2_y)
plt.axis('equal')
plt.axvline(0)
plt.axhline(0)
plt.title("Corresponding Projections of PC1 & PC2")

# Calculate the variation along PC1 and PC2 and their sum
pc1_variation = variation(projections_pc1_x, mean=0)
pc2_variation = variation(projections_pc2_y, mean=0)
total_variation = pc1_variation + pc2_variation

# Calculate the percentage of variation explained by PC1 and PC2
pc1_percentage = pc1_variation / total_variation * 100
pc2_percentage = pc2_variation / total_variation * 100  # yeah I know it's 100 - pc1_percentage :)

# Visualize percentages
plt.bar(["PC1", "PC2"], [pc1_percentage, pc2_percentage], width=0.25)
plt.title("PC1 and PC2 Percentages")

print(f"PC1 Variation: {pc1_variation}\t PC1 Percentage: {pc1_percentage}")
print(f"PC2 Variation: {pc2_variation}\t PC2 Percentage: {pc2_percentage}")
```
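The projection-and-rotation construction above should agree with the standard route to PCA: eigendecomposition of the covariance matrix. A small pure-Python cross-check on the same toy data, using the closed-form eigenvalues of a symmetric 2×2 matrix (a sketch for verification, not a general PCA implementation):

```python
import math

# Same toy data as the notebook above.
x1 = [10, 11, 8, 3, 2, 1]
x2 = [6, 4, 5, 3, 2.8, 1]
n = len(x1)

# Center each variable.
m1, m2 = sum(x1) / n, sum(x2) / n
a = [v - m1 for v in x1]
b = [v - m2 for v in x2]

# Sample covariance matrix [[s11, s12], [s12, s22]].
s11 = sum(v * v for v in a) / (n - 1)
s22 = sum(v * v for v in b) / (n - 1)
s12 = sum(u * v for u, v in zip(a, b)) / (n - 1)

# Closed-form eigenvalues of a symmetric 2x2 matrix: these are the
# variances along PC1 and PC2.
tr, det = s11 + s22, s11 * s22 - s12 * s12
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2

pc1_pct = 100 * lam1 / (lam1 + lam2)
pc2_pct = 100 * lam2 / (lam1 + lam2)
print(round(pc1_pct, 2), round(pc2_pct, 2))
```

If the geometric construction is right, `pc1_pct`/`pc2_pct` here should match the `pc1_percentage`/`pc2_percentage` printed by the notebook (up to the least-squares fit not being exactly the PC1 direction).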
# Document Classification & Clustering - Lecture

What could we do with the document-term matrices (DTMs) created in the previous notebook? We could visualize them or train an algorithm to do some specific task. We have covered both classification and clustering before, so we won't focus on the particulars of the algorithms. Instead we'll focus on the unique problems of dealing with text input for these models.

## Contents
* [Part 1](#p1): Vectorize a whole corpus
* [Part 2](#p2): Tune the vectorizer
* [Part 3](#p3): Apply the vectorizer to a classification problem
* [Part 4](#p4): Introduce topic modeling on text data

**Business Case**: Your managers at Smartphone Inc. have asked you to develop a system that buckets text messages into two categories: **spam** and **not spam (ham)**. The system will be implemented on your company's products to help users identify suspicious texts.

# Spam Filter - Count Vectorization Method

```
import pandas as pd
import numpy as np

pd.set_option('display.max_colwidth', 200)
```

**Import the data and take a look at it**

```
def load():
    url = "https://raw.githubusercontent.com/sokjc/BayesNotBaes/master/sms.tsv"
    df = pd.read_csv(url, sep='\t', header=None, names=['label', 'msg'])
    df = df.rename(columns={"msg": "text"})

    # encode target
    df['label_num'] = df['label'].map({'ham': 0, 'spam': 1})
    return df

pd.set_option('display.max_colwidth', 200)
df = load()
df.tail()
```

Notice that this text isn't as coherent as the job listings. We'll proceed like normal though.

What is the ratio of spam to ham messages?

```
df['label'].value_counts()
df['label'].value_counts(normalize=True)
```

**Model Validation - Train Test Split** (cross-validation would be better here)

```
from sklearn.model_selection import train_test_split

X = df['text']
y = df['label_num']

X_train, X_test, y_train, y_test = \
    train_test_split(X, y, test_size=0.2, random_state=812)

print(X_train.shape, X_test.shape, y_train.shape, y_test.shape, sep='\n')
```

**Count Vectorizer**

Today we're just going to let scikit-learn do our text cleaning and preprocessing for us. Let's run our vectorizer on our text messages and take a peek at the tokenization of the vocabulary.

```
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(max_features=None, ngram_range=(1, 1), stop_words='english')
vectorizer.fit(X_train)

print(vectorizer.get_feature_names()[300:325])
```

Now we'll complete the vectorization with `.transform()`

```
train_word_counts = vectorizer.transform(X_train)

# not necessary to save to a dataframe, but helpful for previewing
X_train_vectorized = pd.DataFrame(train_word_counts.toarray(), columns=vectorizer.get_feature_names())
print(X_train_vectorized.shape)
X_train_vectorized.head()
```

We also need to vectorize our `X_test` data, but **we need to use the same vocabulary as the training dataset**, so we'll just call `.transform()` on `X_test` to get our `X_test_vectorized`

```
test_word_counts = vectorizer.transform(X_test)

X_test_vectorized = pd.DataFrame(test_word_counts.toarray(), columns=vectorizer.get_feature_names())
print(X_test_vectorized.shape)
X_test_vectorized.head()
```

Let's run some classification models and see what kind of accuracy we can get!
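What `CountVectorizer` does under the hood can be sketched in a few lines of plain Python: "fit" builds a vocabulary from the training texts, "transform" counts occurrences per document. A toy illustration (not the scikit-learn implementation, and with made-up messages):

```python
# Toy corpus standing in for X_train.
docs = ['free entry win prize', 'see you at lunch', 'win a free prize now']

# "fit": collect a sorted vocabulary from the training documents.
vocab = sorted({word for doc in docs for word in doc.split()})

# "transform": one count vector per document, aligned with vocab.
def to_counts(doc):
    words = doc.split()
    return [words.count(term) for term in vocab]

dtm = [to_counts(doc) for doc in docs]
print(vocab)    # ['a', 'at', 'entry', 'free', 'lunch', 'now', 'prize', 'see', 'win', 'you']
print(dtm[0])   # [0, 0, 1, 1, 0, 0, 1, 0, 1, 0]
```

This is also why the test set must be transformed with the *fitted* vectorizer: a vocabulary rebuilt on `X_test` would produce columns that no longer line up with the ones the model was trained on.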
# Model Selection

```
from sklearn.metrics import accuracy_score

def assess_model(model, X_train, X_test, y_train, y_test, vect_type='Count'):
    model.fit(X_train, y_train)

    train_predictions = model.predict(X_train)
    test_predictions = model.predict(X_test)

    result = {}
    result['model'] = str(model).split('(')[0]
    result['acc_train'] = accuracy_score(y_train, train_predictions)
    result['acc_test'] = accuracy_score(y_test, test_predictions)
    result['vect_type'] = vect_type
    print(result)
    return result

from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB  # Multinomial Naive Bayes
from sklearn.ensemble import RandomForestClassifier

models = [LogisticRegression(random_state=42, solver='lbfgs'),
          MultinomialNB(),
          RandomForestClassifier()]

results = []
for model in models:
    result = assess_model(
        model, X_train_vectorized, X_test_vectorized, y_train, y_test)
    results.append(result)

pd.DataFrame.from_records(results)
```

# Spam Filter - TF-IDF Vectorization Method

```
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(
    max_features=None, ngram_range=(1, 1), stop_words='english')

# fit to train
vectorizer.fit(X_train)
print(vectorizer)

# apply to train
train_word_counts = vectorizer.transform(X_train)
X_train_vectorized = pd.DataFrame(train_word_counts.toarray(), columns=vectorizer.get_feature_names())
print(X_train_vectorized.shape)
X_train_vectorized.head()

# apply to test
test_word_counts = vectorizer.transform(X_test)
X_test_vectorized = pd.DataFrame(test_word_counts.toarray(), columns=vectorizer.get_feature_names())
print(X_test_vectorized.shape)
X_test_vectorized.head()

models = [LogisticRegression(random_state=42, solver='lbfgs'),
          MultinomialNB(),
          RandomForestClassifier()]

for model in models:
    result = assess_model(
        model, X_train_vectorized, X_test_vectorized, y_train, y_test, vect_type='Tfidf')
    results.append(result)

pd.DataFrame.from_records(results)
```

# Sentiment Analysis

The objective of **sentiment analysis** is to take a text phrase and determine whether its sentiment is positive, neutral, or negative.

Suppose that you wanted to use NLP to classify reviews for your company's products as either positive, neutral, or negative. Maybe you don't trust the star ratings left by the users and you want an additional measure of sentiment from each review - maybe you would use this as a feature-generation technique for additional modeling, or to identify disgruntled customers and reach out to them to improve your customer service, etc.

Sentiment analysis has also been used heavily in stock market price estimation, by trying to track the sentiment of individuals' tweets after breaking news comes out about a company.

Does every word in each review contribute to its overall sentiment? Not really. Stop words, for example, don't really tell us much about the overall sentiment of the text, so just like we did before, we will discard them.

### NLTK Movie Review Sentiment Analysis

`pip install -U nltk`

```
import random
import nltk

def load_movie_reviews():
    from nltk.corpus import movie_reviews
    nltk.download('movie_reviews')
    nltk.download('stopwords')

    print("Total reviews:", len(movie_reviews.fileids()))
    print("Positive reviews:", len(movie_reviews.fileids('pos')))
    print("Negative reviews:", len(movie_reviews.fileids('neg')))

    # Get reviews and randomize
    reviews = [(list(movie_reviews.words(fileid)), category)
               for category in movie_reviews.categories()
               for fileid in movie_reviews.fileids(category)]
    random.shuffle(reviews)

    documents = []
    sentiments = []

    for review in reviews:
        # Add sentiment to list
        if review[1] == "pos":
            sentiments.append(1)
        else:
            sentiments.append(0)

        # Add text to list
        review_text = " ".join(review[0])
        documents.append(review_text)

    df = pd.DataFrame({"text": documents, "sentiment": sentiments})
    return df

df = load_movie_reviews()
df.head()
```

### Train Test Split

```
X = df['text']
y = df['sentiment']

X_train, X_test, y_train, y_test = \
    train_test_split(X, y, test_size=0.2, random_state=42)
```

# Sentiment Analysis - CountVectorizer

## Generate vocabulary from train dataset

```
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(max_features=None, ngram_range=(1, 1), stop_words='english')
vectorizer.fit(X_train)

train_word_counts = vectorizer.transform(X_train)
X_train_vectorized = pd.DataFrame(train_word_counts.toarray(), columns=vectorizer.get_feature_names())
print(X_train_vectorized.shape)
X_train_vectorized.head()

test_word_counts = vectorizer.transform(X_test)
X_test_vectorized = pd.DataFrame(test_word_counts.toarray(), columns=vectorizer.get_feature_names())
print(X_test_vectorized.shape)
X_test_vectorized.head()
```

### Model Selection

```
models = [LogisticRegression(random_state=42, solver='lbfgs'),
          MultinomialNB(),
          RandomForestClassifier()]

results = []
for model in models:
    result = assess_model(
        model, X_train_vectorized, X_test_vectorized, y_train, y_test, vect_type='Count')
    results.append(result)

pd.DataFrame.from_records(results)
```

# Sentiment Analysis - TfidfVectorizer

```
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(max_features=2000,
                             ngram_range=(1, 2),
                             min_df=5,
                             max_df=.80,
                             stop_words='english')
vectorizer.fit(X_train)

train_word_counts = vectorizer.transform(X_train)
X_train_vectorized = pd.DataFrame(train_word_counts.toarray(), columns=vectorizer.get_feature_names())
print(X_train_vectorized.shape)
X_train_vectorized.head()

test_word_counts = vectorizer.transform(X_test)
X_test_vectorized = pd.DataFrame(test_word_counts.toarray(), columns=vectorizer.get_feature_names())
print(X_test_vectorized.shape)
X_test_vectorized.head()
```

### Model Selection

```
for model in models:
    result = assess_model(
        model, X_train_vectorized, X_test_vectorized, y_train, y_test, vect_type='tfidf')
    results.append(result)

pd.DataFrame.from_records(results)
```

# Using NLTK to clean the data

### Importing the data fresh to avoid variable collisions

```
df = load_movie_reviews()
```

### Cleaning function to apply to each document

```
from nltk.corpus import stopwords
import string

# turn a doc into clean tokens
def clean_doc(doc):
    # split into tokens by white space
    tokens = doc.split()
    # remove punctuation from each token
    table = str.maketrans('', '', string.punctuation)
    tokens = [w.translate(table) for w in tokens]
    # remove remaining tokens that are not alphabetic
    tokens = [word for word in tokens if word.isalpha()]
    # filter out stop words
    stop_words = set(stopwords.words('english'))
    tokens = [w for w in tokens if not w in stop_words]
    # filter out short tokens
    tokens = [word for word in tokens if len(word) > 1]
    return tokens

df_nltk = pd.DataFrame()
df_nltk['text'] = df.text.apply(clean_doc)
df_nltk['sentiment'] = df.sentiment
df_nltk.head()
```

### Reformat reviews for sklearn

```
documents = []
for review in df_nltk.text:
    review = " ".join(review)
    documents.append(review)

sentiment = list(df_nltk.sentiment)

new_df = pd.DataFrame({'text': documents, 'sentiment': sentiment})
new_df.head()
```

### Train Test Split

```
X = new_df.text
y = new_df.sentiment

X_train, X_test, y_train, y_test = \
    train_test_split(X, y, test_size=0.2, random_state=42)
```

### Vectorize the reviews

```
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(max_features=None, ngram_range=(1, 1), stop_words='english')
vectorizer.fit(X_train)

train_word_counts = vectorizer.transform(X_train)
X_train_vectorized = pd.DataFrame(train_word_counts.toarray(), columns=vectorizer.get_feature_names())
print(X_train_vectorized.shape)
X_train_vectorized.head()

test_word_counts = vectorizer.transform(X_test)
X_test_vectorized = pd.DataFrame(test_word_counts.toarray(), columns=vectorizer.get_feature_names())
print(X_test_vectorized.shape)
X_test_vectorized.head()
```

### Model Selection

```
models = [LogisticRegression(random_state=42, solver='lbfgs'),
          MultinomialNB(),
          RandomForestClassifier()]

results = []
for model in models:
    result = assess_model(
        model, X_train_vectorized, X_test_vectorized, y_train, y_test, vect_type='Tfidf')
    results.append(result)

pd.DataFrame.from_records(results)

# import xgboost as xgb
from xgboost.sklearn import XGBClassifier

clf = XGBClassifier(
    # hyper params
    n_jobs=-1,
)

clf.fit(X_train_vectorized, y_train, eval_metric='auc')
```
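The idea behind the TF-IDF weighting used above can also be sketched without scikit-learn: term frequency scaled by inverse document frequency, so words that appear in many documents are down-weighted. A simplified sketch with made-up documents (scikit-learn additionally smooths the IDF and L2-normalizes each row, so the numbers differ):

```python
import math

docs = ['free prize win', 'lunch with you', 'win win today']

def tfidf(term, doc, corpus):
    words = doc.split()
    tf = words.count(term) / len(words)           # term frequency in this doc
    df = sum(term in d.split() for d in corpus)   # docs containing the term
    idf = math.log(len(corpus) / df)              # inverse document frequency
    return tf * idf

# 'win' appears in 2 of 3 docs -> low idf; 'lunch' in 1 of 3 -> higher idf.
print(round(tfidf('win', docs[2], docs), 4))    # -> 0.2703
print(round(tfidf('lunch', docs[1], docs), 4))  # -> 0.3662
```

The effect is the one exploited by the models above: distinctive words get larger weights than ubiquitous ones, even when the ubiquitous word has a higher raw count.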
```
a = 'ok'
b = 'test'

print(a + b)
print(a * 2)

name = 'Bob'
print(f'Hello, {name}')

greeting = 'Hello, {}'
with_name = greeting.format(name)
print(with_name)

size = input('Enter the size of your house: ')
integer = int(size)
floating = float(size)
print(integer, floating)

square_meters = integer / 10.8
print(f'{integer} square feet is {square_meters} square meters.')
print(f'{integer} square feet is {square_meters:.2f} square meters.')

user_age = input('Enter your age: ')
years = int(user_age)
months = years * 12
days = months * 30
hours = days * 24
minutes = hours * 60
seconds = minutes * 60
print(f'Your age, {years}, is equal to {months} months or {seconds} seconds.')

friends = {'Bob', 'Anne', 'Rolf'}
abroad = {'Bob', 'Rolf'}

local_friends = friends.difference(abroad)
print(local_friends)

local_friends_opposite = abroad.difference(friends)
print(local_friends_opposite)

other_friends = {'Maria', 'Jose'}
all_friends = friends.union(other_friends)
print(all_friends)

abroad.add('Lara')
print(abroad)

friends_study_science = {'Ellen', 'Renato', 'Bob', 'Rolf'}
abroad_study_science = abroad.intersection(friends_study_science)
print(abroad_study_science)

colors = {'blue', 'red', 'white', 'black'}
user_color = input('Enter a color that you think is in the game: ').lower()

if user_color in colors:
    print('You are right!')
else:
    print("Sorry, you're wrong")

friends = ['Suzy', 'Ellie', 'Sarah', 'Anna', 'Sayuri']
friends_starts_s = []
friends_starts_s_list_comprehension = []

for friend in friends:
    if friend.startswith('S'):
        friends_starts_s.append(friend)

print(friends_starts_s)

# using list comprehension
friends_starts_s_list_comprehension = [friend for friend in friends if friend.startswith('S')]
print(friends_starts_s_list_comprehension)

student_attendance = {'Rolf': 96, 'Bob': 80, 'Anne': 100}

for student, attendance in student_attendance.items():
    print(f'{student} has {attendance}% of attendance')

attendance_values = student_attendance.values()
print(sum(attendance_values) / len(attendance_values))

person = ('Jose', 30, 'artist')
name, _, profession = person
print(name, profession)

friends = ['Ella', 'Ellie']

def add_friend():
    friend_name = input('Enter your friend name: ')
    f = friends + [friend_name]
    print(f)

add_friend()

def say_hello(name, surname='Doe'):
    print(f'Hello, {name} {surname}.')

say_hello(surname='Filly', name='Phil')
say_hello('Filly', 'Phil')
say_hello('Filly', surname='Phil')
say_hello('Phil')

def add(x, y):
    return x + y

# transform into a lambda
add = lambda x, y: x + y
print(add(5, 7))

# you can also call it right away, like an IIFE
print((lambda x, y: x + y)(5, 7))

# Another example
def double(x):
    return x * 2

sequence = [1, 3, 5, 7]
doubled = [double(x) for x in sequence]
doubled_inline = [(lambda x: x * 2)(y) for y in sequence]
print(doubled)
print(doubled_inline)

# same thing - you can use map: it will go through each number in the sequence,
# apply double to it, and return the results
# NOTE: it can be a little slower than a list comprehension
doubled_same = list(map(double, sequence))
print(doubled_same)

def multiply(*args):
    print(args)
    total = 1
    for arg in args:
        total = total * arg
    return total

multiply(1, 3, 5)

def add(x, y):
    return x + y

nums = [3, 5]
print(add(*nums))  # it will destructure nums when calling add, so 3 will be x and 5 will be y

# Another way
nums = {'x': 15, 'y': 25}
print(add(x=nums['x'], y=nums['y']))
# instead of doing it like that, we can use `**`
print(add(**nums))

# Going back to the multiply example and using it with another function
def apply(*args, operator):
    if operator == '*':
        # we need the `*` to destructure; otherwise we would send a tuple and
        # multiply would wrap that tuple in another tuple
        return multiply(*args)
    elif operator == '+':
        return sum(args)
    else:
        return 'No valid operator provided to apply()'

# we need to use the keyword argument for operator; otherwise the `*args` in
# the function would swallow everything and operator would be missing
print(apply(1, 3, 6, 9, operator='*'))

def named(**kwargs):
    print(kwargs)

named(name='Bob', age=25)

# Another option
def named1(name, age):
    print(name, age)

details = {'name': 'Bob', 'age': 25}
named1(**details)
named(**details)

def print_nicely(**kwargs):
    named(**kwargs)
    for arg, value in kwargs.items():
        print(f'{arg}: {value}')

print_nicely(name='Bob', age=25)

def both(*args, **kwargs):
    print(args)
    print(kwargs)

both(1, 3, 5, name='Bob', age=25)

# create the Student class
class Student:
    # all methods receive self (like 'this'); instances can have other
    # attributes, like name or grades
    def __init__(self, name, grades):
        self.name = name
        self.grades = grades

    def average(self):
        return sum(self.grades) / len(self.grades)

# create new students
student1 = Student('Matt', (90, 90, 80, 75, 80))
student2 = Student('Rob', (40, 50, 60, 75, 60))

print(student1.name)
print(student2.grades)

print(Student.average(student1))  # same as below
print(student1.average())

class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

bob = Person('Bob', 35)
print(bob)

class Person_modified:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    # what to print when we print the string representation of the instance
    def __str__(self):
        return f'I am {self.name}, and I have {self.age} years.'

    # this method's goal is to be unambiguous: it should return a string that
    # allows us to recreate the object very easily
    def __repr__(self):
        return f"<Person('{self.name}', {self.age})>"

bob_modified = Person_modified('Bob', 35)
print(bob_modified)  # I am Bob, and I have 35 years.

# in order to print the __repr__ method, you can call it directly, or comment
# out __str__ and just print the instance:
print(bob_modified.__repr__())

class ClassTest:
    def instance_method(self):
        print(f'Called instance_method of {self}')

    @classmethod
    def class_method(cls):
        print(f'Called class_method of {cls}')

    @staticmethod
    def static_method():
        print('Called static_method')

test = ClassTest()

# instance method because it is called on the instance - it receives 'self',
# which is the instance, and you can use it in the body
test.instance_method()

# class method because it is called on the class - it receives 'cls', which is
# the class, and you can use it in the body => very often used as a factory
ClassTest.class_method()

# a static method is called without 'passing' the object/instance to it; it is
# really just a function pasted inside the class, with no info about the class
# or the instance
ClassTest.static_method()

# Another example
class Book:
    TYPES = ('hardcover', 'paperback')

    def __init__(self, name, book_type, weight):
        self.name = name
        self.book_type = book_type
        self.weight = weight

    def __repr__(self):
        return f'<Book {self.name}, {self.book_type}, weighing {self.weight}g>'

    # factory => create a new instance from within the class ==> since cls is
    # the class you could use Book or cls, but best practice is cls, also
    # because of inheritance
    @classmethod
    def hardcover(cls, name, page_weight):
        return cls(name, Book.TYPES[0], page_weight + 100)

    @classmethod
    def paperback(cls, name, page_weight):
        return cls(name, Book.TYPES[1], page_weight + 100)

book = Book.hardcover('Harry Potter', 1500)
light = Book.hardcover('Python', 600)

print(book)
print(light)

class Device:
    def __init__(self, name, connected_by):
        self.name = name
        self.connected_by = connected_by
        self.connected = True

    def __str__(self):
        # the '!r' calls repr on self.name, so it adds the quotes automatically
        return f'Device {self.name!r} ({self.connected_by})'

    def
disconnect(self): self.connected = False print('Disconnected.') # create a Printer class who inherits from Device, so you have access to all the methods from the Device class and can also add new methods specific to the Printer class class Printer(Device): def __init__(self, name, connected_by, capacity): # get the parent class with super() and then call the __init__ method of it passing the variables => this way you don't have to copy everything again super().__init__(name, connected_by) self.capacity = capacity self.remaining_pages = capacity def __str__(self): return f'{super().__str__()} ({self.remaining_pages} pages remaining.)' def print(self, pages): if not self.connected: print('Your printer is not connected!') return print(f'Printing {pages} pages') self.remaining_pages -= pages headphones = Device('Headphones', 'Bluetooth') print(headphones) printer = Printer('Printer', 'USB', 500) printer.print(20) print(printer) printer.disconnect() printer.print(30) class Bookshelf: def __init__(self, quantity): self.quantity = quantity def __str__(self): # python ternary operator: 'true' if 'condition' else 'false' end = 's.' if self.quantity > 1 else '.' return f'Bookshelf with {self.quantity} book{end}' shelf = Bookshelf(300) # with inheritance ==> not the best way, you are saying that books are also bookshelves, which is not technically true. Also, you are completely overriding the __str__ method from Bookshelf and you are not using the Bookshelf anywhere. class Book_inheritance(Bookshelf): def __init__(self, name, quantity): super().__init__(quantity) self.name = name def __str__(self): return f'Book {self.name}' book = Book_inheritance('Harry Potter', 120) print(book) # with composition ==> better to use in this case, since with this you mean: a bookshelf has many books. But a book is not a bookshelf. 
class Bookshelf_composition: def __init__(self, *books): self.books = books def __str__(self): # python ternary operator: 'true' if 'condition' else 'false' end = 's.' if len(self.books) > 1 else '.' return f'Bookshelf with {len(self.books)} book{end}' class Book_composition: def __init__(self, name): self.name = name def __str__(self): return f'Book {self.name}' book = Book_composition('Harry Potter') book1 = Book_composition('Harry Potter II') shelf1 = Bookshelf_composition(book, book1) print(shelf1) from typing import List def list_avg(sequence: List) -> float: return sum(sequence) / len(sequence) # list_avg(123) list_avg([1,2,3]) class TooManyPagesReadError(ValueError): pass class Book: def __init__(self, name: str, page_count: int): self.name = name self.page_count = page_count self.pages_read = 0 def __repr__(self): return ( f'<Book {self.name}, read{self.pages_read} pages out of {self.page_count}>' ) def read(self, pages: int): if self.pages_read + pages > self.page_count: raise TooManyPagesReadError(f'You tried to read {self.pages_read + pages} pages, but this book only has {self.page_count} pages.') self.pages_read += pages print(f'You have now read {self.pages_read} pages out of {self.page_count}.') python101 = Book('Python 101', 50) python101.read(35) python101.read(10) python101.read(30) user = {'username': 'jose', 'access_level': 'guest'} # unprotected route def get_admin_password(): return '1234' # create decorator to protect the route def make_secure(func): def secure_function(): if user['access_level'] == 'admin': return func() else: return f'No admin permissions for {user["username"]}' return secure_function get_admin_password = make_secure(get_admin_password) print(get_admin_password()) # With The '@' syntax def make_secure1(func): def secure_function(): if user['access_level'] == 'admin': return func() else: return f'No admin permissions for {user["username"]}' return secure_function # just add the '@' and the decorator function name to secure 
this route and then call it @make_secure1 def get_admin_password1(): return '1234' print(get_admin_password1()) # it will return the name as 'secure_function' and any documentation from get_admin_password1 would be lost and replaced with the secure_function print(get_admin_password1.__name__) # in order to fix this, we need to import functools and add the decorator before the secure_function import functools def make_secure2(func): # decorator @functools.wraps(func) # it will preserve the name and documentation of 'func', in this case get_admin_password2 def secure_function(): # function that will replace the other one if user['access_level'] == 'admin': return func() else: return f'No admin permissions for {user["username"]}' return secure_function @make_secure2 def get_admin_password2(): return '1234' # returns get_admin_password2 print(get_admin_password2.__name__) from typing import List, Optional class Student: # this is BAD: the default list is shared by every instance def __init__(self, name: str, grades: List[int] = []): self.name = name self.grades = grades def take_exam(self, result: int): self.grades.append(result) bob = Student('Bob') matt = Student('Matt') bob.take_exam(90) print(bob.grades) # [90] print(matt.grades) # [90] class Student1: # this is the fix: an immutable None default, with a fresh list per instance def __init__(self, name: str, grades: Optional[List[int]] = None): self.name = name self.grades = grades or [] def take_exam(self, result: int): self.grades.append(result) bob1 = Student1('Bob') matt1 = Student1('Matt') bob1.take_exam(90) print(bob1.grades) # [90] print(matt1.grades) # [] ```
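The `Student1` pattern above (a `None` default plus `grades or []`) is the standard fix for the shared-default bug, because a parameter's default value is evaluated only once, when the function is defined. A minimal standalone sketch of the same pitfall outside a class (the function names here are ours, for illustration):

```python
# A mutable default is created once, at definition time, and then shared
# by every call that relies on it.
def bad_append(item, bucket=[]):
    bucket.append(item)
    return bucket

# The fix: use an immutable sentinel and build a fresh list per call.
def good_append(item, bucket=None):
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

a1 = bad_append(1)   # [1]
a2 = bad_append(2)   # [1, 2] - the default list kept its state
b1 = good_append(1)  # [1]
b2 = good_append(2)  # [2] - a fresh list each time
print(a2, b2)
```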
# Kestrel Model ### A [Bangkit 2021](https://grow.google/intl/id_id/bangkit/) Capstone Project Kestrel is a TensorFlow-powered American Sign Language translator Android app that makes it easier for anyone to communicate seamlessly with people who have vision or hearing impairments. The Kestrel model builds on the state-of-the-art MobileNetV2 model, which is optimized for speed and latency on smartphones, to accurately recognize and interpret sign language from the phone’s camera and display the translation through a beautiful, convenient and easily accessible Android app. # American Sign Language Fingerspelling alphabets from the [National Institute on Deafness and Other Communication Disorders (NIDCD)](https://www.nidcd.nih.gov/health/american-sign-language-fingerspelling-alphabets-image) <table> <tr><td> <img src="https://www.nidcd.nih.gov/sites/default/files/Content%20Images/NIDCD-ASL-hands-2019_large.jpg" alt="ASL fingerspelling alphabet chart" width="600"> </td></tr> <tr><td align="center"> <b>Figure 1.</b> <a href="https://www.nidcd.nih.gov/health/american-sign-language-fingerspelling-alphabets-image">ASL Fingerspelling Alphabets</a> <br/>&nbsp; </td></tr> </table> ``` from google.colab import drive drive.mount('/content/drive') ``` # Initial setup ``` try: %tensorflow_version 2.x except: pass import numpy as np import matplotlib.pylab as plt import tensorflow as tf import tensorflow_hub as hub import PIL import PIL.Image from os import listdir import pathlib from tqdm import tqdm from tensorflow.keras.preprocessing import image_dataset_from_directory print("\u2022 Using TensorFlow Version:", tf.__version__) print("\u2022 Using TensorFlow Hub Version: ", hub.__version__) print('\u2022 GPU Device Found.' if tf.config.list_physical_devices('GPU') else '\u2022 GPU Device Not Found.
Running on CPU') ``` # Data preprocessing ### (Optional) Unzip file on Google Drive ``` import zipfile import pathlib zip_dir = pathlib.Path('/content/drive/Shareddrives/Kestrel/A - Copy.zip') unzip_dir = pathlib.Path('/content/drive/Shareddrives/Kestrel/A_Unzipped') with zipfile.ZipFile(zip_dir, 'r') as zip_ref: zip_ref.extractall(unzip_dir) ``` ### Loading images from directory ``` data_dir = pathlib.Path('/Dev/A') ``` ### (Optional) Counting the number of images in the dataset ``` image_count = len(list(data_dir.glob('*/color*.png'))) print(image_count) ``` ### (Optional) Displaying one of the "a" letter sign language image: ``` two = list(data_dir.glob('*/color*.png')) PIL.Image.open(str(two[0])) ``` # Create the dataset Loading the images off disk using [image_dataset_from_directory](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory). Define some parameters for the loader: ``` BATCH_SIZE = 30 IMG_SIZE = (160, 160) ``` ### Coursera method using ImageDataGenerator ``` from tensorflow.keras.preprocessing.image import ImageDataGenerator train_generator = ImageDataGenerator( rescale = 1./255, rotation_range=40, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.2, horizontal_flip=True, fill_mode='nearest', validation_split=0.2) validation_generator = ImageDataGenerator( rescale = 1./255, validation_split=0.2) train_dataset = train_generator.flow_from_directory(data_dir, batch_size = BATCH_SIZE, class_mode = 'categorical', subset='training', target_size = IMG_SIZE, shuffle=True, ) validation_dataset = validation_generator.flow_from_directory(data_dir, batch_size = BATCH_SIZE, class_mode = 'categorical', subset='validation', target_size = IMG_SIZE, shuffle=True, ) ``` Splitting images for training and validation ### (Optional) Visualize the data Show the first 9 images and labels from the training set: ``` #@title Showing 9 images plt.figure(figsize=(10, 10)) for images, labels in 
train_dataset.take(1): for i in range(9): ax = plt.subplot(3, 3, i + 1) plt.imshow(images[i].numpy().astype("uint8")) plt.title(class_names[labels[i]]) plt.axis("off") for image_batch, labels_batch in train_dataset: print(image_batch.shape) print(labels_batch.shape) break ``` ### (Deprecated) Create a test set To create a Test Set, determine how many batches of data are available in the validation set using ```tf.data.experimental.cardinality```, then move 20% of them to a test set. ``` validation_batches = tf.data.experimental.cardinality(validation_dataset) test_dataset = validation_dataset.take(validation_batches // 5) validation_dataset = validation_dataset.skip(validation_batches // 5) print('Number of validation batches: %d' % tf.data.experimental.cardinality(validation_dataset)) print('Number of test batches: %d' % tf.data.experimental.cardinality(test_dataset)) ``` ### Configure the dataset for performance Use buffered prefetching to load images from disk without having I/O become blocking. To learn more about this method see the [data performance](https://www.tensorflow.org/guide/data_performance) guide. ``` AUTOTUNE = tf.data.AUTOTUNE train_dataset = train_dataset.cache().prefetch(buffer_size=AUTOTUNE) validation_dataset = validation_dataset.cache().prefetch(buffer_size=AUTOTUNE) # test_dataset = test_dataset.prefetch(buffer_size=AUTOTUNE) ``` # Create the model ### Create the base model from the pre-trained convnets You will create the base model from the **MobileNet V2** model developed at Google. This is pre-trained on the ImageNet dataset, a large dataset consisting of 1.4M images and 1000 classes. ImageNet is a research training dataset with a wide variety of categories like `jackfruit` and `syringe`. This base of knowledge will help us classify cats and dogs from our specific dataset. First, you need to pick which layer of MobileNet V2 you will use for feature extraction. 
The very last classification layer (on "top", as most diagrams of machine learning models go from bottom to top) is not very useful. Instead, you will follow the common practice of depending on the very last layer before the flatten operation. This layer is called the "bottleneck layer". The bottleneck layer features retain more generality than the final/top layer. First, instantiate a MobileNet V2 model pre-loaded with weights trained on ImageNet. By specifying the **include_top=False** argument, you load a network that doesn't include the classification layers at the top, which is ideal for feature extraction. ``` # Create the base model from the pre-trained model MobileNet V2 IMG_SHAPE = IMG_SIZE + (3,) base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE, include_top=False, weights='imagenet') ``` This feature extractor converts each `160 x 160` image into a `5x5x1280` block of features. Let's see what it does to an example batch of images: ``` image_batch, label_batch = next(iter(train_dataset)) feature_batch = base_model(image_batch) print(feature_batch.shape) ``` ### Freeze the convolutional base In this step, you will freeze the convolutional base created in the previous step and use it as a feature extractor. Additionally, you add a classifier on top of it and train only that top-level classifier. It is important to freeze the convolutional base before you compile and train the model. Freezing (by setting layer.trainable = False) prevents the weights in a given layer from being updated during training. MobileNet V2 has many layers, so setting the entire model's `trainable` flag to False will freeze all of them.
``` base_model.trainable = False # Let's take a look at the base model architecture base_model.summary() ``` ### Adding new layer to the model ``` last_layer = base_model.get_layer('out_relu') print('last layer output shape: ', last_layer.output_shape) last_output = last_layer.output from tensorflow.keras.optimizers import RMSprop from tensorflow.keras import layers from tensorflow.keras import Model # Flatten the output layer to 1 dimension x = layers.Flatten()(last_output) # Add a dropout rate of 0.5 # x = layers.Dropout(0.5)(x) # Add a fully connected layer with 1,024 hidden units and ReLU activation # x = layers.Dense(1024, activation='relu', kernel_regularizer='l2')(x) x = layers.Dense(1024, activation='relu')(x) # Add a dropout rate of 0.5 x = layers.Dropout(0.5)(x) # Add a final layer for classification x = layers.Dense (24, activation='softmax')(x) model = Model( base_model.input, x) model.summary() # !pip install scipy ``` ### Training the model ``` checkpoint_path = "TensorFlow_Training_Checkpoint/Kestrel_Training_10_50Dropout0.5/cp.ckpt" import os # base_learning_rate = 0.0001 def get_uncompiled_model(): model = Model( base_model.input, x) return model def get_compiled_model(): model = get_uncompiled_model() model.compile( optimizer="rmsprop", loss="categorical_crossentropy", metrics=["accuracy"], ) return model checkpoint_dir = os.path.dirname(checkpoint_path) if not os.path.exists(checkpoint_dir): os.makedirs(checkpoint_dir) def make_or_restore_model(): # Either restore the latest model, or create a fresh one # if there is no checkpoint available. 
checkpoints = [checkpoint_dir + "/" + name for name in os.listdir(checkpoint_dir)] if checkpoints: latest_checkpoint = max(checkpoints, key=os.path.getctime) print("Restoring from", latest_checkpoint) #return tf.keras.models.load_model(latest_checkpoint) model = Model( base_model.input, x) model.load_weights(checkpoint_path) model.compile( optimizer="rmsprop", loss="categorical_crossentropy", metrics=["accuracy"], ) return model print("Creating a new model") return get_compiled_model() # Create a callback that saves the model's weights model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path, save_weights_only=True, monitor='val_accuracy', mode='auto', save_best_only=True, # Only save a model if `val_loss` has improved. verbose=1) early_callbacks = [ tf.keras.callbacks.EarlyStopping( # Stop training when `val_loss` is no longer improving monitor="val_accuracy", # "no longer improving" being defined as "no better than 1e-2 less" # min_delta=1e-2, # "no longer improving" being further defined as "for at least 2 epochs" patience=30, verbose=1, ) ] model = make_or_restore_model() history = model.fit(train_dataset, epochs=50, validation_data = validation_dataset, verbose = 1, callbacks=[model_checkpoint_callback, early_callbacks])# Pass callback to training # This may generate warnings related to saving the state of the optimizer. # These warnings (and similar warnings throughout this notebook) # are in place to discourage outdated usage, and can be ignored. # # EXERCISE: Use the tf.saved_model API to save your model in the SavedModel format. 
# export_dir = 'saved_model/2' # # YOUR CODE HERE # tf.saved_model.save(model, export_dir) ``` ### Plotting the accuracy and loss ``` import matplotlib.pyplot as plt acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(acc)) plt.plot(epochs, acc, 'r', label='Training accuracy') plt.plot(epochs, val_acc, 'b', label='Validation accuracy') plt.title('Training and validation accuracy') plt.legend(loc=0) plt.figure() plt.plot(epochs, loss, 'r', label='Training Loss') plt.plot(epochs, val_loss, 'b', label='Validation Loss') plt.title('Training and validation loss') plt.legend() plt.show() ``` # Exporting to TFLite You will now save the model to TFLite. We should note, that you will probably see some warning messages when running the code below. These warnings have to do with software updates and should not cause any errors or prevent your code from running. ``` # EXERCISE: Use the tf.saved_model API to save your model in the SavedModel format. 
export_dir = 'saved_model/10_50Dropout0.5V2' # YOUR CODE HERE tf.saved_model.save(model, export_dir) # # Select mode of optimization # mode = "Speed" # if mode == 'Storage': # optimization = tf.lite.Optimize.OPTIMIZE_FOR_SIZE # elif mode == 'Speed': # optimization = tf.lite.Optimize.OPTIMIZE_FOR_LATENCY # else: # optimization = tf.lite.Optimize.DEFAULT # EXERCISE: Use the TFLiteConverter SavedModel API to initialize the converter import tensorflow as tf converter = tf.lite.TFLiteConverter.from_saved_model(export_dir) # YOUR CODE HERE # Set the optimzations converter.optimizations = [tf.lite.Optimize.DEFAULT]# YOUR CODE HERE # Invoke the converter to finally generate the TFLite model tflite_model = converter.convert()# YOUR CODE HERE tflite_model_file = pathlib.Path('saved_model/10_50Dropout0.5V2/model.tflite') tflite_model_file.write_bytes(tflite_model) # path_to_pb = "C:/saved_model/saved_model.pb" # def load_pb(path_to_pb): # with tf.gfile.GFile(path_to_pb, "rb") as f: # graph_def = tf.GraphDef() # graph_def.ParseFromString(f.read()) # with tf.Graph().as_default() as graph: # tf.import_graph_def(graph_def, name='') # return graph # print(graph) ``` # Test the model with TFLite interpreter ``` # Load TFLite model and allocate tensors. 
interpreter = tf.lite.Interpreter(model_content=tflite_model) interpreter.allocate_tensors() input_index = interpreter.get_input_details()[0]["index"] output_index = interpreter.get_output_details()[0]["index"] # Gather results for the randomly sampled test images predictions = [] test_labels = [] test_images = [] test_batches = data_dir.map(format_example).batch(1) for img, label in test_batches.take(50): interpreter.set_tensor(input_index, img) interpreter.invoke() predictions.append(interpreter.get_tensor(output_index)) test_labels.append(label[0]) test_images.append(np.array(img)) # Utilities functions for plotting def plot_image(i, predictions_array, true_label, img): predictions_array, true_label, img = predictions_array[i], true_label[i], img[i] plt.grid(False) plt.xticks([]) plt.yticks([]) img = np.squeeze(img) plt.imshow(img, cmap=plt.cm.binary) predicted_label = np.argmax(predictions_array) if predicted_label == true_label.numpy(): color = 'green' else: color = 'red' plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label], 100*np.max(predictions_array), class_names[true_label]), color=color) def plot_value_array(i, predictions_array, true_label): predictions_array, true_label = predictions_array[i], true_label[i] plt.grid(False) plt.xticks(list(range(10))) plt.yticks([]) thisplot = plt.bar(range(10), predictions_array[0], color="#777777") plt.ylim([0, 1]) predicted_label = np.argmax(predictions_array[0]) thisplot[predicted_label].set_color('red') thisplot[true_label].set_color('blue') # Visualize the outputs # Select index of image to display. Minimum index value is 1 and max index value is 50. index = 5 plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(index, predictions, test_labels, test_images) plt.subplot(1,2,2) plot_value_array(index, predictions, test_labels) plt.show() ```
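The `plot_image` helper above reduces each interpreter output vector to a predicted letter via `argmax`, using a `class_names` list that is never defined in this excerpt. That decoding step, isolated as plain Python (the letter list is our assumption: the 24 static ASL letters, A through Y with J and Z excluded since both involve motion, matching the `Dense(24)` output layer):

```python
# Stand-in for the dataset's class names (assumption, not from this notebook):
# the 24 static ASL fingerspelling letters.
class_names = list("ABCDEFGHIKLMNOPQRSTUVWXY")
assert len(class_names) == 24  # matches the Dense(24) output layer above

def decode_prediction(probs):
    """probs: a (1, 24) nested sequence, as returned by interpreter.get_tensor()."""
    row = list(probs[0])
    idx = max(range(len(row)), key=row.__getitem__)  # argmax in plain Python
    return class_names[idx], row[idx]

# A fake softmax output peaked on class 0 ('A'):
fake = [[0.9 if i == 0 else 0.1 / 23 for i in range(24)]]
print(decode_prediction(fake))
```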
## Libraries ``` import numpy as np import pandas as pd from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA from sklearn.cluster import KMeans import pickle from sklearn.linear_model import LogisticRegression import matplotlib.pyplot as plt import matplotlib.axes as axs import seaborn as sns sns.set() ``` ## Data Preparation ``` df_purchase = pd.read_csv('purchase data.csv') scaler = pickle.load(open('scaler.pickle', 'rb')) pca = pickle.load(open('pca.pickle', 'rb')) kmeans_pca = pickle.load(open('kmeans_pca.pickle', 'rb')) features = df_purchase[['Sex', 'Marital status', 'Age', 'Education', 'Income', 'Occupation', 'Settlement size']] df_purchase_segm_std = scaler.transform(features) df_purchase_segm_pca = pca.transform(df_purchase_segm_std) purchase_segm_kmeans_pca = kmeans_pca.predict(df_purchase_segm_pca) df_purchase_predictors = df_purchase.copy() df_purchase_predictors['Segment'] = purchase_segm_kmeans_pca segment_dummies = pd.get_dummies(purchase_segm_kmeans_pca, prefix = 'Segment', prefix_sep = '_') df_purchase_predictors = pd.concat([df_purchase_predictors, segment_dummies], axis = 1) df_pa = df_purchase_predictors ``` ## Purchase Probability Model ``` Y = df_pa['Incidence'] X = pd.DataFrame() X['Mean_Price'] = (df_pa['Price_1'] + df_pa['Price_2'] + df_pa['Price_3'] + df_pa['Price_4'] + df_pa['Price_5'] ) / 5 model_purchase = LogisticRegression(solver = 'sag') model_purchase.fit(X, Y) model_purchase.coef_ ``` ## Price Elasticity of Purchase Probability ``` df_pa[['Price_1', 'Price_2', 'Price_3', 'Price_4', 'Price_5']].describe() price_range = np.arange(0.5, 3.5, 0.01) price_range df_price_range = pd.DataFrame(price_range) Y_pr = model_purchase.predict_proba(df_price_range) purchase_pr = Y_pr[:][:, 1] pe = model_purchase.coef_[:, 0] * price_range * (1 - purchase_pr) df_price_elasticities = pd.DataFrame(price_range) df_price_elasticities = df_price_elasticities.rename(columns = {0: "Price_Point"}) 
df_price_elasticities['Mean_PE'] = pe df_price_elasticities pd.options.display.max_rows = None df_price_elasticities plt.figure(figsize = (9, 6)) plt.plot(price_range, pe, color = 'grey') plt.xlabel('Price') plt.ylabel('Elasticity') plt.title('Price Elasticity of Purchase Probability') ``` ## Purchase Probability by Segments ### $\color{green}{\text{Segment 1 - Career-Focused}}$ ``` df_pa_segment_1 = df_pa[df_pa['Segment'] == 1] Y = df_pa_segment_1['Incidence'] X = pd.DataFrame() X['Mean_Price'] = (df_pa_segment_1['Price_1'] + df_pa_segment_1['Price_2'] + df_pa_segment_1['Price_3'] + df_pa_segment_1['Price_4'] + df_pa_segment_1['Price_5']) / 5 model_incidence_segment_1 = LogisticRegression(solver = 'sag') model_incidence_segment_1.fit(X, Y) model_incidence_segment_1.coef_ Y_segment_1 = model_incidence_segment_1.predict_proba(df_price_range) purchase_pr_segment_1 = Y_segment_1[:][:, 1] pe_segment_1 = model_incidence_segment_1.coef_[:, 0] * price_range * (1 - purchase_pr_segment_1) ``` ### Results ``` df_price_elasticities['PE_Segment_1'] = pe_segment_1 plt.figure(figsize = (9, 6)) plt.plot(price_range, pe, color = 'grey') plt.plot(price_range, pe_segment_1, color = 'green') plt.xlabel('Price') plt.ylabel('Elasticity') plt.title('Price Elasticity of Purchase Probability') ``` ### $\color{red}{\text{Segment 2 - Fewer-Opportunities}}$ ``` df_pa_segment_2 = df_pa[df_pa['Segment'] == 2] Y = df_pa_segment_2['Incidence'] X = pd.DataFrame() X['Mean_Price'] = (df_pa_segment_2['Price_1'] + df_pa_segment_2['Price_2'] + df_pa_segment_2['Price_3'] + df_pa_segment_2['Price_4'] + df_pa_segment_2['Price_5']) / 5 model_incidence_segment2 = LogisticRegression(solver = 'sag') model_incidence_segment2.fit(X, Y) model_incidence_segment2.coef_ Y_segment_2 = model_incidence_segment2.predict_proba(df_price_range) purchase_pr_segment2 = Y_segment_2[:][: , 1] pe_segment2 = model_incidence_segment2.coef_[:,0] * price_range * ( 1- purchase_pr_segment2) ``` ### Results ``` 
df_price_elasticities['PE_Segment_2'] = pe_segment2 plt.figure(figsize = (9, 6)) plt.plot(price_range, pe, color = 'grey') plt.plot(price_range, pe_segment_1, color = 'green') plt.plot(price_range, pe_segment2, color = 'r') plt.xlabel('Price') plt.ylabel('Elasticity') plt.title('Price Elasticity of Purchase Probability') ``` ## ${\textbf{Homework}}$ ### $\color{blue}{\text{Segment 0 - Standard}}$ ``` df_pa_segment_0 = df_pa[df_pa['Segment'] == 0] Y = df_pa_segment_0['Incidence'] X = pd.DataFrame() X['Mean_Price'] = (df_pa_segment_0['Price_1'] + df_pa_segment_0['Price_2'] + df_pa_segment_0['Price_3'] + df_pa_segment_0['Price_4'] + df_pa_segment_0['Price_5']) / 5 model_incidence_segment0 = LogisticRegression(solver = 'sag') model_incidence_segment0.fit(X, Y) model_incidence_segment0.coef_ Y_segment_0 = model_incidence_segment0.predict_proba(df_price_range) purchase_pr_segment0 = Y_segment_0[:, 1] pe_segment0 = model_incidence_segment0.coef_[:, 0] * price_range * (1 - purchase_pr_segment0) df_price_elasticities.insert(2, column = 'PE_Segment_0', value = pe_segment0) ``` ### $\color{orange}{\text{Segment 3 - Well-Off}}$ ``` df_pa_segment_3 = df_pa[df_pa['Segment'] == 3] Y = df_pa_segment_3['Incidence'] X = pd.DataFrame() X['Mean_Price'] = (df_pa_segment_3['Price_1'] + df_pa_segment_3['Price_2'] + df_pa_segment_3['Price_3'] + df_pa_segment_3['Price_4'] + df_pa_segment_3['Price_5']) / 5 model_incidence_segment3 = LogisticRegression(solver = 'sag') model_incidence_segment3.fit(X, Y) model_incidence_segment3.coef_ # Note: use the segment-3 model here, not the segment-2 one. Y_segment_3 = model_incidence_segment3.predict_proba(df_price_range) purchase_pr_segment3 = Y_segment_3[:, 1] pe_segment3 = model_incidence_segment3.coef_[:, 0] * price_range * (1 - purchase_pr_segment3) df_price_elasticities['PE_Segment_3'] = pe_segment3 df_price_elasticities ``` ### ${\textbf{Results}}$ ``` plt.figure(figsize = (9, 6)) plt.plot(price_range, pe, color = 'grey') plt.plot(price_range,
pe_segment_1, color = 'green') plt.plot(price_range, pe_segment2, color = 'r') plt.plot(price_range, pe_segment3, color = 'orange') plt.xlabel('Price') plt.ylabel('Elasticity') plt.title('Price Elasticity of Purchase Probability') ``` ## Purchase Probability with Promotion Feature ### Data Preparation ``` Y = df_pa['Incidence'] X = pd.DataFrame() X['Mean_Price'] = (df_pa['Price_1'] + df_pa['Price_2'] + df_pa['Price_3'] + df_pa['Price_4'] + df_pa['Price_5']) / 5 X['Mean_Promotion'] = (df_pa['Promotion_1'] + df_pa['Promotion_2'] + df_pa['Promotion_3'] + df_pa['Promotion_4'] + df_pa['Promotion_5'] ) / 5 X.head() ``` ## Model Estimation ``` model_incidence_promotion = LogisticRegression(solver = 'sag') model_incidence_promotion.fit(X, Y) model_incidence_promotion.coef_ ``` ## Price Elasticity with Promotion ``` df_price_elasticity_promotion = pd.DataFrame(price_range) df_price_elasticity_promotion = df_price_elasticity_promotion.rename(columns = {0: "Price_Range"}) df_price_elasticity_promotion['Promotion'] = 1 Y_promotion = model_incidence_promotion.predict_proba(df_price_elasticity_promotion) promo = Y_promotion[:, 1] price_elasticity_promo = (model_incidence_promotion.coef_[:, 0] * price_range) * (1 - promo) df_price_elasticities['Elasticity_Promotion_1'] = price_elasticity_promo df_price_elasticities ``` ## Price Elasticity without Promotion ``` df_price_elasticity_promotion_no = pd.DataFrame(price_range) df_price_elasticity_promotion_no = df_price_elasticity_promotion_no.rename(columns = {0: "Price_Range"}) df_price_elasticity_promotion_no['Promotion'] = 0 Y_no_promo = model_incidence_promotion.predict_proba(df_price_elasticity_promotion_no) no_promo = Y_no_promo[: , 1] price_elasticity_no_promo = model_incidence_promotion.coef_[:, 0] * price_range *(1- no_promo) df_price_elasticities['Elasticity_Promotion_0'] = price_elasticity_no_promo plt.figure(figsize = (9, 6)) plt.plot(price_range, price_elasticity_no_promo) plt.plot(price_range, price_elasticity_promo) 
plt.xlabel('Price') plt.ylabel('Elasticity') plt.title('Price Elasticity of Purchase Probability with and without Promotion') ``` ## ${\textbf{Brand Choice}}$ ### Data Preparation ``` brand_choice = df_pa[df_pa['Incidence'] == 1] pd.options.display.max_rows = 100 brand_choice Y = brand_choice['Brand'] brand_choice.columns.values features = ['Price_1', 'Price_2', 'Price_3', 'Price_4', 'Price_5'] X = brand_choice[features] model_brand_choice = LogisticRegression(solver = 'sag', multi_class = 'multinomial') model_brand_choice.fit(X, Y) model_brand_choice.coef_ bc_coef = pd.DataFrame(model_brand_choice.coef_) bc_coef bc_coef = pd.DataFrame(np.transpose(model_brand_choice.coef_)) coefficients = ['Coef_Brand_1', 'Coef_Brand_2', 'Coef_Brand_3', 'Coef_Brand_4', 'Coef_Brand_5'] bc_coef.columns = [coefficients] prices = ['Price_1', 'Price_2', 'Price_3', 'Price_4', 'Price_5'] bc_coef.index = [prices] bc_coef = bc_coef.round(2) bc_coef ```
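Every elasticity series in this notebook follows the same formula, E = coef * price * (1 - P(purchase)), where the coefficient comes from a fitted logistic regression. A minimal pure-Python sketch of that calculation (the coefficient and intercept below are illustrative placeholders, not values from the fitted models):

```python
import math

def purchase_probability(beta, intercept, price):
    """Logistic model: P(purchase) = 1 / (1 + exp(-(intercept + beta * price)))."""
    return 1.0 / (1.0 + math.exp(-(intercept + beta * price)))

def price_elasticity(beta, intercept, price):
    """E = beta * price * (1 - P(purchase))."""
    p = purchase_probability(beta, intercept, price)
    return beta * price * (1.0 - p)

# Illustrative values only: a negative price coefficient, as in the models above.
beta, intercept = -2.35, 4.0
for price in (0.5, 1.5, 2.5):
    print(f"price={price:.2f}  "
          f"P={purchase_probability(beta, intercept, price):.3f}  "
          f"E={price_elasticity(beta, intercept, price):.3f}")
```

As expected, |E| < 1 (inelastic) at low prices and |E| > 1 (elastic) at high prices, which is the pattern the segment plots above illustrate.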
# Lambda School Data Science Module 141
## Statistics, Probability, and Inference

## Prepare - examine what's available in SciPy

As we delve into statistics, we'll be using more libraries - in particular the [stats package from SciPy](https://docs.scipy.org/doc/scipy/reference/tutorial/stats.html).

```
from scipy import stats
dir(stats)

# As usual, lots of stuff here! There's our friend, the normal distribution
norm = stats.norm()
print(norm.mean())
print(norm.std())
print(norm.var())

# And a new friend - t
t1 = stats.t(5)  # 5 is df "shape" parameter
print(t1.mean())
print(t1.std())
print(t1.var())
```

![T distribution PDF with different shape parameters](https://upload.wikimedia.org/wikipedia/commons/4/41/Student_t_pdf.svg)

*(Picture from [Wikipedia](https://en.wikipedia.org/wiki/Student's_t-distribution#/media/File:Student_t_pdf.svg))*

The t-distribution is "normal-ish" - the larger the parameter (which reflects its degrees of freedom - more input data/features will increase it), the closer it is to a true normal.

```
t2 = stats.t(30)  # Will be closer to normal
print(t2.mean())
print(t2.std())
print(t2.var())
```

Why is it different from normal? To better reflect the tendencies of small data and situations with unknown population standard deviation. In other words, the normal distribution is still the nice pure ideal in the limit (thanks to the central limit theorem), but the t-distribution is much more useful in many real-world situations.

History sidenote - this is "Student":

![William Sealy Gosset](https://upload.wikimedia.org/wikipedia/commons/4/42/William_Sealy_Gosset.jpg)

*(Picture from [Wikipedia](https://en.wikipedia.org/wiki/File:William_Sealy_Gosset.jpg))*

His real name was William Sealy Gosset, and he published under the pen name "Student" because he was not an academic. He was a brewer at Guinness, using trial and error to determine the best-yielding varieties of barley.
He's also proof that, even 100 years ago, you don't need official credentials to do real data science!

## Live Lecture - let's perform and interpret a t-test

We'll generate our own data, so we can know and alter the "ground truth" that the t-test should find. We will learn about p-values and how to interpret "statistical significance" based on the output of a hypothesis test.

```
# TODO - during class, but please help!
survey_data = [0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0,
               1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0,
               1, 1, 0, 1, 0, 1, 1, 0, 0, 0]

import numpy as np
import pandas as pd

df = pd.DataFrame(survey_data)
df.describe()

df.plot.hist()
```

### Student t-distribution applet

The applet shows the probability mass under the curve beyond a given t-statistic, assuming an unbiased (mean = 0) distribution. When that tail probability is less than the p-value threshold of 0.05, the null hypothesis can be rejected and the result treated as more than random noise.

https://homepage.stat.uiowa.edu/~mbognar/applets/t.html

```
# Now with confidence!
import scipy

# 0.5 encodes the unbiased null hypothesis: an equal 50% chance of
# liking Coke or Pepsi.
# However, df.mean() = 0.66, i.e. 66% of respondents preferred Pepsi
# over Coke.
scipy.stats.ttest_1samp(survey_data, 0.5)

# At the observed statistic (2.364), the tail probability under the fair
# (0.5) null distribution is about 2.2%, which is less than the 5% threshold.
# Therefore the null hypothesis of pure random chance can be rejected.
# In other words, the survey result is significant at more than 95% confidence.

# The t-statistic is the ratio of the departure of the estimated value of a
# parameter from its hypothesized value to its standard error.

# We want to calculate: tstat = 2.364321853156195
# df.std() = 0.478518

# The sample standard deviation measures the spread of the data and does not
# shrink as the sample grows; the standard error divides it by sqrt(n), so it
# does shrink as the sample size increases.
sample_stderr = 0.478518 / np.sqrt(len(survey_data))
sample_mean = 0.660000
null_hypothesis_mean = 0.5

t_stat = (sample_mean - null_hypothesis_mean) / sample_stderr
print(t_stat)

len(survey_data)

# Science! Reproducibility...
import random

def make_soda_data(n=50):
    # Fair version
    # return pd.DataFrame([random.randint(0, 1) for _ in range(n)])
    # Binomial version (p=0.5 is fair; raise p above 0.5 to simulate bias)
    return pd.DataFrame(np.random.binomial(n=1, p=0.5, size=n))

make_soda_data(n=500).describe()

t_statistics = []
p_values = []
n_experiments = 10  # Number of repeated experiments

for _ in range(n_experiments):
    df = make_soda_data(n=500000)
    ttest = scipy.stats.ttest_1samp(df, 0.5)
    t_statistics.append(ttest.statistic)
    p_values.append(ttest.pvalue)

pd.DataFrame(t_statistics).describe()

pd.DataFrame(p_values).describe()

random.choice([0,1,1])  # picks 1 with probability 2/3 - unfairness favouring 1

np.random.binomial(100, 0.7)  # number of successes in 100 trials, each 70% biased towards 1

np.random.binomial(1, 0.7)  # a single trial, 70% biased towards 1
```
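The hand-computed t-statistic above can be taken one step further to produce the p-value itself. A small sketch on a made-up sample (so it runs standalone), checked against `ttest_1samp`:

```python
import numpy as np
from scipy import stats

# One-sample t-test written out by hand on a small made-up sample:
# t = (mean - mu0) / (s / sqrt(n)), and the two-sided p-value is the
# tail area beyond |t| under a t-distribution with n - 1 degrees of freedom.
data = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1], dtype=float)
n = len(data)

t_stat = (data.mean() - 0.5) / (data.std(ddof=1) / np.sqrt(n))
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)

# Should agree with scipy's built-in test.
result = stats.ttest_1samp(data, 0.5)
print(round(t_stat, 4), round(p_value, 4))
```

Note the `ddof=1` in the standard deviation: scipy uses the sample (Bessel-corrected) estimate, so the manual computation only matches when you do too.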
there may not be much of a difference)

Note that this data will involve *2 sample* t-tests, because you're comparing averages across two groups (republicans and democrats) rather than a single group against a null hypothesis.

Stretch goals:

1. Refactor your code into functions so it's easy to rerun with arbitrary variables
2. Apply hypothesis testing to your personal project data (for the purposes of this notebook you can type a summary of the hypothesis you formed and tested)

```
# TODO - your code here!
import pandas as pd
import numpy as np

df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/voting-records/house-votes-84.data',
                 na_values=['?'], header=None)

column_names = [
    'party',
    'handicapped_infants',
    'water_proj',
    'budget_resolution',
    'physician_fee',
    'elsalvador_aid',
    'religious_schools',
    'antiban_satellite',
    'nicaraguan_aid',
    'mx_missile',
    'immigration',
    'synfuels_cutback',
    'education_spending',
    'superfund_sue',
    'crime',
    'duty_free',
    'safrica_export'
]

df.head()

df.fillna(0, inplace=True)
df.replace('y', 1, inplace=True)
df.replace('n', -1, inplace=True)
df.head()

rvotes = df[df[0] == 'republican']
rvotes.shape

dvotes = df[df[0] == 'democrat']
dvotes.shape

votes = df.groupby(df[0]).sum()
votes.head()

d_issues = votes.loc['democrat'].abs() > votes.loc['republican'].abs()
r_issues = votes.loc['democrat'].abs() < votes.loc['republican'].abs()
d_issues
```

### T Test (Student Test)

The null hypothesis of this t-test is based on a normal distribution centered at 0.5 (no bias).
* mean = 0.5
* std = 1.0
* var = 1.0
* statistic = 1.96, pvalue = 0.05

A one-sample t-test returns (statistic, pvalue):
* statistic = the position of the observed value under the null distribution.
* pvalue = the tail area under the null distribution beyond that statistic.

### One sample t test
* The observed mean (from a single sample) is compared to an expected population mean.
### Two sample t test
* One sample is compared to another sample.

### Dependent sample t test
* Samples are related to members of the other sample.
* Within-group variation.
* Two groups of measurements are based on the same sample observation.
* E.g. before and after treatment of a drug on the same patient.

### Independent t test
* Samples are unrelated to members of the other sample.
* Differences in means between two groups.
* E.g. blood pressure treatment of patients vs. a control group that receives a placebo.

### Independent voting
* Democrat voting is independent of Republican voting on any particular issue.
* Independent t test: ttest_ind

```
from scipy.stats import ttest_ind

result = pd.DataFrame(columns=['issues', 'statistic', 'pvalue'])
for i in range(1, len(column_names)):
    t, p = ttest_ind(rvotes[i], dvotes[i])
    result.loc[i] = [column_names[i], t, p]

result.head()

'''
Adding the democrat & republican issue columns to the existing result dataframe
'''
result['d_issues'] = d_issues
result['r_issues'] = r_issues
result

'''
pvalue > 0.01
Not much difference between democrat and republican support; like a coin flip.
The null hypothesis of random chance is not rejected, so the party split on
this issue cannot be trusted.
'''
result[result['pvalue'] > 0.01]

'''
Democrat-supported issues
'''
result[result['d_issues'] & (result['pvalue'] < 0.01)]

'''
Republican-supported issues
'''
result[result['r_issues'] & (result['pvalue'] < 0.01)]
```
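As a sanity check of the assignment approach, `ttest_ind` behaves as expected on synthetic party-like data (the groups and support probabilities below are made up):

```python
import numpy as np
from scipy.stats import ttest_ind

# Two made-up voting groups coded y=1 / n=-1, with clearly different
# support levels, so the independent two-sample t-test should reject
# the null hypothesis of equal means at p < 0.01.
rng = np.random.default_rng(42)
group_a = rng.choice([1, -1], size=200, p=[0.8, 0.2])
group_b = rng.choice([1, -1], size=200, p=[0.4, 0.6])

t, p = ttest_ind(group_a, group_b)
print(t > 0, p < 0.01)
```

A positive statistic means the first group's mean is higher; on the real data the sign tells you which party supports the issue more.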
# Musicians- Medium

```
# Prerequisites
from pyhive import hive
%load_ext sql
%sql hive://cloudera@quickstart.cloudera:10000/sqlzoo
%config SqlMagic.displaylimit = 20
```

## 6.
**List the names, dates of birth and the instrument played of living musicians who play an instrument which Theo also plays.**

```
%%sql
WITH ins AS (
    SELECT instrument FROM musician
    JOIN performer ON (musician.m_no=performer.perf_is)
    WHERE m_name LIKE 'Theo%'
)
SELECT m_name, born, performer.instrument
FROM musician
JOIN performer ON (musician.m_no=performer.perf_is)
JOIN ins ON (ins.instrument=performer.instrument)
WHERE died IS NULL AND
    m_name NOT LIKE 'Theo%'
ORDER BY m_name
```

## 7.
**List the name and the number of players for the band whose number of players is greater than the average number of players in each band.**

```
%%sql
WITH t AS (
    SELECT DISTINCT band_name, perf_is
    FROM band
    JOIN plays_in ON (band.band_no=plays_in.band_id)
    JOIN performer ON (performer.perf_no=plays_in.player)
),
summ AS (
    SELECT band_name, COUNT(*) nmbr
    FROM t
    GROUP BY band_name
)
SELECT summ.band_name, summ.nmbr
FROM summ
JOIN (SELECT AVG(nmbr) mean FROM summ) a
WHERE summ.nmbr>a.mean
```

## 8.
**List the names of musicians who both conduct and compose and live in Britain.**

```
%%sql
SELECT DISTINCT m_name
FROM musician
JOIN composer ON (musician.m_no=composer.comp_is)
JOIN place ON (musician.living_in=place.place_no)
JOIN performance ON (performance.conducted_by=musician.m_no)
WHERE place_country IN ('England', 'Scotland')
ORDER BY m_name
```

## 9.
**Show the least commonly played instrument and the number of musicians who play it.**

```
%%sql
WITH t AS (
    SELECT instrument, COUNT(*) n
    FROM performer
    JOIN plays_in ON (performer.perf_no=plays_in.player)
    JOIN performance ON (performance.gave=plays_in.band_id)
    GROUP BY instrument
    ORDER BY n
    LIMIT 1
)
SELECT performer.instrument, COUNT(*) n_player
FROM performer
JOIN t ON (performer.instrument=t.instrument)
GROUP BY performer.instrument
```

## 10.
**List the bands that have played music composed by Sue Little; Give the titles of the composition in each case.** ``` %%sql WITH t AS ( SELECT c_no, c_title FROM composition JOIN has_composed ON ( composition.c_no=has_composed.cmpn_no) JOIN composer ON (composer.comp_no=has_composed.cmpr_no) JOIN musician ON (musician.m_no=composer.comp_is) WHERE m_name='Sue Little' ) SELECT band_name, c_title FROM t JOIN performance ON (t.c_no=performance.performed) JOIN band ON (performance.gave=band.band_no) ORDER BY band_name ```
```
project = 'saga-trafikkdata-prod-pz8l'
use_colab_auth = True

# Enter your own project here, e.g. 'saga-olanor-playground-ab12'
bq_job_project = ''

if (use_colab_auth):
    from google.colab import auth
    auth.authenticate_user()
    print('Authenticated')

import warnings
from google.cloud import bigquery

warnings.filterwarnings('ignore')
client = bigquery.Client(project=bq_job_project)
```

This query simply retrieves hourly traffic for the traffic registration point "HØVIK" on 2 February 2022, totalled across both driving directions.

```
query = f"""
SELECT
  name AS tellepunkt,
  EXTRACT(DATE FROM `from` AT TIME ZONE "Europe/Oslo") AS dato,
  EXTRACT(HOUR FROM `from` AT TIME ZONE "Europe/Oslo") AS time,
  total.volumeNumbers.volume as timetrafikk
FROM `{project}.standardized.timetrafikk`
WHERE name = "HØVIK"
  AND DATE(`from`, "Europe/Oslo") = "2022-02-02"
ORDER BY dato, time
"""
print(query)

client.query(query).to_dataframe()
```

The following query fetches data for the counting point on the E18 at Høvik and computes the average daily traffic in January this year per weekday, driving direction, and vehicle length class.
```
query = f"""
SELECT
  name AS tellepunkt,
  flat_directions.heading AS retning,
  EXTRACT(DAYOFWEEK FROM `from` AT TIME ZONE "Europe/Oslo") AS ukedag,
  CAST(AVG(IF(flattened.lengthRange.upperBound = 5.6, flattened.total.volumeNumbers.volume, NULL)) * 24 AS INT64) AS korte_kjoretoy,
  CAST(AVG(IF(flattened.lengthRange.upperBound IS NULL AND flattened.lengthRange.lowerBound = 5.6, flattened.total.volumeNumbers.volume, NULL)) * 24 AS INT64) AS lange_kjoretoy
FROM `{project}.standardized.timetrafikk`,
  UNNEST(byDirection) flat_directions,
  UNNEST(flat_directions.byLengthRange) flattened
WHERE name = "HØVIK"
  AND DATE(`from`, "Europe/Oslo") BETWEEN "2022-01-01" AND "2022-01-31"
  AND (flattened.lengthRange.upperBound = 5.6 OR (flattened.lengthRange.lowerBound = 5.6 AND flattened.lengthRange.upperBound IS NULL))
GROUP BY 1,2,3
ORDER BY 1,2,3
"""
print(query)

client.query(query).to_dataframe()
```

This query shows a simple `UNNEST` to find the total volume and the volume per __lane__.

```
query = f"""
SELECT
  trpId,
  DATETIME(`from`, "Europe/Oslo") AS dato,
  timetrafikk.total.volumeNumbers.volume as totalVolum,
  lanes.lane.laneNumber as felt,
  lanes.total.volumeNumbers.volume as feltVolum
FROM `{project}.standardized.timetrafikk` timetrafikk,
  UNNEST(byLane) lanes
WHERE DATE(`from`, "Europe/Oslo") = "2021-09-15"
  AND trpId = "16219V72812"
ORDER BY `from`
LIMIT 20
"""
print(query)

client.query(query).to_dataframe()
```

This query shows a simple `UNNEST` to find the total volume and the volume per __length class__.
``` query = f""" SELECT trpId, DATETIME(`from`, "Europe/Oslo") AS dato, timetrafikk.total.volumeNumbers.volume as totalVolum, IFNULL(lengthRanges.lengthRange.lowerBound, 0) meterLengdeFra, IFNULL(CAST(lengthRanges.lengthRange.upperBound AS STRING), "ubegrenset") meterLengdeTil, lengthRanges.total.volumeNumbers.volume as lengdeklasseVolum FROM `{project}.standardized.timetrafikk` timetrafikk, UNNEST(byLengthRange) lengthRanges WHERE DATE(`from`, "Europe/Oslo") = "2021-09-15" AND trpId = "16219V72812" ORDER BY `from` LIMIT 20 """ print(query) client.query(query).to_dataframe() ```
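For readers more familiar with pandas, BigQuery's `UNNEST` corresponds to exploding a list-valued column into one row per element. A local sketch (toy data, not the actual `timetrafikk` schema):

```python
import pandas as pd

# Toy stand-in for a repeated field: each row carries a list of per-lane
# records, and "unnesting" produces one row per lane, like UNNEST(byLane).
df = pd.DataFrame({
    "trpId": ["A", "B"],
    "byLane": [
        [{"felt": 1, "feltVolum": 10}, {"felt": 2, "feltVolum": 7}],
        [{"felt": 1, "feltVolum": 4}],
    ],
})

flat = df.explode("byLane", ignore_index=True)
flat = pd.concat(
    [flat.drop(columns="byLane"), pd.json_normalize(flat["byLane"].tolist())],
    axis=1,
)
print(flat)
```

The scalar columns (`trpId`) are repeated for every element of the array, exactly as the cross join with `UNNEST` does in the queries above.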
```
import sys
print(f'Interpreter dir: {sys.executable}')

import os
import warnings
warnings.filterwarnings("ignore")

if os.path.basename(os.getcwd()) == 'notebooks':
    os.chdir('../')
print(f'Working dir: {os.getcwd()}')

%load_ext autoreload
%autoreload 2

import xgboost as xgb
import lightgbm as lgb
import pandas as pd
import numpy as np
from fbprophet import Prophet
from sklearn.preprocessing import StandardScaler
from bayes_opt import BayesianOptimization

# Example of Bayesian hyperparameter optimization, in case you want to use it to search for parameters
def bayes_parameter_opt_lgb(X, y, init_round=15, opt_round=25, n_folds=5, random_seed=6, n_estimators=10000,
                            learning_rate=0.02, output_process=False):
    # prepare data
    train_data = lgb.Dataset(data=X, label=y)
    # parameters
    def lgb_eval(num_leaves, feature_fraction, bagging_fraction, max_depth, lambda_l1, lambda_l2, min_split_gain, min_child_weight):
        params = {'application':'binary', 'num_iterations': n_estimators,
                  'learning_rate': learning_rate, 'early_stopping_round': 100, 'metric':'binary'}
        params["num_leaves"] = int(round(num_leaves))
        params['feature_fraction'] = max(min(feature_fraction, 1), 0)
        params['bagging_fraction'] = max(min(bagging_fraction, 1), 0)
        params['max_depth'] = int(round(max_depth))
        params['lambda_l1'] = max(lambda_l1, 0)
        params['lambda_l2'] = max(lambda_l2, 0)
        params['min_split_gain'] = min_split_gain
        params['min_child_weight'] = min_child_weight
        params["is_unbalance"] = True
        cv_result = lgb.cv(params, train_data, nfold=n_folds, seed=random_seed, stratified=True, verbose_eval=200, metrics=['auc'])
        return max(cv_result['auc-mean'])
    # range
    lgbBO = BayesianOptimization(lgb_eval, {'num_leaves': (24, 45),
                                            'feature_fraction': (0.1, 0.9),
                                            'bagging_fraction': (0.8, 1),
                                            'max_depth': (5, 8.99),
                                            'lambda_l1': (0, 5),
                                            'lambda_l2': (0, 3),
                                            'min_split_gain': (0.001, 0.1),
                                            'min_child_weight': (5, 50)}, random_state=0)
    # optimize
    lgbBO.maximize(init_points=init_round, n_iter=opt_round)
    # output optimization process
    if output_process == True:
        lgbBO.points_to_csv("bayes_opt_result.csv")
    # return best parameters
    return lgbBO  # .res['max']['max_params']
```

### Load data with outliers removed as in the previous version

```
# the clean data already has outliers beyond 2 std removed
raw = pd.read_csv("data/processed/clean_data.csv", index_col=["Timestamp"], parse_dates=["Timestamp"])
df = raw.rename(columns={"is_leakage": "target"})

last_leakage_period = "6H"

# Create target using rolling window of 6 hours
df["target"] = df[["target"]].rolling(last_leakage_period).max().copy()

# Standardize time series
scaler = StandardScaler().fit(df.values[:, 1:])
df.iloc[:, 1:] = scaler.transform(df.values[:, 1:])
df.head()

# Remove 2020 to avoid the coronavirus effect
df_2019 = df[df.index < df.index[-14000]]
df_2019.shape

def split_datasets(df, test_examples=25000):
    df_train = df[df.index < df.index[-test_examples]]
    df_test = df[df.index > df.index[-test_examples]]
    df_val = df_test.iloc[test_examples // 2:].copy()
    df_test = df_test.iloc[:test_examples // 2].copy()
    return df_train, df_test, df_val

test_examples = 25000
df_train, df_test, df_val = split_datasets(df_2019)
df_train.head()
```

## Extract prophet features

```
# These are all the non-zero features provided by prophet
cols = ["ds", 'trend', 'yhat_lower', 'yhat_upper', 'trend_lower', 'trend_upper',
        'additive_terms', 'additive_terms_lower', 'additive_terms_upper',
        'daily', 'daily_lower', 'daily_upper', 'weekly', 'weekly_lower',
        'weekly_upper', 'yearly', 'yearly_lower', 'yearly_upper', 'yhat']

# Choosing only these ones we get pretty much the same metrics as using all the previous ones
small_cols = ["ds", 'trend', 'additive_terms', 'daily', 'weekly', 'yearly', 'yhat']

def extract_prophet_features(df, column, model, cols):
    """Fit a prophet model to the desired column.
    Return its predictions and the fitted model."""
    pdf = df[[column]].reset_index().rename(columns={"Timestamp": "ds", column: "y"})
    m = Prophet(**model) if isinstance(model, dict) else model
    model = m.fit(pdf) if isinstance(model, dict) else m
    return model, model.predict(pdf)[cols].set_index("ds")

# Prophet can be fine-tuned to get better forecasts
prophet_params = dict(yearly_seasonality=True,
                      weekly_seasonality=True,
                      daily_seasonality=True,
                      )

column = "PressureBar"
model_pres, press_train = extract_prophet_features(df_train, column=column, model=prophet_params, cols=small_cols)

column = "m3Volume"
model_vol, volume_train = extract_prophet_features(df_train, column=column, model=prophet_params, cols=small_cols)

# Use the models fit on the training set to extract prophet features on the test set.
# This is a conservative assumption, as the prophet models could be continuously
# trained in production to provide more accurate forecasts.
column = "PressureBar"
_, press_test = extract_prophet_features(df_test, column=column, model=model_pres, cols=small_cols)

column = "m3Volume"
_, volume_test = extract_prophet_features(df_test, column=column, model=model_vol, cols=small_cols)
```

## Add rolling statistics (Optional)

```
def add_rolling_means(df, periods):
    """Add features representing rolling mean aggregation during the provided periods."""
    data = [df.rolling(period).mean() for period in periods]
    df_c = df.copy()
    for new_df, p in zip(data, periods):
        df_c = pd.merge(df_c, new_df, left_index=True, right_index=True,
                        how="inner", suffixes=('', "_%s" % p))
    return df_c

periods = ["1H", "2H", "6H", "12H", "24H"]
press_feats_train = add_rolling_means(press_train, periods)
vol_feats_train = add_rolling_means(volume_train, periods)

press_feats_test = add_rolling_means(press_test, periods)
vol_feats_test = add_rolling_means(volume_test, periods)

# Overwrite with the plain prophet features (the rolling statistics are disabled here)
press_feats_train = press_train
vol_feats_train = volume_train
press_feats_test = press_test
vol_feats_test = volume_test
```

## Create
train and test sets

```
train_features = pd.merge(df_train, press_feats_train, right_index=True, left_index=True, how="inner", suffixes=('', "_press"))
train_features = pd.merge(train_features, vol_feats_train, right_index=True, left_index=True, how="inner", suffixes=('', "_vol"))

test_features = pd.merge(df_test, press_feats_test, right_index=True, left_index=True, how="inner", suffixes=('', "_press"))
test_features = pd.merge(test_features, vol_feats_test, right_index=True, left_index=True, how="inner", suffixes=('', "_vol"))

train_x = train_features.drop("target", axis=1)
train_y = train_features["target"]

test_x = test_features.drop("target", axis=1)
test_y = test_features["target"]

from sklearn.metrics import accuracy_score, roc_auc_score, confusion_matrix, classification_report

def print_report(model):
    y_train_pred = model.predict(train_x)
    y_test_pred = model.predict(test_x)
    # y_val_pred = gbm_2.predict(x_val)
    print("TRAIN SET")
    print(classification_report(train_y.values.astype(int), y_train_pred.astype(int)))
    print("\nTEST SET")
    print(classification_report(test_y.values.astype(int), y_test_pred.astype(int)))
    print("\nTRAIN SET")
    print(confusion_matrix(train_y.values.astype(int), y_train_pred.astype(int)))
    print("\nTEST SET")
    print(confusion_matrix(test_y.values.astype(int), y_test_pred.astype(int)))
```

```
import imblearn
from sklearn.ensemble import BaggingClassifier
from imblearn.under_sampling import NearMiss
from imblearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

n_jobs = 64
pipeline = make_pipeline(NearMiss(version=2, n_jobs=n_jobs),
                         LogisticRegression(max_iter=500, C=0.1, class_weight='balanced',
                                            n_jobs=n_jobs, penalty='elasticnet',
                                            solver="saga", l1_ratio=0.2))
pipeline.fit(train_x.values, train_y.values.astype(int))

print_report(pipeline)

params = {  # 'pos_bagging_fraction': 0.4,
    # "bagging_fraction": 0.5,
    'feature_fraction': 0.3721514930979355,
    'lambda_l1': 3.0138168803582195,
    'lambda_l2': 1.6346495489906907,
    'max_depth': None,
    'min_child_weight': 1.065235087999525,
    'min_split_gain': 0.04432113391500656,
    'num_leaves': 42}

gbm_2 = lgb.LGBMClassifier(objective='binary', metric='binary',
                           n_estimators=50,
                           bagging_fraction=0.5,
                           scale_pos_weight=1000,  # tweaking this has a direct effect on the precision/recall tradeoff
                           is_unbalance=False,
                           **params)

gbm_2 = make_pipeline(imblearn.combine.SMOTEENN(n_jobs=n_jobs), gbm_2)
gbm_2.fit(train_x.values, train_y.values.astype(int))

print_report(gbm_2)

from collections import defaultdict
import ray
import tqdm

def get_cum_metrics(y_true, y_pred):
    metrics = defaultdict(list)
    for i in tqdm.autonotebook.trange(1, len(y_true)):
        mets = classification_report(y_true[:i], y_pred[:i], output_dict=True)
        for k, v in mets.items():
            if k == "1":
                for ki, vi in v.items():
                    metrics[ki].append(vi)
    return metrics

@ray.remote
def calculate_metrics(i, y_true, y_pred):
    return classification_report(y_true[:i], y_pred[:i], output_dict=True)

y_test_pred = gbm_2.predict(test_x)

ray.init(ignore_reinit_error=True)

proc_ids = [calculate_metrics.remote(i, test_y.astype(int), y_test_pred.astype(int)) for i in range(1, len(test_y))]
results = ray.get(proc_ids)

results[0].keys()

cum_mets = pd.DataFrame.from_records([r["1"] for r in results if "1" in r])

cum_mets.iloc[:, :3].plot()
```
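The `scale_pos_weight` comment above can be illustrated in isolation: up-weighting the rare positive class moves the operating point toward higher recall, at the cost of more false positives. A sketch with scikit-learn's `LogisticRegression` standing in for LightGBM, on made-up imbalanced data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up imbalanced data: roughly 5% positives, one weakly informative feature.
rng = np.random.default_rng(0)
n = 2000
y = (rng.random(n) < 0.05).astype(int)
X = (y + rng.normal(size=n)).reshape(-1, 1)

recalls = {}
for w in (1, 20):  # positive-class weight, analogous to scale_pos_weight
    clf = LogisticRegression(class_weight={0: 1, 1: w}).fit(X, y)
    pred = clf.predict(X)
    recalls[w] = pred[y == 1].mean()  # recall on the positive class

print(recalls[1], recalls[20])
```

With weight 1 the classifier predicts almost everything negative; with weight 20 the rare class carries comparable effective mass and recall rises sharply, which is the same knob being turned with `scale_pos_weight=1000` above.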
# PyTorch: Tabular Classify Binary

![mines](../images/mines.png)

```
import torch
import torch.nn as nn
from torch import optim

import torchmetrics
from sklearn.preprocessing import LabelBinarizer, StandardScaler

import aiqc
from aiqc import datum
```

---

## Example Data

Reference [Example Datasets](example_datasets.ipynb) for more information.

```
df = datum.to_pandas('sonar.csv')
df.head()
```

---

## a) High-Level API

Reference [High-Level API Docs](api_high_level.ipynb) for more information including how to work with non-tabular data.

```
splitset = aiqc.Pipeline.Tabular.make(
    df_or_path = df
    , dtype = None
    , feature_cols_excluded = 'object'
    , feature_interpolaters = None
    , feature_window = None
    , feature_encoders = dict(
        sklearn_preprocess = StandardScaler()
        , dtypes = ['float64']
    )
    , feature_reshape_indices = None
    , label_column = 'object'
    , label_interpolater = None
    , label_encoder = dict(sklearn_preprocess = LabelBinarizer(sparse_output=False))
    , size_test = 0.12
    , size_validation = 0.22
    , fold_count = None
    , bin_count = None
)

def fn_build(features_shape, label_shape, **hp):
    model = nn.Sequential(
        nn.Linear(features_shape[0], 12),
        nn.BatchNorm1d(12),  # BatchNorm1d takes the feature count; a second positional arg would set eps
        nn.ReLU(),
        nn.Dropout(p=0.5),

        nn.Linear(12, label_shape[0]),
        nn.Sigmoid()
    )
    return model

def fn_train(model, loser, optimizer, samples_train, samples_evaluate, **hp):
    ## --- Prepare mini batches for analysis ---
    batched_features, batched_labels = aiqc.torch_batcher(
        samples_train['features'], samples_train['labels'],
        batch_size=5, enforce_sameSize=False, allow_1Sample=False
    )

    ## --- Metrics ---
    acc = torchmetrics.Accuracy()
    # Mirrors `keras.model.History.history` object.
    history = {
        'loss':list(), 'accuracy': list(),
        'val_loss':list(), 'val_accuracy':list()
    }

    ## --- Training loop ---
    epochs = hp['epoch_count']
    for epoch in range(epochs):
        ## --- Batch training ---
        for i, batch in enumerate(batched_features):
            # Make raw (unlabeled) predictions.
batch_probability = model(batched_features[i]) batch_loss = loser(batch_probability, batched_labels[i]) # Backpropagation. optimizer.zero_grad() batch_loss.backward() optimizer.step() ## --- Epoch metrics --- # Overall performance on training data. train_probability = model(samples_train['features']) train_loss = loser(train_probability, samples_train['labels']) train_acc = acc(train_probability, samples_train['labels'].to(torch.short)) history['loss'].append(float(train_loss)) history['accuracy'].append(float(train_acc)) # Performance on evaluation data. eval_probability = model(samples_evaluate['features']) eval_loss = loser(eval_probability, samples_evaluate['labels']) eval_acc = acc(eval_probability, samples_evaluate['labels'].to(torch.short)) history['val_loss'].append(float(eval_loss)) history['val_accuracy'].append(float(eval_acc)) return model, history ``` Optional, will be automatically selected based on `analysis_type` if left as `None`. ``` def fn_optimize(model, **hp): optimizer = optim.Adamax( model.parameters() , lr=hp['learning_rate'] ) return optimizer hyperparameters = { "learning_rate": [0.01, 0.005] , "epoch_count": [50] } queue = aiqc.Experiment.make( library = "pytorch" , analysis_type = "classification_binary" , fn_build = fn_build , fn_train = fn_train , splitset_id = splitset.id , repeat_count = 2 , hide_test = False , hyperparameters = hyperparameters , fn_lose = None #optional/ automated , fn_optimize = fn_optimize #optional/ automated , fn_predict = None #optional/ automated , foldset_id = None ) queue.run_jobs() ``` For more information on visualization of performance metrics, reference the [Visualization & Metrics](visualization.html) documentation. --- ## b) Low-Level API Reference [Low-Level API Docs](api_low_level.ipynb) for more information including how to work with non-tabular data and defining optimizers. 
```
dataset = aiqc.Dataset.Tabular.from_pandas(df)

label_column = 'object'

label = dataset.make_label(columns=[label_column])

labelcoder = label.make_labelcoder(
    sklearn_preprocess = LabelBinarizer(sparse_output=False)
)

feature = dataset.make_feature(exclude_columns=[label_column])

encoderset = feature.make_encoderset()

featurecoder_0 = encoderset.make_featurecoder(
    sklearn_preprocess = StandardScaler()
    , dtypes = ['float64']
)

splitset = aiqc.Splitset.make(
    feature_ids = [feature.id]
    , label_id = label.id
    , size_test = 0.22
    , size_validation = 0.12
)

def fn_build(features_shape, label_shape, **hp):
    model = nn.Sequential(
        nn.Linear(features_shape[0], 12),
        nn.BatchNorm1d(12),  # BatchNorm1d takes the feature count; a second positional arg would set eps
        nn.ReLU(),
        nn.Dropout(p=0.5),

        nn.Linear(12, label_shape[0]),
        nn.Sigmoid()
    )
    return model

def fn_train(model, loser, optimizer, samples_train, samples_evaluate, **hp):
    ## --- Prepare mini batches for analysis ---
    batched_features, batched_labels = aiqc.torch_batcher(
        samples_train['features'], samples_train['labels'],
        batch_size=5, enforce_sameSize=False, allow_1Sample=False
    )

    ## --- Metrics ---
    acc = torchmetrics.Accuracy()
    # Mirrors `keras.model.History.history` object.
    history = {
        'loss':list(), 'accuracy': list(),
        'val_loss':list(), 'val_accuracy':list()
    }

    ## --- Training loop ---
    epochs = hp['epoch_count']
    for epoch in range(epochs):
        ## --- Batch training ---
        for i, batch in enumerate(batched_features):
            # Make raw (unlabeled) predictions.
            batch_probability = model(batched_features[i])
            batch_loss = loser(batch_probability, batched_labels[i])

            # Backpropagation.
            optimizer.zero_grad()
            batch_loss.backward()
            optimizer.step()

        ## --- Epoch metrics ---
        # Overall performance on training data.
train_probability = model(samples_train['features']) train_loss = loser(train_probability, samples_train['labels']) train_acc = acc(train_probability, samples_train['labels'].to(torch.short)) history['loss'].append(float(train_loss)) history['accuracy'].append(float(train_acc)) # Performance on evaluation data. eval_probability = model(samples_evaluate['features']) eval_loss = loser(eval_probability, samples_evaluate['labels']) eval_acc = acc(eval_probability, samples_evaluate['labels'].to(torch.short)) history['val_loss'].append(float(eval_loss)) history['val_accuracy'].append(float(eval_acc)) return model, history ``` Optional, will be automatically selected based on `analysis_type` if left as `None`. ``` def fn_optimize(model, **hp): optimizer = optim.Adamax( model.parameters() , lr=hp['learning_rate'] ) return optimizer hyperparameters = { "learning_rate": [0.01, 0.005] , "epoch_count": [50] } algorithm = aiqc.Algorithm.make( library = "pytorch" , analysis_type = "classification_binary" , fn_build = fn_build , fn_train = fn_train , fn_optimize = fn_optimize ) hyperparamset = algorithm.make_hyperparamset( hyperparameters = hyperparameters ) queue = algorithm.make_queue( splitset_id = splitset.id , hyperparamset_id = hyperparamset.id , repeat_count = 1 ) queue.run_jobs() ``` For more information on visualization of performance metrics, reference the [Visualization & Metrics](visualization.html) documentation.
## Import ``` import numpy as np import functions as fc from timeit import default_timer as time from fatiando.gravmag import polyprism from fatiando import mesher, gridder from fatiando.gravmag import prism from fatiando.constants import G, SI2MGAL from scipy.sparse import diags from matplotlib import pyplot as plt import matplotlib.cm as cm from mpl_toolkits.basemap import Basemap from mpl_toolkits.axes_grid1.inset_locator import inset_axes from scipy.interpolate import griddata from scipy import interpolate, signal from fatiando.vis import mpl import cPickle as pickle #%matplotlib inline def plot_rec(bmap, lower_left, upper_left, lower_right, upper_right): xs = [lower_left[0], upper_left[0], lower_right[0], upper_right[0], lower_left[0], lower_right[0], upper_left[0], upper_right[0]] ys = [lower_left[1], upper_left[1], lower_right[1], upper_right[1], lower_left[1], lower_right[1], upper_left[1], upper_right[1]] bmap.plot(xs, ys, latlon = True, color='red') ``` ## Observed Grid and Data ``` with open('carajas_gz.pickle') as r: carajas = pickle.load(r) grid_x = carajas['x'] grid_y = carajas['y'] grid_z = carajas['z'] grid_dobs = carajas['gz'] gz_max = np.max(grid_dobs) gz_min = np.min(grid_dobs) print gz_min, gz_max gz_colorbar_ranges = np.arange(-90., 6.1, 6) print gz_colorbar_ranges shape = (500, 500) font_title = 10 font_ticks = 8 font_labels = 10 height=6. width = 8. 
height_per_width = height/width #plt.figure(figsize=(8,6)) plt.figure(figsize=(4.33,4.33*height_per_width)) #plt.plot() ax=plt.subplot(1,1,1) # plt.tricontourf(np.ravel(grid_y),np.ravel(grid_x),np.ravel(grid_dobs), # levels=gz_colorbar_ranges, cmap='jet', # vmin = -90, vmax = 6) plt.contourf(grid_y.reshape(shape),grid_x.reshape(shape),grid_dobs.reshape(shape), levels=gz_colorbar_ranges, cmap='jet', vmin = -90, vmax = 6) #define colorbar cbar = plt.cm.ScalarMappable(cmap=cm.jet) cbar.set_array(np.ravel(grid_dobs)) cbar.set_clim(-90, 6) cb = plt.colorbar(cbar, shrink=1, boundaries=gz_colorbar_ranges) cb.set_label('Gravity data (mGal)', rotation=90, fontsize=font_labels) cb.ax.tick_params(labelsize=font_ticks) plt.xlim(np.min(grid_y),np.max(grid_y)) plt.ylim(np.min(grid_x),np.max(grid_x)) plt.xticks(fontsize=font_ticks) plt.yticks(fontsize=font_ticks) plt.xlabel('Easting coordinate y (km)', fontsize=font_labels) plt.ylabel('Northing coordinate x (m)', fontsize=font_labels) mpl.m2km() plt.tight_layout(True) # plot the inset #inset = inset_axes(ax, width="40%", height="40%", loc=3, bbox_to_anchor=(65,44,350,350)) inset = inset_axes(ax, width="30%", height="30%", loc=3) m = Basemap(projection='merc',llcrnrlat=-40,urcrnrlat=10,\ llcrnrlon=-82,urcrnrlon=-29,lat_ts=20,resolution='c') m.drawcoastlines(zorder=1) m.fillcontinents(color='white',lake_color='aqua') llcrnrlon = -53 urcrnrlon = -49 llcrnrlat = -8 urcrnrlat = -5 lower_left = (llcrnrlon, llcrnrlat) lower_right= (urcrnrlon, llcrnrlat) upper_left = (llcrnrlon, urcrnrlat) upper_right= (urcrnrlon, urcrnrlat) plot_rec(m, lower_left, upper_left, lower_right, upper_right, ) m.drawmapboundary(fill_color='lightblue') m.drawcountries(linewidth=0.6, linestyle='solid', color='k' ) #plt.savefig('../manuscript/Fig/carajas_real_data.png', dpi=300) plt.savefig('../manuscript/Fig/Figure9.png', dpi=1200) plt.show() ``` ## Equivalent layer Depth ``` # Equivalent Layer depth shape_m = (500, 500) zj = np.ones_like(grid_z)*300 ``` ## Fast 
Eq. Layer Combined with Circulant-Toeplitz (BCCB) ``` # Predicted data s = time() itmax = 50 rho_toep, gzp_toep = fc.fast_eq_bccb(np.ravel(grid_x),np.ravel(grid_y),np.ravel(grid_z), np.ravel(zj),shape_m,np.ravel(grid_dobs),itmax) e = time() tcpu = e - s print tcpu delta_gz = gzp_toep-np.ravel(grid_dobs) ``` ## Property estimative plot ## Data, Predicted data and Residuals plot ## Fast Equivalent layer BCCB plot ``` mean = np.mean(delta_gz) print mean std = np.std(delta_gz) print std print np.min(delta_gz), np.max(delta_gz) res_colorbar_ranges = np.arange(-0.8, 0.81, 0.1) res_colorbar_ranges # plot of the vertical component of the gravitational atraction at z=0 font_title = 10 font_ticks = 8 font_labels = 10 height=12. width = 8. height_per_width = height/width #plt.figure(figsize=(7,10)) plt.figure(figsize=(4.33,4.33*height_per_width)) #plt.subplot(311) #plt.title('A)', y=0.91, x=0.1, fontsize=18) #plt.tricontourf(np.ravel(grid_y),np.ravel(grid_x),np.ravel(grid_dobs),15,cmap='jet') #cb = plt.colorbar() ##plt.axis('scaled') #cb.set_label('$Gz$ ( $mGal$ )', rotation=90, fontsize=14) #plt.xlim(np.min(yi_c),np.max(yi_c)) #plt.ylim(np.min(xi_c),np.max(xi_c)) #plt.xticks(fontsize=14) #plt.yticks(fontsize=14) #plt.xlabel('Easting coordinate y (km)', fontsize=12) #plt.ylabel('Northing coordinate x (m)', fontsize=12) #mpl.m2km() plt.subplot(211) plt.title('(a)', y=0.93, x=-0.20, fontsize=font_title) # plt.tricontourf(np.ravel(grid_y),np.ravel(grid_x),gzp_toep, # levels=gz_colorbar_ranges, cmap='jet', # vmin = -90, vmax = 6) plt.contourf(grid_y.reshape(shape),grid_x.reshape(shape),gzp_toep.reshape(shape), levels=gz_colorbar_ranges, cmap='jet', vmin = -90, vmax = 6) #define colorbar cbar = plt.cm.ScalarMappable(cmap=cm.jet) cbar.set_array(gzp_toep) cbar.set_clim(-90, 6) cb = plt.colorbar(cbar, shrink=1, boundaries=gz_colorbar_ranges) cb.set_label('Gravity data (mGal)', rotation=90, fontsize=font_labels) cb.ax.tick_params(labelsize=font_ticks) 
#plt.xlim(np.min(grid_y),np.max(grid_y)) #plt.ylim(np.min(grid_x),np.max(grid_x)) plt.xticks(fontsize=font_ticks) plt.yticks(fontsize=font_ticks) #plt.xlabel('Easting coordinate y (km)', fontsize=14) plt.ylabel('Northing coordinate x (m)', fontsize=font_labels) mpl.m2km() plt.subplot(212) plt.title('(b)', y=0.93, x=-0.20, fontsize=font_title) # plt.tricontourf(np.ravel(grid_y),np.ravel(grid_x),delta_gz, # levels=res_colorbar_ranges, vmin=-0.8, vmax=.8, cmap='jet') plt.contourf(grid_y.reshape(shape),grid_x.reshape(shape),delta_gz.reshape(shape), levels=res_colorbar_ranges, vmin=-0.6, vmax=0.6, cmap='jet') #define colorbar cbar = plt.cm.ScalarMappable(cmap=cm.jet) cbar.set_array(delta_gz) cbar.set_clim(-0.6, 0.6) cb = plt.colorbar(cbar, shrink=1, boundaries=res_colorbar_ranges, extend='both') cb.set_label('Residuals (mGal)', rotation=90, fontsize=font_labels) cb.ax.tick_params(labelsize=font_ticks) #plt.xlim(np.min(grid_y),np.max(grid_y)) #plt.ylim(np.min(grid_x),np.max(grid_x)) plt.xticks(fontsize=font_ticks) plt.yticks(fontsize=font_ticks) plt.xlabel('Easting coordinate y (km)', fontsize=font_labels) plt.ylabel('Northing coordinate x (m)', fontsize=font_labels) mpl.m2km() plt.tight_layout(True) #plt.savefig('../manuscript/Fig/Carajas_gz_predito.png', dpi=300) plt.savefig('../manuscript/Fig/Figure10.png', dpi=1200) ``` ## Transformation - Upward Continuation ``` # BTTb Eq. Layer Transformation N = shape_m[0]*shape_m[1] z_up = np.zeros_like(grid_x)-5000 s = time() BTTB_up = fc.bttb(np.ravel(grid_x),np.ravel(grid_y),np.ravel(z_up),np.ravel(zj)) cev_up = fc.bccb(shape_m,N,BTTB_up) gzp_bccb_up = fc.fast_forward_bccb(shape_m,N,rho_toep,cev_up) e = time() tcpu = e - s print tcpu # plot of the vertical component of the gravitational atraction at z=0 font_title = 10 font_ticks = 8 font_labels = 10 height=6. width = 8. 
height_per_width = height/width #plt.figure(figsize=(8,6)) plt.figure(figsize=(4.33,4.33*height_per_width)) # plt.tricontourf(np.ravel(grid_y),np.ravel(grid_x),gzp_bccb_up, # 30, cmap='jet', vmin = -90, vmax = 6) plt.contourf(grid_y.reshape(shape),grid_x.reshape(shape),gzp_bccb_up.reshape(shape), 30, cmap='jet', vmin = -90, vmax = 6) #define colorbar cbar = plt.cm.ScalarMappable(cmap=cm.jet) cbar.set_array(gzp_bccb_up) cbar.set_clim(-90, 6) cb = plt.colorbar(cbar, shrink=1, boundaries=gz_colorbar_ranges) cb.set_label('Gravity data (mGal)', rotation=90, fontsize=font_labels) cb.ax.tick_params(labelsize=font_ticks) plt.xlim(np.min(grid_y),np.max(grid_y)) plt.ylim(np.min(grid_x),np.max(grid_x)) plt.xticks(fontsize=font_ticks) plt.yticks(fontsize=font_ticks) plt.xlabel('Easting coordinate y (km)', fontsize=font_labels) plt.ylabel('Northing coordinate x (m)', fontsize=font_labels) mpl.m2km() plt.tight_layout(True) #plt.savefig('../manuscript/Fig/up5000_carajas_500x500.png', dpi=300) plt.savefig('../manuscript/Fig/Figure11.png', dpi=1200) plt.show() ``` ## Transformation - Downward Continuation ## Transformation - Gzz ## Junk Tests
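The speed of `fc.fast_eq_bccb` and `fc.fast_forward_bccb` comes from embedding the (block-)Toeplitz sensitivity matrix in a (block-)circulant one, whose matrix-vector product is diagonalized by the FFT. A 1-D NumPy sketch of that trick (illustrative only -- the `fc` module itself is the authors' own code and is not reproduced here):

```python
import numpy as np

rng = np.random.RandomState(0)
n = 6
c = rng.rand(n)                    # first column of a Toeplitz matrix
r = np.r_[c[0], rng.rand(n - 1)]   # first row (shares the corner entry)
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)]
              for i in range(n)])
v = rng.rand(n)

# Embed T in a circulant matrix of size 2n-1: its first column is the
# Toeplitz column followed by the reversed tail of the row.
emb = np.r_[c, r[1:][::-1]]
v_pad = np.r_[v, np.zeros(n - 1)]

# Circulant matvec = circular convolution = pointwise product in Fourier space.
fast = np.fft.ifft(np.fft.fft(emb) * np.fft.fft(v_pad)).real[:n]
direct = T @ v
print(np.allclose(fast, direct))
```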
<div class="contentcontainer med left" style="margin-left: -50px;"> <dl class="dl-horizontal"> <dt>Title</dt> <dd> Scatter Element</dd> <dt>Dependencies</dt> <dd>Matplotlib</dd> <dt>Backends</dt> <dd><a href='./Scatter.ipynb'>Matplotlib</a></dd> <dd><a href='../bokeh/Scatter.ipynb'>Bokeh</a></dd> <dd><a href='../plotly/Scatter.ipynb'>Plotly</a></dd> </dl> </div> ``` import numpy as np import holoviews as hv from holoviews import dim hv.extension('matplotlib') ``` The ``Scatter`` element visualizes as markers placed in a space of one independent variable, traditionally denoted as *x*, against a dependent variable, traditionally denoted as *y*. In HoloViews, the name ``'x'`` is the default dimension name used in the key dimensions (``kdims``) and ``'y'`` is the default dimension name used in the value dimensions (``vdims``). We can see this from the default axis labels when visualizing a simple ``Scatter`` element: ``` np.random.seed(42) coords = [(i, np.random.random()) for i in range(20)] scatter = hv.Scatter(coords) scatter.opts(color='k', marker='s', s=50) ``` Here the random *y* values are considered to be the 'data' whereas the *x* positions express where those data values were measured (compare this to the different way that [``Points``](./Points.ipynb) elements are defined). In this sense, ``Scatter`` is equivalent to a [``Curve``](./Curve.ipynb) without any lines connecting the samples, and you can use slicing to view the *y* values corresponding to a chosen *x* range: ``` scatter[0:12] + scatter[12:20] ``` A ``Scatter`` element must always have at least one value dimension (to give it a *y* location), but additional value dimensions are also supported. 
Here is an example with two additional quantities for each point, declared as the ``vdims`` ``'z'`` and ``'size'`` visualized as the color and size of the dots, respectively: ``` np.random.seed(10) data = np.random.rand(100,4) scatter = hv.Scatter(data, vdims=['y', 'z', 'size']) scatter = scatter.opts(color='z', s=dim('size')*100) scatter + scatter[0.3:0.7, 0.3:0.7].hist() ``` In the right subplot, the ``hist`` method is used to show the distribution of samples along our first value dimension, (*y*). The marker shape specified above can be any supported by [matplotlib](http://matplotlib.org/api/markers_api.html), e.g. ``s``, ``d``, or ``o``; the other options select the color and size of the marker. **Note**: Although the ``Scatter`` element is superficially similar to the [``Points``](./Points.ipynb) element (they can generate plots that look identical), the two element types are semantically quite different: Unlike ``Scatter``, ``Points`` are used to visualize data where the *y* variable is *independent*. This semantic difference also explains why the histogram generated by the ``hist`` call above visualizes the distribution of a different dimension than it does for [``Points``](./Points.ipynb) (because here *y*, not *z*, is the first ``vdim``). This difference means that ``Scatter`` elements most naturally overlay with other elements that express dependent relationships between the x and y axes in two-dimensional space, such as the ``Chart`` types like [``Curve``](./Curve.ipynb). Conversely, ``Points`` elements either capture (x,y) spatial locations or they express a dependent relationship between an (x,y) location and some other dimension (expressed as point size, color, etc.), and thus they most naturally overlay with [``Raster``](./Raster.ipynb) types like [``Image``](./Image.ipynb). For full documentation and the available style and plot options, use ``hv.help(hv.Scatter).``
``` import numpy as np import scipy import matplotlib.pyplot as plt %matplotlib inline plt.style.use('jf') from jftools import fedvr # 5 points (element boundaries) gives 4 elements # very low order to have only a few basis functions for plot # g = fedvr_grid(4,np.linspace(0,8,5)) g = fedvr.fedvr_grid(4,np.array([0,2,3.5,4.5,6,8])) xnew = np.linspace(g.x[0],g.x[-1],500) fvals = g.get_basis_function_values(xnew) plt.plot(g.x,0*g.x,'ro',ms=10,mew=2.5,zorder=5,label='FEDVR points') for x in [r.x[0] for r in g.regs]+[g.regs[-1].x[-1]]: plt.axvline(x,ls='--',color='0.4',lw=1,label='FE boundaries' if x==g.regs[0].x[0] else None) plt.plot(xnew,fvals.T) plt.margins(0.03) plt.legend() plt.tight_layout(pad=0) from ipywidgets import interact import scipy.sparse.linalg g = fedvr.fedvr_grid(11,np.linspace(-80,80,41)) print("#Grid points:",len(g.x)) M = 1. sigma = 8. k0 = 1. ts, dt = np.linspace(0,300,301,retstep=True) f0 = lambda x: np.exp(-x**2/(2*sigma**2) + 1j*k0*x) H = -g.dx2/(2*M) psis = np.zeros([len(ts),len(g.x)],dtype=complex) psis[0] = g.project_function(f0) U = scipy.sparse.linalg.expm(-1j*dt*H.tocsc()) for ii in range(1,len(ts)): psis[ii] = U.dot(psis[ii-1]) xnew = np.linspace(g.x[0],g.x[-1],500) psiplots = g.evaluate_basis(psis,xnew) @interact(ii=(0,len(ts)-1)) def doplot(ii=0): plt.plot(xnew,abs(psiplots[ii])**2) #plt.plot(g.x,abs(psis[ii])**2/g.wt,'o--') plt.ylim(0,1) sigma = 0.8 fdfs = [(lambda x: np.exp(-x**2/(2*sigma**2)), lambda x: np.exp(-x**2/(2*sigma**2)) * -x/sigma**2, lambda x: np.exp(-x**2/(2*sigma**2)) * (x**2-sigma**2)/sigma**4), (np.sin, np.cos, lambda x: -np.sin(x)), (lambda x: np.sin(np.pi*x/4)**2, lambda x: np.pi/4*np.sin(np.pi*x/2), lambda x: np.pi**2/8*np.cos(np.pi*x/2)), (lambda x: np.pi/4*np.sin(np.pi*x/2), lambda x: np.pi**2/8*np.cos(np.pi*x/2), lambda x: -np.pi**3/16*np.sin(np.pi*x/2)), (lambda x: np.sin(x)**2, lambda x: np.sin(2*x), lambda x: 2*np.cos(2*x)), (lambda x: np.sin(12*x), lambda x: 12*np.cos(12*x), lambda x: -144*np.sin(12*x)) ] g 
= fedvr.fedvr_grid(11,np.linspace(-4,4,5)) xnew = np.linspace(g.x[0],g.x[-1],1000) fig, axs = plt.subplots(1,len(fdfs),figsize=(7.5*len(fdfs),6.5)) for (f,df,d2f),ax in zip(fdfs,axs): cn = g.project_function(f) y = g.evaluate_basis(cn,xnew) dcn = g.dx.dot(cn) dy = g.evaluate_basis(dcn,xnew) dcn2 = g.dx2.dot(cn) dcn2a = g.dx.dot(dcn) dy2 = g.evaluate_basis(dcn2,xnew) dy2a = g.evaluate_basis(dcn2a,xnew) next(ax._get_lines.prop_cycler) ax.plot(xnew,y,label=r'$f(x)$') ax.plot(xnew,f(xnew),'k--') ax.plot(xnew,dy,label=r"$f'(x)$") ax.plot(xnew,df(xnew),'k--') ax.plot(xnew,dy2,label=r"$f''(x)$") ax.plot(xnew,d2f(xnew),'k--') ax.margins(0.03) ax.legend(fontsize=18) fig.tight_layout() dx = g.dx.toarray() dxdx = dx @ dx dx2 = g.dx2.toarray() print(np.linalg.norm(dx+dx.T)) print(np.linalg.norm(dxdx-dxdx.T)) print(np.linalg.norm(dx2-dx2.T)) plt.plot(np.linalg.eigvalsh(-0.5*dx2)) plt.plot(np.linalg.eigvalsh(-0.5*dxdx)) plt.plot(np.arange(dx.shape[0])**2*np.pi**2/(2*8**2)) plt.ylim(0,100) #plt.xlim(0,10); plt.ylim(0,10) f, axs = plt.subplots(1,4,figsize=(23,4)) for ax, arr in zip(axs,[dx,dx2,dxdx,dx2-dxdx]): arr = np.sign(arr)*np.sqrt(abs(arr)) vmax = abs(arr).max() im = ax.imshow(arr,interpolation='none',cmap='coolwarm',vmin=-vmax,vmax=vmax) plt.colorbar(im,ax=ax) fdfs = [(lambda x: np.exp(-x**2/(2*0.5**2)), lambda x: np.exp(-x**2/(2*0.5**2)) * -x/0.5**2), (np.sin, np.cos), (lambda x: np.sin(x)**2, lambda x: np.sin(2*x)), (lambda x: np.sin(12*x), lambda x: 12*np.cos(12*x)) ] g = fedvr.fedvr_grid(11,np.linspace(-4,4,5)) xnew = np.linspace(g.x[0],g.x[-1],1000) fig, axs = plt.subplots(1,len(fdfs),figsize=(7*len(fdfs),5.5)) for (f,df),ax in zip(fdfs,axs): cn = g.project_function(f) y = g.evaluate_basis(cn,xnew) dcn = g.dx.dot(cn) dy = g.evaluate_basis(dcn,xnew) next(ax._get_lines.prop_cycler) ax.plot(xnew,y,label=r'$f(x)$') ax.plot(xnew,f(xnew),'k--') ax.plot(xnew,dy,label=r"$f'(x)$") ax.plot(xnew,df(xnew),'k--') ax.margins(0.03) ax.legend(fontsize=18) fig.tight_layout() f = lambda 
x: np.exp(-x**2/(2*0.5**2)) nfuns = [5,8,11,15] fig, axs = plt.subplots(1,len(nfuns),figsize=(7*len(nfuns),5.5),sharey=True) for nfun, ax in zip(nfuns,axs): g = fedvr.fedvr_grid(nfun,np.linspace(-4,4,5)) xnew = np.linspace(g.x[0],g.x[-1],1000) y = f(xnew) cn = g.project_function(f) ynew = g.evaluate_basis(cn,xnew) ax.plot(xnew,y,label=r'$f(x)$',lw=3) ax.plot(g.x,cn/np.sqrt(g.wt),'o--',lw=1,ms=6,label=r'$f(x_n) = c_n/\sqrt{w_n}$',zorder=4) ax.plot(xnew,ynew,'--',label=r'$\tilde f(x) = \sum_n c_n b_n(x)$') ax.margins(0.02) ax.legend() ax.set_title(r"$N_{fun} = %d$, $\|\tilde f - f\|/\|f\| = %.3e$"%(nfun,np.trapz(abs(y-ynew),xnew)/np.trapz(y,xnew)),verticalalignment='bottom') print(np.trapz(y,xnew)-np.sum(cn*np.sqrt(g.wt))) fig.tight_layout(pad=0.5) ```
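The FEDVR basis functions plotted at the top are, within each element, Lagrange cardinal polynomials on the quadrature points (up to weight normalization and the bridge functions at element boundaries). A small NumPy sketch of the cardinal property those functions rely on -- generic Lagrange interpolation on arbitrary nodes, not the `jftools.fedvr` implementation:

```python
import numpy as np

def lagrange_cardinal(nodes, k, x):
    # L_k(x) = prod_{m != k} (x - x_m) / (x_k - x_m): equals 1 at node k and
    # 0 at every other node, so expansion coefficients are just point values.
    x = np.asarray(x, dtype=float)
    L = np.ones_like(x)
    for m, xm in enumerate(nodes):
        if m != k:
            L = L * (x - xm) / (nodes[k] - xm)
    return L

nodes = np.array([0.0, 0.5, 1.2, 2.0])
vals = np.array([lagrange_cardinal(nodes, k, nodes) for k in range(len(nodes))])
print(np.allclose(vals, np.eye(len(nodes))))
```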
# Steps to Tackle a Time Series Problem (with Codes in Python)

Note: These are just the codes from the article.

## Loading and Handling TS in Pandas

```
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
%matplotlib inline
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 15, 6

#Note: aim is not to teach stock price forecasting. It's a very complex domain and I have almost no clue about it. Here I will demonstrate the various techniques which can be used for time-series forecasting

data = pd.read_csv('AirPassengers.csv')
print data.head()
print '\n Data Types:'
print data.dtypes
```

Reading as datetime format:

```
dateparse = lambda dates: pd.datetime.strptime(dates, '%Y-%m')
# dateparse('1962-01')
data = pd.read_csv('AirPassengers.csv', parse_dates=['Month'], index_col='Month', date_parser=dateparse)
print data.head()

#check datatype of index
data.index

#convert to time series:
ts = data['#Passengers']
ts.head(10)
```

### Indexing TS arrays:

```
#1. Specify the index as a string constant:
ts['1949-01-01']

#2. Import the datetime library and use the 'datetime' function:
from datetime import datetime
ts[datetime(1949,1,1)]
```

### Get range:

```
#1. Specify the entire range:
ts['1949-01-01':'1949-05-01']

#2. Use ':' if one of the indices is at the ends:
ts[:'1949-05-01']
```

Note: both ends are included here

```
#All rows of 1949:
ts['1949']
```

# Checking for stationarity

## Plot the time-series

```
plt.plot(ts)
```

### Function for testing stationarity

```
from statsmodels.tsa.stattools import adfuller

def test_stationarity(timeseries):

    #Determine rolling statistics
    rolmean = pd.rolling_mean(timeseries, window=12)
    rolstd = pd.rolling_std(timeseries, window=12)

    #Plot rolling statistics:
    orig = plt.plot(timeseries, color='blue', label='Original')
    mean = plt.plot(rolmean, color='red', label='Rolling Mean')
    std = plt.plot(rolstd, color='black', label='Rolling Std')
    plt.legend(loc='best')
    plt.title('Rolling Mean & Standard Deviation')
    plt.show(block=False)

    #Perform Dickey-Fuller test:
    print 'Results of Dickey-Fuller Test:'
    dftest = adfuller(timeseries, autolag='AIC')
    dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
    for key,value in dftest[4].items():
        dfoutput['Critical Value (%s)'%key] = value
    print dfoutput

test_stationarity(ts)
```

# Making TS Stationary

## Estimating & Eliminating Trend

```
ts_log = np.log(ts)
plt.plot(ts_log)
```

## Smoothing:

### Moving average

```
moving_avg = pd.rolling_mean(ts_log,12)
plt.plot(ts_log)
plt.plot(moving_avg, color='red')

ts_log_moving_avg_diff = ts_log - moving_avg
ts_log_moving_avg_diff.head(12)
ts_log_moving_avg_diff.dropna(inplace=True)
ts_log_moving_avg_diff.head()
test_stationarity(ts_log_moving_avg_diff)
```

### Exponentially Weighted Moving Average

```
expweighted_avg = pd.ewma(ts_log, halflife=12)
plt.plot(ts_log)
plt.plot(expweighted_avg, color='red')
# expweighted_avg.plot(style='k--')

ts_log_ewma_diff = ts_log - expweighted_avg
test_stationarity(ts_log_ewma_diff)
```

## Eliminating Trend and Seasonality

### Differencing:

```
#Take first difference:
ts_log_diff = ts_log - ts_log.shift()
plt.plot(ts_log_diff)

ts_log_diff.dropna(inplace=True)
test_stationarity(ts_log_diff)
```
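First differencing works here because subtracting the shifted series removes the deterministic trend: for a series with a linear trend, the differences fluctuate around the constant slope. A toy NumPy check (synthetic data, not the AirPassengers series):

```python
import numpy as np

rng = np.random.RandomState(0)
t = np.arange(200)
series = 5.0 + 0.3 * t + rng.normal(scale=0.1, size=t.size)  # trend + noise

# np.diff is the array analogue of ts_log - ts_log.shift()
diff = np.diff(series)

# The level drifts upward, but the differenced series hovers around the slope.
print(series[:20].mean(), series[-20:].mean())
print(diff.mean())
```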
### Decomposition:

```
from statsmodels.tsa.seasonal import seasonal_decompose
decomposition = seasonal_decompose(ts_log)

trend = decomposition.trend
seasonal = decomposition.seasonal
residual = decomposition.resid

plt.subplot(411)
plt.plot(ts_log, label='Original')
plt.legend(loc='best')
plt.subplot(412)
plt.plot(trend, label='Trend')
plt.legend(loc='best')
plt.subplot(413)
plt.plot(seasonal, label='Seasonality')
plt.legend(loc='best')
plt.subplot(414)
plt.plot(residual, label='Residuals')
plt.legend(loc='best')
plt.tight_layout()

ts_log_decompose = residual
ts_log_decompose.dropna(inplace=True)
test_stationarity(ts_log_decompose)
```

# Final Forecasting

```
from statsmodels.tsa.arima_model import ARIMA
```

### ACF & PACF Plots

```
#ACF and PACF plots:
from statsmodels.tsa.stattools import acf, pacf

lag_acf = acf(ts_log_diff, nlags=20)
lag_pacf = pacf(ts_log_diff, nlags=20, method='ols')

#Plot ACF:
plt.subplot(121)
plt.plot(lag_acf)
plt.axhline(y=0, linestyle='--', color='gray')
plt.axhline(y=-1.96/np.sqrt(len(ts_log_diff)), linestyle='--', color='gray')
plt.axhline(y=1.96/np.sqrt(len(ts_log_diff)), linestyle='--', color='gray')
plt.title('Autocorrelation Function')

#Plot PACF:
plt.subplot(122)
plt.plot(lag_pacf)
plt.axhline(y=0, linestyle='--', color='gray')
plt.axhline(y=-1.96/np.sqrt(len(ts_log_diff)), linestyle='--', color='gray')
plt.axhline(y=1.96/np.sqrt(len(ts_log_diff)), linestyle='--', color='gray')
plt.title('Partial Autocorrelation Function')
plt.tight_layout()
```

### AR Model:

```
#AR model:
model = ARIMA(ts_log, order=(2, 1, 0))
results_AR = model.fit(disp=-1)
plt.plot(ts_log_diff)
plt.plot(results_AR.fittedvalues, color='red')
plt.title('RSS: %.4f'% sum((results_AR.fittedvalues-ts_log_diff)**2))
```

### MA Model

```
model = ARIMA(ts_log, order=(0, 1, 2))
results_MA = model.fit(disp=-1)
plt.plot(ts_log_diff)
plt.plot(results_MA.fittedvalues, color='red')
plt.title('RSS: %.4f'% sum((results_MA.fittedvalues-ts_log_diff)**2))
```

### ARIMA Model:

```
model = ARIMA(ts_log, order=(2, 1, 2))
results_ARIMA = model.fit(disp=-1)
plt.plot(ts_log_diff)
plt.plot(results_ARIMA.fittedvalues, color='red')
plt.title('RSS: %.4f'% sum((results_ARIMA.fittedvalues-ts_log_diff)**2))
```

### Convert to original scale:

```
predictions_ARIMA_diff = pd.Series(results_ARIMA.fittedvalues, copy=True)
print predictions_ARIMA_diff.head()

predictions_ARIMA_diff_cumsum = predictions_ARIMA_diff.cumsum()
print predictions_ARIMA_diff_cumsum.head()

predictions_ARIMA_log = pd.Series(ts_log.ix[0], index=ts_log.index)
predictions_ARIMA_log = predictions_ARIMA_log.add(predictions_ARIMA_diff_cumsum, fill_value=0)
predictions_ARIMA_log.head()

plt.plot(ts_log)
plt.plot(predictions_ARIMA_log)

predictions_ARIMA = np.exp(predictions_ARIMA_log)
plt.plot(ts)
plt.plot(predictions_ARIMA)
plt.title('RMSE: %.4f'% np.sqrt(sum((predictions_ARIMA-ts)**2)/len(ts)))
```
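The back-transformation above hinges on cumulative summation being the exact inverse of first differencing: adding the cumsum of the differences to the first log value reconstructs the log series. A toy NumPy check of that identity (hypothetical values, independent of the fitted model):

```python
import numpy as np

x = np.log(np.array([112.0, 118.0, 132.0, 129.0, 121.0]))  # a short log-series
d = x[1:] - x[:-1]  # first differences, like ts_log - ts_log.shift()

# Mirror of predictions_ARIMA_log: first value plus the running sum of diffs.
recovered = np.r_[x[0], x[0] + np.cumsum(d)]
print(np.allclose(recovered, x))
```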
# Gameplan:
1. Set up data
2. Create subset for Excel
3. Make a prediction w Dot Product
4. Analyze results
5. Try a neural net.

```
from theano.sandbox import cuda
%matplotlib inline
import utils; reload(utils)
from utils import *
from __future__ import division, print_function

path = "data/ml-small/ml-latest-small/"
model_path = path + 'models/'
if not os.path.exists(model_path): os.mkdir(model_path)
batch_size = 64
```

# Setup

We'll read in the ratings using the read_csv function from Pandas, which reads a csv file into a pandas dataframe--a 2D size-mutable tabular data structure w/ labeled rows and columns. It's a dict-like container for series objects. We'll return the head of the dataframe, which by default is set to n=5 rows.

```
ratings = pd.read_csv(path+'ratings.csv')
ratings.head()

len(ratings) #how many?
```

We'll read in movie names for display purposes. We'll use set_index, a dataframe function that will give us info from the column label. to_dict will just convert this to a dictionary.

```
movie_names_prelim = pd.read_csv(path+'movies.csv').set_index('movieId').to_dict()
print(movie_names_prelim["title"])

movie_names = pd.read_csv(path+'movies.csv').set_index('movieId')['title'].to_dict()
```

We'll grab users/movies as the unique values from ratings.

```
users = ratings.userId.unique()
movies = ratings.movieId.unique()
print(users[:5])
print(movies[:10])
```

We'll remap the raw user and movie ids to contiguous integers. We use enumerate(), which adds a counter to an iterable--it starts from 0 and will count from there. Contiguous ids are what let us use them as row indices into embeddings.

```
userid2idx = {o:i for i,o in enumerate(users)}
movieid2idx = {o:i for i,o in enumerate(movies)}
```

We'll reindex the ratings/users in our ratings DataFrame by using apply, which applies a function along the input axis of a DataFrame. This'll make sure things in our frame are ordered when we use them.
```
ratings.movieId = ratings.movieId.apply(lambda x: movieid2idx[x])
ratings.userId = ratings.userId.apply(lambda x: userid2idx[x])

user_min, user_max, movie_min, movie_max = (ratings.userId.min(), ratings.userId.max(),
    ratings.movieId.min(), ratings.movieId.max())
user_min, user_max, movie_min, movie_max

# now to get the number of users and movies using nunique(), not unique()
n_users = ratings.userId.nunique()
n_movies = ratings.movieId.nunique()
n_users,n_movies
```

671 and 9066 are the numbers of unique users and movies; n_factors is the number of latent factors in each embedding.

```
n_factors = 50

np.random.seed(42) #seeds the generator--for when we want repeatable results.
```

Now split into training and validation. random.rand creates an array of the given shape and will populate it with values from a uniform distribution over [0,1)--since we compare against a threshold, positions whose value is below it show up as true, and those that are not show up as false.

```
things = [1,2,3,4,5]
rand_toy = np.random.rand(len(things)) < 0.7
print(rand_toy)
print(np.array(things)[rand_toy])
print(np.array(things)[~rand_toy])
```

We use the tilde operator for validation--it's the invert/complement operation. So basically trn gets the ~80% of rows whose random draw is below 0.8, and val gets the rest.

```
msk = np.random.rand(len(ratings)) < 0.8
trn = ratings[msk]
val = ratings[~msk]
```

# Subset for Excel

We now get the most popular movies and most addicted users to copy into excel.
We'll use pandas groupby, which groups a Series/DataFrame by a key column (or a list of columns).

```
g=ratings.groupby('userId')['rating'].count() #count returns a series w/ number of non-null observations over the requested axis
topUsers=g.sort_values(ascending=False)[:15] #top 15 users

g=ratings.groupby('movieId')['rating'].count()
topMovies=g.sort_values(ascending=False)[:15]

top_r = ratings.join(topUsers, rsuffix='_r', how='inner', on='userId')
#rsuffix is used for the right frame's overlapping columns
#inner will form an intersection of the calling frame's index with the other frame's index, preserving the order of the calling one
#on is the column in the caller (ratings) to join the index in other
top_r = top_r.join(topMovies, rsuffix='_r', how='inner', on='movieId')

pd.crosstab(top_r.userId, top_r.movieId, top_r.rating, aggfunc=np.sum) #creates a crosstab of 2+ factors
```

# Dot Product

```
user_in = Input(shape=(1,), dtype='int64', name='user_in')
u = Embedding(n_users, n_factors, input_length=1, W_regularizer=l2(1e-4))(user_in) # turns positive integers (indexes) into dense vectors of a fixed size. input_dim is n_users, output is n_factors.
movie_in = Input(shape=(1,),dtype='int64',name='movie_in') m = Embedding(n_movies,n_factors,input_length=1,W_regularizer=l2(1e-4))(movie_in) x = merge([u,m], mode='dot') x = Flatten()(x) model = Model([user_in,movie_in],x) #we're using functional api and giving multiple inputs to the model model.compile(Adam(0.001),loss='mse') model.fit([trn.userId,trn.movieId],trn.rating,batch_size=64,nb_epoch=1, validation_data=([val.userId,val.movieId],val.rating)) model.optimizer.lr=0.01 model.fit([trn.userId,trn.movieId],trn.rating,batch_size=64,nb_epoch=1, validation_data=([val.userId,val.movieId],val.rating)) model.optimizer.lr=0.001 model.fit([trn.userId,trn.movieId],trn.rating,batch_size=64,nb_epoch=6, validation_data=([val.userId,val.movieId],val.rating)) ``` # Bias We need a single bias for each user / each movie representing how positive/negative each user is, and how good each movie is. To get this, we'll create an embedding w one output for each movie and each user, and add it to our output. 
``` def embedding_input(name, n_in, n_out, reg): inp = Input(shape=(1,),dtype='int64',name=name) return inp, Embedding(n_in,n_out,input_length=1, W_regularizer=l2(reg))(inp) user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4) movie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4) def create_bias(inp, n_in): x = Embedding(n_in, 1, input_length=1)(inp) #n_in is the input to embedding layer, output is 1 number return Flatten()(x) ub = create_bias(user_in,n_users) mb = create_bias(movie_in,n_movies) x = merge([u,m],mode='dot') x = Flatten()(x) x = merge([x,ub],mode='sum') #add user bias to our dot product x = merge([x,mb],mode='sum') #add movie bias to dot product model = Model([user_in,movie_in],x) model.compile(Adam(0.001),loss='mse') model.fit([trn.userId,trn.movieId],trn.rating, batch_size=64, nb_epoch=1, validation_data=([val.userId,val.movieId],val.rating)) model.optimizer.lr=0.01 model.fit([trn.userId,trn.movieId],trn.rating,batch_size=64,nb_epoch=6, validation_data=([val.userId,val.movieId],val.rating)) model.optimizer.lr=0.001 model.fit([trn.userId,trn.movieId],trn.rating,batch_size=64,nb_epoch=10, validation_data=([val.userId,val.movieId],val.rating)) model.fit([trn.userId,trn.movieId],trn.rating,batch_size=64,nb_epoch=5, validation_data=([val.userId,val.movieId],val.rating)) model.save_weights(model_path+'bias.h5') model.load_weights(model_path+'bias.h5') model.predict([np.array([3]), np.array([6])]) ``` # Results ``` g=ratings.groupby('movieId')['rating'].count() topMovies=g.sort_values(ascending=False)[:2000] topMovies=np.array(topMovies.index) ``` We'll look at the movie bias term--create a model (inputs associated w outputs) using fxnl api. 
Input is movie id and output is bias.

```
get_movie_bias = Model(movie_in, mb)
movie_bias = get_movie_bias.predict(topMovies)
movie_ratings = [(b[0], movie_names[movies[i]]) for i,b in zip(topMovies,movie_bias)]

sorted(movie_ratings, key=itemgetter(0))[:15] #lowest-bias (worst-rated) movies

sorted(movie_ratings, key=itemgetter(0), reverse=True)[:15] #highest-bias (best-rated) movies

get_movie_emb = Model(movie_in, m)
movie_emb = np.squeeze(get_movie_emb.predict([topMovies]))
movie_emb.shape

from sklearn.decomposition import PCA
pca = PCA(n_components=3) #reduce to 3 vectors/embeddings
movie_pca = pca.fit(movie_emb.T).components_

fac0 = movie_pca[0]
movie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac0, topMovies)]
```

First component of the 3 vectors:

```
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]

sorted(movie_comp, key=itemgetter(0))[:10]

fac1 = movie_pca[1]
movie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac1, topMovies)]
```

Second reduced component:

```
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]

sorted(movie_comp, key=itemgetter(0))[:10]

fac2 = movie_pca[2]
movie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac2, topMovies)]

sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]

sorted(movie_comp, key=itemgetter(0))[:10]

import sys
stdout, stderror = sys.stdout, sys.stderr #save stdout, stderr
reload(sys)
sys.setdefaultencoding('utf-8')
sys.stdout, sys.stderr = stdout, stderror

start=50; end=100
X = fac0[start:end]
Y = fac2[start:end]
plt.figure(figsize=(15,15))
plt.scatter(X, Y)
for i,x,y in zip(topMovies[start:end], X, Y):
    plt.text(x, y, movie_names[movies[i]], color=np.random.rand(3)*0.7, fontsize=14)
plt.show()
```

# Neural Net

```
user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4)
movie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4)

x = merge([u,m], mode='concat') #we concatenate user/movie embeddings into a vector to feed into the NN
x = Flatten()(x)
x = Dropout(0.3)(x)
x = Dense(70, activation='relu')(x)
x = Dropout(0.75)(x)
x = Dense(1)(x)
nn =
Model([user_in,movie_in],x) nn.compile(Adam(0.001),loss='mse') nn.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=8, validation_data=([val.userId, val.movieId], val.rating)) ``` Looks like the neural net is a good way to go!
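Stripped of the framework, the bias model trained above is just a dot product of two embedding rows plus two scalar biases. A minimal NumPy sketch of that forward pass (random, untrained weights -- not the Keras model's learned parameters):

```python
import numpy as np

rng = np.random.RandomState(42)
n_users, n_movies, n_factors = 671, 9066, 50

U = rng.normal(scale=0.05, size=(n_users, n_factors))   # user embedding matrix
M = rng.normal(scale=0.05, size=(n_movies, n_factors))  # movie embedding matrix
user_bias = np.zeros(n_users)
movie_bias_vec = np.zeros(n_movies)

def predict_rating(u, m):
    # Same computation as the merged dot-product-plus-bias Keras graph.
    return U[u] @ M[m] + user_bias[u] + movie_bias_vec[m]

print(predict_rating(3, 6))  # analogue of model.predict([np.array([3]), np.array([6])])
```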
```
### Install PyTorch Geometric (a graph neural network library) (only first time)
!pip install -q torch-scatter -f https://pytorch-geometric.com/whl/torch-1.9.0+cu102.html
!pip install -q torch-sparse -f https://pytorch-geometric.com/whl/torch-1.9.0+cu102.html
!pip install -q git+https://github.com/rusty1s/pytorch_geometric.git

# Check the libraries
import numpy as np
import pandas as pd
import rdkit
from rdkit import Chem
import torch
from torch_geometric.data import Data as TorchGeometricData
print(Chem.__doc__)

# Load the SDF file
import glob
import re

suppl = Chem.SDMolSupplier("../../ForMolPredict/SDF_files/SOL/SOL_AllMOL.sdf", removeHs=False)
mol_list = [x for x in suppl]
mol_num = len(mol_list)
print("there are {} molecules".format(mol_num))

# Split the data
from sklearn.model_selection import train_test_split
train_val, test = train_test_split(mol_list, random_state=0)
train, val = train_test_split(train_val)

## Convert from the pandas DataFrame to pairs of RDKit mol objects X and labels Y
Xwf={dataset_keyword:[] for dataset_keyword in ["train","valid","test"]}
Ywf={dataset_keyword:[] for dataset_keyword in ["train","valid","test"]}

# Map the '(A) low' class to 0 and everything else to 1
for mol in train:
    Xwf["train"].append(mol)
    if mol.GetProp('SOL_class')=='(A) low':
        Ywf["train"].append(0.0)
    else:
        Ywf["train"].append(1.0)

for mol in val:
    Xwf["valid"].append(mol)
    if mol.GetProp('SOL_class')=='(A) low':
        Ywf["valid"].append(0.0)
    else:
        Ywf["valid"].append(1.0)

for mol in test:
    Xwf["test"].append(mol)
    if mol.GetProp('SOL_class')=='(A) low':
        Ywf["test"].append(0.0)
    else:
        Ywf["test"].append(1.0)

def one_of_k_encoding(x, allowable_set):
    if x not in allowable_set:
        raise Exception(
            "input {0} not in allowable set{1}:".format(x, allowable_set))
    return list(map(lambda s: x == s, allowable_set))

def one_of_k_encoding_unk(x, allowable_set):
    """Maps inputs not in the allowable set to the last element."""
    if x not in allowable_set:
        x = allowable_set[-1]
    return list(map(lambda s: x == s, allowable_set))

def get_atom_features(atom, en_list=None, explicit_H=False,
use_sybyl=False, use_electronegativity=False, use_gasteiger=False, degree_dim=17): if use_sybyl: atom_type = ordkit._sybyl_atom_type(atom) atom_list = ['C.ar', 'C.cat', 'C.1', 'C.2', 'C.3', 'N.ar', 'N.am', 'N.pl3', 'N.1', 'N.2', 'N.3', 'N.4', 'O.co2', 'O.2', 'O.3', 'S.O', 'S.o2', 'S.2', 'S.3', 'F', 'Si', 'P', 'P3', 'Cl', 'Br', 'Mg', 'Na', 'Ca', 'Fe', 'As', 'Al', 'I', 'B', 'V', 'K', 'Tl', 'Yb', 'Sb', 'Sn', 'Ag', 'Pd', 'Co', 'Se', 'Ti', 'Zn', 'H', 'Li', 'Ge', 'Cu', 'Au', 'Ni', 'Cd', 'In', 'Mn', 'Zr', 'Cr', 'Pt', 'Hg', 'Pb', 'Unknown'] else: atom_type = atom.GetSymbol() atom_list = ['C', 'N', 'O', 'S', 'F', 'Si', 'P', 'Cl', 'Br', 'Mg', 'Na', 'Ca', 'Fe', 'As', 'Al', 'I', 'B', 'V', 'K', 'Tl', 'Yb', 'Sb', 'Sn', 'Ag', 'Pd', 'Co', 'Se', 'Ti', 'Zn', 'H', 'Li', 'Ge', 'Cu', 'Au', 'Ni', 'Cd', 'In', 'Mn', 'Zr', 'Cr', 'Pt', 'Hg', 'Pb', 'Unknown'] results = one_of_k_encoding_unk(atom_type, atom_list) + \ one_of_k_encoding(atom.GetDegree(), list(range(degree_dim))) + \ one_of_k_encoding_unk(atom.GetImplicitValence(), [0, 1, 2, 3, 4, 5, 6]) + \ [atom.GetFormalCharge(), atom.GetNumRadicalElectrons()] + \ one_of_k_encoding_unk(atom.GetHybridization(), [Chem.rdchem.HybridizationType.SP, Chem.rdchem.HybridizationType.SP2, Chem.rdchem.HybridizationType.SP3, Chem.rdchem.HybridizationType.SP3D, Chem.rdchem.HybridizationType.SP3D2]) + \ [atom.GetIsAromatic()] if use_electronegativity: results = results + [en_list[atom.GetAtomicNum() - 1]] if use_gasteiger: gasteiger = atom.GetDoubleProp('_GasteigerCharge') if np.isnan(gasteiger) or np.isinf(gasteiger): gasteiger = 0 # because the mean is 0 results = results + [gasteiger] # In case of explicit hydrogen(QM8, QM9), avoid calling `GetTotalNumHs` if not explicit_H: results = results + one_of_k_encoding_unk(atom.GetTotalNumHs(), [0, 1, 2, 3, 4]) return np.array(results, dtype=np.float32) def get_bond_features(bond): results=one_of_k_encoding_unk(bond.GetBondType(),[Chem.rdchem.BondType.SINGLE, Chem.rdchem.BondType.DOUBLE, 
Chem.rdchem.BondType.TRIPLE, Chem.rdchem.BondType.AROMATIC]) return np.array(results, dtype=np.float32) import torch from rdkit import Chem from torch_geometric.data import Data as TorchGeometricData from torch_geometric.data import DataLoader def get_edge_features(mol): edge_list= [] num_bond_features=0 for bond in mol.GetBonds(): i = bond.GetBeginAtomIdx() j = bond.GetEndAtomIdx() bond_features = get_bond_features(bond) num_bond_features=len(bond_features) edge_list += [([i, j],bond_features), ([j, i],bond_features)] return edge_list, num_bond_features #modified mol2geodata #規格化関数 def rescaling(features): norm_features = [] max_value = max(features) min_value = min(features) for feature in features: norm_feature = (feature - min_value)/(max_value - min_value) norm_features.append(norm_feature) return norm_features def get_WF_results(mol): mol_props = ['Volume', 'Energy', 'HOMO', 'LUMO', 'HLgap', 'Mcharge_ave', 'Mcharge_var', 'Lcharge_ave', 'Lcharge_var', 'dipole', 'Atom_num', 'Mass', 'Density'] atom_props = ['Mcharges', 'Lcharges', 'Mass', 'X_dem', 'Y_dem', 'Z_dem'] mol_datalist = [] WF_results = [] for mol_prop in mol_props: mol_datalist.append(mol.GetDoubleProp(mol_prop)) for atom in mol.GetAtoms(): atom_data = [] for atom_prop in atom_props: atom_data.append(atom.GetDoubleProp(atom_prop)) molatom_data = mol_datalist + atom_data WF_results.append(molatom_data) return np.array(WF_results, dtype=np.float32) def mol2geodataWF(mol,y): smile = Chem.MolToSmiles(mol) atom_features =[get_atom_features(atom) for atom in mol.GetAtoms()] WF_results = get_WF_results(mol) atom_features = np.append(atom_features, WF_results, axis=1) num_atom_features=len(atom_features[0]) atom_features = torch.FloatTensor(atom_features).view(-1, len(atom_features[0])) edge_list,num_bond_features = get_edge_features(mol) edge_list=sorted(edge_list) edge_indices=[e for e,v in edge_list] edge_attributes=[v for e,v in edge_list] edge_indices = torch.tensor(edge_indices) edge_indices = 
edge_indices.t().to(torch.long).view(2, -1) edge_attributes = torch.FloatTensor(edge_attributes) #print(num_atom_features,num_bond_features) return TorchGeometricData(x=atom_features, edge_index=edge_indices, edge_attr=edge_attributes, num_atom_features=num_atom_features,num_bond_features=num_bond_features,smiles=smile, y=y) train_data_list = [mol2geodataWF(mol,y) for mol,y in zip(Xwf["train"],Ywf["train"])] train_loader = DataLoader(train_data_list, batch_size=128,shuffle=True) valid_data_list = [mol2geodataWF(mol,y) for mol,y in zip(Xwf["valid"],Ywf["valid"])] valid_loader = DataLoader(valid_data_list, batch_size=128,shuffle=True) test_data_list = [mol2geodataWF(mol,y) for mol,y in zip(Xwf["test"],Ywf["test"])] test_loader = DataLoader(test_data_list, batch_size=128,shuffle=True) num_atom_features = train_data_list[0].num_atom_features num_bond_features = train_data_list[0].num_bond_features print("num_atom_features =",num_atom_features) print("num_bond_features =",num_bond_features) # ニューラルネットワークの構造の定義 import torch import torch.nn.functional as F from torch.nn import Sequential, Linear, ReLU, GRU import torch_geometric.transforms as T from torch_geometric.nn import NNConv, Set2Set dim = 64 # 中間層の次元 class Net(torch.nn.Module): def __init__(self): super(Net, self).__init__() self.lin0 = torch.nn.Linear(num_atom_features, dim) nn = Sequential(Linear(num_bond_features, 128), ReLU(), Linear(128, dim * dim)) self.conv = NNConv(dim, dim, nn, aggr='mean') self.gru = GRU(dim, dim) self.set2set = Set2Set(dim, processing_steps=3) self.lin1 = torch.nn.Linear(2 * dim, dim) self.lin2 = torch.nn.Linear(dim, 1) def forward(self, data): out = F.relu(self.lin0(data.x)) h = out.unsqueeze(0) for i in range(3): m = F.relu(self.conv(out, data.edge_index, data.edge_attr)) out, h = self.gru(m.unsqueeze(0), h) out = out.squeeze(0) out = self.set2set(out, data.batch) out = F.relu(self.lin1(out)) out = self.lin2(out) return out.view(-1) # ニューラルネットワークの学習パラメータの定義 device = 
torch.device('cuda' if torch.cuda.is_available() else 'cpu') model = Net().to(device) optimizer = torch.optim.Adam(model.parameters(), lr=0.001) scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',factor=0.7, patience=5,min_lr=0.00001) loss_function = torch.nn.BCEWithLogitsLoss() def train_step(epoch): model.train() loss_all = 0 for data in train_loader: data = data.to(device) optimizer.zero_grad() loss = loss_function(model(data), data.y) loss.backward() loss_all += loss.item() * data.num_graphs optimizer.step() return loss_all / len(train_loader.dataset) def test_step(loader): model.eval() loss_all = 0 for data in loader: data = data.to(device) loss = loss_function(model(data), data.y) loss_all += loss.item() * data.num_graphs return loss_all / len(loader.dataset) # 学習開始 Epoch_list = [] Loss_list = [] Val_list = [] Test_list =[] best_valid_loss = None for epoch in range(200): lr = scheduler.optimizer.param_groups[0]['lr'] loss = train_step(epoch) valid_loss = test_step(valid_loader) scheduler.step(valid_loss) if best_valid_loss is None or valid_loss <= best_valid_loss: test_loss = test_step(test_loader) best_valid_loss = valid_loss Epoch_list.append(epoch) Loss_list.append(loss) Val_list.append(valid_loss) Test_list.append(test_loss) print('Epoch: {:03d}, LR: {:7f}, Loss: {:.7f}, Validation loss: {:.7f}, ' 'Test loss: {:.7f}'.format(epoch, lr, loss, valid_loss, test_loss)) #グラフ化 import matplotlib as mpl import matplotlib.pyplot as plt plt.plot(Epoch_list, Loss_list, label = 'Loss') plt.plot(Epoch_list, Val_list, label = 'Validation loss') plt.plot(Epoch_list, Test_list, label = 'Test loss') # 凡例を表示 plt.legend(loc=5) # 軸ラベル plt.xlabel('Epoch') plt.ylabel('Loss') plt.show() ```
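The atom featurizer above is built from small one-hot helpers. As a standalone sketch of how they behave (the `atoms` list here is a toy subset for illustration, not the full element list used in the notebook):

```python
# `one_of_k_encoding_unk` maps anything outside the allowable set to the last
# ("Unknown") slot, so every feature vector has the same fixed length.

def one_of_k_encoding_unk(x, allowable_set):
    """Map inputs not in the allowable set to the last element."""
    if x not in allowable_set:
        x = allowable_set[-1]
    return [x == s for s in allowable_set]

atoms = ['C', 'N', 'O', 'Unknown']
print(one_of_k_encoding_unk('N', atoms))   # [False, True, False, False]
print(one_of_k_encoding_unk('Xx', atoms))  # [False, False, False, True]
```

Concatenating several such encodings (symbol, degree, valence, hybridization, ...) is what produces the fixed-width atom feature vector consumed by `lin0` in the network.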
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

import sys
sys.path.append('../../pyutils')

import metrics
import utils
```

# Introduction

In unsupervised learning, one has a set of $N$ observations $x_i \in \mathbb{R}^p$ with joint density $P(X)$. The goal is to infer properties of this density. At very low dimension ($p \leq 3$), several methods can directly estimate $P(X)$ for any $X$, but these methods fail in high dimensions. This can be used to:

- Identify low-dimensional manifolds with high data density.
- Cluster analysis: find multiple convex regions that contain modes of $P(X)$.
- Mixture modeling: estimate $P(X)$ with a mixture of density functions.
- Association rules: construct simple rules that describe regions of high density.

In unsupervised learning there is no measure of success, so it is difficult to validate the conclusions of the model.

# Association Rules

The general goal of association rules is to find values $v_1, \text{...}, v_L$ such that the probability density $P(X=v_l)$ is relatively large. This problem is also called mode finding or bump hunting. For problems with a large number of values, the number of observations such that $X=v_l$ is usually too small to be reliable. One solution is to seek regions of the $X$-space instead. Let $s_j$ be a subset of the values taken by feature $X_j$. The goal is to find $s_1,\text{...},s_p$ such that the following value is large:

$$P \left( \bigcap_{j=1}^p (X_j \in s_j) \right)$$

## Market Basket Analysis

This problem is usually not feasible for large $p$ and $N$. Market Basket Analysis is a special case where all variables are binary: $X_j \in \{ 0, 1 \}$. The goal is to find a subset of integers $\mathcal{K} \subset \{ 1, \text{...}, p \}$ such that the following value is large:

$$P \left( \bigcap_{k \in \mathcal{K}} (X_k = 1) \right)$$

$\mathcal{K}$ is called an item set. This value is called the support or prevalence $T(\mathcal{K})$.
It can be estimated from the dataset:

$$T(\mathcal{K}) = \frac{1}{N} \sum_{i=1}^N \prod_{k \in \mathcal{K}} x_{ik}$$

The goal of the algorithm is to find, given a lower bound $t$ on the support, all item sets with support greater than $t$:

$$\{ \mathcal{K}_l | T(\mathcal{K}_l) > t \}$$

There are $2^p$ possible item sets; fortunately, there are algorithms that find these item sets without examining all the possibilities.

## The Apriori Algorithm

This algorithm can handle very large $N$ and $p$ as long as the number of item sets with support greater than $t$ is small enough. It uses the following property:

$$\mathcal{L} \subseteq \mathcal{K} \implies T(\mathcal{L}) \geq T(\mathcal{K})$$

It works by doing only a few passes through the training set. The first pass computes the support of all single-item sets and discards those with support lower than $t$. Each following pass extends the surviving item sets with the single items remaining after the first pass, again discarding those with support lower than $t$. The process stops when all extended item sets are discarded.

Each obtained item set $\mathcal{K}$ is then partitioned into two subsets $A$ and $B$ such that $A \cup B = \mathcal{K}$, written $A \implies B$. The support of the rule, written $T(A \implies B) \approx P(A \cap B)$, is the same as $T(\mathcal{K})$: the proportion of observations having both $A$ and $B$.

The confidence, written $C(A \implies B)$, is the proportion of observations having $B$ among all those having $A$:

$$C(A \implies B) = \frac{T(A \implies B)}{T(A)} \approx P(B|A)$$

The lift, written $L(A \implies B)$, measures how much more likely $B$ is in the presence of $A$ than on its own.
$$L(A \implies B) = \frac{C(A \implies B)}{T(B)} \approx \frac{P(B|A)}{P(B)}$$

## Example

Given a dataset of 7500 transactions from a French retail store, find association rules in it.

Dataset: [Link](https://drive.google.com/file/d/1y5DYn0dGoSbC22xowBq2d4po6h1JxcTQ/view)

```
import os
from apyori import apriori
from google_drive_downloader import GoogleDriveDownloader as gdd

FILE_ID = '1y5DYn0dGoSbC22xowBq2d4po6h1JxcTQ'
FILE_PATH = '/tmp/store_data.csv'
if not os.path.isfile(FILE_PATH):
    gdd.download_file_from_google_drive(file_id=FILE_ID, dest_path=FILE_PATH)

data = pd.read_csv(FILE_PATH, header=None)
data.head()

records = []
for i in range(data.shape[0]):
    records.append([str(data.values[i, j]) for j in range(data.shape[1])
                    if str(data.values[i, j]) != 'nan'])

rules = apriori(records, min_support=0.0235, min_confidence=0.1,
                min_lift=1.5, min_length=2)
res = list(rules)
print(len(res))
for item in res:
    s = ''
    stats = item.ordered_statistics[0]
    for x in stats.items_base:
        s += str(x) + '; '
    s += '=> '
    for x in stats.items_add:
        s += str(x) + '; '
    s += ('S = {:.4f}, C = {:.4f}, L = {:.4f}'.format(
        item.support, stats.confidence, stats.lift))
    print(s)


class AprioriRule:
    def __init__(self, a, b, support, confidence, lift):
        self.a = a
        self.b = b
        self.support = support
        self.confidence = confidence
        self.lift = lift


class Apriori:
    def __init__(self, min_support, min_confidence, min_lift, min_length):
        self.min_support = min_support
        self.min_confidence = min_confidence
        self.min_lift = min_lift
        self.min_length = min_length

    def fit(self, data):
        # 1) build dict of words
        self.lwords = []
        self.dwords = dict()
        for entry in data:
            for w in entry:
                if w not in self.dwords:
                    self.dwords[w] = len(self.lwords)
                    self.lwords.append(w)

        # 2) build data matrix
        self.X = np.zeros((len(data), len(self.lwords)))
        for i in range(len(data)):
            for w in data[i]:
                self.X[i, self.dwords[w]] = 1

        # 3) first pass through dataset
        rules = []
        res = []
        for j in range(self.X.shape[1]):
            items = [j]
            s = self.get_support(items)
            if s >= self.min_support:
                res.append(items)
                rules.append(items)
        res1 = list(res)

        # 4) other passes through dataset until no itemset found
        while len(res) > 0:
            res_next = []
            for items in res:
                for other in res1:
                    if other[0] > items[-1]:
                        items_ext = items + other
                        s = self.get_support(items_ext)
                        if s >= self.min_support:
                            res_next.append(items_ext)
                            rules.append(items_ext)
            res = res_next

        # 5) remove lists too short
        rules = [x for x in rules if len(x) >= self.min_length]

        # 6) divide rules into A => B rules
        rules_ex = []
        for r in rules:
            rules_ex += self.split_rule(r)
        rules = rules_ex

        # 7) compute all rules stats
        rules = [self.build_rule(r) for r in rules]

        # 8) filter rules
        rules = [r for r in rules
                 if r.confidence > self.min_confidence and r.lift > self.min_lift]
        self.rules = rules

    def get_support(self, items):
        n = 0
        for x in self.X:
            val = 1
            for it in items:
                if x[it] == 0:
                    val = 0
                    break
            n += val
        return n / len(self.X)

    def split_rule(self, r):
        res = []
        for i in range(len(r) - 1):
            p1 = r[:i+1]
            p2 = r[i+1:]
            res.append((p1, p2))
        return res

    def build_rule(self, r):
        sab = self.get_support(r[0] + r[1])
        sa = self.get_support(r[0])
        sb = self.get_support(r[1])
        support = sab
        confidence = support / sa
        lift = confidence / sb
        wa = [self.lwords[x] for x in r[0]]
        wb = [self.lwords[x] for x in r[1]]
        return AprioriRule(wa, wb, support, confidence, lift)


mod = Apriori(min_support=0.0235, min_confidence=0.1, min_lift=1.5, min_length=2)
mod.fit(records)
print(len(mod.rules))
for item in mod.rules:
    s = ''
    for x in item.a:
        s += str(x) + '; '
    s += '=> '
    for x in item.b:
        s += str(x) + '; '
    s += ('S = {:.4f}, C = {:.4f}, L = {:.4f}'.format(
        item.support, item.confidence, item.lift))
    print(s)
```

## Unsupervised Learning as Supervised Learning

We are trying to estimate the probability density $g(x)$. We only have access to a reference probability density $g_0(x)$, which could for example be the uniform density over the range of the variables.
We can easily sample $N_0$ observations from $g_0(x)$. We also have the dataset $x_1,\text{...},x_N$, an i.i.d. random sample drawn from $g(x)$. Let's pool these two samples together and assign mass $w = \frac{N_0}{N+N_0}$ to those drawn from $g(x)$, and $w_0 = \frac{N}{N+N_0}$ to those drawn from $g_0(x)$. We get a mixture density $\frac{g(x) + g_0(x)}{2}$. If we assign $Y=1$ to samples drawn from $g(x)$ and $Y=0$ to those drawn from $g_0(x)$, we get:

$$\mu(x) = E(Y|x) = \frac{g(x)}{g(x) + g_0(x)}$$

$\mu(x)$ can be estimated by supervised learning by combining the $N$ samples from $g(x)$ with $Y=1$ and the $N_0$ samples from $g_0(x)$ with $Y=0$. Then we can get an estimate for $g(x)$:

$$\hat{g}(x) = g_0(x) \frac{\hat{\mu}(x)}{1 - \hat{\mu}(x)}$$

The accuracy of $\hat{g}(x)$ greatly depends on the choice of $g_0(x)$.

## Generalized Association Rules

The goal is to find a subset of integers $\mathcal{J} \subset \{ 1, 2, \text{...}, p \}$ and the corresponding value subsets $s_j$ so that the following value is large:

$$P \left( \bigcap_{j \in \mathcal{J}} (X_j \in s_j) \right)$$

This can be estimated by:

$$\frac{1}{N} \sum_{i=1}^N I \left( \bigcap_{j \in \mathcal{J}} (x_{ij} \in s_j) \right)$$

This favors the discovery of item sets whose marginal constituents $(X_j \in s_j)$ are individually frequent, that is, the following value is large:

$$\frac{1}{N} \sum_{i=1}^N I(x_{ij} \in s_j)$$

A good reference distribution is the product of the marginal distributions:

$$g_0(x) = \prod_{j=1}^p g_j(x_j)$$

A sample from $g_0(x)$ is easily generated from the original dataset by applying a different random permutation to the data values of each of the variables. After drawing samples from $g_0(x)$, we get a training dataset for supervised learning, with $Y \in \{ 0, 1 \}$. The goal is to use this data to find regions:

$$R = \bigcap_{j \in \mathcal{J}} (X_j \in s_j)$$

for which $\mu(x) = E(Y|x)$ is large.
One might also require that the support of these regions is large enough:

$$T(R) = \int_{x \in R} g(x)dx$$

Decision trees are such a model; each leaf $t$ represents a region $R$:

$$\bar{y}_t = \text{ave}(y_i|x_i \in t)$$

The actual data support is given by:

$$T(R) = \bar{y}_t \frac{N_t}{N + N_0}$$

with $N_t$ the number of observations in the leaf $t$.

```
from sklearn.tree import DecisionTreeRegressor

class GeneralizedAssosRules:
    def __init__(self, t):
        self.t = t

    def fit(self, X):
        N, p = X.shape
        N0 = 2 * N
        # reference sample: bootstrap rows, then permute each column independently
        X0 = X[np.random.choice(N, size=N0, replace=True)]
        for j in range(p):
            X0[:, j] = X0[np.random.permutation(N0), j]
        Xe = np.vstack((X, X0))
        ye = np.vstack((np.ones((N, 1)), np.zeros((N0, 1))))
        self.tree = DecisionTreeRegressor()
        self.tree.fit(Xe, ye)
        print(Xe.shape)
        print(ye.shape)

X = np.random.randn(124, 7)
mod = GeneralizedAssosRules(0.2)
mod.fit(X)
```

# Cluster Analysis

Cluster analysis consists of grouping a collection of objects into subsets (called clusters) such that objects within each cluster are more closely related to one another than to objects in other clusters. To form clusters, an important notion is the degree of similarity or dissimilarity between two objects. It can be a distance, such as the Euclidean distance. This is used for example by K-Means clustering, which builds clusters with a top-down procedure. Other approaches are mostly bottom-up.

## Proximity Matrices

Let $D \in \mathbb{R}^{N \times N}$ be the matrix of dissimilarities, with $N$ the number of objects. $D_{ij}$ represents the proximity between object $i$ and object $j$. Usually it's a symmetric matrix with nonnegative entries and zeroes on the diagonal.

We usually have data $x_{ij}$ with $N$ observations and $p$ features, and need to compute the dissimilarity between two observations in order to build $D$. One solution is to use a dissimilarity $d_j(x_{ij}, x_{i'j})$ for each feature:

$$D(x_i, x_{i'}) = \sum_{j=1}^p d_j(x_{ij}, x_{i'j})$$

For quantitative variables, we define an error: $d(x_i, x_{i'}) = l(|x_i - x_{i'}|)$.
Usually it's the squared error loss, or the absolute error. It can also be based on the correlation:

$$\rho(x_i, x_{i'}) = \frac{\sum_j (x_{ij} - \bar{x}_i)(x_{i'j} - \bar{x}_{i'})}{\sqrt{\sum_j (x_{ij} - \bar{x}_i)^2 \sum_j (x_{i'j} - \bar{x}_{i'})^2}}$$

with $\bar{x}_i = \sum_{j} x_{ij}/p$.

For ordinal variables, we usually replace their $M$ original values by:

$$\frac{i - 1/2}{M}$$

with $i$ the original rank of the value. They are then treated as quantitative variables.

For categorical variables, the dissimilarity must be defined explicitly, for example by using an $M \times M$ matrix.

There are several ways to combine all the $d_j(x_{ij}, x_{i'j})$. It can be done with a weighted average:

$$D(x_i, x_{i'}) = \sum_{j=1}^p w_j d_j(x_{ij}, x_{i'j})$$

$$\text{with } \sum_{j=1}^p w_j = 1$$

Setting $w_j = 1/p$ does not give all attributes equal influence. To get equal influence, one should set $w_j \propto 1/\bar{d}_j$ with:

$$\bar{d}_j = \frac{1}{N^2} \sum_{i=1}^N \sum_{i'=1}^N d_j(x_{ij}, x_{i'j})$$

This seems a reasonable idea, but may be counterproductive: to cluster data, you may not want all attributes to contribute equally.

## Clustering Algorithms

The goal of clustering is to partition data into groups so that the dissimilarities between observations assigned to the same cluster are smaller than those between observations in different clusters. Clustering algorithms fall into three types:

- combinatorial algorithms
- mixture modeling
- mode seeking

## Combinatorial Algorithms

These algorithms assign each observation to a cluster without any probability model. Each observation $x_i$ is assigned to a cluster $k \in \{1, \text{...}, K \}$. These assignments can be characterized by an encoder: $k = C(i)$. The model looks for the $C^*(i)$ that achieves a particular goal. It is adjusted to minimize a loss function that characterizes the clustering goal. One possible loss is the within-cluster point scatter.
It makes observations in the same cluster as close as possible:

$$W(C) = \frac{1}{2} \sum_{k=1}^K \sum_{C(i)=k} \sum_{C(i')=k} d(x_i, x_{i'})$$

Another loss is the between-cluster point scatter. It makes observations in different clusters as far apart as possible:

$$B(C) = \frac{1}{2} \sum_{k=1}^K \sum_{C(i)=k} \sum_{C(i')\neq k} d(x_i, x_{i'})$$

Minimizing $W(C)$ is equivalent to maximizing $B(C)$: the total point scatter $T$ is a constant given the data, independent of the cluster assignment.

$$T = W(C) + B(C)$$

Minimizing this loss function by testing all assignments is intractable. For only $N=19$ and $K=4$, there are around $10^{10}$ possible assignments. Algorithms are therefore often based on iterative greedy descent: start with an initial assignment, and change it at each step in a way that reduces the loss function. The algorithm terminates when no improvement is possible. But the result is a local optimum, which may be highly suboptimal compared to the global optimum.

## K-Means

K-Means is a combinatorial algorithm that uses the squared Euclidean distance:

$$d(x_i, x_{i'}) = ||x_i - x_{i'}||^2$$

We are minimizing the within-cluster distance:

$$W(C) = \sum_{k=1}^K N_k \sum_{C(i) = k} ||x_i - \bar{x}_k||^2$$

with $\bar{x}_k$ the mean of all observations in cluster $k$, and $N_k$ the number of observations in cluster $k$. We are trying to solve:

$$C^* = \min_C \sum_{k=1}^K N_k \sum_{C(i) = k} ||x_i - \bar{x}_k||^2$$

The K-Means algorithm is really simple:

1. Initialize the $K$ cluster centers randomly (e.g. from the training set)
2. Repeat until convergence:
   - Assign each training point to the closest centroid
   - The center of each cluster becomes the mean of all its assigned points

Each step reduces the loss function, but the algorithm converges only to a local minimum. One should start it with many different random initializations, and choose the result with the lowest loss.
```
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

class KMeansClustering:
    def __init__(self, R, nstarts=100):
        self.R = R
        self.nstarts = nstarts

    def fit(self, X):
        best_loss = float('inf')
        best_means = None
        for _ in range(self.nstarts):
            self.train(X)
            loss = self.get_loss(X)
            if loss < best_loss:
                best_loss = loss
                best_means = self.means
        self.means = best_means

    def train(self, X):
        N, p = X.shape
        self.means = X[np.random.choice(N, self.R)]
        while True:
            old_means = self.means.copy()
            # assign each point to the closest cluster
            ctrs = [list() for _ in range(self.R)]
            for x in X:
                ctrs[self.get_closest_ctr_idx(x)].append(x)
            # compute the new center position of every cluster
            for i in range(self.R):
                if len(ctrs[i]) != 0:
                    self.means[i] = np.mean(np.vstack(ctrs[i]), axis=0)
            if np.linalg.norm(old_means - self.means) < 1e-6:
                break

    def get_loss(self, X):
        # assign each point to the closest cluster
        ctrs = [list() for _ in range(self.R)]
        for x in X:
            ctrs[self.get_closest_ctr_idx(x)].append(x)
        # compute distance between each point and the cluster center
        loss = 0
        for k in range(self.R):
            for x in ctrs[k]:
                loss += len(ctrs[k]) * (x - self.means[k]) @ (x - self.means[k])
        return loss

    def get_closest_ctr_idx(self, x):
        min_idx = None
        min_dist = float('inf')
        for i in range(self.R):
            dist = (x - self.means[i]) @ (x - self.means[i])
            if dist < min_dist:
                min_idx = i
                min_dist = dist
        return min_idx

    def predict(self, X):
        y = np.empty(len(X)).astype(int)
        for i in range(len(X)):
            y[i] = self.get_closest_ctr_idx(X[i])
        return y

X, y = load_iris().data, load_iris().target
X = X[np.random.permutation(len(X))]
pca = PCA(n_components=2)
X = pca.fit_transform(X)
X = X - np.mean(X, axis=0)
X = X / np.std(X, axis=0)
print(X.shape)
print(y.shape)

mod = KMeansClustering(3)
mod.fit(X)
colors = [['red', 'blue', 'green'][x] for x in mod.predict(X)]
print('loss:', mod.get_loss(X))
plt.scatter(X[:, 0], X[:, 1], c=colors)
plt.show()
```

## Gaussian Mixtures as Soft K-Means

K-Means is closely related to estimating a Gaussian mixture with the EM algorithm.

- The E-step assigns a weight to each data point based on its relative density under each mixture component (closeness).
- The M-step recomputes the component densities based on the current weights (mean / covariance).

If every Gaussian has covariance matrix $\sigma^2 I$, the relative density under each mixture component is a monotone function of the Euclidean distance between the data point and the mixture center. Hence EM acts as a soft K-Means, making probabilistic (rather than deterministic) assignments of points to cluster centers. As $\sigma^2 \to 0$, the probabilities become $0$ and $1$, and the two methods coincide.

## Vector Quantization

Vector Quantization is a compression technique used in image / signal processing, based on K-Means. The procedure is:

- Break the image into small blocks. For example, breaking a $1024 \times 1024$ image into $2 \times 2$ blocks gives $512 \times 512$ vectors in $\mathbb{R}^4$.
- Run K-Means on the blocks. Each block is approximated by its closest centroid. As $K$ decreases, both the image quality and the compressed size decrease.
- We only need to store the $K$ centroid vectors, and the index of the closest centroid for every block.
- To reconstruct the image, each block becomes its closest centroid, and the blocks are converted back to an image.

This works because with typical images many blocks look alike, so a single centroid is enough to represent each group of similar blocks. We can go further by applying a hierarchical K-Means, or by using a variable-length code.
```
from PIL import Image
from sklearn.cluster import KMeans

class VectorQuantization:
    def __init__(self, K, bsize=2):
        self.K = K
        self.bsize = bsize  # the block indexing below assumes 2x2 blocks

    def img2block(self, X):
        s = X.shape[0]
        res = np.empty((s // self.bsize, s // self.bsize,
                        self.bsize * self.bsize)).astype(int)
        for i in range(res.shape[0]):
            for j in range(res.shape[1]):
                res[i, j] = np.array([X[2*i, 2*j], X[2*i+1, 2*j],
                                      X[2*i, 2*j+1], X[2*i+1, 2*j+1]])
        return res.reshape(-1, self.bsize * self.bsize)

    def block2img(self, b):
        s2 = int(np.sqrt(b.shape[0]))
        b = b.reshape(s2, s2, self.bsize * self.bsize)
        X = np.empty((s2 * self.bsize, s2 * self.bsize)).astype(int)
        for i in range(s2):
            for j in range(s2):
                X[2*i, 2*j] = b[i, j, 0]
                X[2*i+1, 2*j] = b[i, j, 1]
                X[2*i, 2*j+1] = b[i, j, 2]
                X[2*i+1, 2*j+1] = b[i, j, 3]
        return X

    def encode(self, img):
        b = self.img2block(img)
        clf = KMeans(n_clusters=self.K, n_init=1)
        clf.fit(b)
        code = clf.labels_
        centers = clf.cluster_centers_
        return code, centers

    def decode(self, code, centers):
        b = np.empty((len(code), self.bsize * self.bsize)).astype(int)
        for i in range(len(b)):
            b[i] = centers[code[i]]
        return self.block2img(b)

IMG_URL = 'https://i.ytimg.com/vi/J4Q86j9HOao/hqdefault.jpg'
IMG_PATH = '/tmp/img.jpg'
utils.dl_file(IMG_URL, IMG_PATH)
X = Image.open(IMG_PATH)
X = X.resize((256, 256), Image.ANTIALIAS)
X = X.convert('L')
X = np.asarray(X.getdata(), dtype=int).reshape((X.size[1], X.size[0]))

vq200 = VectorQuantization(K=200)
code, centers = vq200.encode(X)
X2 = vq200.decode(code, centers)

vq4 = VectorQuantization(K=4)
code, centers = vq4.encode(X)
X3 = vq4.decode(code, centers)

print(metrics.tdist(X, X2))
print(metrics.tdist(X, X3))

plt.imshow(X, cmap='gray')
plt.show()
plt.imshow(X2, cmap='gray')
plt.show()
plt.imshow(X3, cmap='gray')
plt.show()
```

## K-medoids

K-Means is appropriate when the dissimilarity measure $D(x_i, x_{i'})$ is the Euclidean distance. This requires all variables to be of quantitative type, and the procedure lacks robustness against outliers.
The algorithm can be generalized to any dissimilarity $D(x_i, x_{i'})$: we don't need the inputs $x$, only the distances. It is far more expensive to compute than K-Means.

K-medoids algorithm:

1. Start with a particular initialization $C(i)$
2. Repeat until the cluster assignments $C(i)$ don't change:
   - For each cluster $k$, find the cluster center $m_k$:
     $$m_k = \arg \min_{ \{i:C(i)=k \} } \sum_{C(i')=k} D(x_i, x_{i'})$$
   - Minimize the total error by assigning each observation to the closest cluster center:
     $$C(i) = \arg \min_k D(x_i, m_k)$$

```
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

class KMedoidsClustering:
    def __init__(self, K):
        self.K = K

    def fit(self, X):
        N, p = X.shape
        self.centers = [None] * self.K

        # build distance matrix
        D = np.empty((N, N))
        for i in range(N):
            for j in range(N):
                D[i, j] = (X[i] - X[j]) @ (X[i] - X[j])
        X = None  # X is useless from now on, we only need D

        # initialization: assign each point to a random cluster
        ctrs = [list() for _ in range(self.K)]
        for i in range(N):
            ctrs[np.random.randint(0, self.K)].append(i)

        while True:
            # estimate cluster centers
            for k in range(self.K):
                best_i = None
                best_dist = float('inf')
                ck = ctrs[k]
                for i in ck:
                    dist = 0
                    for i2 in ck:
                        dist += D[i, i2]
                    if dist < best_dist:
                        best_dist = dist
                        best_i = i
                self.centers[k] = best_i

            old_ctrs = ctrs
            ctrs = [list() for _ in range(self.K)]
            # assign each point to the closest cluster center
            for i in range(N):
                best_k = None
                best_dist = float('inf')
                for k in range(self.K):
                    dist = D[i, self.centers[k]]
                    if dist < best_dist:
                        best_dist = dist
                        best_k = k
                ctrs[best_k].append(i)

            # stop only if the assignments didn't change
            changed = False
            for k in range(self.K):
                if ctrs[k] != old_ctrs[k]:
                    changed = True
                    break
            if not changed:
                break

        # build labels vector
        self.labels = np.empty(N).astype(int)
        for k in range(self.K):
            for i in ctrs[k]:
                self.labels[i] = k

X, y = load_iris().data, load_iris().target
X = X[np.random.permutation(len(X))]
pca = PCA(n_components=2)
X = pca.fit_transform(X)
X = X - np.mean(X, axis=0)
X = X / np.std(X, axis=0)
print(X.shape)
print(y.shape)

mod = KMedoidsClustering(3)
mod.fit(X)
colors = [['red', 'blue', 'green'][x] for x in mod.labels]
plt.scatter(X[:, 0], X[:, 1], c=colors)
plt.show()
```

## Initialization

Initialization can be done by specifying an initial set of centers $\{ m_1, \text{...}, m_K \}$ or an initial encoder $C(i)$. Specifying the centers is usually more convenient. A strategy based on forward stepwise assignment has been derived, called K-Means++.

K-Means++ algorithm:

1. Initialize the first center $m_1$ uniformly at random from all observations.
2. For $k=2 \to K$:
   - Compute for every observation the distance to the closest of the already chosen centers:
     $$D(i) = \min_{c \in \{ m_1, \text{...}, m_{k-1} \} } D(x_i, c)$$
   - Choose the center $m_k$ from a weighted probability distribution over the observations, with weights proportional to $D(i)^2$

```
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

class KMeansClustering:
    def __init__(self, R, nstarts=100):
        self.R = R
        self.nstarts = nstarts

    def fit(self, X):
        best_loss = float('inf')
        best_means = None
        for _ in range(self.nstarts):
            self.train(X)
            loss = self.get_loss(X)
            if loss < best_loss:
                best_loss = loss
                best_means = self.means
        self.means = best_means

    def train(self, X):
        N, p = X.shape
        # K-means++ initialization
        self.means = np.empty((self.R, p))
        self.means[0] = X[np.random.choice(N)]
        for k in range(1, self.R):
            d = np.empty(N)
            for i in range(N):
                d[i] = min([(X[i] - self.means[c]) @ (X[i] - self.means[c])
                            for c in range(k)])
            d /= np.sum(d)
            self.means[k] = X[np.random.choice(N, p=d)]
        while True:
            old_means = self.means.copy()
            # assign each point to the closest cluster
            ctrs = [list() for _ in range(self.R)]
            for x in X:
                ctrs[self.get_closest_ctr_idx(x)].append(x)
            # compute the new center position of every cluster
            for i in range(self.R):
                if len(ctrs[i]) != 0:
                    self.means[i] = np.mean(np.vstack(ctrs[i]), axis=0)
            if np.linalg.norm(old_means - self.means) < 1e-6:
                break

    def get_loss(self, X):
        # assign each point to the closest cluster
        ctrs = [list() for _ in range(self.R)]
        for x in X:
            ctrs[self.get_closest_ctr_idx(x)].append(x)
        # compute distance between each point and the cluster center
        loss = 0
        for k in range(self.R):
            for x in ctrs[k]:
                loss += len(ctrs[k]) * (x - self.means[k]) @ (x - self.means[k])
        return loss

    def get_closest_ctr_idx(self, x):
        min_idx = None
        min_dist = float('inf')
        for i in range(self.R):
            dist = (x - self.means[i]) @ (x - self.means[i])
            if dist < min_dist:
                min_idx = i
                min_dist = dist
        return min_idx

    def predict(self, X):
        y = np.empty(len(X)).astype(int)
        for i in range(len(X)):
            y[i] = self.get_closest_ctr_idx(X[i])
        return y

X, y = load_iris().data, load_iris().target
X = X[np.random.permutation(len(X))]
pca = PCA(n_components=2)
X = pca.fit_transform(X)
X = X - np.mean(X, axis=0)
X = X / np.std(X, axis=0)
print(X.shape)
print(y.shape)

mod = KMeansClustering(3)
mod.fit(X)
colors = [['red', 'blue', 'green'][x] for x in mod.predict(X)]
print('loss:', mod.get_loss(X))
plt.scatter(X[:, 0], X[:, 1], c=colors)
plt.show()
```

## Choice of K

One technique is to use a loss function such as the within-cluster dissimilarity $W_K$ and compute it for several values of $K$. But this loss decreases as $K$ grows, even when computed on a validation set with cross-validation. $W_K$ first decreases rapidly; beyond some point, the decrease between successive values of $K$ abruptly becomes much smaller. Heuristically, setting $K^*$ to the value of $K$ where this change occurs gives good results. $K^*$ can be found simply by plotting $W_K$ for different values of $K$: the plot looks like an elbow at $K^*$, hence this technique is also called the elbow method.
```
losses = []
for k in range(1, 10):
    mod = KMeansClustering(k)
    mod.fit(X)
    losses.append(mod.get_loss(X))

plt.plot(np.arange(1, 10), losses)
plt.show()

best_k = 2  # chosen by looking at the plot
mod = KMeansClustering(best_k)
mod.fit(X)
colors = [ ['red', 'blue', 'green'][x] for x in mod.predict(X)]
plt.scatter(X[:,0], X[:,1], c=colors)
plt.show()
```

## Hierarchical Clustering

Hierarchical clustering uses only a measure of dissimilarity between two groups of observations. It produces hierarchical representations in which the clusters at each level are created by merging clusters at the next lower level. At the lowest level there are $N$ clusters of size $1$, and at the highest a single cluster of size $N$.

There exist two strategies:
- Agglomerative (bottom-up): start at the bottom and recursively merge a pair of clusters into one.
- Divisive (top-down): start at the top and recursively split one cluster into two.

Each level represents a different grouping of the data. It's up to the user to decide which level represents a natural clustering. Most methods possess a monotonicity property: the dissimilarity between merged clusters is monotone increasing with the level. The model can be plotted as a binary tree, where the height of each node is proportional to the intergroup dissimilarity between its two children. This is called a dendrogram. The results are valid only if the data really possesses a hierarchical structure.

## Agglomerative Clustering

It starts with every observation in a different cluster. At each step, the two closest clusters are merged. After $N-1$ steps, the algorithm stops with only one cluster left. A measure of dissimilarity between two groups, $d(G,H)$, is needed.
There are several possibilities:
- Single Linkage is the least dissimilar of all pairs: $$d_\text{SL}(G,H) = \min_{i \in G, i' \in H} d_{ii'}$$
- Complete Linkage is the most dissimilar of all pairs: $$d_\text{CL}(G,H) = \max_{i \in G, i' \in H} d_{ii'}$$
- Group Average is the mean dissimilarity over all pairs: $$d_\text{GA}(G,H) = \frac{1}{N_G N_H} \sum_{i \in G} \sum_{i' \in H} d_{ii'}$$

If the data is compact (small dissimilarities within clusters, clusters well separated from each other), all methods produce similar results.

Single Linkage only requires a single pair between two groups to be close in order to combine them. It has a tendency to combine, at relatively low thresholds, observations linked by a series of close intermediates. This phenomenon, called chaining, is a defect of the method. Complete Linkage is the other extreme: two groups are similar only if all their observations are close. It tends to produce compact clusters; however, it may produce clusters with observations much closer to members of other clusters than to members of their own cluster, breaking the closeness property. Group Average is a compromise between the two.

## Divisive Clustering

It begins with the whole dataset in one cluster, then recursively divides one existing cluster in two. After $N-1$ steps, there are $N$ clusters of size $1$. This approach is less used than agglomerative clustering.

Place all observations in a single cluster $G$. Choose the observation whose average dissimilarity from all other observations is largest; it is the first member of a new cluster $H$. At each step, the observation in $G$ whose average dissimilarity from those in $H$, minus its average dissimilarity from the remaining observations in $G$, is largest is transferred to $H$. This continues until that largest value becomes negative. The original cluster is then split in two, $G$ and $H$. At each step a new cluster is chosen and split in two. The cluster chosen can be the one with the largest diameter, or the largest average dissimilarity between its members.
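These notes don't include code for the linkage rules; as a minimal sketch (assuming SciPy's `scipy.cluster.hierarchy` is available — a library choice the notes don't make), the three rules can be compared on the iris data used throughout:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.datasets import load_iris

X = load_iris().data

# One merge history per linkage rule; cut each tree into 3 clusters
for method in ['single', 'complete', 'average']:
    Z = linkage(X, method=method)                    # (N-1, 4) merge history
    labels = fcluster(Z, t=3, criterion='maxclust')  # labels in 1..3
    print(method, np.bincount(labels)[1:])           # cluster sizes
```

On this data, single linkage typically illustrates the chaining defect (one dominant cluster plus tiny satellites), while complete and average linkage give more balanced cluster sizes.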
# Self-Organizing Maps

Self-organization of a massive document collection - Kohonen, T., Kaski, S., Lagus, K., Salojärvi, J., Paatero, A. and Saarela, A. (2000) - [PDF](http://lig-membres.imag.fr/bisson/cours/M2INFO-AIW-ML/papers/Kohonen00.pdf)

This method can be viewed as a constrained version of K-Means, where the prototypes are encouraged to lie in a one- or two-dimensional manifold in the feature space. We consider a SOM as a rectangular grid of $q_1 * q_2 = K$ prototypes $m_j \in \mathbb{R}^p$. Once the model is fit, the observations can be mapped onto the rectangular grid.

Algorithm: for each observation $x_i$:
- Find the prototype $m_j$ closest to $x_i$
- Find all prototypes $m_k$ such that the distance in the grid between the grid positions $l_j$ and $l_k$ is lower than $r$.
- Move all such $m_k$ closer to $x_i$: $$m_k \leftarrow m_k + \alpha (x_i - m_k)$$

Thousands of iterations are made over the dataset. At each iteration, $\alpha$ and $r$ are decreased. The updates move the prototypes closer to the data while also maintaining a smooth 2D spatial relationship between prototypes.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

def d2_dist(a, b):
    # squared distance between two grid positions
    return (a[0]-b[0])**2 + (a[1]-b[1])**2

class SOM:

    def __init__(self, Q, niters = 1000):
        self.Q = Q
        self.niters = niters
        self.alpha_beg = 1
        self.alpha_end = 0
        self.dalpha = (self.alpha_end - self.alpha_beg) / self.niters
        self.r_beg = 10
        self.r_end = 1
        self.dr = (self.r_end - self.r_beg) / self.niters

    def get_closest_centroid(self, x):
        best_dist = float('inf')
        best_pos = None
        for i in range(self.Q):
            for j in range(self.Q):
                dist = (self.clusters[i,j]-x) @ (self.clusters[i,j]-x)
                if dist < best_dist:
                    best_dist = dist
                    best_pos = (i,j)
        return best_pos

    def fit(self, X):
        N, p = X.shape
        self.clusters = np.random.randn(self.Q, self.Q, p)
        alpha = self.alpha_beg
        r = self.r_beg
        for it in range(self.niters):
            for x in X:
                i0, j0 = self.get_closest_centroid(x)
                for i in range(self.Q):
                    for j in range(self.Q):
                        # move every prototype whose squared grid distance is below r
                        if d2_dist((i,j), (i0,j0)) < r:
                            d = x - self.clusters[i,j]
                            self.clusters[i,j] += alpha * d
            if it % 50 == 0:
                print('iteration:', it)
            alpha += self.dalpha
            r += self.dr

    def reconstruct(self, X):
        X2 = np.empty(X.shape)
        for i in range(len(X)):
            pos = self.get_closest_centroid(X[i])
            X2[i] = self.clusters[pos]
        return X2

X, y = load_iris().data, load_iris().target
X = X[np.random.permutation(len(X))]
pca = PCA(n_components=2)
X = pca.fit_transform(X)
X = X - np.mean(X, axis=0)
X = X / np.std(X, axis=0)

mod = SOM(Q=5, niters=250)
mod.fit(X)
Xr = mod.reconstruct(X)
print('recons error:', np.linalg.norm(X - Xr))
plt.scatter(X[:,0], X[:,1], c='blue')
plt.scatter(Xr[:,0], Xr[:,1], c='red')
plt.show()
```

# Principal Components, Curves and Surfaces

Principal components provide a sequence of best linear approximations of the data, of all ranks $q \leq p$. The parametric representation of an affine hyperplane is: $$f(\lambda) = \mu + V_q \lambda$$ with $\mu \in \mathbb{R}^p$ a location vector, $V_q \in \mathbb{R}^{p*q}$ a matrix with orthogonal unit column vectors, and $\lambda \in \mathbb{R}^q$ a vector of parameters.
We can fit this model by minimizing the reconstruction error: $$\min_{\mu, \lambda_i, V_q} \sum_{i=1}^N ||x_i - \mu - V_q\lambda_i||^2$$

Partially optimizing over $\mu$ and $\lambda_i$ gives: $$\hat{\mu} = \bar{x}$$ $$\hat{\lambda}_i = V^T_q(x_i - \bar{x})$$

The problem becomes: $$\min_{V_q} \sum_{i=1}^N ||(x_i - \bar{x}) - V_qV^T_q(x_i - \bar{x})||^2$$

We assume $\bar{x} = 0$. The reconstruction matrix $H_q \in \mathbb{R}^{p*p}$ is a projection matrix such that $H_q = V_qV_q^T$. The solution can be found with the singular value decomposition of the centered $X$: $$X = UDV^T$$ For each rank $q$, the solution $V_q$ is given by the first $q$ columns of $V$.

```
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA

class MyPCA:

    def __init__(self, q):
        self.q = q

    def fit(self, X):
        Xc = np.mean(X, axis=0, keepdims=True)
        X = X - Xc
        U, d, VT = np.linalg.svd(X)
        Vq = VT[:self.q].T
        self.Xc = Xc
        self.Vq = Vq

    def transform(self, X):
        return (X - self.Xc) @ self.Vq

    def inverse_transform(self, Xq):
        return (Xq @ self.Vq.T) + self.Xc

X, y = load_iris().data, load_iris().target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=15)

p1 = MyPCA(q=2)
p1.fit(X_train)
p2 = PCA(n_components=2)
p2.fit(X_train)

# metrics.tdist is this notebook's helper for the total distance between two arrays
Xtrq1 = p1.transform(X_train)
Xtrq2 = p2.transform(X_train)
print(Xtrq1.shape)
print(metrics.tdist(Xtrq1, Xtrq2))

Xteq1 = p1.transform(X_test)
Xteq2 = p2.transform(X_test)
print(Xteq1.shape)
print(metrics.tdist(Xteq1, Xteq2))

Xtrr1 = p1.inverse_transform(Xtrq1)
Xtrr2 = p2.inverse_transform(Xtrq2)
print(Xtrr1.shape)
print(metrics.tdist(Xtrr1, Xtrr2))

Xter1 = p1.inverse_transform(Xteq1)
Xter2 = p2.inverse_transform(Xteq2)
print(Xter1.shape)
print(metrics.tdist(Xter1, Xter2))
```

The columns of $UD$ are called the principal components. The $N$ optimal $\hat{\lambda}_i$ are given by the first $q$ principal components.
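This last claim can be checked numerically: projecting the centered data on $V_q$ gives exactly the first $q$ columns of $UD$. A quick sketch on random data (the data here is arbitrary, chosen only for the check):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
X = X - X.mean(axis=0)                    # center, so mu-hat = 0

U, d, VT = np.linalg.svd(X, full_matrices=False)
q = 2
Vq = VT[:q].T                             # first q right singular vectors

scores = X @ Vq                           # optimal lambda_i = Vq^T (x_i - x_bar)
print(np.allclose(scores, U[:, :q] * d[:q]))  # → True: first q columns of U D
```

This follows from $XV = UDV^TV = UD$, so restricting to the first $q$ columns of $V$ picks out the first $q$ columns of $UD$.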
## Principal Curves and Surfaces

A principal curve generalizes the principal component line: it provides a smooth one-dimensional curved approximation of a set of data points. A principal surface is more general and provides a curved manifold approximation of dimension 2 or more.

## Spectral Clustering

Traditional clustering methods use spherical or elliptical metrics, and won't work well if the clusters are non-convex. Spectral clustering is a generalization designed for these situations.

Let's define a matrix of similarities $S \in \mathbb{R}^{N*N}$, with $s_{ii'} \geq 0$ the similarity between $x_i$ and $x_{i'}$. Let $G = \langle V, E \rangle$ be an undirected similarity graph with a vertex $v_i$ for each observation, and edges weighted by $s_{ii'}$ only when the similarity reaches a specific threshold; otherwise there is no edge. Clustering is now a graph problem: we wish to partition the graph such that edges between different groups have low weight, and edges within a group have high weight.

Let $d_{ii'}$ be the Euclidean distance between $x_i$ and $x_{i'}$. One similarity measure is the radial-kernel Gram matrix: $s_{ii'} = \exp (-d^2_{ii'}/c)$, with $c > 0$ a scale parameter.

One way to define a similarity graph is the mutual K-nearest-neighbor graph. Define $\mathcal{N}_K$ as the symmetric set of nearby pairs of points: a pair $(i,i')$ belongs to $\mathcal{N}_K$ if $x_i$ is among the K-nearest neighbors of $x_{i'}$, or vice versa. We connect all pairs in $\mathcal{N}_K$ with weight $w_{ii'} = s_{ii'}$; otherwise the weight is 0. Another way is to include all edges to get a fully connected graph, with weights $w_{ii'}=s_{ii'}$.

The matrix of edge weights $W \in \mathbb{R}^{N*N}$ is called the adjacency matrix. The degree of vertex $i$ is $g_i = \sum_{i'} w_{ii'}$. Let $G \in \mathbb{R}^{N*N}$ be a diagonal matrix with diagonal elements $g_i$. The graph Laplacian is defined by $L = G - W$. Spectral clustering finds the $m$ eigenvectors corresponding to the $m$ smallest eigenvalues of $L$. This gives us the matrix $Z \in \mathbb{R}^{N*m}$.
Using a standard method like K-Means, we cluster the rows of $Z$ to yield a clustering of the original points.

```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

# uses the KMeansClustering class defined above

class SpectralClustering:

    def __init__(self, K, c, m):
        self.K = K
        self.c = c
        self.m = m

    def fit(self, X):
        N, p = X.shape
        # radial-kernel similarity matrix
        S = np.array([
            [np.exp(-(X[i]-X[j]) @ (X[i]-X[j]) / self.c) for i in range(N)]
            for j in range(N)
        ])
        W = S  # fully connected graph
        G = np.diag(np.sum(W, axis=0))
        L = G - W
        w, V = np.linalg.eigh(L)
        # keep the m smallest non-trivial eigenvectors (the first is constant)
        Z = V[:, 1:self.m+1]
        km = KMeansClustering(self.K)
        km.fit(Z)
        self.Z = Z
        self.km = km
        self.labels = km.predict(Z)

X, y = load_iris().data, load_iris().target
X = X[np.random.permutation(len(X))]
pca = PCA(n_components=2)
X = pca.fit_transform(X)
X = X - np.mean(X, axis=0)
X = X / np.std(X, axis=0)

mod = SpectralClustering(K=4, c=1, m=2)
mod.fit(X)
colors = [ ['red', 'blue', 'green', 'yellow'][x] for x in mod.labels]
plt.scatter(X[:,0], X[:,1], c=colors)
plt.show()
```

## Kernel Principal Components

Kernel principal component analysis - Bernhard Schoelkopf, Alexander J. Smola, and Klaus-Robert Mueller. (1999) - [PDF](http://pca.narod.ru/scholkopf_kernel.pdf)

In PCA, we diagonalize an estimate of the covariance matrix: $$C = \frac{1}{N} \sum_{i=1}^N x_i x_i^T$$ Kernel PCA follows the same principle, but first maps the data non-linearly into another feature space using a transformation $\Phi$. As for kernel SVM methods, we don't need to compute $\Phi(x)$, only the dot products $\Phi(x_i)^T\Phi(x_j)$. The covariance matrix becomes: $$\bar{C} = \frac{1}{N} \sum_{i=1}^N \Phi(x_i) \Phi(x_i)^T$$ We need to find the eigenvalues $\lambda$ and eigenvectors $V$ satisfying $\lambda V = \bar{C} V$.
Let's define the kernel matrix $K \in \mathbb{R}^{N*N}$ such that: $$K_{ij} = \Phi(x_i)^T \Phi(x_j)$$ We now solve the eigenvalue problem: $$\lambda \alpha = K \alpha$$ We get the data projected on $q$ components with: $$Z_q = \alpha \sqrt{\lambda}$$

The kernel matrix is computed with uncentered data. We need to center it first, using the following trick: $$K_\text{center} = K - 1_NK - K1_N + 1_N K 1_N = (I - 1_N)K(I-1_N)$$ with $1_N \in \mathbb{R}^{N*N}$ a matrix with all elements equal to $1/N$.

```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import KernelPCA

X, y = load_iris().data, load_iris().target
pca = KernelPCA(n_components=2, kernel='rbf', gamma=0.5)
X = pca.fit_transform(X)
plt.scatter(X[:,0], X[:,1])
plt.show()
print(pca.lambdas_.shape, pca.alphas_.shape)
print(pca.lambdas_)
print(X[:10])

def center_kernel(K):
    N = len(K)
    IM1 = np.eye(N) - (np.ones((N,N)) / N)
    return IM1 @ K @ IM1

def kernel_linear():
    return lambda a, b: a @ b

def kernel_rbf(gamma):
    return lambda x, y: np.exp(-gamma * (x - y) @ (x - y))

class MyKernelPCA:

    def __init__(self, q, kernel):
        self.q = q
        self.kernel = kernel

    def fit_transform(self, X):
        N, p = X.shape
        K = np.empty((N, N))
        for i in range(N):
            for j in range(N):
                K[i,j] = self.kernel(X[i], X[j])
        K = center_kernel(K)
        w, V = np.linalg.eigh(K)
        # eigh returns eigenvalues in ascending order: flip to descending
        w, V = np.flip(w), np.flip(V, axis=1)
        wq, Vq = w[:self.q], V[:, :self.q]
        self.X = X
        self.lambdas = wq
        self.alphas = Vq
        return self.alphas * np.sqrt(self.lambdas)

X, y = load_iris().data, load_iris().target
pca = MyKernelPCA(q=2, kernel=kernel_rbf(0.5))
X = pca.fit_transform(X)
plt.scatter(X[:,0], X[:,1])
plt.show()
print(pca.lambdas.shape, pca.alphas.shape)
print(pca.lambdas)
print(X[:10])
```

## Sparse Principal Components

Sparse principal component analysis - Zou, H., Hastie, T. and Tibshirani, R.
(2006) - [PDF](https://web.stanford.edu/~hastie/Papers/spc_jcgs.pdf)

Principal components can be interpreted by examining the $v_j$, called loadings. The interpretation may be easier if they are sparse. Methods are usually based on some kind of Lasso (L1) penalty.

One approach is to solve the following problem: $$\max_v v^T(X^TX)v$$ $$\text{s.t. } \sum_{j=1}^p |v_j| \leq t, \space v^Tv=1$$

Another strategy uses the reconstruction error with some penalties. For a single component, the criterion is: $$\min_{\theta, v} \sum_{i=1}^N ||x_i - \theta v^T x_i||_2^2 + \lambda ||v||_2^2 + \lambda_1 ||v||_1$$ $$\text{s.t. } ||\theta||_2 = 1$$

If $\lambda=\lambda_1=0$, then $v=\theta$ is the largest principal component direction. The L1 penalty on $v$ encourages sparseness of the loadings. For $K$ components, the problem becomes: $$\min_{\Theta, V} \sum_{i=1}^N ||x_i - \Theta V^T x_i||_2^2 + \lambda \sum_{k=1}^K ||v_k||_2^2 + \sum_{k=1}^K \lambda_{1k} ||v_k||_1$$ $$\text{s.t. } \Theta^T\Theta= I$$

The criterion is not jointly convex in $\Theta$ and $V$, but it is convex in each with the other fixed. Minimization over $V$ is equivalent to $K$ elastic-net problems. Minimization over $\Theta$ is solved by an SVD. Alternating the two steps converges to a solution.

# Non-Negative Matrix Factorization

Learning the parts of objects by non-negative matrix factorization - Lee, D. and Seung, H. (1999) - [PDF](http://www.columbia.edu/~jwp2128/Teaching/E4903/papers/nmf_nature.pdf)

Algorithms for non-negative matrix factorization - Lee, D. and Seung, H. (2001) - [PDF](https://papers.nips.cc/paper/1861-algorithms-for-non-negative-matrix-factorization.pdf)

Non-negative matrix factorization is an alternative approach to PCA, where data and components are assumed to be non-negative. The data matrix $X$ is approximated by: $$X \approx WH$$ with $X \in \mathbb{R}^{N*p}$, $W \in \mathbb{R}^{N*r}$, $H \in \mathbb{R}^{r*p}$, and $r \leq \max(N,p)$. We also assume that $x_{ij}, w_{ik}, h_{kj} \geq 0$.
$W$ and $H$ are found by maximizing the log-likelihood of the data under a Poisson model: $$L(W,H) = \sum_{i=1}^N \sum_{j=1}^p \left( x_{ij} \log(WH)_{ij} - (WH)_{ij} \right)$$ Iteratively applying the following updates converges to a local maximum: $$w_{ik} \leftarrow w_{ik} \frac{\sum_{j=1}^p h_{kj}x_{ij}/(WH)_{ij}}{\sum_{j=1}^p h_{kj}}$$ $$h_{kj} \leftarrow h_{kj} \frac{\sum_{i=1}^N w_{ik}x_{ij}/(WH)_{ij}}{\sum_{i=1}^N w_{ik}}$$

```
import numpy as np
from sklearn.decomposition import NMF
from sklearn.datasets import load_iris

X, y = load_iris().data, load_iris().target
mod = NMF(3)
W = mod.fit_transform(X)
H = mod.components_
print(H)
# metrics.tdist is this notebook's helper for the total distance between two arrays
print(metrics.tdist(W @ H, X))
print(np.sum(W>=0) == W.size)
print(np.sum(H>=0) == H.size)

class MyNMF:

    def __init__(self, r):
        self.r = r

    def fit(self, X):
        N, p = X.shape
        W = np.abs(np.random.randn(N, self.r))
        H = np.abs(np.random.randn(self.r, p))
        for it in range(1000):
            # multiplicative update of W, then of H
            WH = W @ H
            for i in range(N):
                for k in range(self.r):
                    W[i,k] *= np.sum(H[k]*X[i] / WH[i]) / np.sum(H[k])
            WH = W @ H
            for k in range(self.r):
                for j in range(p):
                    H[k,j] *= np.sum(W[:,k]*X[:,j] / WH[:,j]) / np.sum(W[:,k])
        return W, H

X, y = load_iris().data, load_iris().target
mod = MyNMF(3)
W, H = mod.fit(X)
print(H)
print(metrics.tdist(W @ H, X))
print(np.sum(W>=0) == W.size)
print(np.sum(H>=0) == H.size)
```

## Archetypal Analysis

Archetypal analysis - Cutler, A. and Breiman, L. (1994) - [PDF](http://digitalassets.lib.berkeley.edu/sdtr/ucb/text/379.pdf)

This is a prototype method, similar to K-Means. It approximates each data point by a convex combination of a collection of prototypes. $$X \approx WH$$ with $X \in \mathbb{R}^{N*p}$, $W \in \mathbb{R}^{N*r}$, $H \in \mathbb{R}^{r*p}$. We assume $w_{ik} \geq 0$ and $\sum_{k=1}^r w_{ik} = 1$. The $N$ rows of $X$ are represented by convex combinations of the $r$ archetypes (the rows of $H$).
The archetypes themselves are convex combinations of the observations: $$H = BX$$ with $B \in \mathbb{R}^{r*N}$, $b_{ki} \geq 0$, and $\sum_{i=1}^N b_{ki}=1$. We minimize the following criterion: $$J(W,B) = ||X - WBX||^2$$ Minimizing $J$ with respect to one variable, with the other fixed, is convex in both cases. We iteratively minimize $J$ with respect to $W$, then $B$, until convergence. But the overall problem is not convex, and the procedure converges to a local minimum.

# Independent Component Analysis

Multivariate data are often viewed as multiple indirect measurements arising from underlying sources that cannot be directly measured. Factor analysis is a classical technique to identify these latent sources; it is usually based on Gaussian distributions. Independent Component Analysis is another approach, which relies on the non-Gaussian nature of the underlying sources.

## Latent Variables and Factor Analysis

Let's define the reduced singular value decomposition of $X \in \mathbb{R}^{N*p}$: $$X = UDV^T$$ Let $S = \sqrt{N} U$ and $A^T = DV^T / \sqrt{N}$. We get a latent variable representation: $$X = SA^T$$ Each column of $X$ is a linear combination of the columns of $S$. The columns of $S$ have zero mean, unit variance, and are uncorrelated. $$X_j = a_{j1}S_1 + a_{j2}S_2 + \text{...} + a_{jp}S_p$$ We can rewrite this as $X = AS$. But for any orthogonal matrix $R \in \mathbb{R}^{p*p}$ we have: $$X = AS = AR^TRS = A^*S^*$$ with $\text{cov}(S^*) = I$. Hence there are many such decompositions, and it is therefore impossible to identify any particular latent variables as unique underlying sources.

The classic factor analysis model has the form: $$X_j = a_{j1}S_1 + a_{j2}S_2 + \text{...} + a_{jq}S_q + \sigma_j$$ $$X = AS + \sigma$$ with $S$ a vector of $q$ underlying latent variables or factors, $A \in \mathbb{R}^{p*q}$ a matrix of factor loadings, and $\sigma_j$ uncorrelated zero-mean disturbances.
Typically the $S_l$ and $\sigma_j$ are modeled as Gaussians, and the model is fit by maximum likelihood.

## Independent Component Analysis

Let $X \in \mathbb{R}^{p*N}$, where each column of $X$ represents an observation. The goal is to find the decomposition: $$X = AS$$ with $A \in \mathbb{R}^{p*p}$ an orthogonal matrix and $S \in \mathbb{R}^{p*N}$, such that the components of $S$ are statistically independent.

We suppose that $X$ has already been whitened ($\text{Cov}(X) = I$). We are trying to find an orthogonal matrix $A$ such that the components of $S=A^TX$ are independent (and non-Gaussian).

Several ICA approaches are based on entropy. The differential entropy $H$ of a random variable $Y$ with density $g(y)$ is: $$H(Y) = - \int g(y) \log g(y) dy$$ A natural measure of dependence is the mutual information $I(Y)$ between the components of the random vector $Y$: $$I(Y) = \sum_{j=1}^p H(Y_j) - H(Y)$$ Let $Y=A^TX$ with $A$ orthogonal and $\text{cov}(X)=I$. It can be shown that: $$I(Y) = \sum_{j=1}^p H(Y_j) - H(X)$$ Finding the $A$ that minimizes $I(Y) = I(A^TX)$ looks for the orthogonal transformation that leads to the most independence between the components.

Instead of using the entropy $H(Y_j)$, we can use the negentropy measure: $$J(Y_j) = H(Z_j) - H(Y_j)$$ with $Z_j$ a Gaussian random variable with the same variance as $Y_j$. We can use an approximation that can be computed and optimized on the data: $$J(Y_j) \approx (E G(Y_j) - E G(Z_j))^2$$ $$\text{where } G(u) = \frac{1}{a} \log \cosh (au), \space \forall 1 \leq a \leq 2$$

## Exploratory Projection Pursuit

A projection pursuit algorithm for exploratory data analysis - Friedman, J. and Tukey, J. (1974) - [PDF](http://www.slac.stanford.edu/pubs/slacpubs/1250/slac-pub-1312.pdf)

This is a graphical exploration technique for visualizing high-dimensional data.

## A Direct Approach to ICA

Independent components analysis through product density estimation - Hastie, T. and Tibshirani, R.
(2003) - [PDF](https://papers.nips.cc/paper/2155-independent-components-analysis-through-product-density-estimation.pdf)

We observe a random vector $X \in \mathbb{R}^p$, assumed to arise from a linear mixing of a latent source random vector $S \in \mathbb{R}^p$: $$X = AS$$ The components $S_j$ are assumed to be independently distributed. We assume $E(S) = 0$, $\text{Cov}(S) = I$, $\text{Cov}(X) = I$, and $A$ orthogonal.

Because the $S_j$ are independent, the joint density of $S$ is given by: $$f_S(s) = \prod_{j=1}^p f_j(s_j)$$ And since $A$ is orthogonal, the joint density of $X$ is: $$f_X(x) = \prod_{j=1}^p f_j(a_j^Tx)$$

The model $f_X$ is fit using semi-parametric maximum likelihood. Each $f_j$ is represented by an exponentially tilted Gaussian density: $$f_j(s_j) = \phi (s_j) \exp (g_j(s_j))$$ with $\phi$ the standard Gaussian density and $g_j$ a cubic smoothing spline, restricted so that $f_j$ integrates to $1$.

### Fitting the Model

Given data $x_1, \text{...}, x_N$, we first center and whiten it. Then we fit the model by penalized maximum log-likelihood: $$\max_{A, \{ g_j \}_1^p} \sum_{j=1}^p \left[ \frac{1}{N} \sum_{i=1}^N (\log \phi(a_j^Tx_i) + g_j(a_j^Tx_i)) - \lambda_j \int g_j''^2(t)dt \right]$$ $$\text{s.t. } a_j^Ta_k = \delta_{jk} \space \forall j,k$$ $$\text{s.t. } \int \phi(s) \exp(g_j(s)) ds = 1 \space \forall j$$

ProDenICA algorithm:
- Initialize $A$ from a random Gaussian, then orthogonalize it.
- Repeat until convergence:
    - Given fixed $A$, optimize separately for each $g_j$ using the penalized density estimation algorithm.
    - Given fixed $g_j$, optimize for $A$ using one step of the fixed-point algorithm.

### Penalized density estimation

When $p=1$, the problem simplifies to: $$\max_g \frac{1}{N} \sum_{i=1}^N (\log \phi(s_i) + g(s_i)) - \lambda \int g''^2(t)dt$$ $$\text{s.t. } \int \phi(s) \exp(g(s)) ds = 1$$

The constraint can be absorbed into the modified criterion: $$\max_g \frac{1}{N} \sum_{i=1}^N (\log \phi(s_i) + g(s_i)) - \int \phi(s) \exp(g(s)) ds - \lambda \int g''^2(t)dt$$

We approximate the integral using a grid of $L$ values $s_l^*$ separated by $\Delta$, covering the observed values $s_i$: $$y_l^* = \frac{\# s_i \in (s_l^* - \Delta/2, s_l^* + \Delta/2)}{N}$$

The final criterion is: $$\max_g \sum_{l=1}^L \left[ y_l^*(\log \phi(s_l^*) + g(s_l^*)) - \Delta \phi(s_l^*) \exp(g(s_l^*)) \right] - \lambda \int g''^2(t)dt$$

This is a generalized additive model that can be fit using a Newton algorithm, turned into an iteratively reweighted penalized least squares regression problem. This is done using a weighted cubic smoothing spline.

### Fixed-point method

The penalty term does not depend on $A$, and because all columns of $A$ are orthogonal, the Gaussian component $\log \phi(a_j^Tx_i)$ does not depend on $A$ either. What remains to be optimized is: $$C(A) = \frac{1}{N} \sum_{i=1}^N \sum_{j=1}^p g_j(a_j^Tx_i)$$

# Multidimensional Scaling

Multidimensional scaling tries to learn a lower-dimensional manifold, like PCA. It only works with the distances $d_{ij}$ between observations $i$ and $j$. The goal is to find a lower-dimensional representation of the data that preserves the distances as well as possible.

Kruskal–Shepard scaling (least squares) minimizes the following stress function: $$S_M(Z) = \sum_{i \neq i'} (d_{ii'} - ||z_i - z_{i'}||)^2$$ The criterion is minimized using gradient descent. Another criterion is the Sammon mapping: $$S_{Sm}(Z) = \sum_{i \neq i'} \frac{(d_{ii'} - ||z_i - z_{i'}||)^2}{d_{ii'}}$$

In classical scaling, we use similarities $s_{ii'}$. One example is the centered inner product $s_{ii'} = \langle x_i - \bar{x}, x_{i'} - \bar{x} \rangle$.
The criterion is: $$S_C(Z) = \sum_{i,i'} (s_{ii'} - \langle z_i - \bar{z}, z_{i'} - \bar{z} \rangle)^2$$ If the similarities are the centered inner products, this is equivalent to PCA.

Another approach is nonmetric scaling, which minimizes the following criterion: $$S_{NM}(Z) = \frac{\sum_{i \neq i'} (||z_i - z_{i'}|| - \theta(d_{ii'}))^2}{\sum_{i \neq i'} ||z_i - z_{i'}||^2}$$ with $\theta$ an arbitrary increasing function. We fit the model by iteratively optimizing over $Z$ with gradient descent and over $\theta$ with isotonic regression, until convergence. Isotonic regression is a regression technique that minimizes the squared error while constraining the approximator to be a monotone function.

# Nonlinear Dimension Reduction

Several methods exist to find a low-dimensional nonlinear manifold of the data.

## Isometric feature mapping

A global geometric framework for nonlinear dimensionality reduction - Tenenbaum, J. B., de Silva, V. and Langford, J. C. (2000) - [PDF](https://web.mit.edu/cocosci/Papers/sci_reprint.pdf)

We build a graph over the dataset: we find the neighbors of each point, and add edges between each point and its neighbors. We approximate the geodesic distance between two points by the shortest path between them on the graph. Classical scaling is then applied to the graph distances.

## Local linear embedding

Nonlinear dimensionality reduction by locally linear embedding - Roweis, S. T. and Saul, L. K. (2000) - [PDF](http://www.robots.ox.ac.uk/~az/lectures/ml/lle.pdf)

The points are approximated locally, and a lower-dimensional representation is built from these approximations.

1. For each data point $x_i$, we find its K-nearest neighbors $\mathcal{N}(i)$
2. We approximate each point by an affine mixture of its neighbors: $$\min_{W_{ik}} ||x_i - \sum_{k \in \mathcal{N}(i)} w_{ik}x_k||^2$$ over weights $w_{ik}$ satisfying $\sum_k w_{ik}=1$.
3. We find points $y_i$ in a lower-dimensional space that minimize: $$\sum_{i=1}^N ||y_i - \sum_{k=1}^N w_{ik} y_k||^2$$

## Local Multidimensional Scaling

Local multidimensional scaling for nonlinear dimension reduction, graph drawing and proximity analysis - Chen, L. and Buja, A. (2008) - [PDF](https://pdfs.semanticscholar.org/183f/fb91f924ae7b938e4bfd1f5b2c3f8ef3b35c.pdf)

Let $\mathcal{N}$ be the set of nearby pairs, such that $(i,i') \in \mathcal{N}$ if $i$ is among the K-nearest neighbors of $i'$ or vice versa. The goal is to find the point representations $z_i$ that minimize the stress function: $$S_L(Z) = \sum_{(i,i') \in \mathcal{N}} (d_{ii'} - ||z_i - z_{i'}||)^2 - \tau \sum_{(i,i') \notin \mathcal{N}} ||z_i - z_{i'}||$$ with tuning parameters $\tau$ and $K$. The first term tries to preserve the local structure of the data, while the second encourages representations of non-neighbor points to be farther apart. The model is trained with gradient descent.

# The Google PageRank Algorithm

The pagerank citation ranking: bringing order to the web - Page, L., Brin, S., Motwani, R. and Winograd, T. (1998) - [PDF](http://ilpubs.stanford.edu:8090/422/1/1999-66.pdf)

We have $N$ webpages and want to rank them in terms of importance. A webpage is important if many webpages point to it. The ranking also takes into account the importance of the linking pages and the number of outgoing links they have.

Let $L$ be a binary matrix, with $L_{ij} = 1$ if page $j$ points to page $i$, and $0$ otherwise. Let $c_j = \sum_{i=1}^N L_{ij}$ be the number of pages pointed to by page $j$. Then the Google PageRanks $p_i$ are defined recursively as: $$p_i = (1 - d) + d \sum_{j=1}^N \frac{L_{ij}}{c_j} p_j$$ with $d$ a positive constant that ensures each page gets a PageRank of at least $1-d$. We can write this in matrix notation: $$p = (1 - d)e + d LD_c^{-1}p$$ with $e$ a vector of $N$ ones and $D_c = \text{diag}(c)$.
If we add the constraint that the average PageRank is $1$ ($e^Tp=N$), the equation can be rewritten as: $$p= \left[ (1-d)ee^T/N + dLD_c^{-1} \right] p$$ $$p=Ap$$

It can be shown that this problem is equivalent to a random walk expressed by a Markov chain, so the largest eigenvalue of $A$ is $1$. This means we can find $p$ with the power method.

Algorithm:
- Start with some random $p_0$
- Iterate until convergence: $$p_k \leftarrow A p_{k-1}$$ $$p_k \leftarrow N \frac{p_k}{e^Tp_k}$$

```
import numpy as np

def page_rank(L, d=0.85, tol=1e-12):
    N = L.shape[0]
    c = np.sum(L, axis=0)
    e = np.ones(N)
    A = (1-d)/N + d * L * (1/c).reshape(1, N)

    pk = np.random.rand(N)
    its = 0
    while True:
        its += 1
        pk1 = A @ pk
        pk1 /= np.mean(pk1)  # renormalize so the average PageRank is 1
        if (pk - pk1) @ (pk - pk1) < tol:
            break
        pk = pk1

    # metrics.tdist is this notebook's helper for the total distance between two arrays
    print(metrics.tdist(pk1, A @ pk1))
    print('Niters:', its)
    return pk1

L = np.array([
    [0, 0, 1, 0],
    [1, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0]
])

p = page_rank(L)
print(p)
```
# How to use this notebook:
- Run this code cell by cell.
- To observe the variable contents, use the "Variable Inspector" in Jupyter Classic. If you are using this notebook in JupyterLab, use the built-in debugger instead.
- If you would like to use "Code Tutor" to visualize the execution of individual cells, run the following cell once. Afterwards, write %%tutor in the first line of every cell you want to visualize.
- The documentation for range(), len() and all other built-in functions can be found here: https://docs.python.org/3/library/functions.html

```
# For Code Tutor visualizations
from metakernel import register_ipython_magics
register_ipython_magics()
```

## Functions
- Functions are defined with __def__.
- Pay attention to the correct indentation of the statement block.
- Functions optionally have __parameters__ and a __return value__. The latter is returned with "return".
- Functions have __documentation__, which is stored in the docstring.
- Functions have __test cases__ that can be run automatically and that document the function and demonstrate its use.
- Functions can have test cases in the docstring, but can also be tested in many other ways, for example with __assert__ statements or, more advanced, with [unittest](https://docs.python.org/3/library/unittest.html#module-unittest).

### Definition

```python
def name_der_funktion(parameter1, parameter2):
    """
    One sentence describing what this function does.

    Tests:
    >>> print(name_der_funktion("Rot", "Grün"))
    Gelb
    >>> print(name_der_funktion("Rot", "Blau"))
    Cyan

    Further relevant usage notes can go here.
    """
    computation 1
    computation 2
    computation ...
    ergebnis = computation n
    return ergebnis
```

### Application

Functions are easy to reuse, for example in loops.
To do so, you do not need to understand how the function works internally; you only need its result.

```
def hash13(s):
    """
    Computes a hash value of the string s between 0 and 12

    Tests: ToDo
    """
    summe = 0
    i = 0
    while i < len(s):
        j = ord(s[i])
        # print("Letter: {} Code: {}".format(s[i], j))
        summe = summe + j
        i += 1
    return summe % 13

passwoerter = ["Hallo", "3re4kl4", "abcde", "rambo"]
for p in passwoerter:
    h = hash13(p)
    print("{} - {}".format(p, h))
```

### Scope

The values passed to a function's parameters are only valid within the current function call.

```
def funktions_name(parameter1, parameter2):
    ergebnis = parameter1 * parameter2
    return ergebnis

rueckgabewert = funktions_name(7, 2)
print(rueckgabewert)
```

Outside a function, the parameter variables are not defined.

```
print(parameter1)
```

### Tests

To run the embedded tests, the function "run_docstring_examples" must be imported from the "doctest" package.

```python
from doctest import run_docstring_examples
```

Then the tests in the docstring can be executed with the following call:

```python
run_docstring_examples(name_der_funktion, locals())
```

```
from doctest import run_docstring_examples

def mittelwert(zahlen):
    """
    Computes the arithmetic mean of a list of numbers.

    >>> print(mittelwert([20, 30, 70]))
    40.0
    >>> print(mittelwert([0, 0, 0]))
    0.0
    """
    ergebnis = sum(zahlen) / len(zahlen)
    return ergebnis

run_docstring_examples(mittelwert, locals())

assert mittelwert([20, 30, 70]) == 40.0
assert mittelwert([0, 0, 0]) == 0.0
```

All the test cases in the docstrings of all functions in a .py file can also be tested at once.

```
def average(values):
    """
    Computes the arithmetic mean of a list of numbers.

    >>> print(average([20, 30, 70]))
    40.0
    >>> print(average([0, 0, 0]))
    0.0
    """
    return sum(values) / len(values)

def second_best(values):
    """
    Computes the second highest value of a list of numbers.
    >>> print(second_best([20, 30, 70]))
    30
    >>> print(second_best([0, 0, 0]))
    0
    """
    return sorted(values)[-2]  # second-highest element (counting duplicates)

import doctest
doctest.testmod()  # automatically validate the embedded tests of all functions
```
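As mentioned above, the tests can also live in a separate unittest test case instead of the docstring. A minimal sketch for the average function (the class name TestAverage is chosen freely for illustration):

```python
import unittest

def average(values):
    """Computes the arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

class TestAverage(unittest.TestCase):
    def test_regular_values(self):
        self.assertEqual(average([20, 30, 70]), 40.0)

    def test_all_zeros(self):
        self.assertEqual(average([0, 0, 0]), 0.0)

# exit=False keeps the interpreter (or notebook kernel) running after the tests
test_result = unittest.main(argv=["ignored"], exit=False)
```

Unlike doctests, unittest separates the tests from the documented function, which is useful once the number of test cases grows.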
# The Assemble Module

The `assemble` module of the `repytah` package finds and forms essential structure components. These components are the smallest building blocks that form the basis for every repeat in the song. The functions in this module ensure that each time step of a song is contained in at most one of the song's essential structure components by making none of the repeats overlap in time. When repeats do overlap, they are divided until only non-overlapping pieces are left.

The following functions are exported from the `assemble` module:

- `breakup_overlaps_by_intersect`: Distills the repeats encoded in **input\_pattern\_obj** (which holds the starting indices of the repeats) into essential structure components, using **bw\_vec** (which holds the lengths of each repeat).
- `check_overlaps`: Compares every pair of groups, determining if there are any repeats in any pairs of the groups that overlap.
- `hierarchical_structure`: Distills the repeats encoded in **matrix\_no\_overlaps** (and **key\_no\_overlaps**) to the essential structure components and then builds the hierarchical representation. Also optionally outputs visualizations of the hierarchical representations.

This module uses `find_all_repeats` from the [`search`](https://github.com/smith-tinkerlab/repytah/blob/main/docs/search_vignette.ipynb) module and `reconstruct_full_block` from the [`utilities`](https://github.com/smith-tinkerlab/repytah/blob/main/docs/utilities_vignette.ipynb) module. For more in-depth information on the function calls, an example function pipeline is shown below. Functions from the current module are shown in red.
<img src="pictures/function_pipeline.jpg" width="380">

## Import Modules

```
# NumPy is used for mathematical calculations
import numpy as np

# Import other modules
from inspect import signature

# Import assemble
from repytah.assemble import *
```

## breakup_overlaps_by_intersect

The purpose of this function is to create the essential structure components matrix. Essential structure components contain the smallest building blocks that form every repeat in the song. This matrix is created using **input\_pattern\_obj** that has the starting indices of the repeats and a vector **bw\_vec** that has the lengths of each repeat.

The inputs for this function are:

- **input_pattern_obj** (np.ndarray): A binary matrix with 1's where repeats begin and 0's otherwise
- **bw_vec** (np.ndarray): Lengths of the repeats encoded in **input\_pattern\_obj**
- **thresh_bw** (int): The smallest allowable repeat length

The outputs for this function are:

- **pattern_no_overlaps** (np.ndarray): A binary matrix with 1's where repeats of essential structure components begin
- **pattern_no_overlaps_key** (np.ndarray): A vector containing the lengths of the repeats of essential structure components in **pattern\_no\_overlaps**

```
input_pattern_obj = np.array([[1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
                              [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
                              [1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                              [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]])
bw_vec = np.array([[3], [5], [8], [8]])
thresh_bw = 0

print("The input array is: \n", input_pattern_obj)
print("The lengths of the repeats in the input array are: \n", bw_vec)
print("The smallest allowable repeat length is: ", thresh_bw)

output = breakup_overlaps_by_intersect(input_pattern_obj, bw_vec, thresh_bw)
print("The output array is: \n", output[0])
print("The lengths of the repeats in the output array are: \n", output[1])
```

## check_overlaps

This function compares every pair of groups and checks
for overlaps between those pairs. To check every pair of groups, the function creates *compare\_left* and *compare\_right*. *compare\_left* repeats each row the number of rows times, and *compare\_right* repeats the whole input the number of rows times. By comparing each corresponding time step in *compare\_left* and *compare\_right*, it determines if there are any overlaps between groups. The input for this function is: - **input_mat** (np.ndarray): An array waiting to be checked for overlaps The output for this function is: - **overlaps\_yn** (np.ndarray): A logical array where (i,j) = 1 if row i of input matrix and row j of input matrix overlap and (i, j) = 0 elsewhere ``` input_mat = np.array([[0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0], [1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1]]) print("The input array waiting to be checked for overlaps is: \n", input_mat) output = check_overlaps(input_mat) print("The output logical array is: \n", output) ``` ## hierarchical\_structure This function distills the repeats encoded in **matrix\_no\_overlaps** (and **key\_no\_overlaps**), which are the outputs from the [`remove_overlaps`](https://github.com/smith-tinkerlab/repytah/blob/main/docs/transform_vignette.ipynb) function from the transform module, to the essential structure components and then builds the hierarchical representation. It optionally shows visualizations of the hierarchical structure via the **vis** argument. 
The inputs for this function are:

- **matrix\_no\_overlaps** (np.array\[int]): A binary matrix with 1's where repeats begin and 0's otherwise
- **key\_no\_overlaps** (np.array\[int]): A vector containing the lengths of the repeats encoded in **matrix_no_overlaps**
- **sn** (int): The song length, which is the number of audio shingles
- **vis** (bool): Shows visualizations if True (default = False)

The outputs for this function are:

- **full_visualization** (np.array\[int]): A binary matrix representation for **full_matrix_no_overlaps** with blocks of 1's equal to the lengths prescribed in **full_key**
- **full_key** (np.array\[int]): A vector containing the lengths of the hierarchical structure encoded in **full_matrix_no_overlaps**
- **full_matrix_no_overlaps** (np.array\[int]): A binary matrix with 1's where hierarchical structure begins and 0's otherwise
- **full_anno_lst** (np.array\[int]): A vector containing the annotation markers of the hierarchical structure encoded in each row of **full_matrix_no_overlaps**

```
matrix_no_overlaps = np.array([[0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]])
key_no_overlaps = np.array([2])
sn = 20

print("The matrix representation of the non-overlapping repeats is: \n", matrix_no_overlaps)
print("The lengths of the repeats in matrix_no_overlaps are: \n", key_no_overlaps)
print("The song length is: \n", sn)

output = hierarchical_structure(matrix_no_overlaps, key_no_overlaps, sn, vis=True)
full_visualization = output[0]
full_key = output[1]
full_matrix_no_overlaps = output[2]
full_anno_lst = output[3]

print("The binary matrix representation for the full_matrix_no_overlaps is: \n", full_visualization)
print("The vector containing the lengths of the hierarchical structure encoded in full_matrix_no_overlaps is: \n", full_key)
print("The binary matrix with 1's where hierarchical structure begins and 0's otherwise is: \n", full_matrix_no_overlaps)
print("The vector containing the annotation markers of the hierarchical structure encoded in each row \n of full_matrix_no_overlaps is: \n", full_anno_lst)
```
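The row-repetition idea behind `check_overlaps` described earlier in this vignette can be sketched in a few lines of NumPy. This is a simplified illustration of the idea, not the library implementation; note that the diagonal compares each row with itself:

```python
import numpy as np

def pairwise_overlaps(input_mat):
    # compare_left: repeat each row n times; compare_right: tile the whole
    # matrix n times. Comparing them element-wise tests every ordered pair
    # of rows for a shared time step.
    n = input_mat.shape[0]
    compare_left = np.repeat(input_mat, n, axis=0)
    compare_right = np.tile(input_mat, (n, 1))
    shared = (compare_left * compare_right).sum(axis=1) > 0
    return shared.reshape(n, n).astype(int)

rows = np.array([[1, 1, 0, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 1]])
print(pairwise_overlaps(rows))  # rows 0 and 1 share time step 1
```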
# Linear algebra in Python with NumPy

In this lab, you will have the opportunity to remember some basic concepts about linear algebra and how to use them in Python.

Numpy is one of the most used libraries in Python for array manipulation. It adds to Python a set of functions that allows us to operate on large multidimensional arrays with just a few lines. So forget about writing nested loops for adding matrices! With NumPy, this is as simple as adding numbers.

Let us import the `numpy` library and assign the alias `np` for it. We will follow this convention in almost every notebook in this course, and you'll see this in many resources outside this course as well.

```
import numpy as np  # The swiss knife of the data scientist.
```

## Defining lists and numpy arrays

```
alist = [1, 2, 3, 4, 5]  # Define a python list. It looks like an np array
narray = np.array([1, 2, 3, 4])  # Define a numpy array
```

Note the difference between a Python list and a NumPy array.

```
print(alist)
print(narray)

print(type(alist))
print(type(narray))
```

## Algebraic operators on NumPy arrays vs. Python lists

One of the common beginner mistakes is to mix up the concepts of NumPy arrays and Python lists. Just observe the next example, where we add two objects of the two mentioned types. Note that the '+' operator on NumPy arrays performs an element-wise addition, while the same operation on Python lists results in a list concatenation. Be careful while coding. Knowing this can save many headaches.

```
print(narray + narray)
print(alist + alist)
```

The same happens with the product operator, `*`. In the first case, we scale the vector, while in the second case, we concatenate the same list three times.

```
print(narray * 3)
print(alist * 3)
```

Be aware of the difference because, within the same function, both types of arrays can appear. Numpy arrays are designed for numerical and matrix operations, while lists are for more general purposes.
## Matrix or Array of Arrays

In linear algebra, a matrix is a structure composed of n rows by m columns. That means each row must have the same number of columns. With NumPy, we have two ways to create a matrix:
* Creating an array of arrays using `np.array` (recommended).
* Creating a matrix using `np.matrix` (still available but might be removed soon).

NumPy arrays or lists can be used to initialize a matrix, but the resulting matrix will be composed of NumPy arrays only.

```
npmatrix1 = np.array([narray, narray, narray])  # Matrix initialized with NumPy arrays
npmatrix2 = np.array([alist, alist, alist])  # Matrix initialized with lists
npmatrix3 = np.array([narray, [1, 1, 1, 1], narray])  # Matrix initialized with both types

print(npmatrix1)
print(npmatrix2)
print(npmatrix3)
```

However, when defining a matrix, be sure that all the rows contain the same number of elements. Otherwise, the linear algebra operations could lead to unexpected results. Analyze the following two examples (note that recent NumPy versions refuse to create the ragged array in the second example and raise an error instead):

```
# Example 1:
okmatrix = np.array([[1, 2], [3, 4]])  # Define a 2x2 matrix
print(okmatrix)  # Print okmatrix
print(okmatrix * 2)  # Print a scaled version of okmatrix
print(np.dot(okmatrix, okmatrix))  # Matrix product of okmatrix with itself

# Example 2:
badmatrix = np.array([[1, 2], [3, 4], [5, 6, 7]])  # Define a matrix. Note the third row contains 3 elements
print(badmatrix)  # Print the malformed matrix
print(badmatrix * 2)  # It is supposed to scale the whole matrix
```

## Scaling and translating matrices

Now that you know how to build correct NumPy arrays and matrices, let us see how easy it is to operate with them in Python using the regular algebraic operators like + and -.

Operations can be performed between arrays and arrays or between arrays and scalars.

```
# Scale by 2 and translate 1 unit the matrix
result = okmatrix * 2 + 1  # For each element in the matrix, multiply by 2 and add 1
print(result)

# Add two sum compatible matrices
result1 = okmatrix + okmatrix
print(result1)

# Subtract two sum compatible matrices.
# This is called the difference vector
result2 = okmatrix - okmatrix
print(result2)
```

The product operator `*` when used on arrays or matrices indicates element-wise multiplication. Do not confuse it with the dot product.

```
result = okmatrix * okmatrix  # Multiply each element by itself
print(result)
```

## Transpose a matrix

In linear algebra, the transpose of a matrix is an operator that flips a matrix over its diagonal, i.e., the transpose operator switches the row and column indices of the matrix, producing another matrix. If the original matrix dimension is n by m, the resulting transposed matrix will be m by n.

**T** denotes the transpose operation with NumPy matrices.

```
matrix3x2 = np.array([[1, 2], [3, 4], [5, 6]])  # Define a 3x2 matrix
print('Original matrix 3 x 2')
print(matrix3x2)
print('Transposed matrix 2 x 3')
print(matrix3x2.T)
```

However, note that the transpose operation does not affect 1D arrays.

```
nparray = np.array([1, 2, 3, 4])  # Define an array
print('Original array')
print(nparray)
print('Transposed array')
print(nparray.T)
```

Perhaps in this case you wanted to do:

```
nparray = np.array([[1, 2, 3, 4]])  # Define a 1 x 4 matrix. Note the 2 levels of square brackets
print('Original array')
print(nparray)
print('Transposed array')
print(nparray.T)
```

## Get the norm of a nparray or matrix

In linear algebra, the norm of an n-dimensional vector $\vec a$ is defined as:

$$ norm(\vec a) = ||\vec a|| = \sqrt {\sum_{i=1}^{n} a_i ^ 2}$$

Calculating the norm of a vector or even of a matrix is a common operation when dealing with data. Numpy has a set of functions for linear algebra in the subpackage **linalg**, including the **norm** function. Let us see how to get the norm of a given array or matrix:

```
nparray1 = np.array([1, 2, 3, 4])  # Define an array
norm1 = np.linalg.norm(nparray1)

nparray2 = np.array([[1, 2], [3, 4]])  # Define a 2 x 2 matrix.
# Note the 2 levels of square brackets
norm2 = np.linalg.norm(nparray2)

print(norm1)
print(norm2)
```

Note that without any other parameter, the norm function treats the matrix as being just an array of numbers. However, it is possible to get the norm by rows or by columns. The **axis** parameter controls the form of the operation:
* **axis=0** means get the norm of each column
* **axis=1** means get the norm of each row.

```
nparray2 = np.array([[1, 1], [2, 2], [3, 3]])  # Define a 3 x 2 matrix.

normByCols = np.linalg.norm(nparray2, axis=0)  # Get the norm for each column. Returns 2 elements
normByRows = np.linalg.norm(nparray2, axis=1)  # Get the norm for each row. Returns 3 elements

print(normByCols)
print(normByRows)
```

However, there are more ways to get the norm of a matrix in Python. For that, let us see all the different ways of defining the dot product between 2 arrays.

## The dot product between arrays: All the flavors

The dot product or scalar product or inner product between two vectors $\vec a$ and $\vec b$ of the same size is defined as:

$$\vec a \cdot \vec b = \sum_{i=1}^{n} a_i b_i$$

The dot product takes two vectors and returns a single number.

```
nparray1 = np.array([0, 1, 2, 3])  # Define an array
nparray2 = np.array([4, 5, 6, 7])  # Define an array

flavor1 = np.dot(nparray1, nparray2)  # Recommended way
print(flavor1)

flavor2 = np.sum(nparray1 * nparray2)  # Ok way
print(flavor2)

flavor3 = nparray1 @ nparray2  # Geeks way
print(flavor3)

# As you should never do:
# Noobs way
flavor4 = 0
for a, b in zip(nparray1, nparray2):
    flavor4 += a * b
print(flavor4)
```

**We strongly recommend using np.dot, since it is the only method that accepts arrays and lists without problems**

```
norm1 = np.dot(np.array([[1, 2], [3, 4]]), np.array([[1, 2], [3, 4]]))  # Dot product on nparrays
norm2 = np.dot([1, 2], [3, 4])  # Dot product on python lists
print(norm1, '=', norm2)
```

Finally, note that the norm is the square root of the dot product of the vector with itself.
That gives many options to write that function:

$$ norm(\vec a) = ||\vec a|| = \sqrt {\sum_{i=1}^{n} a_i ^ 2} = \sqrt {\vec a \cdot \vec a}$$

## Sums by rows or columns

Another general operation performed on matrices is the sum by rows or columns. Just as we did for the function norm, the **axis** parameter controls the form of the operation:
* **axis=0** means to sum the elements of each column together.
* **axis=1** means to sum the elements of each row together.

```
nparray2 = np.array([[1, -1], [2, -2], [3, -3]])  # Define a 3 x 2 matrix.

sumByCols = np.sum(nparray2, axis=0)  # Get the sum for each column. Returns 2 elements
sumByRows = np.sum(nparray2, axis=1)  # Get the sum for each row. Returns 3 elements

print('Sum by columns: ')
print(sumByCols)
print('Sum by rows:')
print(sumByRows)
```

## Get the mean by rows or columns

As with the sums, one can get the **mean** by rows or columns using the **axis** parameter. Just remember that the mean is the sum of the elements divided by the length of the vector:

$$ mean(\vec a) = \frac {{\sum_{i=1}^{n} a_i }}{n}$$

```
nparray2 = np.array([[1, -1], [2, -2], [3, -3]])  # Define a 3 x 2 matrix. Chosen to be a matrix with 0 mean

mean = np.mean(nparray2)  # Get the mean for the whole matrix
meanByCols = np.mean(nparray2, axis=0)  # Get the mean for each column. Returns 2 elements
meanByRows = np.mean(nparray2, axis=1)  # Get the mean for each row. Returns 3 elements

print('Matrix mean: ')
print(mean)
print('Mean by columns: ')
print(meanByCols)
print('Mean by rows:')
print(meanByRows)
```

## Center the columns of a matrix

Centering the attributes of a data matrix is another essential preprocessing step. Centering a matrix means subtracting the column mean from each element inside the column. The sum by columns of a centered matrix is always 0.

With NumPy, this process is as simple as this:

```
nparray2 = np.array([[1, 1], [2, 2], [3, 3]])  # Define a 3 x 2 matrix.
nparrayCentered = nparray2 - np.mean(nparray2, axis=0)  # Remove the mean for each column

print('Original matrix')
print(nparray2)
print('Centered by columns matrix')
print(nparrayCentered)
print('New mean by column')
print(nparrayCentered.mean(axis=0))
```

**Warning:** This process does not apply to row centering. In such cases, consider transposing the matrix, centering by columns, and then transposing back the result. See the example below:

```
nparray2 = np.array([[1, 3], [2, 4], [3, 5]])  # Define a 3 x 2 matrix.

nparrayCentered = nparray2.T - np.mean(nparray2, axis=1)  # Remove the mean for each row
nparrayCentered = nparrayCentered.T  # Transpose back the result

print('Original matrix')
print(nparray2)
print('Centered by rows matrix')
print(nparrayCentered)
print('New mean by rows')
print(nparrayCentered.mean(axis=1))
```

Note that some operations can be performed using static functions like `np.sum()` or `np.mean()`, or by calling the equivalent methods of the array itself:

```
nparray2 = np.array([[1, 3], [2, 4], [3, 5]])  # Define a 3 x 2 matrix.

mean1 = np.mean(nparray2)  # Static way
mean2 = nparray2.mean()  # Dynamic way

print(mean1, ' == ', mean2)
```

Even though they are equivalent, we recommend always using the static form.

**Congratulations! You have successfully reviewed vector and matrix operations with Numpy!**
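As a final check tying the norm and dot-product sections together, the identity $||\vec a|| = \sqrt{\vec a \cdot \vec a}$ stated earlier can be verified directly:

```python
import numpy as np

a = np.array([1, 2, 3, 4])
norm_direct = np.linalg.norm(a)        # library norm
norm_via_dot = np.sqrt(np.dot(a, a))   # square root of the dot product of a with itself
print(norm_direct, norm_via_dot)       # both equal sqrt(30)
```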
# Hinge Loss

In this project you will be implementing linear classifiers, beginning with the Perceptron algorithm. You will begin by writing your loss function, a hinge-loss function. For this function you are given the parameters of your model θ and θ0. Additionally, you are given a feature matrix in which the rows are feature vectors and the columns are individual features, and a vector of labels representing the actual sentiment of the corresponding feature vector.

1. First, implement the basic hinge loss calculation on a single data point. Instead of the entire feature matrix, you are given one row, representing the feature vector of a single data sample, and its label of +1 or -1 representing the ground-truth sentiment of the data sample.

def hinge_loss_single(feature_vector, label, theta, theta_0):

    feature_vector - A numpy array describing the given data point.
    label - A real valued number, the correct classification of the data point.
    theta - A numpy array describing the linear classifier.
    theta_0 - A real valued number representing the offset parameter.

    Returns: A real number representing the hinge loss associated with the given data point and parameters.

```
import numpy as np

feature_vector = np.array([1, 2])
label = 1
theta = np.array([-1, 1])
theta_0 = -0.2

def hinge_loss_single(feature_vector, label, theta, theta_0):
    # The margin is label * (theta . x + theta_0); the loss is max(0, 1 - margin)
    if label * (np.dot(feature_vector, theta) + theta_0) >= 1:
        loss = 0
    else:
        loss = 1 - (label * (np.dot(theta, feature_vector) + theta_0))
    return loss
```

# The Complete Hinge Loss

Now it's time to implement the complete hinge loss for a full set of data. Your input will be a full feature matrix this time, and you will have a vector of corresponding labels. The kth row of the feature matrix corresponds to the kth element of the labels vector. This function should return the appropriate loss of the classifier on the given dataset.
```
def hinge_loss_full(feature_matrix, labels, theta, theta_0):
    total_loss = []
    for i, x in enumerate(feature_matrix):
        if labels[i] * (np.dot(theta, x) + theta_0) >= 1:
            loss = 0
        else:
            loss = 1 - (labels[i] * (np.dot(theta, x) + theta_0))
        total_loss.append(loss)
    return sum(total_loss) / len(feature_matrix)

feature_matrix = np.array([[1, 2], [1, -1]])
label, theta, theta_0 = np.array([1, 1]), np.array([-1, 1]), -0.2
hinge_loss_full(feature_matrix, label, theta, theta_0)
```
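The loop above can also be written in a fully vectorized form that computes all margins at once (a sketch equivalent to `hinge_loss_full`, not a required part of the project):

```python
import numpy as np

def hinge_loss_full_vectorized(feature_matrix, labels, theta, theta_0):
    # Margins for all samples at once: y_k * (theta . x_k + theta_0)
    margins = labels * (feature_matrix @ theta + theta_0)
    # Hinge loss is max(0, 1 - margin), averaged over the dataset
    return np.mean(np.maximum(0.0, 1.0 - margins))

feature_matrix = np.array([[1, 2], [1, -1]])
labels, theta, theta_0 = np.array([1, 1]), np.array([-1, 1]), -0.2
print(hinge_loss_full_vectorized(feature_matrix, labels, theta, theta_0))
```

For this example the margins are 0.8 and -2.2, giving losses 0.2 and 3.2 and an average hinge loss of 1.7, the same value the looped version returns.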
(Feedforward)=

# Chapter 8 -- Feedforward

Let's take a look at how feedforward is processed in a three-layer neural net.

<img src="images/feedForward.PNG" width="500">

Figure 8.1

From figure 8.1 above, we know that the two input values for the first and the second neuron in the hidden layer are

$$ h_1^{(1)} = w_{11}^{(1)}*x_1 + w_{21}^{(1)}*x_2 + w_{31}^{(1)}*x_3+ w_{41}^{(1)}*1 $$ (eq8_1)

$$ h_2^{(1)} = w_{12}^{(1)}*x_1 + w_{22}^{(1)}*x_2 + w_{32}^{(1)}*x_3+ w_{42}^{(1)}*1 $$ (eq8_2)

where the $w^{(n)}_{4m}$ term is the bias term in the form of a weight. To simplify the two equations above, we can use matrix notation:

$$ H^{(1)} = [h_1^{(1)} \;\; h_2^{(1)}] = [x_1 \;\; x_2 \;\; x_3 \;\; 1] \begin{bmatrix} w^{(1)}_{11} & w^{(1)}_{12} \\ w^{(1)}_{21} & w^{(1)}_{22} \\ w^{(1)}_{31} & w^{(1)}_{32} \\ w^{(1)}_{41} & w^{(1)}_{42} \end{bmatrix} $$ (eq8_3)

The hidden layer then applies the sigmoid function to these two values:

$$ \sigma(H^{(1)}) = [\sigma(h_1^{(1)}) \;\; \sigma( h_2^{(1)})] $$ (eq8_4)

These activations in turn become the input values for the next layer (the output layer):

$$ h^{(2)} = w^{(2)}_{11}* \sigma(h^{(1)}_1)+w^{(2)}_{21} *\sigma(h^{(1)}_2)+w^{(2)}_{31}*1 $$ (eq8_5)

Again, we can simplify this equation by using matrix notation:

$$ h^{(2)} = [\sigma(h_1^{(1)}) \;\;\sigma(h_2^{(1)}) \; \; 1] \begin{bmatrix} w^{(2)}_{11} \\ w^{(2)}_{21} \\ w^{(2)}_{31} \end{bmatrix} $$ (eq8_6)

Then we send this value $h^{(2)}$ into the sigmoid function in the final output layer to obtain the prediction

$$ \hat{y} = \sigma(h^{(2)}) $$ (eq8_7)

Putting the equations of all three layers together, we have

$$ \hat{y} = \sigma(\sigma([x_1 \;\; x_2 \;\; x_3 \;\; 1] \begin{bmatrix} w^{(1)}_{11} & w^{(1)}_{12} \\ w^{(1)}_{21} & w^{(1)}_{22} \\ w^{(1)}_{31} & w^{(1)}_{32} \\ w^{(1)}_{41} & w^{(1)}_{42} \end{bmatrix}) \begin{bmatrix} w^{(2)}_{11} \\ w^{(2)}_{21} \\ w^{(2)}_{31} \end{bmatrix}) $$ (eq8_8)

Or we can simplify it to

$$ \hat{y} = \sigma(\sigma(xW^{(1)})W^{(2)}) $$ (eq8_9)

This is the
feedforward process: using the known weights $W$ and the input $x$, we calculate the prediction $\hat{y}$.

Finally, it's easy to write code computing the output from a Network instance. We begin by defining the sigmoid function:

```
import numpy as np

def sigmoid(z):
    return 1.0/(1.0+np.exp(-z))
```

Note that when the input z is a vector or Numpy array, Numpy automatically applies the function sigmoid elementwise, that is, in vectorized form. We then add a feedforward method to the Network class, which, given an input a for the network, returns the corresponding output:

```
def feedforward(self, a):
    """Return the network's output for the input a, feeding each
    layer's output forward as the next layer's input."""
    for b, w in zip(self.biases, self.weights):
        a = sigmoid(np.dot(w, a)+b)
    return a
```
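To make equation (8.9) concrete, here is a small numeric sketch of the 3-input, 2-hidden-unit network from figure 8.1. The weight values are invented purely for illustration, with the last row of each weight matrix playing the role of the bias term:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Made-up weights for illustration only
W1 = np.array([[0.1, 0.4],
               [0.2, 0.5],
               [0.3, 0.6],
               [0.05, 0.05]])  # 4 x 2: three inputs plus a bias row
W2 = np.array([[0.7],
               [0.8],
               [0.1]])         # 3 x 1: two hidden units plus a bias row

x = np.array([1.0, 2.0, 3.0, 1.0])        # inputs with a trailing 1 for the bias
h1 = sigmoid(x @ W1)                      # hidden-layer activations, eq. (8.4)
y_hat = sigmoid(np.append(h1, 1.0) @ W2)  # final prediction, eq. (8.7)
print(y_hat)
```

The appended 1 in the input vector and in the hidden activations is what turns the bias terms into ordinary weights, exactly as in equations (8.1) and (8.5).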
In this lab, we will optimize the weather simulation application written in Fortran (if you prefer to use C++, click [this link](../../C/jupyter_notebook/profiling-c.ipynb)).

Let's execute the cell below to display information about the GPUs running on the server by running the pgaccelinfo command, which ships with the PGI compiler that we will be using. To do this, execute the cell block below by giving it focus (clicking on it with your mouse), and hitting Ctrl-Enter, or pressing the play button in the toolbar above. If all goes well, you should see some output returned below the grey cell.

```
!pgaccelinfo
```

## Exercise 2

### Learning objectives

Learn how to identify and parallelise the computationally expensive routines in your application using OpenACC compute constructs (a compute construct is a parallel, kernels, or serial construct). In this exercise you will:

- Implement OpenACC parallelism using parallel directives to parallelise the serial application
- Learn how to compile your parallel application with the PGI compiler
- Benchmark and compare the parallel version of the application with the serial version
- Learn how to interpret PGI compiler feedback to ensure the applied optimizations were successful

From the top menu, click on *File*, and *Open* `miniWeather_openacc.f90` and `Makefile` from the `Fortran/source_code/lab2` directory and inspect the code before running the cells below. We have already added OpenACC compute directives (`!$acc parallel loop`) around the expensive routines (loops) in the code. Once done, compile the code with `make`. View the PGI compiler feedback (enabled by adding the `-Minfo=accel` flag) and investigate the compiler feedback for the OpenACC code. The compiler feedback provides useful information about applied optimizations.

```
!cd ../source_code/lab2 && make clean && make
```

Let's inspect part of the compiler feedback and see what it's telling us.
<img src="images/ffeedback1-0.png">

- Using `-ta=tesla:managed`, we instruct the compiler to build for an NVIDIA Tesla GPU using "CUDA Managed Memory"
- Using the `-Minfo` command-line option, we will see all output from the compiler. In this example, we use `-Minfo=accel` to only see the output corresponding to the accelerator (in this case an NVIDIA GPU).
- The first line of the output, `compute_tendencies_x`, tells us which function the following information is in reference to.
- The lines starting with 247 and 252 show that we created a parallel OpenACC loop. This loop is made up of gangs (a grid of blocks in CUDA language) and vector parallelism (threads in CUDA language) with the vector size being 128 per gang.
- The lines starting with 249 and 252, `Loop is parallelizable`, tell us that on these lines in the source code, the compiler found loops to accelerate.
- The rest of the information concerns data movement. The compiler detected a possible need to move data and handled it for us. We will get into this later in this lab.

It is very important to inspect the feedback to make sure the compiler is doing what you have asked of it. Now, **run** the application for small values of `nx_glob`, `nz_glob`, and `sim_time`: **40, 20, 1000**.

```
!cd ../source_code/lab2 && ./miniWeather
```

**Profile** it with the Nsight Systems command line tool `nsys`.

```
!cd ../source_code/lab2 && nsys profile -t nvtx,openacc --stats=true --force-overwrite true -o miniWeather_3 ./miniWeather
```

You can see that the changes actually slowed the code down compared to the non-accelerated CPU-only version. Let's check out the profiler's report. [Download the profiler output](../source_code/lab2/miniWeather_3.qdrep) and open it via the GUI. From the "timeline view" on the top pane, double click on "CUDA" in the function table on the left and expand it. Zoom in on the timeline and you can see a pattern similar to the screenshot below.
The blue boxes are the compute kernels and each of these groupings of kernels is surrounded by purple and teal boxes (annotated in red) representing data movements. **The screenshots represent the profiler report for the values 400, 200, 1500.**

<img src="images/nsys_slow.png" width="80%" height="80%">

Hover your mouse over the kernels (blue boxes) one by one in each row and check out the provided information.

<img src="images/occu-1.png" width="60%" height="60%">

**Note**: In the next two exercises, we start optimizing the application by improving the occupancy and reducing data movements.

## Post-Lab Summary

If you would like to download this lab for later viewing, it is recommended that you go to your browser's File menu (not the Jupyter notebook file menu) and save the complete web page. This will ensure the images are copied down as well. You can also execute the following cell block to create a zip-file of the files you've been working on, and download it with the link below.

```
%%bash
cd ..
rm -f openacc_profiler_files.zip
zip -r openacc_profiler_files.zip *
```

**After** executing the above zip command, you should be able to download the zip file [here](../openacc_profiler_files.zip).

-----

# <p style="text-align:center;border:3px; border-style:solid; border-color:#FF0000 ; padding: 1em"> <a href=../../profiling_start.ipynb>HOME</a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="float:center"> <a href=profiling-fortran-lab3.ipynb>NEXT</a></span> </p>

-----

# Links and Resources

[OpenACC API Guide](https://www.openacc.org/sites/default/files/inline-files/OpenACC%20API%202.6%20Reference%20Guide.pdf)

[NVIDIA Nsight System](https://docs.nvidia.com/nsight-systems/)

[CUDA Toolkit Download](https://developer.nvidia.com/cuda-downloads)

**NOTE**: To be able to see the Nsight System profiler output, please download the latest version of Nsight Systems from [here](https://developer.nvidia.com/nsight-systems).
Don't forget to check out additional [OpenACC Resources](https://www.openacc.org/resources) and join our [OpenACC Slack Channel](https://www.openacc.org/community#slack) to share your experience and get more help from the community.

---

## Licensing

This material is released by NVIDIA Corporation under the Creative Commons Attribution 4.0 International (CC BY 4.0).
```
import matplotlib
import matplotlib.pyplot as plt
import os
import random
import io
import imageio
import glob
import scipy.misc
import numpy as np
from six import BytesIO
from PIL import Image, ImageDraw, ImageFont
from IPython.display import display, Javascript
from IPython.display import Image as IPyImage

import tensorflow as tf

from object_detection.utils import label_map_util
from object_detection.utils import config_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.builders import model_builder

%matplotlib inline
```

### Function to run inference on a single image

```
def run_inference_single_image(model, image):
    image = np.asarray(image)
    input_tensor = tf.convert_to_tensor(image)
    input_tensor = input_tensor[tf.newaxis, ...]
    model_fn = model.signatures["serving_default"]
    output = model_fn(input_tensor)
    num_detections = int(output.pop("num_detections"))
    output = {key: value[0, :num_detections].numpy() for key, value in output.items()}
    output['num_detections'] = num_detections
    output['detection_classes'] = output['detection_classes'].astype(np.int64)
    return output


LABEL_PATH = '/home/thirumalaikumar/Intern Projects/TrafficControl/content/sub_surf/gate_label_map.pbtxt'
ci = label_map_util.create_category_index_from_labelmap(LABEL_PATH, use_display_name=True)


def show_inference(model, frame):
    image_np = np.array(frame)
    output = run_inference_single_image(model, image_np)
    classes = np.squeeze(output['detection_classes'])  # class to which each object belongs
    boxes = np.squeeze(output['detection_boxes'])      # box coordinates
    scores = np.squeeze(output['detection_scores'])    # probability score of the model
    # condition for detecting only the gate
    indices = np.argwhere(classes == 2)
    boxes = np.squeeze(boxes[indices])
    classes = np.squeeze(classes[indices])
    scores = np.squeeze(scores[indices])
    viz_utils.visualize_boxes_and_labels_on_image_array(
        image_np,
        boxes,
        classes,
        scores,
        ci,
        use_normalized_coordinates=True,
        max_boxes_to_draw=100,
        min_score_thresh=.8,
        agnostic_mode=False,
    )
    return image_np


model = tf.saved_model.load("/home/thirumalaikumar/Intern Projects/TrafficControl/content/sub_surf/saved_model")

import cv2


def post_process_bb(model, img, threshold=0.5):
    img = cv2.imread(img)
    output = run_inference_single_image(model, img)
    assert len(output['detection_boxes']) == len(output['detection_scores'])
    max_score_index = np.squeeze(np.argwhere(output['detection_scores'] >= threshold))
    detection_score = output['detection_scores'][max_score_index]
    box_coords = output['detection_boxes'][max_score_index]
    detection_class = output['detection_classes'][max_score_index]
    return img, detection_score, detection_class, box_coords


def midpoint():
    img, score, classes, coords = post_process_bb(model, "/home/thirumalaikumar/hackathon/images_1005-20201030T064700Z-001/images_1005/1_458/download (37).jpg")
    # note: OpenCV images are (height, width, channels)
    im_height = img.shape[0]
    im_width = img.shape[1]
    try:
        coords = coords.reshape(1, coords.shape[0])
    except ValueError as v:
        print("Your object detector has detected more than 1 bounding box")
        print(coords.shape)
    for i in range(len(coords)):
        # detection boxes are [ymin, xmin, ymax, xmax] in normalized coordinates
        y1, x1, y2, x2 = coords[i]
        (left, right, top, bottom) = (x1 * im_width, x2 * im_width,
                                      y1 * im_height, y2 * im_height)
        p1 = (int(left), int(top))
        p2 = (int(right), int(bottom))
        #img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        _ = cv2.rectangle(img, p1, p2, (255, 0, 0), 15)
        x_center = int((left + right) / 2)
        y_center = int(bottom)
        x2_center = int((left + right) / 2)
        y2_center = int(top)
        center1 = (x_center, y_center)
        center2 = (x2_center, y2_center)
        res = tuple(map(lambda i, j: i + j, center1, center2))
        res = tuple(map(lambda i: i / 2, res))
        res = tuple(map(lambda i: int(i), res))
        img1 = cv2.circle(img, res, 15, (0, 255, 0), -1)
        #cv2.putText(img1, "Gate", p1, cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2, cv2.LINE_AA)
    return img1


plt.imshow(midpoint())

vid = cv2.VideoCapture(0)

while(True):

    # Capture the video frame
    # by frame
    ret, frame = vid.read()
    imagen = show_inference(model, frame)

    # Display the resulting frame
    cv2.imshow('frame', cv2.resize(imagen, (800, 600)))

    # the 'q' button is set as the
    # quitting button; you may use any
    # desired button of your choice
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# After the loop release the cap object
vid.release()
# Destroy all the windows
cv2.destroyAllWindows()
```
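The `midpoint` cell above scales normalized detection boxes by the image dimensions. As a standalone illustration of that conversion (the helper name `box_to_pixels` is ours, not part of the notebook), using the TF Object Detection API's `[ymin, xmin, ymax, xmax]` box convention:

```python
import numpy as np

def box_to_pixels(box, image_shape):
    """Convert a normalized [ymin, xmin, ymax, xmax] box to integer
    pixel corners (left, top, right, bottom)."""
    height, width = image_shape[:2]  # OpenCV images are (height, width, channels)
    ymin, xmin, ymax, xmax = box
    return (int(xmin * width), int(ymin * height),
            int(xmax * width), int(ymax * height))

corners = box_to_pixels(np.array([0.25, 0.5, 0.75, 0.9]), (600, 800, 3))
```

The same corners can then be passed to `cv2.rectangle` as `p1 = corners[:2]`, `p2 = corners[2:]`.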
This script generates a zone plate pattern (based on partial filling) given the material, energy, grid size and number of zones as input.

```
import numpy as np
import matplotlib.pyplot as plt
from numba import njit
from joblib import Parallel, delayed
from tqdm import tqdm, trange
import urllib, os, pickle
from os.path import dirname as up
```

Importing all the required libraries. Numba is used to optimize functions.

```
def repeat_pattern(X, Y, Z):
    flag_ = np.where((X>0)&(Y>0))
    flag1 = np.where((X>0)&(Y<0))
    flag1 = tuple((flag1[0][::-1], flag1[1]))
    Z[flag1] = Z[flag_]
    flag2 = np.where((X<0)&(Y>0))
    flag2 = tuple((flag2[0], flag2[1][::-1]))
    Z[flag2] = Z[flag_]
    flag3 = np.where((X<0)&(Y<0))
    flag3 = tuple((flag3[0][::-1], flag3[1][::-1]))
    Z[flag3] = Z[flag_]
    return Z
```

*repeat_pattern* : produces the zone plate pattern given the pattern in only one quadrant (X, Y > 0) as input.
* *Inputs* : X and Y grid denoting the coordinates and Z containing the pattern in one quadrant.
* *Outputs* : Z itself is modified to reflect the repetition.

```
def get_property(mat, energy):
    url = "http://henke.lbl.gov/cgi-bin/pert_cgi.pl"
    data = {'Element':str(mat), 'Energy':str(energy), 'submit':'Submit Query'}
    data = urllib.parse.urlencode(data)
    data = data.encode('utf-8')
    req = urllib.request.Request(url, data)
    resp = urllib.request.urlopen(req)
    respDat = resp.read()
    response = respDat.split()
    d = b'g/cm^3<li>Delta'
    i = response.index(d)
    delta = str(response[i+2])[:str(response[i+2]).index('<li>Beta')][2:]
    beta = str(response[i+4])[2:-1]
    return float(delta), float(beta)
```

*get_property* : gets delta and beta for a given material at the specified energy from Henke et al.
* *Inputs* : mat - material, energy - energy in eV
* *Outputs* : delta, beta

```
@njit  # equivalent to "jit(nopython=True)".
def partial_fill(x, y, step, r1, r2, n):
    x_ = np.linspace(x-step/2, x+step/2, n)
    y_ = np.linspace(y-step/2, y+step/2, n)
    cnts = 0
    for i in range(n):
        for j in range(n):
            z = (x_[i] * x_[i] + y_[j] * y_[j])
            if r1*r1 < z < r2*r2:
                cnts += 1
    fill_factor = cnts/(n*n)
    return fill_factor
```

*partial_fill* : workhorse function for determining the fill pattern. This function is thus used in a loop. njit is used to optimize the function.
* *Inputs* : x,y - coordinates of the point, step - step size, r1,r2 - inner and outer radii of ring, n - resolution
* *Outputs* : fill_factor - value of the pixel based on amount of ring passing through it

```
#find the radius of the nth zone
def zone_radius(n, f, wavel):
    return np.sqrt(n*wavel*f + ((n*wavel)/2)**2)
```

*zone_radius* : function to find the radius of a zone given the zone number and wavelength
* *Inputs* : n - zone number, f - focal length, wavel - wavelength
* *Outputs* : radius of the zone as specified by the inputs

```
def make_quadrant(X, Y, flag, r1, r2, step, n, zone_number):
    z = np.zeros(np.shape(X))
    Z = np.sqrt(X**2 + Y**2)
    for l in range(len(flag[0])):
        i = flag[0][l]
        j = flag[1][l]
        if 0.75*r1 < Z[i][j] < 1.25*r2:
            x1 = X[i][j]
            y1 = Y[i][j]
            z[i][j] = partial_fill(x1, y1, step, r1, r2, n)
    z[tuple((flag[1], flag[0]))] = z[tuple((flag[0], flag[1]))]
    return z
```

*make_quadrant* : function used to create a quadrant of a ring given the inner and outer radius and zone number
* *Inputs* : X,Y - grid, flag - specifies the quadrant to be filled (i.e. where X,Y>0), r1,r2 - inner and outer radii, n - parameter for the partial_fill function
* *Outputs* : z - output pattern with one quadrant filled.
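The zone-plate geometry above can be sanity-checked numerically: radii grow monotonically while zone widths shrink toward the rim. A minimal sketch (the parameter values here are illustrative, not the notebook's final settings):

```python
import numpy as np

def zone_radius(n, f, wavel):
    # r_n = sqrt(n*lambda*f + (n*lambda/2)^2), as defined in the notebook
    return np.sqrt(n * wavel * f + ((n * wavel) / 2) ** 2)

f = 10e-3                          # focal length in meters
wavel = (1239.84 / 10000) * 1e-9   # 10 keV photon energy -> wavelength in meters
radii = np.array([zone_radius(n, f, wavel) for n in range(700)])

# the outermost zone width drives the required pixel size
outermost_width = radii[-1] - radii[-2]
```

The sanity checks further down in the notebook compare exactly this `outermost_width` against the simulation grid's pixel size.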
```
#2D ZP
def make_ring(i):
    print(i)
    r1 = radius[i-1]
    r2 = radius[i]
    n = 250
    ring = make_quadrant(X, Y, flag, r1, r2, step_xy, n, zone_number=i)
    ring = repeat_pattern(X, Y, ring)
    ring_ = np.where(ring != 0)
    vals_ = ring[ring_]
    np.save('ring_locs_'+str(i)+'.npy', ring_)
    np.save('ring_vals_'+str(i)+'.npy', vals_)
    return
```

*make_ring* : function used to create a ring given the relevant parameters
* *Inputs* : i - zone number, radius - array of radii, X,Y - grid, flag - specifies the quadrant to be filled (i.e. where X,Y>0), n - parameter for the partial_fill function
* *Outputs* : None. Saves the rings to memory.

```
mat = 'Au'
energy = 10000                      #Energy in eV
f = 10e-3                           #focal length in meters
wavel = (1239.84/energy)*10**(-9)   #Wavelength in meters
delta, beta = get_property(mat, energy)
zones = 700                         #number of zones
radius = np.zeros(zones)
```

Setting up the parameters and initializing the variables.

```
for k in range(zones):
    radius[k] = zone_radius(k, f, wavel)
```

Filling the radius array with the radius of zones for later use in making the rings.

In the next few code blocks, we check if the parameters of the simulation make sense. First we print out the input and output pixel sizes assuming we will be using the 1FT propagator. Then we see if the pixel sizes are small enough compared to the outermost zone width. Finally we check if the focal spot can be contained for the given amount of tilt angle.
```
grid_size = 55296
input_xrange = 262e-6
step_xy = input_xrange/grid_size
L_out = (1239.84/energy)*10**(-9)*f/(input_xrange/grid_size)
step_xy_output = L_out/grid_size

print(' Output L : ', L_out)
print(' output pixel size(nm) : ', step_xy_output*1e9)
print(' input pixel size(nm) : ', step_xy*1e9)

drn = radius[-1]-radius[-2]
print(' maximum radius(um) : ', radius[-1]*1e6)
print(' outermost zone width(nm) :', drn*1e9)
print(' max shift of focal spot(um) : ', (L_out/2)*1e6)

# invert the following to get max tilt allowance
# after which the focal spot falls off the
# simulation plane
# np.sin(theta*(np.pi/180))*f = (L_out/2)
theta_max = np.arcsin((L_out/2)*(1/f))*(180/np.pi)
print(' max wavefield aligned tilt(deg) : ', theta_max)

if step_xy > 0.25*drn:
    print(' WARNING ! input pixel size too large')
    print(' ratio of input step size to outermost zone width', step_xy/drn)

if step_xy_output > 0.25*drn:
    print(' WARNING ! output pixel size too large')
    print(' ratio of output step size to outermost zone width', step_xy_output/drn)

zones_to_fill = []
for i in range(zones):
    if i%2 == 1:
        zones_to_fill.append(i)
zones_to_fill = np.array(zones_to_fill)
```

Making a list of zones to fill. (Only alternate zones are filled in our case; this can be modified as per convenience.)

```
try:
    os.chdir(up(os.getcwd())+str('/hard_xray_zp'))
except:
    os.mkdir(up(os.getcwd())+str('/hard_xray_zp'))
    os.chdir(up(os.getcwd())+str('/hard_xray_zp'))
```

Store the location of each ring of the zone plate separately in a sub directory. This is more efficient than storing the whole zone plate array!

```
x1 = input_xrange/2
x = np.linspace(-x1, x1, grid_size)
step_xy = x[-1]-x[-2]
zp_coords = [-x1, x1, -x1, x1]
X, Y = np.meshgrid(x, x)
flag = np.where((X>0)&(Y>0)&(X>=Y))
```

Creating the input 1D array and setting the parameters for use by the make_ring function. Note that X, Y, flag and step_xy will be read by multiple processes which we will spawn using joblib.
```
%%capture
from joblib import Parallel, delayed
results = Parallel(n_jobs=5)(delayed(make_ring)(i) for i in zones_to_fill)
```

Creating the rings! (Adjust the number of jobs depending on CPU cores.)

```
params = {'grid_size':grid_size, 'step_xy':step_xy, 'energy(in eV)':energy,
          'wavelength in m':wavel, 'focal_length':f, 'zp_coords':zp_coords,
          'delta':delta, 'beta':beta}
pickle.dump(params, open('parameters.pickle','wb'))
```

Pickling and saving all the associated parameters along with the rings for use in simulation!
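A downstream simulation can rebuild a dense ring from the sparse location/value pairs that `make_ring` saves, and read back the pickled parameters. A self-contained sketch (the tiny grid and dict values here are synthetic; the real files are `ring_locs_<i>.npy` / `ring_vals_<i>.npy` and `parameters.pickle`):

```python
import numpy as np
import pickle

# Tiny synthetic stand-in for one ring produced by make_ring
grid_size = 8
dense = np.zeros((grid_size, grid_size))
dense[2, 3] = 0.5   # a partially filled pixel
dense[4, 4] = 1.0   # a fully filled pixel

locs = np.where(dense != 0)   # sparse locations (what ring_locs stores)
vals = dense[locs]            # fill values (what ring_vals stores)

# a downstream script rebuilds the dense ring like this:
rebuilt = np.zeros((grid_size, grid_size))
rebuilt[locs] = vals

# parameters.pickle holds a plain dict; round-tripping it looks like:
params = {'grid_size': grid_size, 'focal_length': 10e-3}
blob = pickle.dumps(params)
```

Storing only the nonzero pixels is what makes the per-ring files far smaller than the full 55296×55296 array.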
```
%matplotlib nbagg
import os
os.environ["PYOPENCL_COMPILER_OUTPUT"] = "1"
import numpy
import fabio
import pyopencl
from pyopencl import array as cla
from matplotlib.pyplot import subplots

ctx = pyopencl.create_some_context(interactive=True)
queue = pyopencl.CommandQueue(ctx, properties=pyopencl.command_queue_properties.PROFILING_ENABLE)
ctx

image = fabio.open("/users/kieffer/workspace-400/tmp/pyFAI/test/testimages/Pilatus6M.cbf").data
mask = (image<0).astype("int8")
fig, ax = subplots()
ax.imshow(image.clip(0,100))

%load_ext pyopencl.ipython_ext

%%cl_kernel

//read without caching
float inline read_simple(global int *img, int height, int width, int row, int col){
    //This kernel reads the value and returns it without active caching
    float value = NAN;
    // Read
    if ((col>=0) && (col<width) && (row>=0) && (row<height)){
        int read_pos = col + row*width;
        value = (float)img[read_pos];
        if (value<0){
            value = NAN;
        }
    }
    return value;
}

void inline read_and_store(global int *img, int height, int width, int row, int col,
                           int half_wind_height, int half_wind_width, local float* storage){
    //This kernel reads the value and stores it in the local storage
    int line_size, write_pos, idx_line;
    float value = NAN;
    // Read
    if ((col>=0) && (col<width) && (row>0) && (row<height)){
        int read_pos = col + row*width;
        value = (float)img[read_pos];
        if (value<0){
            value = NAN;
        }
    }
    // Save locally
    if ((col>=-half_wind_width) && (col<=width+half_wind_width) && (row>-half_wind_height) && (row<=height+half_wind_height)){
        line_size = get_local_size(0) + 2 * half_wind_width;
        idx_line = (half_wind_height+row)%(2*half_wind_height+1);
        write_pos = line_size*idx_line + half_wind_width + col - get_group_id(0)*get_local_size(0);
        storage[write_pos] = value;
    }
    //return value
}

//Store a complete line
void inline store_line(global int *img, int height, int width, int row,
                       int half_wind_height, int half_wind_width, local float* storage){
    read_and_store(img, height, width, row, get_global_id(0), half_wind_height,
half_wind_width, storage); if (get_local_id(0)<half_wind_width){ // read_and_store_left read_and_store(img, height, width, row, get_group_id(0)*get_local_size(0)-half_wind_width+get_local_id(0), half_wind_height, half_wind_width, storage); //read_and_store_right read_and_store(img, height, width, row, (get_group_id(0)+1)*get_local_size(0)+get_local_id(0), half_wind_height, half_wind_width, storage); } } float read_back( int height, int width, int row, int col, int half_wind_height, int half_wind_width, local float* storage){ float value=NAN; int write_pos, line_size, idx_line; if ((col>=-half_wind_width) && (col<=width+half_wind_width) && (row>-half_wind_height) && (row<=height+half_wind_height)){ line_size = get_local_size(0) + 2 * half_wind_width; idx_line = (half_wind_height+row)%(2*half_wind_height+1); write_pos = line_size*idx_line + half_wind_width + col - get_group_id(0)*get_local_size(0); value = storage[write_pos]; } return value; } // workgroup size of kernel: 32 to 128, cache_read needs to be (wg+2*half_wind_width)*(2*half_wind_height+1)*sizeof(float) kernel void spot_finder(global int *img, int height, int width, int half_wind_height, int half_wind_width, float threshold, float radius, global int *cnt_high, //output global int *high, //output int high_size, local float *cache_read, local int *local_high, int local_size){ //decaration of variables int col, row, cnt, i, j, where; float value, sum, std, centroid_r, centroid_c, dist, mean; col = get_global_id(0); local int local_cnt_high[1]; local_cnt_high[0] = 0; for (i=0; i<local_size; i+=get_local_size(0)){ local_high[i+get_local_id(0)] = 0; } row=0; //pre-load data for the first line for (i=-half_wind_height; i<half_wind_height; i++){ store_line(img, height, width, row+i, half_wind_height, half_wind_width, cache_read); } barrier(CLK_LOCAL_MEM_FENCE); //loop within a column for (row=0;row<height; row++){ //read data store_line(img, height, width, row+half_wind_height, half_wind_height, half_wind_width, 
cache_read); barrier(CLK_LOCAL_MEM_FENCE); //calculate mean sum = 0.0f; centroid_r = 0.0f; centroid_c = 0.0f; cnt = 0; for (i=-half_wind_height; i<=half_wind_height; i++){ for (j=-half_wind_width; j<=half_wind_width; j++){ value = read_back(height, width, row+i, col+j, half_wind_height, half_wind_width, cache_read); if (isfinite(value)){ sum += value; centroid_r += value*i; centroid_c += value*j; cnt += 1; } } } if (cnt){ mean = sum/cnt; dist = sum*radius; if ((fabs(centroid_r)<dist) && (fabs(centroid_c)<dist)){ // calculate std sum = 0.0; for (i=-half_wind_height; i<=half_wind_height; i++){ for (j=-half_wind_width; j<=half_wind_width; j++){ value = read_back(height, width, row+i, col+j, half_wind_height, half_wind_width, cache_read); if (isfinite(value)){ sum += pown(mean-value,2); } } } std = sqrt(sum/cnt); value = read_back(height, width, row, col, half_wind_height, half_wind_width, cache_read); if ((value-mean)>threshold*std){ where = atomic_inc(local_cnt_high); if (where<local_size){ local_high[where] = col+width*row; } } // if intense signal } // if properly centered } // if patch not empty barrier(CLK_LOCAL_MEM_FENCE); } //for row //Store the results in global memory barrier(CLK_LOCAL_MEM_FENCE); if (get_local_id(0) == 0) { cnt = local_cnt_high[0]; if ((cnt>0) && (cnt<local_size)) { where = atomic_add(cnt_high, cnt); if (where+cnt>high_size){ cnt = high_size-where; //store what we can } for (i=0; i<cnt; i++){ high[where+i] = local_high[i]; } } }//store results } //kernel // workgroup size of kernel: without cacheing read kernel void simple_spot_finder(global int *img, int height, int width, int half_wind_height, int half_wind_width, float threshold, float radius, global int *cnt_high, //output global int *high, //output int high_size, local int *local_high, int local_size){ //decaration of variables int col, row, cnt, i, j, where, tid, blocksize; float value, sum, std, centroid_r, centroid_c, dist, mean, M2, delta, delta2, target_value; col = 
get_global_id(0); row = get_global_id(1); //Initialization of output array in shared local int local_cnt_high[2]; blocksize = get_local_size(0) * get_local_size(1); tid = get_local_id(0) + get_local_id(1) * get_local_size(0); if (tid < 2){ local_cnt_high[tid] = 0; } for (i=0; i<local_size; i+=blocksize){ if ((i+tid)<local_size) local_high[i+tid] = 0; } barrier(CLK_LOCAL_MEM_FENCE); //Calculate mean + std + centroids mean = 0.0f; M2 = 0.0f; centroid_r = 0.0f; centroid_c = 0.0f; cnt = 0; for (i=-half_wind_height; i<=half_wind_height; i++){ for (j=-half_wind_width; j<=half_wind_width; j++){ value = read_simple(img, height, width, row+i, col+j); if (isfinite(value)){ centroid_r += value*i; centroid_c += value*j; cnt += 1; delta = value - mean; mean += delta / cnt; delta2 = value - mean; M2 += delta * delta2; } } } if (cnt){ dist = mean*radius*cnt; std = sqrt(M2 / cnt); target_value = read_simple(img, height, width, row, col); if (((target_value-mean)>threshold*std) && (fabs(centroid_r)<dist) && (fabs(centroid_c)<dist)){ where = atomic_inc(local_cnt_high); if (where<local_size){ local_high[where] = col+width*row; } } // if intense signal properly centered } // if patch not empty //Store the results in global memory barrier(CLK_LOCAL_MEM_FENCE); if (tid==0) { cnt = local_cnt_high[0]; if ((cnt>0) && (cnt<local_size)) { where = atomic_add(cnt_high, cnt); if (where+cnt>high_size){ cnt = high_size-where; //store what we can } local_cnt_high[0] = cnt; local_cnt_high[1] = where; } } barrier(CLK_LOCAL_MEM_FENCE); //copy the data from local to global memory for (i=0; i<local_cnt_high[0]; i+=blocksize){ high[local_cnt_high[1]+i+tid] = local_high[i+tid]; }//store results } //kernel def peak_count(img, window=3, threshold=3.0, radius=1.0, workgroup=32, array_size=10000): img_d = cla.to_device(queue, image) high_d = cla.zeros(queue, (array_size,), dtype=numpy.int32) high_cnt_d = cla.zeros(queue, (1,), dtype=numpy.int32) read_cache = 
pyopencl.LocalMemory(4*(workgroup+2*window)*(2*window+1)) write_cache = pyopencl.LocalMemory(4096) height, width = img.shape size = (width+workgroup-1)&~(workgroup-1) ev = spot_finder(queue, (size,), (workgroup,), img_d.data, numpy.int32(height), numpy.int32(width), numpy.int32(window), numpy.int32(window), numpy.float32( threshold), numpy.float32( radius), high_cnt_d.data, high_d.data, numpy.int32(array_size), read_cache, write_cache, numpy.int32(1024)) size = high_cnt_d.get()[0] print("found %i peaks in %.3fms"%(size, (ev.profile.end-ev.profile.start)*1e-6)) return high_d.get()[:size] %time raw = peak_count(image, window=5, threshold=6) x=raw%image.shape[-1] y=raw//image.shape[-1] ax.plot(x,y,".w") def simple_peak_count(img, window=3, threshold=3.0, radius=1.0, workgroup=32, array_size=10000): img_d = cla.to_device(queue, image) high_d = cla.zeros(queue, (array_size,), dtype=numpy.int32) high_cnt_d = cla.zeros(queue, (1,), dtype=numpy.int32) #read_cache = pyopencl.LocalMemory(4*(workgroup+2*window)*(2*window+1)) write_cache = pyopencl.LocalMemory(4096) height, width = img.shape size_w = (width+workgroup-1)&~(workgroup-1) size_h = (height+workgroup-1)&~(workgroup-1) ev = simple_spot_finder(queue, (size_w,size_h), (workgroup, workgroup), img_d.data, numpy.int32(height), numpy.int32(width), numpy.int32(window), numpy.int32(window), numpy.float32( threshold), numpy.float32( radius), high_cnt_d.data, high_d.data, numpy.int32(array_size), #read_cache, write_cache, numpy.int32(1024)) size = high_cnt_d.get()[0] print("found %i peaks in %.3fms"%(size, (ev.profile.end-ev.profile.start)*1e-6)) return high_d.get()[:size] %time raw = simple_peak_count(image, window=5, threshold=6) x=raw%image.shape[-1] y=raw//image.shape[-1] ax.plot(x,y,".y") # Work on scan from math import log2 n = 32 ary = numpy.ones(n) ary ary1 = numpy.copy(ary) ary2 = numpy.empty_like(ary) for i in range(int(log2(n))): start = 1<<i print(i,start) for j in range(start): ary2[j] = ary1[j] for j in 
range(start, n): ary2[j] = ary1[j] + ary1[j-start] ary1, ary2 = ary2, ary1 print(ary1) ary-numpy.ones(n).cumsum() (32+6)*7*4*2*4 ```
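The scratch cells at the end prototype a Hillis–Steele inclusive scan: on pass `i`, every element adds the value `2^i` positions to its left. A compact NumPy version of the same doubling scheme, checked against `cumsum`:

```python
import numpy as np
from math import log2

def hillis_steele_scan(ary):
    """Inclusive prefix sum by repeated doubling, mirroring the
    double-buffered loop in the scratch cell above (n a power of two)."""
    out = ary.astype(float).copy()
    n = len(out)
    for i in range(int(log2(n))):
        shift = 1 << i
        shifted = np.zeros_like(out)
        shifted[shift:] = out[:n - shift]  # each element's neighbour 2^i to the left
        out = out + shifted
    return out

result = hillis_steele_scan(np.ones(32))
```

Scanning an array of ones yields 1, 2, …, n, which is exactly what the cell's `ary - numpy.ones(n).cumsum()` comparison verifies.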
```
# default_exp downloaders

#export
import requests
import pathspec
import time
from pathlib import Path, PurePosixPath

from tightai.lookup import Lookup
from tightai.conf import CLI_ENDPOINT

#hide
test = False
if test:
    CLI_ENDPOINT = "http://cli.desalsa.io:8000"

#export
class DownloadVersion(Lookup):
    path = "."
    dest_path = "."
    project_id = ""
    version = ""
    api = CLI_ENDPOINT

    def __init__(self, path=".", project_id=None, version=None, *args, **kwargs):
        api = None
        if "api" in kwargs:
            api = kwargs.pop("api")
        super().__init__(*args, **kwargs)
        if api != None:
            self.api = api
        assert project_id != None
        if "v" in f"{version}":
            version = version.replace("v", "")
        try:
            version = int(version)
        except:
            raise Exception("Version must be a number or in the format v1, v2, v3, and so on.")
        self.path = Path(path).resolve()
        self.version = version
        self.project_id = project_id
        self.endpoint = f"{self.api}/projects/{project_id}/versions/{version}/download/"

    def save_from_url(self, dest, url, force=True):
        dest_path = Path(dest)
        if not force:
            if dest_path.exists():
                print(f"{dest_path} already exists")
                return None
        dest_path_parent = dest_path.resolve().parent
        dest_path_parent.mkdir(parents=True, exist_ok=True)
        # NOTE the stream=True parameter below
        with requests.get(url, stream=True) as r:
            r.raise_for_status()
            with open(dest_path, 'wb') as f:
                for chunk in r.iter_content(chunk_size=8192):
                    # If you have a chunk-encoded response, uncomment the if
                    # and set the chunk_size parameter to None.
                    #if chunk:
                    f.write(chunk)
        return dest

    def download(self, overwrite=False):
        r = self.http_get(self.endpoint)
        self.handle_invalid_lookup(r, expected_status_code=200)
        files = r.json()
        for fdict in files:
            fname = fdict['fname']
            furl = fdict['url']
            dest = PurePosixPath(self.path / fname)
            print("Downloading", fname, "to", dest)
            self.save_from_url(dest, furl, force=overwrite)
        return

#hide
# path_str = "/Users/jmitch/tight/my-tight-apps/dl-tests"
# path_str = Path(path_str)
# assert path.exists() == True
# dl = DownloadVersion(path=path_str, project_id='news-categories', version=1)
# dl.download(overwrite=True)
```
# Import Required Modules

```
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import re
from IPython.display import HTML
%matplotlib inline

HTML('''<script>
code_show=true;
function code_toggle() {
  if (code_show){
    $('div.input').hide();
  } else {
    $('div.input').show();
  }
  code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
The raw code for this IPython notebook is by default hidden for easier reading.
To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.''')
```

# Import Dataset

The dataset was downloaded from the [Brainspan - Atlas of the Developing Human Brain](http://www.brainspan.org/static/download.html) website on April 6, 2018.

```
# change working directory to directory with data files
os.chdir("/Users/daniel/Documents/Yale/Projects/computations/Allen_brain_development/genes_matrix_csv/")

# load data file and two files with metadata
gene_metadata = pd.read_csv("rows_metadata.csv")
patients_metadata = pd.read_csv("columns_metadata.csv")
expression_data = pd.read_csv("expression_matrix.csv", header=None, index_col=0)
```

# Data Overview

The database provides RNA-sequencing results (in RPKM units) for 524 samples:

data file | size (rows, cols)
--------- | -----------------
`gene_metadata` | 52376, 5
`patients_metadata` | 524, 8
`expression_data` | 52376, 524

# Cleaning and Joining Data Files

The joined data file `expression_data_joined` is of size 524, 52376+4 (patients, genes + `age`, `gender`, `structure_acronym`, `structure_name`) with indices being `donor_name`s and column names being gene names.

```
def transform_data():
    '''
    Run this function once to transform/join the input data files above
    --> creates and returns "expression_data_joined".
    Assign the function's return value to keep it alive!
    '''
    # join gene_metadata with expression data
    _gene_metadata = gene_metadata.set_index("row_num", drop=True)
    _gene_metadata = _gene_metadata.drop(["gene_id", "ensembl_gene_id", "entrez_id"], axis=1)
    _expression_data_joined = _gene_metadata.join(other=expression_data)
    _expression_data_joined = _expression_data_joined.set_index("gene_symbol", drop=True).transpose()

    # join patients_metadata with expression data (/joined data frame from above)
    _patients_metadata = patients_metadata.set_index("column_num", drop=True).drop(
        ["donor_id", "donor_name", "structure_id"], axis=1)
    _expression_data_joined = _patients_metadata.join(other=_expression_data_joined)

    # return the joined table as described above
    return(_expression_data_joined)

expression_data_joined = transform_data()

# replace age column with float
def replace_age(_input=expression_data_joined):
    '''
    Replace string values of 'age' column with floats (depending on unit).
    '''
    # define regex patterns for replacement
    pattern1 = re.compile("^.*pcw$")
    pattern2 = re.compile("^.*mos$")
    pattern3 = re.compile("^.*yrs$")

    # create a new age column: age_in_years
    _input["age_in_years"] = None

    # loop over 'age' column and replace with appropriate value
    for i in range(len(_input.age.values)):
        # test for pattern 1
        if bool(pattern1.match(_input.age.values[i])):
            _res = round((- (40 - int((_input.age.values[i].split()[0]))) / 52), 2)
            _input.loc[_input.index[i], "age_in_years"] = _res
        # test for pattern 2
        elif bool(pattern2.match(_input.age.values[i])):
            _res = round(((int(_input.age.values[i].split()[0])) / 12), 2)
            _input.loc[_input.index[i], "age_in_years"] = _res
        # test for pattern 3
        if bool(pattern3.match(_input.age.values[i])):
            _res = int(_input.age.values[i].split()[0])
            _input.loc[_input.index[i], "age_in_years"] = _res

    # convert 'age_in_years' column to float type
    _input["age_in_years"] = _input["age_in_years"].astype(float)

    # return the data frame with transformed 'age' column
    return(_input)

expression_data_joined = replace_age()
# add a dichotomized age column
expression_data_joined["Age Category"] = np.array(
    ["< 10 yrs" if value < 10 else ">= 10 yrs" for value in expression_data_joined["age_in_years"]])

# create a version of the data frame with log2-transformed NCS1 expression
expression_data_joined_log2 = expression_data_joined.copy()
expression_data_joined_log2[["NCS1"]] = np.log2(expression_data_joined[["NCS1"]]+1)
```

# Exploratory Data Analysis

Scatter plots of NCS1 expression levels (gene level RPKM values) vs. donor age, stratified according to brain regions.

```
expression_data_joined_log2.plot(x="age_in_years", y="NCS1", kind="scatter");
plt.vlines(x=0, ymin=4, ymax=9, colors="black", linestyles="dashed");

# plot unique brain regions:
regions, counts = np.unique(expression_data_joined.structure_name, return_counts=True)
regions = regions[counts > 10]

def plot_unique_regions(df, regions=regions):
    '''
    Docstring
    '''
    # create subplots according to length of regions array [19]; this could be automated...
    fig, ((ax1, ax2, ax3, ax4), (ax5, ax6, ax7, ax8),
          (ax9, ax10, ax11, ax12), (ax13, ax14, ax15, ax16)) = plt.subplots(
              nrows=4, ncols=4, sharey=True, sharex=True, figsize=(24, 20))

    # generate an iterator
    axes = iter(list([ax1, ax2, ax3, ax4, ax5, ax6, ax7, ax8,
                      ax9, ax10, ax11, ax12, ax13, ax14, ax15, ax16]))

    # loop over selected regions
    for region in regions:
        _df = df[df["structure_name"] == region]
        _axis = next(axes)
        _df.plot(x="age_in_years", y="NCS1", kind="scatter", ax=_axis);
        _axis.set_title(str(region));
        _axis.axvline(x=0, color="black", linestyle="dashed")
        _axis.set_xlabel("Age (in years)", fontsize=14)
        _axis.set_ylabel("NCS Expression [log2(FPKM+1)]", fontsize=14)

plot_unique_regions(df=expression_data_joined, regions=regions)

unique, counts = np.unique(expression_data_joined_log2["structure_name"], return_counts=True)
_dict1 = dict(zip(unique, counts))
_dict2 = dict(zip(pd.unique(expression_data_joined_log2["structure_name"]),
                 pd.unique(expression_data_joined_log2["structure_acronym"])))
dicts = [_dict1, _dict2]
regions_dict = {}

# loop over dictionaries and replace values by tuples ('acronym', int(count))
for d in dicts:
    for k, v in d.items():
        if k in regions_dict:
            regions_dict[k] = (regions_dict[k], v)
        else:
            regions_dict[k] = v

## boxplot of NCS1 expression levels for different brain regions
# identify brain regions with at least 10 observations...
regions, counts = np.unique(expression_data_joined.structure_name, return_counts=True)
regions = regions[counts > 10]

# filter dataframe accordingly...
_df = expression_data_joined_log2[["NCS1", "structure_name", "structure_acronym", "Age Category"]]
_df = _df[_df["structure_name"].isin(regions)]

# ... and plot.
_df.boxplot("NCS1", "structure_acronym", figsize=(14, 8), rot=30, grid=False);
plt.title("NCS1 Expression in 16 Different Brain Regions", fontsize=24);
plt.xlabel("Brain Structure", fontsize=20);
plt.ylabel("NCS1 Expression [log2(RPKM+1)]", fontsize=20);
plt.xticks(fontsize=12);
plt.yticks(fontsize=18);

# select five brain regions of interest
_dis = ["AMY", "STR", "MFC", "CBC", "HIP"]
_df_selected = _df[([True if value in _dis else False for value in _df.structure_acronym.values])]

# plot either all or just selected brain regions
fc = sns.factorplot(x="structure_acronym", y="NCS1", hue="Age Category",
                    saturation=0.5, width=0.7, fliersize=8, linewidth=4,
                    data=_df_selected, kind="box", size=7, aspect=1.3,
                    legend_out=False, order=["HIP", "MFC", "AMY", "CBC", "STR"]);
fc.despine(top=False, right=False);
plt.grid(b=True, which="major");
plt.ylabel("NCS1 Gene Expression [log2(RPKM+1)]", fontsize=16);
plt.xticks(fontsize=16)
plt.yticks(fontsize=16)
plt.xlabel("Brain Regions", fontsize=16);
plt.title("NCS1 Expression in 16 Different Brain Regions", fontsize=20);
plt.savefig("/Users/daniel/Desktop/NCS1_in_16_brain_regions.pdf", bbox_inches="tight", pad_inches=1)
```
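The boxplots above compare NCS1 expression across regions and age groups; numerically the same comparison is a group-by aggregation. A self-contained sketch on synthetic data (the values are made up; only the column names mirror the notebook):

```python
import numpy as np
import pandas as pd

# toy stand-in for expression_data_joined_log2
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "structure_acronym": ["HIP", "HIP", "AMY", "AMY", "STR", "STR"],
    "Age Category": ["< 10 yrs", ">= 10 yrs"] * 3,
    "NCS1": rng.uniform(4, 9, 6),  # log2(RPKM+1)-scale values
})

# median expression per region and age group, as in the stratified boxplots
summary = df.groupby(["structure_acronym", "Age Category"])["NCS1"].median().unstack()
```

Each cell of `summary` corresponds to the median line of one box in the stratified plot.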
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/Spark%20v2.7.6%20Notebooks/21.Gender_Classifier.ipynb)

# 21. Gender Classifier

**Gender Classifier** detects the gender of the patient in the clinical document. It can classify the documents into `Female`, `Male` and `Unknown`.

- '**classifierdl_gender_sbert**' (works with the licensed `sbiobert_base_cased_mli`)

It has been trained on more than four thousand clinical documents (radiology reports, pathology reports, clinical visits etc.) which were annotated internally.

## Colab Setup

```
import json
from google.colab import files

license_keys = files.upload()

with open(list(license_keys.keys())[0]) as f:
    license_keys = json.load(f)

%%capture
for k,v in license_keys.items():
    %set_env $k=$v

!wget https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jsl_colab_setup.sh
!bash jsl_colab_setup.sh -p 2.4.4

import json
import os
from pyspark.ml import Pipeline, PipelineModel
from pyspark.sql import SparkSession

from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
import sparknlp

params = {"spark.driver.memory":"16G",
          "spark.kryoserializer.buffer.max":"2000M",
          "spark.driver.maxResultSize":"2000M"}

spark = sparknlp_jsl.start(license_keys['SECRET'], params=params)

print(sparknlp.version())
print(sparknlp_jsl.version())

spark

# if you want to start the session with custom params as in the start function above
def start(secret):
    builder = SparkSession.builder \
        .appName("Spark NLP Licensed") \
        .master("local[*]") \
        .config("spark.driver.memory", "16G") \
        .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer") \
        .config("spark.kryoserializer.buffer.max", "2000M") \
        .config("spark.jars.packages",
                "com.johnsnowlabs.nlp:spark-nlp_2.11:" + version) \
        .config("spark.jars", "https://pypi.johnsnowlabs.com/" + secret + "/spark-nlp-jsl-" + jsl_version + ".jar")
    return builder.getOrCreate()

#spark = start(secret)
```

# Gender Classifier Pipeline with **sbert**

```
document = DocumentAssembler()\
    .setInputCol("text")\
    .setOutputCol("document")

sbert_embedder = BertSentenceEmbeddings.pretrained("sbiobert_base_cased_mli", 'en', 'clinical/models')\
    .setInputCols(["document"])\
    .setOutputCol("sentence_embeddings")\
    .setMaxSentenceLength(512)

gender_classifier = ClassifierDLModel.pretrained('classifierdl_gender_sbert', 'en', 'clinical/models') \
    .setInputCols(["document", "sentence_embeddings"]) \
    .setOutputCol("class")

gender_pred_pipeline_sbert = Pipeline(stages=[document, sbert_embedder, gender_classifier])

empty_data = spark.createDataFrame([[""]]).toDF("text")

model_sbert = gender_pred_pipeline_sbert.fit(empty_data)

text = """social history: shows that does not smoke cigarettes or drink alcohol,lives in a nursing home.family history: shows a family history of breast cancer."""

gender_pipeline_sbert = LightPipeline(model_sbert)

result = gender_pipeline_sbert.annotate(text)

result['class'][0]
```

### Sample Clinical Notes

```
text1 = '''social history: shows that does not smoke cigarettes or drink alcohol,lives in a nursing home. family history: shows a family history of breast cancer.'''

result = gender_pipeline_sbert.annotate(text1)
result['class'][0]

text2 = '''The patient is a 48- year-old, with severe mitral stenosis diagnosed by echocardiography, moderate aortic insufficiency and moderate to severe pulmonary hypertension who is being evaluated as a part of a preoperative workup for mitral and possible aortic valve repair or replacement.'''

result = gender_pipeline_sbert.annotate(text2)
result['class'][0]

text3 = '''HISTORY: The patient is a 57-year-old XX, who I initially saw in the office on 12/27/07, as a referral from the Tomball Breast Center.
On 12/21/07, the patient underwent image-guided needle core biopsy of a 1.5 cm lesion at the 7 o'clock position of the left breast (inferomedial). The biopsy returned showing infiltrating ductal carcinoma high histologic grade. The patient stated that xx had recently felt and her physician had felt a palpable mass in that area prior to her breast imaging.'''

result = gender_pipeline_sbert.annotate(text3)
result['class'][0]

text4 = '''The patient states that xx has been overweight for approximately 35 years and has tried multiple weight loss modalities in the past including Weight Watchers, NutriSystem, Jenny Craig, TOPS, cabbage diet, grape fruit diet, Slim-Fast, Richard Simmons, as well as over-the-counter measures without any long-term sustainable weight loss. At the time of presentation to the practice, xx is 5 feet 6 inches tall with a weight of 285.4 pounds and a body mass index of 46. xx has obesity-related comorbidities, which includes hypertension and hypercholesterolemia.'''

result = gender_pipeline_sbert.annotate(text4)
result['class'][0]

text5 = '''Prostate gland showing moderately differentiated infiltrating adenocarcinoma, Gleason 3 + 2 extending to the apex involving both lobes of the prostate, mainly right.'''

result = gender_pipeline_sbert.annotate(text5)
result['class'][0]

text6 = '''SKIN: The patient has significant subcutaneous emphysema of the upper chest and anterior neck area although he states that the subcutaneous emphysema has improved significantly since yesterday.'''

result = gender_pipeline_sbert.annotate(text6)
result['class'][0]

text7 = '''INDICATION: The patient is a 42-year-old XX who is five days out from transanal excision of a benign anterior base lesion. xx presents today with diarrhea and bleeding. Digital exam reveals bright red blood on the finger. xx is for exam under anesthesia and control of hemorrhage at this time.'''

result = gender_pipeline_sbert.annotate(text7)
result['class'][0]

text8 = '''INDICATION: ___ year old patient with complicated medical history of paraplegia and chronic indwelling foley, recurrent MDR UTIs, hx Gallbladder fossa abscess,type 2 DM, HTN, CAD, DVT s/p left AKA complicated by respiratory failure requiring tracheostomy and PEG placement, right ischium osteomyelitis due to chronic pressure ulcers with acute shortness of breath...'''

result = gender_pipeline_sbert.annotate(text8)
result['class'][0]
```
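Each sample note above calls `annotate` separately and indexes `result['class'][0]` by hand; a tiny helper can batch this. The sketch below is illustrative only: `pipeline` stands for the `gender_pipeline_sbert` LightPipeline built earlier, and the `_FakePipeline` stub (a made-up class) exists solely so the snippet runs without a Spark session:

```python
def classify_genders(pipeline, texts):
    """Return one label ('Female', 'Male' or 'Unknown') per note, using the
    same result['class'][0] access pattern as the cells above."""
    labels = []
    for text in texts:
        result = pipeline.annotate(text)
        labels.append(result["class"][0])
    return labels

# Stand-in for the real LightPipeline (illustration only): it returns the same
# {'class': [label]} shape that annotate() produces in the cells above.
class _FakePipeline:
    def annotate(self, text):
        label = "Female" if "breast" in text.lower() else "Unknown"
        return {"class": [label]}

notes = ["family history of breast cancer",
         "___ year old patient with paraplegia"]
print(classify_genders(_FakePipeline(), notes))  # -> ['Female', 'Unknown']
```

With the real pipeline you would call `classify_genders(gender_pipeline_sbert, [text1, ..., text8])` instead of the stub.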