### Research Question 1 <p><b>What is the exact demographic most at risk for COVID-19 in Toronto?</b></p>

To find this we'll need to produce graphs of age, gender, and whether the patient was hospitalized.

```
import sys, os

import pandas as pd
import numpy as np
import pandas_profiling
import seaborn as sns
from matplotlib import pyplot as plt

sys.path.insert(0, os.path.abspath('..'))
from scripts import project_functions as sc

path = "\\analysis\\Analysis Notebook"

# This formats everything quickly.
df = sc.ez_format(path)
df

# Create a new data frame of the values we care about.
ndf = df.groupby(["Gender", "Age_Group", "Ever_Hospitalized"]).size().reset_index()
ndf = ndf.rename(columns={0: 'D_Count'})
ndf = ndf.sort_values(by='D_Count', ascending=False)

# Keep only those who were hospitalized, then drop the hospitalization column.
ndf = ndf.loc[ndf["Ever_Hospitalized"].str.contains("Yes")]
ndf = ndf.drop(["Ever_Hospitalized"], axis=1)
# Filter on ndf (not the original df) so the boolean mask lines up with ndf's rows.
ndf = ndf.loc[~ndf["Gender"].str.contains("UNKNOWN")]

# Create a demographic column by combining gender and age group.
ndf.insert(2, column='Demographic', value=ndf['Gender'] + " " + ndf['Age_Group'])
order = ndf['Demographic'].to_list()
ndf = ndf.reset_index(drop=True)
ndf

plt.figure(figsize=(6, 5))
sns.set_context("notebook")
sns.barplot(x='D_Count', y='Demographic', data=ndf, order=order, palette='viridis_r')
sns.despine(top=True, right=True, left=False, bottom=True)
plt.title("Demographic by Hospitalization Cases")
plt.xlabel("Number of Cases Resulting in Hospitalization")

# Quickly draw up a fresh dataframe.
df = sc.ez_format(path)
df = df.loc[df["Ever_Hospitalized"].str.contains("Yes")]
df = df.loc[~df["Gender"].str.contains("UNKNOWN")]
df = df.reset_index(drop=True)
ls = df['Age_Group'].value_counts().to_dict()

plt.figure(figsize=(7, 5))
"""
Use the seaborn count plot to visualize our data.
Personally, graph 1 is easier to look at than graph 2, but I also want to go for style points.
"""
sns.countplot(x='Gender', hue="Age_Group", data=df, hue_order=ls, dodge=True, palette='viridis_r')
sns.despine(top=True, right=True, left=False)
plt.title("Demographic by Hospitalization Cases")
plt.xlabel("Gender")
plt.legend(loc='upper right')
```

### Result

From our graph we can see that senior males are the most susceptible to the virus. This is intriguing because our data set actually contains more females than males. My initial hypothesis was that the most susceptible demographic would be female, since there were over 1,000 more female cases. Here's the image of that graph from EDA_MattKuelker.ipynb. (If you still can't find it, it's in output.png here)

![plot showing females as largest demographic](output.png "image")

Not only that, but it appears that, across the board, elderly men are most at risk from the virus, despite being a less prominent group in the data set. As for those least affected, 90+ is to be expected, since most people don't live that long. I am surprised that there was a transgender case that was hospitalized, considering how few transgender people there are compared to cisgender people. From this data set we can also infer that females under 19 are the least affected: not only are there few hospitalizations, but their male under-19 counterparts are also at the bottom of the case count.

### Research Question 2 <p><b>Do women have a higher spread via physical contact than men?</b></p>

<i>Women are often reported to have closer relationships than men. Let's see if we can draw a correlation between physical contact and cases for each gender.</i>

How are we going to test this? First, to visualize our data, let's draw up a countplot by gender, then see if we can find a correlation.

```
path = "\\analysis\\Analysis Notebook"

# This formats everything quickly.
df = sc.ez_format(path)
df.head(5)
```

We're going to filter out pending, unknown, and "N/A - Outbreak associated" cases, since those do not directly tell us how exactly people got COVID-19. We'll also focus only on males and females, since there aren't enough transgender cases to draw any meaningful conclusions.

```
df = df[~df['Source_of_Infection'].isin(['N/A - Outbreak associated', 'Pending', 'Unknown/Missing'])]
# "MALE" is a substring of "FEMALE", so this keeps both genders while dropping
# the others, which are minuscule in count.
df = df.loc[df["Gender"].str.contains("MALE")]

ls = df['Source_of_Infection'].value_counts().to_dict()

plt.figure(figsize=(6, 5))
sns.set_context("notebook")
sns.countplot(y="Source_of_Infection", data=df, order=ls, hue="Gender", palette="Blues_r")
plt.legend(loc="center right")
plt.ylabel("Source of Infection")
plt.xlabel("Count")
sns.despine()
```

Look at this! After some decent filtering it turns out my hypothesis is completely wrong, and the most notable disparity between males and females is actually community contact. It turns out the demographic with the highest infection count from community contact was male. Let's modify our question, then, and ask what the age demographic is of males infected via community contact. My guess is bachelors in their 20's and seniors in care homes, since we showed in the research question above that older men were most at risk from the virus.

<p><b>New Question 2: What is the demographic of these males infected in the community?</b></p>

Time for some filtering.

```
df = sc.ez_format(path)
df = df.loc[df["Gender"].str.contains("MALE")]
# This almost got me: anything containing "male" is kept, which means
# we'd still have females in the dataset, so drop "FEMALE" explicitly.
df = df.loc[~df["Gender"].str.contains("FEMALE")]
df = df.loc[df["Source_of_Infection"].str.contains("Community")]
df

ls = df['Age_Group'].value_counts(ascending=True).to_dict()

plt.figure(figsize=(6, 5))
sns.set_context("notebook")
sns.countplot(x="Age_Group", data=df, order=ls, palette="Spectral")
plt.ylabel("Males Infected")
plt.xlabel("Age Group")
plt.title("Infected Males via Community Contact")
sns.despine()
```

### Conclusion

As I had hypothesized, males in their 20's accounted for a large amount of ... however, the largest group wasn't elderly men but men in their 50's. We can likely conclude, then, that community contact isn't caused exclusively by males living together in the same residence.

```
df = pd.read_csv(sc.autopath(path))
df
```
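A side note on the filtering trick used above. The mini-frame below is hypothetical, not the real Toronto data, but it shows why `str.contains` nearly caused a bug: it does substring matching, so an exact equality test is usually the safer filter when you want a single gender.

```python
import pandas as pd

# Hypothetical mini-frame standing in for the Toronto case data.
cases = pd.DataFrame({"Gender": ["MALE", "FEMALE", "MALE", "TRANSGENDER"]})

# str.contains("MALE") also matches "FEMALE", since "MALE" is a substring of it.
loose = cases.loc[cases["Gender"].str.contains("MALE")]

# An exact comparison avoids the substring trap entirely.
strict = cases.loc[cases["Gender"] == "MALE"]

print(len(loose), len(strict))  # 3 2
```

With exact matching there is no need for the follow-up `~str.contains("FEMALE")` step.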
```
# Load all required packages.
import pandas as pd
import numpy as np

# Ignore warnings.
import warnings
warnings.filterwarnings("ignore")

import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.ticker
%matplotlib inline

# Configure the appearance of seaborn plots.
sns.set_context(
    "notebook",
    font_scale=1.5,
    rc={"figure.figsize": (12, 9), "axes.titlesize": 18}
)

train = pd.read_csv('mlbootcamp5_train.csv', sep=';', index_col='id')
print('Dataset size: ', train.shape)
train.head()

train_uniques = pd.melt(frame=train, value_vars=['gender', 'cholesterol', 'gluc', 'smoke', 'alco', 'active', 'cardio'])
train_uniques = pd.DataFrame(train_uniques.groupby(['variable', 'value'])['value'].count()) \
    .sort_index(level=[0, 1]) \
    .rename(columns={'value': 'count'}) \
    .reset_index()
sns.factorplot(x='variable', y='count', hue='value', data=train_uniques, kind='bar', size=12);

train_uniques = pd.melt(frame=train, value_vars=['gender', 'cholesterol', 'gluc', 'smoke', 'alco', 'active'], id_vars=['cardio'])
train_uniques = pd.DataFrame(train_uniques.groupby(['variable', 'value', 'cardio'])['value'].count()) \
    .sort_index(level=[0, 1]) \
    .rename(columns={'value': 'count'}) \
    .reset_index()
sns.factorplot(x='variable', y='count', hue='value', col='cardio', data=train_uniques, kind='bar', size=9);

# Print the number of unique values per column
# (and the values themselves for small cardinalities).
for c in train.columns:
    n = train[c].nunique()
    print(c)
    if n <= 3:
        print(n, sorted(train[c].value_counts().to_dict().items()))
    else:
        print(n)
    print(10 * '-')

train.head()

corrTrain = train.corr()
sns.heatmap(corrTrain);  # cholesterol / gluc

train2 = train.melt(id_vars=['height'], value_vars=['gender'])
sns.violinplot(x='value', y='height', data=train2);

_, axes = plt.subplots(1, 2, sharey=True, figsize=(16, 6))
sns.kdeplot(train2[train2['value'] == 1]['height'], ax=axes[0])
sns.kdeplot(train2[train2['value'] == 2]['height'], ax=axes[1])

corrTrain = train.corr(method='spearman')
sns.heatmap(corrTrain);  # ap_hi / ap_lo

# Nature of the data.
# g = sns.jointplot(corrTrain['ap_hi'], corrTrain['ap_lo'], data=filtered)  # commented out: `filtered` is never defined

df4 = train.copy()[['ap_hi', 'ap_lo']]
df4 = df4[(df4['ap_hi'] > 0) & (df4['ap_lo'] > 0)]
df4['l' + 'ap_hi'] = df4['ap_hi'].apply(np.log1p)
df4['l' + 'ap_lo'] = df4['ap_lo'].apply(np.log1p)
g = sns.jointplot(x='l' + 'ap_hi', y='l' + 'ap_lo', data=df4, dropna=True)

# Grid.
g.ax_joint.grid(True)

# Convert the logarithmic values on the axes back to real values.
g.ax_joint.yaxis.set_major_formatter(matplotlib.ticker.FuncFormatter(lambda x, pos: str(round(int(np.exp(x))))))
g.ax_joint.xaxis.set_major_formatter(matplotlib.ticker.FuncFormatter(lambda x, pos: str(round(int(np.exp(x))))))

# 3??
train['age_years'] = (train['age'] // 365.25).astype(int)
train.head()

plt.subplots(figsize=(18, 10))
sns.countplot(x='age_years', hue='cardio', data=train)
# 53
```
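One caveat about the tick formatter above (our observation, not the notebook's): the axes were transformed with `np.log1p`, but the formatter inverts with `np.exp`. The exact inverse of `log1p` is `np.expm1`, so each displayed tick is off by one unit of pressure. A quick check:

```python
import numpy as np

ap_hi = 120.0  # a typical systolic pressure reading
y = np.log1p(ap_hi)  # log1p(x) = log(1 + x)

print(np.exp(y))    # ~121, off by one
print(np.expm1(y))  # ~120, the exact inverse
```

For axis labels the one-unit error is invisible at this scale, but `np.expm1` costs nothing and is exact.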
# K-means clustering

When working with large datasets it can be helpful to group similar observations together. This process, known as clustering, is one of the most widely used techniques in Machine Learning and is often applied when our dataset comes without pre-existing labels. In this notebook we're going to implement the classic K-means algorithm, the simplest and most widely used clustering method. Once we've implemented it we'll use it to split a dataset into groups and see how our clustering compares to the 'true' labelling.

## Import Modules

```
import numpy as np
import random
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal
```

## Generate Dataset

```
modelParameters = {'mu': [[-2, 1], [0.5, -1], [0, 1]],
                   'pi': [0.2, 0.35, 0.45],
                   'sigma': 0.4,
                   'n': 200}

# Check that pi sums to 1 (use a tolerance rather than == for floats).
if not np.isclose(np.sum(modelParameters['pi']), 1):
    print('Mixture weights must sum to 1!')

data = []

def generateLabels(n, pi):
    """Generate n realisations of a categorical distribution with parameters pi,
    i.e. determine which mixture component each point belongs to."""
    unif = np.random.uniform(size=n)  # generate uniform random variables
    labels = [(u < np.cumsum(pi)).argmax() for u in unif]  # assign each point to a cluster
    return labels

def generateMixture(labels, params):
    """Given the labels, sample from the corresponding normal distribution."""
    normalSamples = []
    for label in labels:
        # Select the parameters for this component.
        mu = params['mu'][label]
        Sigma = np.diag([params['sigma']**2] * len(mu))

        # Sample from the multivariate normal.
        samp = np.random.multivariate_normal(mean=mu, cov=Sigma, size=1)
        normalSamples.append(samp)

    normalSamples = np.reshape(normalSamples, (len(labels), len(params['mu'][0])))
    return normalSamples

# Labels (in practice we don't actually know what these are!)
labels = generateLabels(modelParameters['n'], modelParameters['pi'])
X = generateMixture(labels, modelParameters)  # features (we do know what these are)
```

# Quickly plot the data so we know what it looks like

```
plt.figure(figsize=(10, 6))
plt.scatter(X[:, 0], X[:, 1], c=labels)
plt.show()
```

When doing K-means clustering, our goal is to sort the data into 3 clusters using the data $X$. When we're clustering we don't have access to the colour (label) of each point, so the data we're actually given would look like this:

```
plt.figure(figsize=(10, 6))
plt.scatter(X[:, 0], X[:, 1])
plt.title('Example data - no labels')
plt.show()
```

If we inspect the data we can still see that it is roughly made up of 3 groups: one in the top left corner, one in the top right corner and one in the bottom right corner.

## How does K-means work?

The K in K-means represents the number of clusters, K, that we will sort the data into.

Let's imagine we had already sorted the data into K clusters (like in the first plot above) and were trying to decide what the label of a new point should be. It would make sense to assign it to the cluster it is closest to. But how do we define 'closest to'? One way would be to give it the same label as the point that is closest to it (a 'nearest neighbour' approach), but a more robust way would be to determine where the 'middle' of each cluster is and assign the new point to the cluster with the closest middle. We call this 'middle' the Cluster Centroid, and we calculate it by taking the average of all the points in the cluster.

That's all very well and good if we already have the clusters in place, but the whole point of the algorithm is to find out what the clusters are! To find the clusters, we do the following:

1. Randomly initialise K Cluster Centroids
2. Assign each point to the Cluster Centroid it is closest to
3. Update each Cluster Centroid as the average of all points currently assigned to it
4. Repeat steps 2-3 until convergence

### Why does K-means work?
Our aim is to find K Cluster Centroids such that the overall distance between each datapoint and its Cluster Centroid is minimised. That is, we want to choose cluster centroids $C = \{C_1,...,C_K\}$ such that the error function:

$$E(C) = \sum_{i=1}^n ||x_i-C_{x_i}||^2$$

is minimised, where $C_{x_i}$ is the Cluster Centroid associated with the ith observation and $||x_i-C_{x_i}||$ is the Euclidean distance between the ith observation and its associated Cluster Centroid.

Now assume that after $m$ iterations of the algorithm, the current value of $E(C)$ is $\alpha$. By carrying out step 2, we make sure that each point is assigned to the nearest cluster centroid; by doing this, either $\alpha$ stays the same (every point was already assigned to the closest centroid) or $\alpha$ gets smaller (one or more points are moved to a nearer centroid, and hence the total distance is reduced). Similarly with step 3: by changing each centroid to be the average of all points in its cluster, we minimise the total distance associated with that cluster, so again $\alpha$ can only stay the same or go down.

In this way we see that as we run the algorithm $E(C)$ is non-increasing, so by continuing to run the algorithm our results can't get worse; hopefully, if we run it for long enough, the results will be sensible!
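Steps 2 and 3 each have a compact NumPy form. Here is a minimal sketch of the two updates (the helper names `assign_step` and `update_step` are ours, not part of the notebook's class):

```python
import numpy as np

def assign_step(X, centroids):
    # Step 2: distance from every point to every centroid; pick the nearest.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

def update_step(X, assignments, centroids):
    # Step 3: move each centroid to the mean of its assigned points.
    # (An empty cluster keeps its old position; that's one common convention.)
    new_centroids = centroids.copy()
    for k in range(centroids.shape[0]):
        members = X[assignments == k]
        if len(members) > 0:
            new_centroids[k] = members.mean(axis=0)
    return new_centroids

# Two well-separated pairs of points and two starting centroids.
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])

assignments = assign_step(X, centroids)       # [0, 0, 1, 1]
centroids = update_step(X, assignments, centroids)
print(centroids)  # centroids move to the pair means: (0, 0.5) and (10, 10.5)
```

One pass of these two functions is exactly one iteration of the loop described above; the full algorithm just repeats them until $E(C)$ stops improving.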
```
class KMeans:
    def __init__(self, data, K):
        self.data = data  # dataset with no labels
        self.K = K        # number of clusters to sort the data into

        # Randomly initialise centroids.
        # If the data has p features then this should be a K x p array.
        self.Centroids = np.random.normal(0, 1, (self.K, self.data.shape[1]))

    def closestCentroid(self, x):
        # Takes a single example and returns the index of the closest centroid.
        # Recall centroids are saved as self.Centroids.
        pass

    def assignToCentroid(self):
        # Assign each observation to a centroid by passing each observation
        # to the function closestCentroid.
        pass

    def updateCentroids(self):
        # Based on the current cluster assignments (stored in self.assignments),
        # update the centroids.
        pass

    def runKMeans(self, tolerance=0.00001):
        # When the improvement between two successive evaluations of our error
        # function is less than tolerance, we stop.
        change = 1000  # initialise change to be a big number
        numIterations = 0

        # Keep track of how the centroids evolve over time.
        self.CentroidStore = [np.copy(self.Centroids)]

        # while change > tolerance:
        #     Code goes here...

        print(f'K-means Algorithm converged in {numIterations} steps')

myKM = KMeans(X, 3)
myKM.runKMeans()
```

## Let's plot the results

```
c = [0, 1, 2] * len(myKM.CentroidStore)
plt.figure(figsize=(10, 6))
plt.scatter(np.array(myKM.CentroidStore).reshape(-1, 2)[:, 0],
            np.array(myKM.CentroidStore).reshape(-1, 2)[:, 1],
            c=np.array(c), s=200, marker='*')
plt.scatter(X[:, 0], X[:, 1], s=12)
plt.title('Example data from a mixture of Gaussians - Cluster Centroid traces')
plt.show()
```

The stars of each colour above represent the trajectory of each cluster centroid as the algorithm progressed. Starting from a random initialisation, the centroids rapidly converged to separate clusters, which is encouraging. Now let's plot the data with the labels that we've assigned to them.
```
plt.figure(figsize=(10, 6))
plt.scatter(X[:, 0], X[:, 1], s=20, c=myKM.assignments)
plt.scatter(np.array(myKM.Centroids).reshape(-1, 2)[:, 0],
            np.array(myKM.Centroids).reshape(-1, 2)[:, 1],
            s=200, marker='*', c='red')
plt.title('Example data from a mixture of Gaussians - Including Cluster Centroids')
plt.show()
```

The plot above shows the final clusters (with red Cluster Centroids) assigned by the model, which should be pretty close to the 'true' clusters at the top of the page.

Note: it's possible that although the clusters are the same, the labels are different. Remember that K-means isn't supposed to identify the correct label; it's supposed to group the data into clusters which in reality share the same labels.

The data we've worked with in this notebook had an underlying structure that made it easy for K-means to identify distinct clusters. However, let's look at an example where K-means doesn't perform so well.

## The sting in the tail - A more complex data structure

```
theta = np.linspace(0, 2 * np.pi, 100)
r = 15
x1 = r * np.cos(theta)
x2 = r * np.sin(theta)

# Perturb the values in the circle.
x1 = x1 + np.random.normal(0, 2, x1.shape[0])
x2 = x2 + np.random.normal(0, 2, x2.shape[0])

# A second, central cluster of points.
z1 = np.random.normal(0, 3, x1.shape[0])
z2 = np.random.normal(0, 3, x2.shape[0])

x1 = np.array([x1, z1]).reshape(-1)
x2 = np.array([x2, z2]).reshape(-1)

plt.scatter(x1, x2)
plt.show()
```

It might be the case that the underlying generative structure we want to capture is that the 'outer ring' in the plot corresponds to one kind of process and the 'inner circle' corresponds to another.
```
# Get the data in the format we want.
newX = []
for i in range(x1.shape[0]):
    newX.append([x1[i], x2[i]])
newX = np.array(newX)

# Run KMeans.
myNewKM = KMeans(newX, 2)
myNewKM.runKMeans()

plt.figure(figsize=(10, 6))
plt.scatter(newX[:, 0], newX[:, 1], s=20, c=np.array(myNewKM.assignments))
plt.scatter(np.array(myNewKM.Centroids).reshape(-1, 2)[:, 0],
            np.array(myNewKM.Centroids).reshape(-1, 2)[:, 1],
            s=200, marker='*', c='red')
plt.title('Assigned K-Means labels for Ring data')
plt.show()
```

The above plot indicates that K-means isn't able to identify the ring-like structure we mentioned above. The clustering it has performed is perfectly valid; remember, in K-means' world labels don't exist, and this is a legitimate clustering of the data! However, if we were to use this clustering, our subsequent analyses might be negatively impacted. In a future post we'll implement a method which is capable of capturing non-linear relationships more effectively (the Gaussian Mixture Model).
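One workaround worth sketching (not in the original notebook): the two processes here differ mainly in their distance from the origin, so a hand-crafted radius feature makes the groups separable even by very simple methods. A sketch under the same generative assumptions as the ring data above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Regenerate a ring (radius ~15) and a central blob, as in the cell above.
theta = np.linspace(0, 2 * np.pi, 100)
ring = np.c_[15 * np.cos(theta), 15 * np.sin(theta)] + rng.normal(0, 2, (100, 2))
blob = rng.normal(0, 3, (100, 2))
X = np.vstack([ring, blob])

# Distance from the origin is roughly constant on the ring and small in the blob,
# so clustering on the radius alone separates the two processes.
radius = np.linalg.norm(X, axis=1)
split = radius > radius.mean()  # a crude 1-D split standing in for 1-D K-means

print(split[:100].mean(), split[100:].mean())  # ring mostly True, blob mostly False
```

This is the same idea that motivates kernel methods and the Gaussian Mixture Model mentioned above: if the raw coordinates don't separate the clusters linearly, a transformed feature space sometimes can.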
# Transfer Learning

In this notebook, you'll learn how to use pre-trained networks to solve challenging problems in computer vision. Specifically, you'll use networks trained on [ImageNet](http://www.image-net.org/) [available from torchvision](http://pytorch.org/docs/0.3.0/torchvision/models.html).

ImageNet is a massive dataset with over 1 million labeled images in 1000 categories. It's used to train deep neural networks built from convolutional layers. I'm not going to get into the details of convolutional networks here, but if you want to learn more about them, please [watch this](https://www.youtube.com/watch?v=2-Ol7ZB0MmU).

Once trained, these models work astonishingly well as feature detectors for images they weren't trained on. Using a pre-trained network on images not in the training set is called transfer learning. Here we'll use transfer learning to train a network that can classify our cat and dog photos with near perfect accuracy.

With `torchvision.models` you can download these pre-trained networks and use them in your applications. We'll include `models` in our imports now.

```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import matplotlib.pyplot as plt

import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms, models
```

Most of the pretrained models require the input to be 224x224 images. Also, we'll need to match the normalization used when the models were trained: each color channel was normalized separately, with means `[0.485, 0.456, 0.406]` and standard deviations `[0.229, 0.224, 0.225]`.
```
data_dir = './Cat_Dog_data'

# TODO: Define transforms for the training data and testing data.
# One reasonable choice (augment the training set; only resize/crop the test set):
train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       transforms.Normalize([0.485, 0.456, 0.406],
                                                            [0.229, 0.224, 0.225])])

test_transforms = transforms.Compose([transforms.Resize(255),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor(),
                                      transforms.Normalize([0.485, 0.456, 0.406],
                                                           [0.229, 0.224, 0.225])])

# Pass transforms in here, then run the next cell to see how the transforms look.
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)

trainloader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)
testloader = torch.utils.data.DataLoader(test_data, batch_size=64)
```

We can load in a model such as [DenseNet](http://pytorch.org/docs/0.3.0/torchvision/models.html#id5). Let's print out the model architecture so we can see what's going on.

```
model = models.densenet121(pretrained=True)
model
```

This model is built out of two main parts: the features and the classifier. The features part is a stack of convolutional layers that overall works as a feature detector whose output can be fed into a classifier. The classifier part is a single fully-connected layer, `(classifier): Linear(in_features=1024, out_features=1000)`. This layer was trained on the ImageNet dataset, so it won't work for our specific problem. That means we need to replace the classifier, but the features will work perfectly well on their own. In general, I think about pre-trained networks as amazingly good feature detectors that can be used as the input for simple feed-forward classifiers.

```
# Freeze parameters so we don't backprop through them.
for param in model.parameters():
    param.requires_grad = False

from collections import OrderedDict
classifier = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(1024, 500)),
    ('relu', nn.ReLU()),
    ('fc2', nn.Linear(500, 2)),
    ('output', nn.LogSoftmax(dim=1))
]))

model.classifier = classifier
```

With our model built, we need to train the classifier. However, now we're using a **really deep** neural network. If you try to train this on a CPU like normal, it will take a long, long time.
Instead, we're going to use the GPU to do the calculations. The linear algebra computations are done in parallel on the GPU, leading to up to 100x faster training speeds. It's also possible to train on multiple GPUs, further decreasing training time.

PyTorch, along with pretty much every other deep learning framework, uses [CUDA](https://developer.nvidia.com/cuda-zone) to efficiently compute the forward and backward passes on the GPU. In PyTorch, you move your model parameters and other tensors to GPU memory using `model.to('cuda')`. You can move them back from the GPU with `model.to('cpu')`, which you'll commonly do when you need to operate on the network output outside of PyTorch. As a demonstration of the increased speed, I'll compare how long it takes to perform a forward and backward pass with and without a GPU.

```
import time

for device in ['cpu', 'cuda']:
    criterion = nn.NLLLoss()
    # Only train the classifier parameters; feature parameters are frozen.
    optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)

    model.to(device)

    for ii, (inputs, labels) in enumerate(trainloader):
        # Move input and label tensors to the current device.
        inputs, labels = inputs.to(device), labels.to(device)

        start = time.time()

        outputs = model.forward(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        if ii == 3:
            break

    print(f"Device = {device}; Time per batch: {(time.time() - start)/3:.3f} seconds")
```

You can write device-agnostic code which will automatically use CUDA if it's enabled, like so:

```python
# at the beginning of the script
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

...

# then whenever you get a new Tensor or Module
# (this won't copy if they are already on the desired device)
input = data.to(device)
model = MyModule(...).to(device)
```

From here, I'll let you finish training the model. The process is the same as before, except now your model is much more powerful. You should easily get better than 95% accuracy.
>**Exercise:** Train a pretrained model to classify the cat and dog images. Continue with the DenseNet model, or try ResNet; it's also a good model to try out first. Make sure you are only training the classifier, and that the parameters of the features part are frozen.

```
class MyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(1024, 512)
        self.fc2 = nn.Linear(512, 256)
        self.output = nn.Linear(256, 2)
        self.dropout = nn.Dropout(0.2)

    def forward(self, x):
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = F.log_softmax(self.output(x), dim=1)
        return x

## TODO: Use a pretrained model to classify the cat and dog images.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Only train the classifier parameters; freeze the feature parameters
# so we don't backprop through them.
for param in model.parameters():
    param.requires_grad = False

model.classifier = MyClassifier()
model.to(device)

criterion = nn.NLLLoss()
optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)

epochs = 1
train_losses, test_losses = [], []
steps = 0
print_every = 5
start = time.time()

for epoch in range(epochs):
    currentLoss = 0
    steps = 0
    for inputs, labels in trainloader:
        steps += 1
        # Move input and label tensors to the current device.
        inputs, labels = inputs.to(device), labels.to(device)

        optimizer.zero_grad()
        outputs = model.forward(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        currentLoss += loss.item()

        if steps % print_every == 0:
            ## TODO: Implement the validation pass and print out the validation accuracy.
            test_loss = 0
            accuracy = 0

            # Turn off gradients for validation; saves memory and computation.
            with torch.no_grad():
                model.eval()
                for images, labels in testloader:
                    images, labels = images.to(device), labels.to(device)
                    log_ps = model(images)
                    test_loss += criterion(log_ps, labels)
                    ps = torch.exp(log_ps)
                    top_p, top_class = ps.topk(1, dim=1)
                    equals = top_class == labels.view(*top_class.shape)
                    accuracy += torch.mean(equals.type(torch.FloatTensor))

            model.train()

            train_losses.append(currentLoss / len(trainloader))
            test_losses.append(test_loss / len(testloader))

            print("Epoch: {}/{}.. ".format(epoch + 1, epochs),
                  "Step: {}/{}.. ".format(steps, len(trainloader)),
                  "Training Loss: {:.3f}.. ".format(currentLoss / len(trainloader)),
                  "Test Loss: {:.3f}.. ".format(test_loss / len(testloader)),
                  "Test Accuracy: {:.3f}".format(accuracy / len(testloader)))

end = time.time()
print(f"Device = {device}; Training Time: {(end - start)/60:.3f} minutes")

plt.plot(train_losses, label='Training loss')
plt.plot(test_losses, label='Validation loss')
plt.legend(frameon=False)

model

# Save the classifier.
torch.save(model, 'checkpoint_dog_cat.pth')

# Load the classifier.
cls = torch.load('checkpoint_dog_cat.pth')
print(cls)

# Import helper module (should be in the repo).
import helper

cross_data = datasets.ImageFolder(data_dir + '/cross/', transform=test_transforms)
crossloader = torch.utils.data.DataLoader(cross_data, batch_size=64, shuffle=True)

# Test out your network!
# cls.eval()
dataiter = iter(crossloader)
images, labels = next(dataiter)
images, labels = images.to(device), labels.to(device)

# Calculate the class probabilities (softmax) for each image.
with torch.no_grad():
    output = cls(images)

ps = torch.exp(output)
for i in range(len(images)):
    index = torch.argmax(ps[i])
    pred = "Cat" if index == 0 else "Dog"
    print("Predicted: " + str(pred))
    helper.imshow(images[i].cpu(), normalize=True)
```
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W0D5_Statistics/student/W0D5_Tutorial2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Tutorial 2: Statistical Inference

**Week 0, Day 5: Statistics**

**By Neuromatch Academy**

__Content creators:__ Ulrik Beierholm

__Content reviewers:__ Ethan Cheng, Manisha Sinha

---

# Tutorial Objectives

This tutorial builds on Tutorial 1 by explaining how to do inference through inverting the generative process. By completing the exercises in this tutorial, you should:

* understand what the likelihood function is, and have some intuition of why it is important
* know how to summarise the Gaussian distribution using mean and variance
* know how to maximise a likelihood function
* be able to do simple inference in both classical and Bayesian ways
* (Optional) understand how a Bayes Net can be used to model causal relationships

```
#@markdown Tutorial slides (to be added)
from IPython.display import HTML
HTML('<iframe src="https://mfr.ca-1.osf.io/render?url=https://osf.io/kaq2x/?direct%26mode=render%26action=download%26mode=render" frameborder="0" width="960" height="569" allowfullscreen="true" mozallowfullscreen="true" webkitallowfullscreen="true"></iframe>')
```

---

# Setup

Make sure to run this before you get started.

```
# Imports
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
from numpy.random import default_rng  # a default random number generator
from scipy.stats import norm  # the normal probability distribution

#@title Figure settings
import ipywidgets as widgets  # interactive display
from ipywidgets import interact, fixed, HBox, Layout, VBox, interactive, Label, interact_manual
%config InlineBackend.figure_format = 'retina'
# plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/NMA2020/nma.mplstyle")

#@title Plotting & Helper functions

def plot_hist(data, xlabel, figtitle=None, num_bins=None):
  """ Plot the given data as a histogram.

    Args:
      data (ndarray): array with data to plot as histogram
      xlabel (str): label of x-axis
      figtitle (str): title of histogram plot (default is no title)
      num_bins (int): number of bins for histogram (default is 10)

    Returns:
      count (ndarray): number of samples in each histogram bin
      bins (ndarray): center of each histogram bin
  """
  fig, ax = plt.subplots()
  ax.set_xlabel(xlabel)
  ax.set_ylabel('Count')
  if num_bins is not None:
    count, bins, _ = plt.hist(data, bins=num_bins)
  else:
    count, bins, _ = plt.hist(data)  # 10 bins by default
  if figtitle is not None:
    fig.suptitle(figtitle, size=16)
  plt.show()
  return count, bins


def plot_gaussian_samples_true(samples, xspace, mu, sigma, xlabel, ylabel):
  """ Plot a histogram of the data samples on the same plot as the Gaussian
  distribution specified by the given mu and sigma values.

    Args:
      samples (ndarray): data samples for gaussian distribution
      xspace (ndarray): x values to sample from normal distribution
      mu (scalar): mean parameter of normal distribution
      sigma (scalar): variance parameter of normal distribution
      xlabel (str): the label of the x-axis of the histogram
      ylabel (str): the label of the y-axis of the histogram

    Returns:
      Nothing.
""" fig, ax = plt.subplots() ax.set_xlabel(xlabel) ax.set_ylabel(ylabel) # num_samples = samples.shape[0] count, bins, _ = plt.hist(samples, density=True) # probability density function plt.plot(xspace, norm.pdf(xspace, mu, sigma),'r-') plt.show() def plot_likelihoods(likelihoods, mean_vals, variance_vals): """ Plot the likelihood values on a heatmap plot where the x and y axes match the mean and variance parameter values the likelihoods were computed for. Args: likelihoods (ndarray): array of computed likelihood values mean_vals (ndarray): array of mean parameter values for which the likelihood was computed variance_vals (ndarray): array of variance parameter values for which the likelihood was computed Returns: Nothing. """ fig, ax = plt.subplots() im = ax.imshow(likelihoods) cbar = ax.figure.colorbar(im, ax=ax) cbar.ax.set_ylabel('log likelihood', rotation=-90, va="bottom") ax.set_xticks(np.arange(len(mean_vals))) ax.set_yticks(np.arange(len(variance_vals))) ax.set_xticklabels(mean_vals) ax.set_yticklabels(variance_vals) ax.set_xlabel('Mean') ax.set_ylabel('Variance') def posterior_plot(x, likelihood=None, prior=None, posterior_pointwise=None, ax=None): """ Plots normalized Gaussian distributions and posterior. Args: x (numpy array of floats): points at which the likelihood has been evaluated auditory (numpy array of floats): normalized probabilities for auditory likelihood evaluated at each `x` visual (numpy array of floats): normalized probabilities for visual likelihood evaluated at each `x` posterior (numpy array of floats): normalized probabilities for the posterior evaluated at each `x` ax: Axis in which to plot. If None, create new axis. Returns: Nothing. 
""" if likelihood is None: likelihood = np.zeros_like(x) if prior is None: prior = np.zeros_like(x) if posterior_pointwise is None: posterior_pointwise = np.zeros_like(x) if ax is None: fig, ax = plt.subplots() ax.plot(x, likelihood, '-C1', LineWidth=2, label='Auditory') ax.plot(x, prior, '-C0', LineWidth=2, label='Visual') ax.plot(x, posterior_pointwise, '-C2', LineWidth=2, label='Posterior') ax.legend() ax.set_ylabel('Probability') ax.set_xlabel('Orientation (Degrees)') plt.show() return ax def plot_classical_vs_bayesian_normal(num_points, mu_classic, var_classic, mu_bayes, var_bayes): """ Helper function to plot optimal normal distribution parameters for varying observed sample sizes using both classic and Bayesian inference methods. Args: num_points (int): max observed sample size to perform inference with mu_classic (ndarray): estimated mean parameter for each observed sample size using classic inference method var_classic (ndarray): estimated variance parameter for each observed sample size using classic inference method mu_bayes (ndarray): estimated mean parameter for each observed sample size using Bayesian inference method var_bayes (ndarray): estimated variance parameter for each observed sample size using Bayesian inference method Returns: Nothing. 
""" xspace = np.linspace(0, num_points, num_points) fig, ax = plt.subplots() ax.set_xlabel('n data points') ax.set_ylabel('mu') plt.plot(xspace, mu_classic,'r-', label = "Classical") plt.plot(xspace, mu_bayes,'b-', label = "Bayes") plt.legend() plt.show() fig, ax = plt.subplots() ax.set_xlabel('n data points') ax.set_ylabel('sigma^2') plt.plot(xspace, var_classic,'r-', label = "Classical") plt.plot(xspace, var_bayes,'b-', label = "Bayes") plt.legend() plt.show() ``` --- # Section 1: Statistical Inference and Likelihood ``` #@title Video 4: Inference from IPython.display import YouTubeVideo video = YouTubeVideo(id="765S2XKYoJ8", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video ``` A generative model (such as the Gaussian distribution from the previous tutorial) allows us to make prediction about outcomes. However, after we observe $n$ data points, we can also evaluate our model (and any of its associated parameters) by calculating the **likelihood** of our model having generated each of those data points $x_i$. $$P(x_i|\mu,\sigma)=\mathcal{N}(x_i,\mu,\sigma)$$ For all data points $\mathbf{x}=(x_1, x_2, x_3, ...x_n) $ we can then calculate the likelihood for the whole dataset by computing the product of the likelihood for each single data point. $$P(\mathbf{x}|\mu,\sigma)=\prod_{i=1}^n \mathcal{N}(x_i,\mu,\sigma)$$ As a function of the parameters (when the data points $x$ are fixed), this is referred to as the **likelihood function**, $L(\mu,\sigma)$. In the last tutorial we reviewed how the data was generated given the selected parameters of the generative process. If we do not know the parameters $\mu$, $\sigma$ that generated the data, we can ask which parameter values (given our model) gives the best (highest) likelihood. ## Exercise 1A: Likelihood, mean and variance We can use the likelihood to find the set of parameters that are most likely to have generated the data (given the model we are using). 
That is, we want to infer the parameters that gave rise to the data we observed. We will try a couple of ways of doing statistical inference. In the following exercise, we will sample from the Gaussian distribution (again), plot a histogram and the Gaussian probability density function, and calculate some statistics from the samples. Specifically we will calculate: * Likelihood * Mean * Standard deviation Statistical moments are defined based on the expectations. The first moment is the expected value, i.e. the mean, the second moment is the expected squared value, i.e. variance, and so on. The special thing about the Gaussian is that mean and standard deviation of the random sample can effectively approximate the two parameters of a Gaussian, $\mu, \sigma$. Hence using the sample mean, $\bar{x}=\frac{1}{n}\sum_i x_i$, and variance, $\bar{\sigma}^2=\frac{1}{n} \sum_i (x_i-\bar{x})^2 $ should give us the best/maximum likelihood, $L(\bar{x},\bar{\sigma}^2)$. Let's see if that actually works. If we search through different combinations of $\mu$ and $\sigma$ values, do the sample mean and variance values give us the maximum likelihood (of observing our data)? You need to modify two lines below to generate the data from a normal distribution $N(5, 1)$, and plot the theoretical distribution. Note that we are reusing functions from tutorial 1, so review that tutorial if needed. Then you will use this random sample to calculate the likelihood for a variety of potential mean and variance parameter values. For this tutorial we have chosen a variance parameter of 1, meaning the standard deviation is also 1 in this case. Most of our functions take the standard deviation sigma as a parameter, so we will write $\sigma = 1$. (Note that in practice computing the sample variance like this $$\bar{\sigma}^2=\frac{1}{(n-1)} \sum_i (x_i-\bar{x})^2 $$ is actually better, take a look at any statistics textbook for an explanation of this.) 
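The note above about dividing by $n-1$ can be checked directly. Here is a minimal sketch (separate from the exercise code; the sample values are made up for illustration) contrasting the two variance formulas:

```python
import numpy as np

# A small made-up sample
x = np.array([4.0, 5.0, 6.0, 5.0])
x_bar = x.mean()

# Divide by n: the maximum likelihood (biased) estimator
var_biased = np.sum((x - x_bar)**2) / len(x)

# Divide by n - 1: the unbiased estimator (Bessel's correction)
var_unbiased = np.sum((x - x_bar)**2) / (len(x) - 1)

print(var_biased)    # matches np.var(x)
print(var_unbiased)  # matches np.var(x, ddof=1)
```

NumPy exposes the same choice through the `ddof` argument of `np.var` (`ddof=0` divides by $n$, `ddof=1` by $n-1$).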
```
def generate_normal_samples(mu, sigma, num_samples):
  """
  Generates a desired number of samples from a normal distribution,
  Normal(mu, sigma).

  Args:
    mu (scalar): mean parameter of the normal distribution
    sigma (scalar): standard deviation parameter of the normal distribution
    num_samples (int): number of samples drawn from normal distribution

  Returns:
    sampled_values (ndarray): an array of shape (num_samples, ) containing the samples
  """
  random_num_generator = default_rng(0)
  sampled_values = random_num_generator.normal(mu, sigma, num_samples)
  return sampled_values


def compute_likelihoods_normal(x, mean_vals, variance_vals):
  """
  Computes the log-likelihood values given an observed data sample x, and
  potential mean and variance values for a normal distribution.

  Args:
    x (ndarray): 1-D array with all the observed data
    mean_vals (ndarray): 1-D array with all potential mean values to
                         compute the likelihood function for
    variance_vals (ndarray): 1-D array with all potential variance values to
                             compute the likelihood function for

  Returns:
    likelihood (ndarray): 2-D array of shape
                          (number of variance_vals, number of mean_vals)
                          for which the likelihood of the observed data
                          was computed
  """
  # Initialise likelihood collection array: rows index the variance values,
  # columns the mean values, matching how plot_likelihoods lays out its heatmap
  likelihood = np.zeros((variance_vals.shape[0], mean_vals.shape[0]))

  # Compute the likelihood for observing the given data x assuming
  # each combination of mean and variance values
  for idxMean in range(mean_vals.shape[0]):
    for idxVar in range(variance_vals.shape[0]):
      likelihood[idxVar, idxMean] = sum(np.log(norm.pdf(x, mean_vals[idxMean],
                                                        variance_vals[idxVar])))

  return likelihood


###################################################################
## TODO for students: Generate 1000 random samples from a normal distribution
## with mu = 5 and sigma = 1
# Fill out the following then remove
raise NotImplementedError("Student exercise: need to generate samples")
###################################################################

# Generate data
mu = 5
sigma = 1  #
# since variance = 1, sigma = 1
x = ...

# You can calculate mean and variance through either numpy or scipy
print("This is the sample mean as estimated by numpy: " + str(np.mean(x)))
print("This is the sample standard deviation as estimated by numpy: " + str(np.std(x)))

# or
meanX, stdX = sp.stats.norm.fit(x)
print("This is the sample mean as estimated by scipy: " + str(meanX))
print("This is the sample standard deviation as estimated by scipy: " + str(stdX))

###################################################################
## TODO for students: Use the given function to compute the likelihood for
## a variety of mean and variance values
# Fill out the following then remove
raise NotImplementedError("Student exercise: need to compute likelihoods")
###################################################################

# Let's look through possible mean and variance values for the highest likelihood
# using the compute_likelihoods_normal function
meanTest = np.linspace(1, 10, 10)  # potential mean values to try
varTest = np.array([0.7, 0.8, 0.9, 1, 1.2, 1.5, 2, 3, 4, 5])  # potential variance values to try
likelihoods = ...
# Uncomment once you've generated the samples and computed the likelihoods
# xspace = np.linspace(0, 10, 100)
# plot_gaussian_samples_true(x, xspace, mu, sigma, "x", "Count")
# plot_likelihoods(likelihoods, meanTest, varTest)
```

[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D5_Statistics/solutions/W0D5_Tutorial2_Solution_7687f6b1.py)

*Example output:*

<img alt='Solution hint' align='left' width=560 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D5_Statistics/static/W0D5_Tutorial2_Solution_7687f6b1_1.png>

<img alt='Solution hint' align='left' width=534 height=414 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D5_Statistics/static/W0D5_Tutorial2_Solution_7687f6b1_2.png>

The top figure should hopefully show a nice fit between the histogram and the distribution that generated the data. So far so good. Underneath you should see the sample mean and variance values, which are close to the true values (that we happen to know here).

In the heatmap we should be able to see that the mean and variance parameter values yielding the highest likelihood (yellow) correspond (roughly) to the combination of the calculated sample mean and variance from the dataset. But it can be hard to see from such a rough **grid-search** simulation, as it is only as precise as the resolution of the grid we are searching.

Implicitly, by looking for the parameters that give the highest likelihood, we have been searching for the **maximum likelihood** estimate.

$$(\hat{\mu},\hat{\sigma})=\underset{\mu,\sigma}{\operatorname{argmax}}\,L(\mu,\sigma)=\underset{\mu,\sigma}{\operatorname{argmax}} \prod_{i=1}^n \mathcal{N}(x_i,\mu,\sigma)$$

For a simple Gaussian this can actually be done analytically (you have likely already done so yourself), using the statistical moments: mean and standard deviation (variance).

In the next section we will look at other ways of inferring such parameter values.
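The analytic claim above can also be checked numerically. Below is a minimal sketch (separate from the exercise code; it only assumes `numpy` and `scipy.stats.norm`) confirming that the sample mean and standard deviation score a higher summed log-likelihood than perturbed parameter values:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(5, 1, 1000)  # data from Normal(5, 1)

def log_likelihood(mu, sigma):
    # Summed log-likelihood of the data under Normal(mu, sigma)
    return np.sum(norm.logpdf(x, mu, sigma))

# The analytic MLE for a Gaussian: sample mean and (biased) sample std
mu_hat, sigma_hat = np.mean(x), np.std(x)

# Any perturbed parameter pair scores a strictly lower log-likelihood
assert log_likelihood(mu_hat, sigma_hat) > log_likelihood(mu_hat + 0.5, sigma_hat)
assert log_likelihood(mu_hat, sigma_hat) > log_likelihood(mu_hat, sigma_hat * 1.5)
```

This is exactly what the grid search approximates, except the closed-form answer is not limited by the grid resolution.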
## Interactive Demo: Maximum likelihood inference We want to do inference on this data set, i.e. we want to infer the parameters that most likely gave rise to the data given our model. Intuitively that means that we want as good as possible a fit between the observed data and the probability distribution function with the best inferred parameters. For now, just try to see how well you can fit the probability distribution to the data by using the demo sliders to control the mean and standard deviation parameters of the distribution. ``` #@title #@markdown Make sure you execute this cell to enable the widget and fit by hand! vals = generate_normal_samples(mu, sigma, 1000) def plotFnc(mu,sigma): #prepare to plot fig, ax = plt.subplots() ax.set_xlabel('x') ax.set_ylabel('probability') loglikelihood= sum(np.log(norm.pdf(vals,mu,sigma))) #calculate histogram count, bins, ignored = plt.hist(vals,density=True) x = np.linspace(0,10,100) #plot plt.plot(x, norm.pdf(x,mu,sigma),'r-') plt.show() print("The log-likelihood for the selected parameters is: " + str(loglikelihood)) #interact(plotFnc, mu=5.0, sigma=2.1); #interact(plotFnc, mu=widgets.IntSlider(min=0.0, max=10.0, step=1, value=4.0),sigma=widgets.IntSlider(min=0.1, max=10.0, step=1, value=4.0)); interact(plotFnc, mu=(0.0, 10.0, 0.1),sigma=(0.1, 10.0, 0.1)); ``` Did you notice the number below the plot? That is the summed log-likelihood, which increases (becomes less negative) as the fit improves. The log-likelihood should be greatest when $\mu$ = 5 and $\sigma$ = 1. Building upon what we did in the previous exercise, we want to see if we can do inference on observed data in a bit more principled way. ## Exercise 1B: Maximum Likelihood Estimation Let's again assume that we have a data set, $\mathbf{x}$, assumed to be generated by a normal distribution (we actually generate it ourselves in line 1, so we know how it was generated!). We want to maximise the likelihood of the parameters $\mu$ and $\sigma^2$. 
We can do so using a couple of tricks: * Using a log transform will not change the maximum of the function, but will allow us to work with very small numbers that could lead to problems with machine precision. * Maximising a function is the same as minimising the negative of a function, allowing us to use the minimize optimisation provided by scipy. In the code below, insert the missing line (see the `compute_likelihoods_normal` function from previous exercise), with the mean as theta[0] and variance as theta[1]. ``` mu = 5 sigma = 1 # Generate 1000 random samples from a Gaussian distribution dataX = generate_normal_samples(mu, sigma, 1000) # We define the function to optimise, the negative log likelihood def negLogLike(theta): """ Function for computing the negative log-likelihood given the observed data and given parameter values stored in theta. Args: dataX (ndarray): array with observed data points theta (ndarray): normal distribution parameters (mean is theta[0], variance is theta[1]) Returns: Calculated negative Log Likelihood value! """ ################################################################### ## TODO for students: Compute the negative log-likelihood value for the ## given observed data values and parameters (theta) # Fill out the following then remove raise NotImplementedError("Student exercise: need to compute the negative \ log-likelihood value") ################################################################### return ... # Define bounds, var has to be positive bnds = ((None, None), (0, None)) # Optimize with scipy! 
# Uncomment once function above is implemented # optimal_parameters = sp.optimize.minimize(negLogLike, (2, 2), bounds = bnds) # print("The optimal mean estimate is: " + str(optimal_parameters.x[0])) # print("The optimal variance estimate is: " + str(optimal_parameters.x[1])) # optimal_parameters contains a lot of information about the optimization, # but we mostly want the mean and variance ``` [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D5_Statistics/solutions/W0D5_Tutorial2_Solution_29984e0b.py) These are the approximations of the parameters that maximise the likelihood ($\mu$ ~ 5.281 and $\sigma$ ~ 1.170) Compare these values to the first and second moment (sample mean and variance) from the previous exercise, as well as to the true values (which we only know because we generated the numbers!). Consider the relationship discussed about statistical moments and maximising likelihood. Go back to the previous exercise and modify the mean and standard deviation values used to generate the observed data $x$, and verify that the values still work out. [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D5_Statistics/solutions/W0D5_Tutorial2_Solution_b0145e28.py) --- # Section 2: Bayesian Inference ``` #@title Video 5: Bayes from IPython.display import YouTubeVideo video = YouTubeVideo(id="12tk5FsVMBQ", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video ``` For Bayesian inference we do not focus on the likelihood function $L(y)=P(x|y)$, but instead focus on the posterior distribution: $$P(y|x)=\frac{P(x|y)P(y)}{P(x)}$$ which is composed of the likelihood function $P(x|y)$, the prior $P(y)$ and a normalising term $P(x)$ (which we will ignore for now). 
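To ground the formula, here is a tiny discrete sketch (the probabilities are made up for illustration) of how the likelihood, the prior, and the normalising term $P(x)$ combine:

```python
import numpy as np

# Hypothetical discrete example: y is one of three candidate hypotheses
prior = np.array([0.5, 0.3, 0.2])        # P(y)
likelihood = np.array([0.1, 0.4, 0.4])   # P(x | y) for the observed x

# Bayes rule: the posterior is proportional to likelihood * prior;
# dividing by the sum implements the normalising term P(x)
unnormalized = likelihood * prior
posterior = unnormalized / unnormalized.sum()

print(posterior)  # [0.2, 0.48, 0.32] -- sums to 1
```

Note how the first hypothesis, despite the largest prior, ends up with the smallest posterior once the likelihood is taken into account.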
While there are other advantages to using Bayesian inference (such as the ability to derive Bayesian Nets, see the optional bonus task below), we will first mostly focus on the role of the prior in inference.

## Exercise 2A: Performing Bayesian inference

In the above sections we performed inference using maximum likelihood, i.e. finding the parameter values that maximise the likelihood of the data, given the model. We will now repeat the inference process, but with an added Bayesian prior, and compare it to the classical inference process we did before (Section 1).

When using conjugate priors we can just update the parameter values of the distributions (here Gaussian distributions). For the prior we start by guessing a mean of 6 (the mean of the observed data points 5 and 7) and a variance of 1 (the variance of 5 and 7). This is a simplified way of applying a prior that allows us to just add these 2 values (pseudo-data) to the real data.

In the code below, complete the missing lines.

```
def classic_vs_bayesian_normal(mu, sigma, num_points, prior):
  """
  Compute both the classical and Bayesian inference processes over the range
  of data sample sizes (num_points) for a normal distribution with parameters
  mu, sigma for comparison.
Args: mu (scalar): the mean parameter of the normal distribution sigma (scalar): the standard deviation parameter of the normal distribution num_points (int): max number of points to use for inference prior (ndarray): prior data points for Bayesian inference Returns: mean_classic (ndarray): estimate mean parameter via classic inference var_classic (ndarray): estimate variance parameter via classic inference mean_bayes (ndarray): estimate mean parameter via Bayesian inference var_bayes (ndarray): estimate variance parameter via Bayesian inference """ # Initialize the classical and Bayesian inference arrays that will estimate # the normal parameters given a certain number of randomly sampled data points mean_classic = np.zeros(num_points) var_classic = np.zeros(num_points) mean_bayes = np.zeros(num_points) var_bayes = np.zeros(num_points) for nData in range(num_points): ################################################################### ## TODO for students: Complete classical inference for increasingly ## larger sets of random data points # Fill out the following then remove raise NotImplementedError("Student exercise: need to code classical inference") ################################################################### # Randomly sample nData + 1 number of points x = ... # Compute the mean of those points and set the corresponding array entry to this value mean_classic[nData] = ... # Compute the variance of those points and set the corresponding array entry to this value var_classic[nData] = ... 
# Bayesian inference with the given prior is performed below for you xsupp = np.hstack((x, prior)) mean_bayes[nData] = np.mean(xsupp) var_bayes[nData] = np.var(xsupp) return mean_classic, var_classic, mean_bayes, var_bayes # Set normal distribution parameters, mu and sigma mu = 5 sigma = 1 # Set the prior to be two new data points, 5 and 7, and print the mean and variance prior = np.array((5, 7)) print("The mean of the data comprising the prior is: " + str(np.mean(prior))) print("The variance of the data comprising the prior is: " + str(np.var(prior))) # Uncomment once the function above is completed # mean_classic, var_classic, mean_bayes, var_bayes = classic_vs_bayesian_normal(mu, sigma, 30, prior) # plot_classical_vs_bayesian_normal(30, mean_classic, var_classic, mean_bayes, var_bayes) ``` [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D5_Statistics/solutions/W0D5_Tutorial2_Solution_4cfc70ca.py) *Example output:* <img alt='Solution hint' align='left' width=560 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D5_Statistics/static/W0D5_Tutorial2_Solution_4cfc70ca_1.png> <img alt='Solution hint' align='left' width=560 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D5_Statistics/static/W0D5_Tutorial2_Solution_4cfc70ca_2.png> Hopefully you can see that the blue line stays a little closer to the true values ($\mu=5$, $\sigma^2=1$). Having a simple prior in the Bayesian inference process (blue) helps to regularise the inference of the mean and variance parameters when you have very little data, but has little effect with large data. You can see that as the number of data points (x-axis) increases, both inference processes (blue and red lines) get closer and closer together, i.e. their estimates for the true parameters converge as sample size increases. ## Think! 
2A: Bayesian Brains It should be clear how Bayesian inference can help you when doing data analysis. But consider whether the brain might be able to benefit from this too. If the brain needs to make inferences about the world, would it be useful to do regularisation on the input? [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D5_Statistics/solutions/W0D5_Tutorial2_Solution_daa12602.py) ## Exercise 2B: Finding the posterior computationally ***(Exercise moved from NMA2020 Bayes day, all credit to original creators!)*** Imagine an experiment where participants estimate the location of a noise-emitting object. To estimate its position, the participants can use two sources of information: 1. new noisy auditory information (the likelihood) 2. prior visual expectations of where the stimulus is likely to come from (visual prior). The auditory and visual information are both noisy, so participants will combine these sources of information to better estimate the position of the object. We will use Gaussian distributions to represent the auditory likelihood (in red), and a Gaussian visual prior (expectations - in blue). Using Bayes rule, you will combine them into a posterior distribution that summarizes the probability that the object is in each possible location. We have provided you with a ready-to-use plotting function, and a code skeleton. * You can use `my_gaussian` from Tutorial 1 (also included below), to generate an auditory likelihood with parameters $\mu$ = 3 and $\sigma$ = 1.5 * Generate a visual prior with parameters $\mu$ = -1 and $\sigma$ = 1.5 * Calculate the posterior using pointwise multiplication of the likelihood and prior. 
Don't forget to normalize so the posterior adds up to 1
* Plot the likelihood, prior and posterior using the predefined function `posterior_plot`

```
def my_gaussian(x_points, mu, sigma):
  """
  Returns normalized Gaussian estimated at points `x_points`, with
  parameters: mean `mu` and standard deviation `sigma`

  Args:
    x_points (ndarray of floats): points at which the gaussian is evaluated
    mu (scalar): mean of the Gaussian
    sigma (scalar): standard deviation of the gaussian

  Returns:
    (numpy array of floats): normalized Gaussian evaluated at `x`
  """
  px = 1/(2*np.pi*sigma**2)**0.5 * np.exp(-(x_points-mu)**2/(2*sigma**2))

  # as we are doing numerical integration we may have to remember to normalise
  # taking into account the stepsize (0.1)
  px = px/(0.1*sum(px))
  return px


def compute_posterior_pointwise(prior, likelihood):
  """
  Compute the posterior probability distribution point-by-point using Bayes
  Rule.

  Args:
    prior (ndarray): probability distribution of prior
    likelihood (ndarray): probability distribution of likelihood

  Returns:
    posterior (ndarray): probability distribution of posterior
  """
  ##############################################################################
  # TODO for students: Write code to compute the posterior from the prior and
  # likelihood via pointwise multiplication. (You may assume both are defined
  # over the same x-axis)
  #
  # Comment out the line below to test your solution
  raise NotImplementedError("Finish the simulation code first")
  ##############################################################################

  posterior = ...

  return posterior


def localization_simulation(mu_auditory = 3.0, sigma_auditory = 1.5,
                            mu_visual = -1.0, sigma_visual = 1.5):
  """
  Perform a sound localization simulation combining an auditory likelihood
  with a visual prior.
  Args:
    mu_auditory (float): mean parameter value for the auditory likelihood
                         distribution
    sigma_auditory (float): standard deviation parameter value for the
                            auditory likelihood distribution
    mu_visual (float): mean parameter value for the visual prior distribution
    sigma_visual (float): standard deviation parameter value for the visual
                          prior distribution

  Returns:
    x (ndarray): range of values for which to compute probabilities
    auditory (ndarray): probability distribution of the auditory likelihood
    visual (ndarray): probability distribution of the visual prior
    posterior_pointwise (ndarray): posterior probability distribution
  """
  ##############################################################################
  ## Using the x variable below,
  ## create a gaussian called 'auditory' with mean 3, and std 1.5
  ## create a gaussian called 'visual' with mean -1, and std 1.5
  #
  #
  ## Comment out the line below to test your solution
  raise NotImplementedError("Finish the simulation code first")
  ##############################################################################
  x = np.arange(-8, 9, 0.1)
  auditory = ...
  visual = ...
  posterior = compute_posterior_pointwise(auditory, visual)

  return x, auditory, visual, posterior


# Uncomment the lines below to plot the results
# x, auditory, visual, posterior_pointwise = localization_simulation()
# _ = posterior_plot(x, auditory, visual, posterior_pointwise)
```

[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D5_Statistics/solutions/W0D5_Tutorial2_Solution_ab4b98de.py)

*Example output:*

<img alt='Solution hint' align='left' width=560 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W0D5_Statistics/static/W0D5_Tutorial2_Solution_ab4b98de_1.png>

Combining the visual and auditory information could help the brain get a better estimate of the location of an audio-visual object, with lower variance.
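For two Gaussians the pointwise product can also be written in closed form: the posterior mean is a precision-weighted average, $\mu_{post} = \frac{\mu_1/\sigma_1^2 + \mu_2/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2}$, and the posterior precision is the sum of the two precisions, $1/\sigma_{post}^2 = 1/\sigma_1^2 + 1/\sigma_2^2$, which is why the posterior variance is smaller than either input variance. A quick numerical sketch (independent of the exercise code, using the same parameter values) comparing this to pointwise multiplication:

```python
import numpy as np
from scipy.stats import norm

mu_a, sig_a = 3.0, 1.5    # auditory likelihood
mu_v, sig_v = -1.0, 1.5   # visual prior

# Closed-form product of two Gaussians: precision-weighted mean
precision = 1/sig_a**2 + 1/sig_v**2
mu_post = (mu_a/sig_a**2 + mu_v/sig_v**2) / precision
var_post = 1/precision

# Numerical check via pointwise multiplication on a grid
x = np.arange(-8, 9, 0.1)
post = norm.pdf(x, mu_a, sig_a) * norm.pdf(x, mu_v, sig_v)
post /= post.sum() * 0.1  # normalize, accounting for the step size

mu_numeric = np.sum(x * post) * 0.1
print(mu_post, mu_numeric)  # both close to 1.0
```

With equal variances, as here, the posterior mean is simply the midpoint of the two means, but the posterior variance is half of each input variance.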
For this specific example we did not use a Bayesian prior for simplicity, although it would be a good idea in a practical modeling study. **Main course preview:** On Week 3 Day 1 (W3D1) there will be a whole day devoted to examining whether the brain uses Bayesian inference. Is the brain Bayesian?! --- # Summary ``` #@title Video 6: Outro from IPython.display import YouTubeVideo video = YouTubeVideo(id= "BL5qNdZS-XQ", width=854, height=480, fs=1) print("Video available at https://youtu.be/" + video.id) video ``` Having done the different exercises you should now: * understand what the likelihood function is, and have some intuition of why it is important * know how to summarise the Gaussian distribution using mean and variance * know how to maximise a likelihood function * be able to do simple inference in both classical and Bayesian ways --- # Bonus For more reading on these topics see: Textbook ## Extra exercise: Bayes Net If you have the time, here is another extra exercise. Bayes Net, or Bayesian Belief Networks, provide a way to make inferences about multiple levels of information, which would be very difficult to do in a classical frequentist paradigm. We can encapsulate our knowledge about causal relationships and use this to make inferences about hidden properties. We will try a simple example of a Bayesian Net (aka belief network). Imagine that you have a house with an unreliable sprinkler system installed for watering the grass. This is set to water the grass independently of whether it has rained that day. We have three variables, rain ($r$), sprinklers ($s$) and wet grass ($w$). Each of these can be true (1) or false (0). See the graphical model representing the relationship between the variables. 
![image.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAJEAAAB2CAYAAADImXEZAAAAAXNSR0IArs4c6QAAKcJJREFUeAHtnemPXFma1t/YIzK2zAjn7t3YripXTdHd00CroRCjaaSeRvRoJBjNwKgRIyT+EyRAfAch+ILEJ76MYGgxrUFAU91dVJXLZVe5vC/pXJwZmRmRGZGRsfN7zo1rZ2U50w4v5YzrPHZkbPfeuOc9z3n3855Qj2aH7ZACL0CB8Auce3jqIQUcBQ5BdAiEF6bAIYhemISHFzgE0SEGXpgC0Re+grtAz7qo59LQwxbib8d63Y6FwhHr8jbijtHne2GW4zk5xPdcSS+4HmfxHOIL9wjvda67eLD/9E0fZwKFetCqbb1QyCI8HJEczUWC/oHuvb5T61mHE8M6mUcXMoZDGqWXR8+XAqIuN6f+uMHmtjsgKhQBEL2ubVbbtrLZtGqja61601qdNp2JWCzKIxmxkVTcjmRiNpqKWCTSslBbnQtxfpfz6Tgo7Onaoscb2hw0oGUIAHW7XTf8+myj3rD1ass2al2rN1vWbrWZfAAmHLdEPGrpWMjGclEbzUTd+w4nhThfg9XjWmohB0T38rn/MO4aqRdr3BaDDOcRO4L71FsdW1yu2OLKltVBQBymkgjzHAtbFDQwNwCaAaiQNVt0iPPAlE0URmxmYsSSOkF3xbFdQBTmmi+hry/Wydd4tiYj/MdxkEarZYvr2/Zwed3qjZZFwkmLRLtMyjCv4TCabRzfbjNpoet2m0kL5ymkR+z4ZAZAJXgbdZNSQ39gQOTGG/HVoQeLa9t2Z6kC++xaLhGykRizICYuI3wJEPSJE7pMC4QYHW1bo920RjNuG9UOX4Zsajxjs4ApzvEexCMQ8DWO4uv+aWjZYtI9XKvag5WKNZs9y6YT0NYsGe1ZnEkWFYDg/o5MGgsI10J2NaCzJvXmdtu2uUghM2KnprOWGYkDNo2cSP5ixH0uTuRmhmSMh2f+dqzF/dxcqNr6WtnGRsKWy2ZsBJbKfye6hCKd4YkmZoDOYqaIC2nWNNst22q2bbPetnKtbWlE3bnZoiWTUWlJdDTi+uwJu/5P8xS81nUiqYfID4U6TMaQA8LthVVb32hYmok5lo5aKhFzkzMaicKJ4CwOCJ6eI2hIt5RkaHeaiLmONRigCkBaQ73o9sJ2aiZrxfyIibohOJXOeV4wPReIOsgiBwixWH5cLPPqfNm269s2m0+C8oSlknFwwwxBTjlFTp3USTsbdy4ZLiB1Oh1rM4N0jSpAKlXRnzjh7OwYs4dZQ5M4l3Iegi3tvpS+CUprMfAh+hpFRG1ud+3OvTVo07N8Hv0m1WNiJeE80BfwOOXaibHHFBEgNOO8yS6uD3AgXgO9aQs9aq3WslWue6KYtakxgIS0CDM+Em96hCUuBmjPpVhrEHtwDiG3yUz5an7dGtzcqYksinIEnSaFjgN4dnXuG/dFv93NI9/CsOJoL2oxzosnmnCvqJU2a3Znfs3s2LgVRgRGOslF3Kz5xsWC8YHXN3GhNjpPx248KIlIDHbSsnDncNxTmqMRZBntiXoNQFCTLqmmydyRQg7XisdigHDbEhg78ys1fsdsupgRGxoYPO7i/Hk+EAm1sEA0GLu1tA66t+zsVN4SsNg4qI4BiDA3/qzNZ6N6DnG+RF4u00FRTKBENuzO3KolT46jAwh1YkfPfu1nvYeDchw9hMuiA0GLm3NMIAAxWUgBIOmWcfQeuM9TOIVPT79PDmi8iQEgcS+NTdSJsLrNPawwXhg1Y1n3W/45gzw/F4iQQW52LK1tYIFV7Z2ZjGUyaYuBfKcA92fAIDfiHyv5HMWKC4VSdDYBL47afGnTri9s2vsnco4NOX4EkYPWHnOVntMvt7C+jh/J2mg6CQA0+JBdnd7R992AeRJN3OTsnyPOHxFHSiVtCs7eRFe6M1/DBRCx7MiIN34cMwh1BxN+/
TvUIDbx+9ycW7fJ3IgV8lkUaDiQj/QdnXxSp572GTBCHMLVYN1pTNJ8Nmnl8qatVeq4jWSBOOXoaZcZyu97iJ1qvWMrpYoV6Hcug3iHA0kkhUPMeQwM8aoXaREkRSwSt2QKlwqirI3z8uHqFgZOi5EFQJ40fOafeC4QiRE9XK2g8YftaDFmoXiMQfeZ2ot10L9zzR6xbYnI0XzKcvGe3ZrfcNYGHkn/sMA8+1xIovzOfNWyCZToLLolz2G07Mcc58XpKyVbem0Yf9EI1z9aiNvSasM2t7YB0YAIYgSeC0TtZtduLZZRyJKWGkn13e+vZjylHI5g6R3B6itvN52Zi1vt1fzYa7yqD5JqrYFBsYVzMGHpFLplCAV6AP3yWbrgW196lo40MZqCw3dtpYzXW5bcgDj12ce+v/1oljitP2SVrSamZ8jeG8OJCIv1CbDXRdo9TPdq2VZR4ra5yQjWW2FqwsZSMczPbp9GT74VdTTGI5WSc22LGVPHUsGa6Df/3vz3Q/fMxPdmv7zShiGxBVfvomOmnB4UweG6v4bixcasU7WlhRK+to4VJmZQARL9eJl0qb1RIf1I1vSR/IYtlZs4eruWwAMuL/eztieP3K6zd4KkzaCvoJ/IkSh/kNfBvW9ShOk0q3b1o7+0f/Uv/qPVcbMmojE7+72/YT/7Z39qp8dzzqG2nxolT2wiEbc855Y2tvEp9R5Nzp33tuu2h+OtSMfklHOwh4d5ESMiy+RKJqUiMJCPI9h79Adn5Oai/dl/+g/257/83Cqb2/adH/49+9nP/sCOTowia2RDo0/tqUdxA/jyJkfTNr+8YVUYxChccO8R/eZtPDvc+ud2cQiuc6NjUviiKfQ8ZOs+P6mb6RLvWS2tWK0Rsd/74z+x3/3RD6z06Yf2F395yemIT5PD+oU4dmgaIDVx3StmFJzm6SByyG7jsd8mpJFJydMv55/m+NOH8/6nv7D/9j+v2IUf/Mj++B//oZ07edRihErE2iT49xsfR0eOyyY9y6yOyiBn8iDtmTiRf0HNev1ADW/neE6+hph1iLwrKr9XE4mkBncbxHuKJ+yv/fADiz74xK79n/9rFYClju7HhXRdXcMixIgw/WU91BttuKA+fTqBderBbv0+oFG3FIVnkjpLV742caew+qm5/s2+SpQrJURB6ofrq3aiHbUL3//AWcw53AIhrinPt2j3pPP5kMZ3hFekdmEfkQ3QduEoz5XpHfG0vwOBSDetTm7DWVKpPNcWgJgxvPpmFx//dBhzrtvdsoUbF+3f/5t/abF6zVayM/bj3z5rEa7XRXneR2xzfcQXl4uiBOp1k1jQ/r/4+LeH4pV0TQjYwMSOYF8n5BNyTiEpC6LuHo1zFDKaPPuB/eHvXbX/8b/+i/3rLz+zn/7RP7K/89fftQzgaQFE0Q4KP/EimpSaxVFlAoDVjgPyYJxoYHGmLunG8b/z49yCAoVPvD3vQ30nVi2ghOFJ9c11u3/vgVlq0s6fOU20mQOUKbVPE/8RS45wjR5TSwlvgWoiktI3MDrcoGpGOfasLzT4ev5mE8R0WGZi3D74/T+xf/JP/9TOZ6r2n//tv7OPLt0kVkYAF9qG9zG3dGVHXUx+F4gVdxOoB2j7j94TLqSbjhJ+UAKUKS9FH+hO9mnEpQFbzI5Mn7W//w//yH76kx9aZO6afXrtoWOjXgx57wtIJ5JZr6i/JqYcb+7F3qcM1zeiIRxd8UPXRebIfsO4c5AVT9yu1Sw/OmXf/5u/Yz/+g79r3ZVVu333odXbXIgR7mHd7teYnoyQwljwKwempwzorosNJM68c0kuw7rabmzzdtTDj2PHT/5hEUM6npE3FImO2LGzF+wtXAM3v7xsP/+vP7e//b1/bvmkurFP82gM14IL8VsK0joqP/kn97nQwfvKgaX/J4GVpMFsidPuQ9OvW6Qhu/LRX9if/eKideJJqyzftWo6ZxNTGXRWLiyFNMazgLpnk7oBY2iTHIgo3
There is a table below describing all the relationships between $w$, $r$, and $s$. Obviously the grass is more likely to be wet if either the sprinklers were on or it was raining. On any given day the sprinklers have probability 0.25 of being on, $P(s = 1) = 0.25$, while there is a probability 0.1 of rain, $P(r = 1) = 0.1$. The table then lists the conditional probabilities for the grass being wet, given a rain and sprinkler condition for that day.

\begin{array}{|l|l||ll|}
\hline
r & s & P(w=0|r,s) & P(w=1|r,s)\\
\hline
0 & 0 & 0.999 & 0.001\\
0 & 1 & 0.1 & 0.9\\
1 & 0 & 0.01 & 0.99\\
1 & 1 & 0.001 & 0.999\\
\hline
\end{array}

You come home and find that the grass is wet; what is the probability the sprinklers were on today (you do not know whether it was raining)?

We can start by writing out the joint probability:

$P(r,w,s)=P(w|r,s)P(r)P(s)$

The conditional probability is then:

$P(s|w)=\frac{\sum_{r} P(w|s,r)P(s)P(r)}{P(w)}=\frac{P(s)\sum_{r} P(w|s,r)P(r)}{P(w)}$

Note that we are summing over all possible conditions for $r$, as we do not know whether it was raining. Specifically, we want to know the probability of the sprinklers having been on given the wet grass, $P(s=1|w=1)$:

$P(s=1|w=1)=\frac{P(s=1)\,(P(w=1|s=1,r=1)P(r=1)+P(w=1|s=1,r=0)P(r=0))}{P(w=1)}$

where

\begin{eqnarray}
P(w=1) &=& P(s=1)\,(P(w=1|s=1,r=1)P(r=1) + P(w=1|s=1,r=0)P(r=0))\\
&+& P(s=0)\,(P(w=1|s=0,r=1)P(r=1) + P(w=1|s=0,r=0)P(r=0))
\end{eqnarray}

This code has been written out below; you just need to insert the right numbers from the table.
```
##############################################################################
# TODO for student: Write code to insert the correct conditional probabilities
# from the table; see the comments to match variable with table entry.
# Comment out the line below to test your solution
raise NotImplementedError("Finish the simulation code first")
##############################################################################

Pw1r1s1 = ...  # the probability of wet grass given rain and sprinklers on
Pw1r1s0 = ...  # the probability of wet grass given rain and sprinklers off
Pw1r0s1 = ...  # the probability of wet grass given no rain and sprinklers on
Pw1r0s0 = ...  # the probability of wet grass given no rain and sprinklers off
Ps = ...       # the probability of the sprinkler being on
Pr = ...       # the probability of rain that day

# Uncomment once variables are assigned above
# A = Ps * (Pw1r1s1 * Pr + (Pw1r0s1) * (1 - Pr))
# B = (1 - Ps) * (Pw1r1s0 * Pr + (Pw1r0s0) * (1 - Pr))
# print("Given that the grass is wet, the probability the sprinkler was on is: " +
#       str(A/(A + B)))
```

[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W0D5_Statistics/solutions/W0D5_Tutorial2_Solution_204db048.py)

The probability you should get is about 0.7522.

Your neighbour now tells you that it was indeed raining today, $P(r = 1) = 1$, so what is now the probability the sprinklers were on? Try changing the numbers above.

## Think! Bonus: Causality in the Brain

In a causal structure this is the correct way to calculate the probabilities. Do you think this is how the brain solves such problems? Would it be different for tasks involving novel stimuli (e.g. for someone with no previous exposure to sprinklers), as opposed to common stimuli?

**Main course preview:** On W3D5 we will discuss causality further!
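For reference, the full calculation with the values transcribed from the table can be written out directly; it mirrors the commented-out lines in the exercise cell above:

```python
# Sanity check for the sprinkler problem, using the table's values.
Pw1r1s1 = 0.999  # P(w=1 | r=1, s=1)
Pw1r1s0 = 0.99   # P(w=1 | r=1, s=0)
Pw1r0s1 = 0.9    # P(w=1 | r=0, s=1)
Pw1r0s0 = 0.001  # P(w=1 | r=0, s=0)
Ps = 0.25        # P(s=1)
Pr = 0.1         # P(r=1)

# marginalize over rain, once with sprinklers on (A) and once off (B)
A = Ps * (Pw1r1s1 * Pr + Pw1r0s1 * (1 - Pr))
B = (1 - Ps) * (Pw1r1s0 * Pr + Pw1r0s0 * (1 - Pr))
print(round(A / (A + B), 4))  # → 0.7522
```

Setting `Pr = 1` instead, as in the neighbour's report of rain, drops the result to about 0.25: once rain explains the wet grass, the sprinklers become much less likely.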
# Web Scraping: Selenium

Often the data we want is publicly available, but not in a form that is easy to use. That is where web scraping comes in: we can use it to pull the data we want into a convenient format that can then be worked with.

Below I will show how to extract information of interest from a website using the Selenium package in Python. Selenium lets us drive a browser window and interact with a website programmatically. Selenium also provides several methods that make extracting data easy.

In this Jupyter Notebook we will be using Python 3 on Windows.

First of all, we will need to download a driver. We will use ChromeDriver for Google Chrome. For a full list of drivers and supported platforms, see [Selenium](https://www.selenium.dev/downloads/). If you want to use Google Chrome, go to [chrome](https://chromedriver.chromium.org/) and download the driver that matches your current version of Google Chrome.

How do you find out which version of Chrome you are running? Simply paste the following link into Chrome's address bar: chrome://settings/help

Before we begin, if you already know BeautifulSoup you may wonder how it differs from Selenium. Unlike BeautifulSoup, Selenium does not work on the page's raw HTML source text; instead, it loads the page in a browser without a user interface. The browser then interprets the page's source code and builds a Document Object Model (DOM) from it. This standardized interface makes it possible to exercise user interactions, for example simulating clicks and filling out forms automatically. The changes to the page that result from those actions are reflected in the DOM.
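To make the contrast concrete (this is not part of the original tutorial), here is what purely static parsing looks like using only Python's standard-library `html.parser`, standing in for BeautifulSoup; the HTML snippet is invented for illustration. A static parser only ever sees the text it is handed: nothing is rendered, no JavaScript runs, and no DOM updates occur.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href attributes from <a> tags in a static HTML string."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) tuples for the tag being opened
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

html = '<div><a href="/filing/1">One</a><a href="/filing/2">Two</a></div>'  # toy snippet
parser = LinkCollector()
parser.feed(html)
print(parser.links)  # → ['/filing/1', '/filing/2']
```

Selenium is the better fit precisely when the content you need only exists after the browser has built and mutated the DOM.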
The structure of the web scraping process with Selenium looks like this:

URL → HTTP request → HTML → Selenium → DOM

## Let's start by importing the libraries we will use:

```
from selenium import webdriver
import urllib3  # urllib3 is a powerful, easy-to-use HTTP client for Python
import re       # regular expressions
import time
import pandas as pd
```

The driver object is what we will work with from now on:

```
# specify the path to our freshly downloaded driver:
chrome_driver_path = 'chromedriver.exe'
options = webdriver.ChromeOptions()

# create the driver we will use throughout the scraping session:
driver = webdriver.Chrome(executable_path = chrome_driver_path, options = options)

# state the URL of the web page we want to access:
url = 'https://insolvencyinsider.ca/filing/'

# the driver object lets us alter the state of the page
driver.get(url)
```

Now, suppose we want to click the "Load more" button. Selenium provides several methods for locating elements on the page. We will use the find_element_by_xpath() method to create a button object that we can then interact with:

/html/body/div[2]/div/main/div/div/div/button

```
loadMore = driver.find_element_by_xpath(xpath="/html/body/div[2]/div/main/div/div/div/button")
```

Before continuing, we will need to know how many pages there are so we know how many times to click the button. We will need a way to extract the website's source code. Fortunately, this process is relatively straightforward with the urllib3 and re libraries.

```
url = "https://insolvencyinsider.ca/filing/"
http = urllib3.PoolManager()
r = http.request("GET", url)
text = str(r.data)
```

```text``` is now a string. Next, we need a way to extract total_pages from our text string. Print text to see how we can extract it with a regular expression using the re package.
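Before running the pattern on the real page source, it can help to try it on a toy string. The `sample` value below is a made-up stand-in for the relevant fragment of `text`:

```python
import re

# made-up stand-in for the fragment of `text` that contains the page count
sample = '..."current_page":1,"total_pages":88,"data":[...'

match = re.search(pattern=r'"total_pages":\d+', string=sample)
print(match.group(0))  # → "total_pages":88

# pull the integer out of the matched substring
total_pages = int(re.search(r"\d+", match.group(0)).group(0))
print(total_pages)  # → 88
```

The page count (88 at the time of writing) is what drives the number of "Load more" clicks in the next step.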
We can get total_pages like this:

```
totalPagesObj = re.search(pattern='"total_pages":\d+', string=text)
totalPagesStr = totalPagesObj.group(0)
totalPages = int((re.search(pattern="\d+", string=totalPagesStr)).group(0))
```

The search method takes a pattern and a string. In this case our pattern is '"total_pages":\d+'. If you are not familiar with RegEx, all this means is that we are looking for the string "total_pages": followed by one or more digits after the colon. \d matches a digit between 0 and 9, while + indicates that Python should look for one or more of the preceding regular expression. You can read more about the re package here.

The search() method returns a Match object. re provides the group() method, which returns one or more subgroups of the match. We pass 0 as the argument to indicate that we want the entire match. The third line simply extracts the integer corresponding to total_pages from the string.

```
print(totalPagesObj)
print(totalPagesStr)
print(totalPages)
```

With that complete, we can now load every page of Insolvency Insider. We can click the Load more button by calling the object's click() method. We wait three seconds between clicks so as not to overwhelm the website. Remember that there are 88 pages in total, but the first one is already loaded, so we only need 88 - 1 clicks:

```
for i in range(totalPages - 1):
    loadMore.click()
    time.sleep(3)
```

Once you run this, you should see the Load more button being clicked and the remaining pages loading. Once every page has loaded, we can start scraping the content. Now, scraping certain elements, such as the filing name, the date, and the hyperlink (href), is fairly straightforward.
We can use Selenium's find_elements_by_class_name() and find_elements_by_xpath() methods (note the extra ```s``` after element):

filing-name
filing-date
//*[@id='content']/div[2]/div/div[1]/h3/a

```
filingNamesElements = driver.find_elements_by_class_name("filing-name")
filingDateElements = driver.find_elements_by_class_name("filing-date")
filingHrefElements = driver.find_elements_by_xpath("//*[@id='content']/div[2]/div/div[1]/h3/a")
```

We would also like to know the filing metadata, i.e., the filing type, the company's industry, and the province where it operates. Extracting this data takes a bit more work.

//*[@id='content']/div[2]/div[%d]/div[2]/div[1]

```
filingMetas = []
# xpath indices are 1-based, so iterate from 1 to len(filingNamesElements) inclusive
for i in range(1, len(filingNamesElements) + 1):
    filingMetai = driver.find_elements_by_xpath(("//*[@id='content']/div[2]/div[%d]/div[2]/div[1]" %(i)))
    for element in filingMetai:
        filingMetaTexti = element.text
        filingMetas.append(filingMetaTexti)
```

From each element of filingMetas we can extract the filing type, the industry, and the province, like so:

```
metaDict = {"Filing Type": [], "Industry": [], "Province": []}

for filing in filingMetas:
    filingSplit = filing.split("\n")

    for item in filingSplit:
        itemSplit = item.split(":")

        if itemSplit[0] == "Filing Type":
            metaDict["Filing Type"].append(itemSplit[1])
        elif itemSplit[0] == "Industry":
            metaDict["Industry"].append(itemSplit[1])
        elif itemSplit[0] == "Province":
            metaDict["Province"].append(itemSplit[1])

    # independent if-checks (not elif), so a filing missing several fields
    # gets an "NA" appended to every missing list, keeping the lists aligned
    if "Filing Type" not in filing:
        metaDict["Filing Type"].append("NA")
    if "Industry" not in filing:
        metaDict["Industry"].append("NA")
    if "Province" not in filing:
        metaDict["Province"].append("NA")

for key in metaDict:
    print(len(metaDict[key]))
```

Now, we still have to put our filing names and dates into lists.
We do this by appending each element's text to a list, using the text attribute from before:

```
filingName = []
filingDate = []
filingLink = []

# for each element in the list of filing name elements, append the
# element's text to the filing names list.
for element in filingNamesElements:
    filingName.append(element.text)

# for each element in the list of filing date elements, append the
# element's text to the filing dates list.
for element in filingDateElements:
    filingDate.append(element.text)

for link in filingHrefElements:
    if link.get_attribute("href"):
        filingLink.append(link.get_attribute("href"))
```

Once we have that, we are ready to put everything into a dictionary and then create a pandas DataFrame:

```
# Create a final dictionary with the filing names and dates.
fullDict = {
    "Filing Name": filingName,
    "Filing Date": filingDate,
    "Filing Type": metaDict["Filing Type"],
    "Industry": metaDict["Industry"],
    "Province": metaDict["Province"],
    "Link": filingLink
}

# Create a DataFrame.
df = pd.DataFrame(fullDict)
df["Filing Date"] = pd.to_datetime(df["Filing Date"], infer_datetime_format=True)
df
```

------------------------

# Now something more visual

```
driver = webdriver.Chrome(executable_path = chrome_driver_path, options = options)

# state the URL of the web page we want to access:
url = 'https://www.filmaffinity.com/es/main.html'

# the driver object lets us alter the state of the page
driver.get(url)
```

The Filmaffinity page has opened. But... we have run into a pop-up asking us to accept cookies.

1. Find the button
2. Click the button

Let's get the button out of the way so we can continue:

```
elements_by_tag = driver.find_elements_by_tag_name('button')
elements_by_class_name = driver.find_elements_by_class_name('css-v43ltw')
element_by_xpath = driver.find_element_by_xpath('/html/body/div[1]/div/div/div/div[2]/div/button[2]')
```

Once we have the elements, we can do several things with them. We can list all the attributes they have:

```
dir(element_by_xpath)  # get all of its methods and attributes
```

We can check what kind of element it is (its tag):

```
element_by_xpath.tag_name
```

We can get the value it holds (the text):

```
element_by_xpath.text

for i in range(0, len(elements_by_tag)):
    print(elements_by_tag[i].text)
```

We can even save an image of the element:

```
type(element_by_xpath)  # it is a 'WebElement'; its methods are listed in the documentation

# save the image associated with the xpath as 'mi_imagen.png'
element_by_xpath.screenshot('mi_imagen.png')
```

Let's look at which elements we found by tag:

```
for index, element in enumerate(elements_by_tag):
    print('Element:', index)
    print('The text of element', index, 'is', element.text)
    print('The tag of element', index, 'is', element.tag_name)
    element.screenshot('mi_imagen' + str(index) + '.png')
```

Enough fooling around, let's carry on. We assign the element at tag index [2] to the variable boton_aceptar (the accept button):

```
boton_aceptar = elements_by_tag[2]
```

If the element is interactive, we can do more things with it beyond the ones above.
For example, clicking:

```
boton_aceptar.click()
```

Let's search for a film by title:

```
from selenium.webdriver.common.keys import Keys
```

/html/body/div[2]/div[1]/div/div[2]/form/div/input

```
buscador = driver.find_element_by_xpath('/html/body/div[2]/div[1]/div/div[2]/form/div/input')
buscador.send_keys('')  # type your search term between the quotes
buscador.clear()

# once the query is typed we should be able to trigger it:
buscador.send_keys(Keys.ENTER)

# go back to the previous page
driver.back()
```

### Let's find all the films premiering next Friday

1. Grab the containers in the sidebar:

```
menu_lateral = driver.find_element_by_id('lsmenu')
menu_lateral

mis_secciones = menu_lateral.find_elements_by_tag_name('a')
```

2. See which one we need to keep:

```
for a in mis_secciones:
    if a.text == 'Próximos estrenos':
        a.click()
        break
```

We access the central container, where the weekly premieres we are after appear, exactly as we did before:

```
cajon_central = driver.find_elements_by_id('main-wrapper-rdcat')
type(cajon_central)

for semana in cajon_central:
    print(semana.find_element_by_tag_name('div').text)
    print(semana.find_element_by_tag_name('div').get_attribute('id'))

for semana in cajon_central:
    fecha = semana.find_element_by_tag_name('div').get_attribute('id')
    if fecha == '2022-02-25':
        break
```

We look for a way to reach the films:

```
caratulas = semana.find_elements_by_class_name('')  # fill in the class name of the poster elements

lista_pelis = []
for peli in caratulas:
    lista_pelis.append(peli.find_element_by_tag_name('a').get_attribute('href'))

lista_pelis
```

Once we have all the URLs, let's see what to do with each of them:

```
# Go to the first film's page
driver.get(lista_pelis[0])
```

Let's walk through the process we would follow for each of the films:

1. Pull out all the information we care about:

```
# title, rating, number of votes, and the technical sheet (ficha técnica)
titulo = driver.find_element_by_xpath('/html/body/div[4]/table/tbody/tr/td[2]/div[1]/div[4]/h1/span').text
nota = driver.find_element_by_xpath('/html/body/div[4]/table/tbody/tr/td[2]/div[1]/div[4]/div/div[2]/div[2]/div[1]/div[2]/div[1]').text
votos = driver.find_element_by_xpath('/html/body/div[4]/table/tbody/tr/td[2]/div[1]/div[4]/div/div[2]/div[2]/div[1]/div[2]/div[2]/span').text
ficha = driver.find_element_by_xpath('/html/body/div[4]/table/tbody/tr/td[2]/div[1]/div[4]/div/div[3]/dl[1]')

titulo
```

2. Build lists from the technical sheet:

```
# the field names use the tag 'dt' and the values use 'dd'
ficha_names = []
ficha_values = []

for name in ficha.find_elements_by_tag_name('dt'):
    ficha_names.append(name.text)

for value in ficha.find_elements_by_tag_name('dd'):
    ficha_values.append(value.text)

ficha_values
```

3. Create a DataFrame with the info:

```
columns = ['Titulo', 'Nota', 'Votos']
columns.extend(ficha_names)
len(columns)

values = [titulo, nota, votos]
values.extend(ficha_values)
len(values)

pd.DataFrame([values], columns=columns)
```

Now let's write a function that does all of this for each of the films:

```
def sacar_info(driver):
    titulo = driver.find_element_by_xpath('/html/body/div[4]/table/tbody/tr/td[2]/div[1]/div[4]/h1/span').text
    try:
        nota = driver.find_element_by_xpath('/html/body/div[4]/table/tbody/tr/td[2]/div[1]/div[4]/div/div[2]/div[2]/div[1]/div[2]').text
        votos = driver.find_element_by_xpath('/html/body/div[4]/table/tbody/tr/td[2]/div[1]/div[4]/div/div[2]/div[2]/div[1]/div[2]/div[2]').text
    except:
        nota = None
        votos = None
    ficha = driver.find_element_by_xpath('/html/body/div[4]/table/tbody/tr/td[2]/div[1]/div[4]/div/div[3]/dl[1]')
    return titulo, nota, votos, ficha

def sacar_ficha(ficha):
    ficha_names = []
    ficha_values = []
    for name in ficha.find_elements_by_tag_name('dt'):
        ficha_names.append(name.text)
    for value in ficha.find_elements_by_tag_name('dd'):
        ficha_values.append(value.text)
    return ficha_names, ficha_values

def montar_df(ficha_names, ficha_values, titulo, nota, votos):
    columns = ['Titulo', 'Nota', 'Votos']
    columns.extend(ficha_names)
    values = [titulo, nota, votos]
    values.extend(ficha_values)
    return pd.DataFrame([values], columns = columns)

def nueva_pelicula(driver):
    titulo, nota, votos, ficha = sacar_info(driver)
    ficha_names, ficha_values = sacar_ficha(ficha)
    df_peli = montar_df(ficha_names, ficha_values, titulo, nota, votos)
    return df_peli
```

Let's see how to move between browser windows.

Open a new window:

```
driver.execute_script('window.open("");')
```

Move to another window:

```
driver.switch_to.window(driver.window_handles[0])
```

Close a window:

```
driver.close()
```

Once we close a window, we have to tell the driver which window to switch to:

```
driver.switch_to.window(driver.window_handles[-1])
```

Knowing how to move between windows, and how to extract everything we need from each page, let's build our DataFrame:

```
# open every link in lista_pelis
for link in lista_pelis:
    driver.execute_script('window.open("' + link + '");')
    driver.switch_to.window(driver.window_handles[-1])
    driver.get(link)

# build a DataFrame with every film premiering next week:
df_peliculas = pd.DataFrame()

for link in lista_pelis:
    driver.execute_script('window.open("");')
    driver.switch_to.window(driver.window_handles[-1])
    driver.get(link)
    nueva_peli = nueva_pelicula(driver)
    df_peliculas = df_peliculas.append(nueva_peli)

df_peliculas.info()
df_peliculas
```

We now have a DataFrame with every film premiering next Friday.
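One last housekeeping step that the tutorial leaves implicit: persist the scraped table and release the browser. The snippet below uses a tiny stand-in DataFrame so it runs on its own; with the real data you would call `df_peliculas.to_csv(...)` directly.

```python
import pandas as pd

# Tiny stand-in for df_peliculas; the real frame also carries the full
# ficha técnica columns scraped from each film page.
df_peliculas = pd.DataFrame(
    [["Peli A", "7,2", "1.234"], ["Peli B", None, None]],
    columns=["Titulo", "Nota", "Votos"],
)

df_peliculas.to_csv("estrenos.csv", index=False)  # persist the scrape
reloaded = pd.read_csv("estrenos.csv")            # round-trip check
print(len(reloaded))  # → 2
```

After saving, `driver.quit()` (rather than `driver.close()`) shuts down the whole browser session, including every window opened with `window.open`.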
<img src="NotebookAddons/blackboard-banner.png" width="100%" />
<font face="Calibri">
<br>
<font size="7"> <b> GEOS 657: Microwave Remote Sensing</b> </font>

<font size="5"> <b>Lab 9: InSAR Time Series Analysis using GIAnT within Jupyter Notebooks</b> </font>
<br>
<font size="4"> <b> Franz J Meyer & Joshua J C Knicely; University of Alaska Fairbanks</b> <br>
<img src="NotebookAddons/UAFLogo_A_647.png" width="170" align="right" /><font color='rgba(200,0,0,0.2)'> <b>Due Date: </b>NONE</font>
</font>

<font size="3"> This Lab is part of the UAF course <a href="https://radar.community.uaf.edu/" target="_blank">GEOS 657: Microwave Remote Sensing</a>. The primary goal of this lab is to demonstrate how to process InSAR data, specifically interferograms, using the Generic InSAR Analysis Toolbox (<a href="http://earthdef.caltech.edu/projects/giant/wiki" target="_blank">GIAnT</a>) in the framework of *Jupyter Notebooks*.<br>

<b>Our specific objectives for this lab are to:</b>

- Learn how to prepare data for GIAnT.
- Use GIAnT to create maps of surface deformation.
- Understand its capabilities.
- Understand its limitations.
</font>
<br>
<font face="Calibri">
<font size="5"> <b> Target Description </b> </font>

<font size="3"> In this lab, we will analyze the volcano Sierra Negra. This is a highly active volcano on the Galapagos hotspot. The most recent eruption occurred from 29 June to 23 August 2018. The previous eruption occurred in October 2005, prior to the launch of the Sentinel-1 satellites, which will be the source of data we use for this lab. We will be looking at the deformation that occurred prior to the volcano's 2018 eruption. </font>

<font size="4"> <font color='rgba(200,0,0,0.2)'> <b>THIS NOTEBOOK INCLUDES NO HOMEWORK ASSIGNMENTS.</b></font>
<br>
Contact me at fjmeyer@alaska.edu should you run into any problems.
</font>

```
import url_widget as url_w
notebookUrl = url_w.URLWidget()
display(notebookUrl)

from IPython.display import Markdown
from IPython.display import display

notebookUrl = notebookUrl.value
user = !echo $JUPYTERHUB_USER
env = !echo $CONDA_PREFIX
if env[0] == '':
    env[0] = 'Python 3 (base)'

if env[0] != '/home/jovyan/.local/envs/insar_analysis':
    display(Markdown(f'<text style=color:red><strong>WARNING:</strong></text>'))
    display(Markdown(f'<text style=color:red>This notebook should be run using the "insar_analysis" conda environment.</text>'))
    display(Markdown(f'<text style=color:red>It is currently using the "{env[0].split("/")[-1]}" environment.</text>'))
    display(Markdown(f'<text style=color:red>Select "insar_analysis" from the "Change Kernel" submenu of the "Kernel" menu.</text>'))
    display(Markdown(f'<text style=color:red>If the "insar_analysis" environment is not present, use <a href="{notebookUrl.split("/user")[0]}/user/{user[0]}/notebooks/conda_environments/Create_OSL_Conda_Environments.ipynb"> Create_OSL_Conda_Environments.ipynb </a> to create it.</text>'))
    display(Markdown(f'<text style=color:red>Note that you must restart your server after creating a new environment before it is usable by notebooks.</text>'))
```

<font face='Calibri'><font size='5'><b>Overview</b></font>
<br>
<font size='3'><b>About GIAnT</b>
<br>
GIAnT is a Python framework that allows rapid time series analysis of low amplitude deformation signals. It allows users to use multiple time series analysis techniques: Small Baseline Subset (SBAS), New Small Baseline Subset (N-SBAS), and Multiscale InSAR Time-Series (MInTS). As a part of this, it includes the ability to correct for atmospheric delays by assuming a spatially uniform stratified atmosphere.
<br><br>
<b>Limitations</b>
<br>
GIAnT has a number of limitations that are important to keep in mind as these can affect its effectiveness for certain applications. It implements the simplest time-series inversion methods.
Its single coherence threshold is very conservative in terms of pixel selection. It does not include any consistency checks for unwrapping errors. It has a limited dictionary of temporal model functions. It cannot correct for atmospheric effects due to differing surface elevations.
<br><br>
<b>Steps to use GIAnT</b><br>
Although GIAnT is an incredibly powerful tool, it requires very specific input. Because of the input requirements, the majority of one's effort goes to getting the data into a form that GIAnT can manipulate and to creating files that tell GIAnT what to do. The general steps to use GIAnT are below.

- Download Data
- Identify Area of Interest
- Subset (Crop) Data to Area of Interest
- Prepare Data for GIAnT
    - Adjust file names
    - Remove potentially disruptive default values (optional)
    - Convert data from '.tiff' to '.flt' format
- Create Input Files for GIAnT
    - Create 'ifg.list'
    - Create 'date.mli.par'
    - Make prepxml_SBAS.py
    - Run prepxml_SBAS.py
    - Make userfn.py
- Run GIAnT
    - PrepIgramStack.py*
    - ProcessStack.py
    - SBASInvert.py
    - SBASxval.py
- Data Visualization

<br>
The steps from PrepIgramStack.py and above have been completed for you in order to save disk space and computation time. This allows us to concentrate on the usage of GIAnT and data visualization. Some of the code to create the preparatory files (e.g., 'ifg.list', 'date.mli.par', etc.) has been included for your potential use.

More information about GIAnT can be found here: (<a href="http://earthdef.caltech.edu/projects/giant/wiki" target="_blank">http://earthdef.caltech.edu/projects/giant/wiki</a>).

<hr>
<font face="Calibri" size="5" color="darkred"> <b>Important Note about JupyterHub</b> </font>
<br><br>
<font face="Calibri" size="3"> <b>Your JupyterHub server will automatically shut down when left idle for more than 1 hour. Your notebooks will not be lost but you will have to restart their kernels and re-run them from the beginning.
You will not be able to seamlessly continue running a partially run notebook.</b> </font> <font face='Calibri'><font size='5'><b>0. Import Python Libraries:</b></font><br><br> <font size='3'><b>Import the Python libraries and modules we will need to run this lab:</b></font> ``` %%capture from datetime import date import glob import h5py # for is_hdf5 import os import shutil from osgeo import gdal import matplotlib.pyplot as plt import matplotlib.animation from matplotlib import rc import numpy as np from IPython.display import HTML import opensarlab_lib as asfn asfn.jupytertheme_matplotlib_format() ``` <font face='Calibri'><font size='5'><b>1. Transfer data to a local directory</b></font><br> <font size='3'>The data cube (referred to as a stack in the GIAnT documentation and code) and several other needed files have been created and stored in the GIAnT server. We will download this data to a local directory and unzip it. </font></font> <font face="Calibri" size="3"> Before we download anything, <b>create a working directory for this analysis and change into it:</b> </font> ``` path = f"{os.getcwd()}/2019/lab_9_data" if not os.path.exists(path): os.makedirs(path) os.chdir(path) print(f"Current working directory: {os.getcwd()}") ``` <font face = 'Calibri' size='3'>First step is to find the zip file and download it to a local directory. This zip file has been placed in the S3 bucket for this class. <br><br> <b>Display the contents of the S3 bucket:</b></font> ``` !aws s3 ls --region=us-west-2 --no-sign-request s3://asf-jupyter-data-west/ ``` <font face = 'Calibri' size='3'><b>Copy the desired file ('Lab9Files.zip') to your data directory:</b></font> ``` !aws s3 cp --region=us-west-2 --no-sign-request s3://asf-jupyter-data-west/Lab9Files.zip . 
``` <font face='Calibri'><font size='3'><b>Create the directories where we will perform the GIAnT analysis and store the data:</b></font> ``` stack_path = f"{os.getcwd()}/Stack" # directory GIAnT prefers to access and store data steps. if not os.path.exists(stack_path): os.makedirs(stack_path) ``` <font face='Calibri'><font size='3'><b>Extract the zipped file to path and delete it:</b></font> ``` zipped = 'Lab9Files.zip' asfn.asf_unzip(path, zipped) if os.path.exists(zipped): os.remove(zipped) ``` <font face='Calibri' size='3'>The files have been extracted and placed in a folder called 'Lab9Files'. <b>Move the amplitude image, data.xml, date.mli.par, and sbas.xml files to path and RAW-STACK.h5 to stack_path:</b></font> ``` temp_dir = f"{path}/Lab9Files" if not os.path.exists(f"{stack_path}/RAW-STACK.h5"): shutil.move(f"{temp_dir}/RAW-STACK.h5", stack_path) files = glob.glob(f"{temp_dir}/*.*") for file in files: if os.path.exists(file): shutil.move(file, path) if os.path.exists(temp_dir): os.rmdir(temp_dir) ``` <font face='Calibri'><font size='5'><b>2. Create Input Files And Code for GIAnT</b></font> <br> <font size ='3'>The code below shows how to create the input files and specialty code that GIAnT requires. For this lab, 'ifg.list' is not needed, 'date.mli.par' has already been provided, 'prepxml_SBAS.py' is not needed as the 'sbas.xml' and 'data.xml' files it would create have already been provided, and 'userfn.py' is not needed as we are skipping the step in which it would be used. <br>The files that would be created are listed below. <br> - ifg.list - List of the interferogram properties including master and slave date, perpendicular baseline, and sensor. - date.mli.par - File from which GIAnT pulls requisite information about the sensor. - This is specifically for GAMMA files. When using other interferogram processing techniques, an alternate file is required. 
- prepxml_SBAS.py
  - Python function to create an xml file that specifies the processing options to GIAnT.
  - This must be modified by the user for their particular application.
- userfn.py
  - Python function to map the interferogram dates to a physical file on disk.
  - This must be modified by the user for their particular application.

</font> </font>

<font face='Calibri' size='4'> <b>2.1 Create 'ifg.list' File </b> </font> </font> <br> <font face='Calibri' size='3'> This will create a simple 4-column text file that communicates network information to GIAnT. It will be created within the <b>GIAnT</b> folder. <br><br> <b>This step has already been done, so we will not actually create the 'ifg.list' file. This code is displayed for your potential future use.</b></font>

```
"""
# Get one of each file name. This assumes the unwrapped phase geotiff has been converted to a '.flt' file
files = [f for f in os.listdir(datadirectory) if f.endswith('_unw_phase.flt')]

# Get all of the master and slave dates.
masterDates,slaveDates = [],[]
for file in files:
    masterDates.append(file[0:8])
    slaveDates.append(file[9:17])

# Sort the dates according to the master dates.
master_dates,sDates = (list(t) for t in zip(*sorted(zip(masterDates,slaveDates))))

with open(os.path.join('GIAnT', 'ifg.list'), 'w') as fid:
    for i in range(len(master_dates)):
        masterDate = master_dates[i]  # pull out master Date (first set of numbers)
        slaveDate = sDates[i]         # pull out slave Date (second set of numbers)
        bperp = '0.0'                 # according to JPL notebooks
        sensor = 'S1'                 # according to JPL notebooks
        fid.write(f'{masterDate} {slaveDate} {bperp} {sensor}\n')  # write values to the 'ifg.list' file.
"""
```

<font face='Calibri'><font size='3'>You may notice that the code above sets the perpendicular baseline to a value of 0.0 m. This is not the true perpendicular baseline. That value can be found in the metadata file (titled '$<$master timestamp$>$_$<$slave timestamp$>$.txt') that comes with the original interferogram.
Generally, we would want the true baseline for each interferogram. However, since Sentinel-1 has such a short baseline, a value of 0.0 m is sufficient for our purposes. </font></font>

<font face='Calibri' size='4'> <b>2.2 Create 'date.mli.par' File </b></font> <br> <font face='Calibri' size='3'>As we are using GAMMA products, we must create a 'date.mli.par' file from which GIAnT will pull necessary information. If another processing technique is used to create the interferograms, an alternate file name and file inputs are required. <br><br> <b>Again, this step has already been completed and the code is only displayed for your potential future use.</b></font>

```
"""
# Create file 'date.mli.par'

# Get file names
files = [f for f in os.listdir(datadirectory) if f.endswith('_unw_phase.flt')]

# Get WIDTH (xsize) and FILE_LENGTH (ysize) information
ds = gdal.Open(datadirectory+files[0], gdal.GA_ReadOnly)
nLines = ds.RasterYSize
nPixels = ds.RasterXSize
trans = ds.GetGeoTransform()
ds = None

# Get the center line UTC time stamp; can also be found inside <date>_<date>.txt file and hard coded
dirName = os.listdir('ingrams')[0]  # get original file name (any file can be used; the timestamps are different by a few seconds)
vals = dirName.split('-')           # break file name into parts using the separator '-'
tstamp = vals[2][9:16]              # extract the time stamp from the 2nd datetime (could be the first)
c_l_utc = int(tstamp[0:2])*3600 + int(tstamp[2:4])*60 + int(tstamp[4:6])

rfreq = 299792458.0 / 0.055465763   # radar frequency; speed of light divided by the radar wavelength of Sentinel-1 in meters

# write the 'date.mli.par' file
with open(os.path.join(path, 'date.mli.par'), 'w') as fid:  # Method 1
    fid.write(f'radar_frequency: {rfreq} \n')   # when using GAMMA products, GIAnT requires the radar frequency; everything else is in wavelength (m)
    fid.write(f'center_time: {c_l_utc} \n')     # Method from Tom Logan's prepGIAnT code; can also be found inside <date>_<date>.txt file and hard coded
    fid.write( 'heading: -11.9617913 \n')       # inside <date>_<date>.txt file; can be hardcoded or set up so code finds it.
    fid.write(f'azimuth_lines: {nLines} \n')    # number of lines in direction of the satellite's flight path
    fid.write(f'range_samples: {nPixels} \n')   # number of pixels in direction perpendicular to satellite's flight path
"""
```

<font face='Calibri'><font size='4'><b>2.3 Make prepxml_SBAS.py</b> </font> <br> <font size='3'>We will create a prepxml_SBAS.py function and put it into our GIAnT working directory. Again, this is shown for anyone that may want to use GIAnT on their own.<br>If we do wish to change 'sbas.xml' or 'data.xml', this can be done by creating and running a new 'prepxml_SBAS.py'. </font> </font>

<font face='Calibri'> <font size='3'><b>2.3.1 Necessary prepxml_SBAS.py edits</b></font> <br> <font size='3'> GIAnT comes with an example prepxml_SBAS.py, but requires significant edits for our purposes. These alterations have already been made, so we don't have to do anything now, but it is good to know the kinds of things that have to be altered. The details of some of these options can be found in the GIAnT documentation. The rest must be found in the GIAnT processing files themselves, most notably the tsxml.py and tsio.py functions. <br>The following alterations were made: <br>

- Changed 'example' &#9658; 'date.mli.par'
- Removed 'xlim', 'ylim', 'ref_x_lim', and 'ref_y_lim'
  - These are used for clipping the files in GIAnT. As we have already done this, it is not necessary.
- Removed latfile='lat.map' and lonfile='lon.map'
  - These are optional inputs for the latitude and longitude maps.
- Removed hgtfile='hgt.map'
  - This is an optional altitude file for the sensor.
- Removed inc=21.
  - This is the optional incidence angle information.
  - It can be a constant float value or incidence angle file.
  - For Sentinel1, it varies from 29.1-46.0&deg;.
- Removed masktype='f4'
  - This is the mask designation.
  - We are not using any masks for this.
- Changed unwfmt='RMG' &#9658; unwfmt='GRD'
  - Read data using GDAL.
- Removed demfmt='RMG'
- Changed corfmt='RMG' &#9658; corfmt='GRD'
  - Read data using GDAL.
- Changed nvalid=30 &#9658; nvalid=1
  - This is the minimum number of interferograms in which a pixel must be coherent. A particular pixel will be included only if its coherence is above the coherence threshold, cohth, in more than nvalid number of interferograms.
- Removed atmos='ECMWF'
  - This is an atmospheric correction command. It depends on a library called 'pyaps' developed for GIAnT. This library has not been installed yet.
- Changed masterdate='19920604' &#9658; masterdate='20161119'
  - Use our actual masterdate.
  - I simply selected the earliest date as the masterdate.

</font>

<font face='Calibri' size='3'>Defining a reference region is a potentially important step. This is a region at which there should be no deformation. For a volcano, this should be some significant distance away from the volcano. GIAnT has the ability to automatically select a reference region, which we will use for this exercise. <br>If we look at the prepxml_SBAS.py code below, ref_x_lim and ref_y_lim, the pixel-based location of the reference region, are within the code but have been commented out. <br><br> <b>Define reference region:</b></font>

```
ref_x_lim, ref_y_lim = [0, 10], [95, 105]
```
<br><br> <b>This has already been completed but the code is here as an example script for creating XML files for use with the SBAS processing chain.</b></font> ``` ''' #!/usr/bin/env python import tsinsar as ts import argparse import numpy as np def parse(): parser= argparse.ArgumentParser(description='Preparation of XML files for setting up the processing chain. Check tsinsar/tsxml.py for details on the parameters.') parser.parse_args() parse() g = ts.TSXML('data') g.prepare_data_xml( 'date.mli.par', proc='GAMMA', #ref_x_lim = [{1},{2}], ref_y_lim=[{3},{4}], inc = 21., cohth=0.10, unwfmt='GRD', corfmt='GRD', chgendian='True', endianlist=['UNW','COR']) g.writexml('data.xml') g = ts.TSXML('params') g.prepare_sbas_xml(nvalid=1, netramp=True, demerr=False, uwcheck=False, regu=True, masterdate='{5}', filt=1.0) g.writexml('sbas.xml') ############################################################ # Program is part of GIAnT v1.0 # # Copyright 2012, by the California Institute of Technology# # Contact: earthdef@gps.caltech.edu # ############################################################ ''' ``` <font face='Calibri' size='3'><b>Set the master date and create a script for creating XML files for use with the SBAS processing chain: </b></font> ``` #files = [f for f in os.listdir(datadirectory) if f.endswith('_unw_phase.flt')] #master_date = min([files[i][0:8] for i in range(len(files))], key=int) master_date = '20161119' prepxml_SBAS_Template = ''' #!/usr/bin/env python """Example script for creating XML files for use with the SBAS processing chain. This script is supposed to be copied to the working directory and modified as needed.""" import tsinsar as ts import argparse import numpy as np def parse(): parser= argparse.ArgumentParser(description='Preparation of XML files for setting up the processing chain. 
Check tsinsar/tsxml.py for details on the parameters.') parser.parse_args() parse() g = ts.TSXML('data') g.prepare_data_xml( 'date.mli.par', proc='GAMMA', #ref_x_lim = [{1},{2}], ref_y_lim=[{3},{4}], inc = 21., cohth=0.10, unwfmt='GRD', corfmt='GRD', chgendian='True', endianlist=['UNW','COR']) g.writexml('data.xml') g = ts.TSXML('params') g.prepare_sbas_xml(nvalid=1, netramp=True, demerr=False, uwcheck=False, regu=True, masterdate='{5}', filt=1.0) g.writexml('sbas.xml') ############################################################ # Program is part of GIAnT v1.0 # # Copyright 2012, by the California Institute of Technology# # Contact: earthdef@gps.caltech.edu # ############################################################ ''' with open(os.path.join(path,'prepxml_SBAS.py'), 'w') as fid: fid.write(prepxml_SBAS_Template.format(path,ref_x_lim[0],ref_x_lim[1],ref_y_lim[0],ref_y_lim[1],master_date)) ``` <font face='Calibri'><font size='3'>To create a new 'sbas.xml' and 'data.xml' file, we would modify the above code to give new parameters and to write to the appropriate folder (e.g., to change the time filter from 1 year to none and to write to the directory in which we are working; 'filt=1.0' -> 'filt=0.0'; and 'os.path.join(path,'prepxml_SBAS.py') -> 'prepxml_SBAS.py' OR '%cd ~' into your home directory). Then we would run it below. </font></font> <font face='Calibri' size='4'> <b>2.4 Run prepxml_SBAS.py </b> </font> <br> <font face='Calibri' size='3'> Here we run <b>prepxml_SBAS.py</b> to create the 2 needed files</font> - data.xml - sbas.xml <font face='Calibri' size='3'> To use MinTS, we would run <b>prepxml_MinTS.py</b> to create</font> - data.xml - mints.xml <font face='Calibri' size='3'> These files are needed by <b>PrepIgramStack.py</b>. <br> We must first switch to the GIAnT folder in which <b>prepxml_SBAS.py</b> is contained, then call it. 
Otherwise, <b>prepxml_SBAS.py</b> will not be able to find the file 'date.mli.par', which holds necessary processing information. <br><br> <b>Create a variable holding the general path to the GIAnT code base and download GIAnT from the `asf-jupyter-data-west` S3 bucket, if not present.</b> <br> GIAnT is no longer supported (Python 2). This unofficial version of GIAnT has been partially ported to Python 3 to run this notebook. Only the portions of GIAnT used in this notebook have been tested. </font> ``` giant_path = "/home/jovyan/.local/GIAnT/SCR" if not os.path.exists("/home/jovyan/.local/GIAnT"): download_path = 's3://asf-jupyter-data-west/GIAnT_5_21.zip' output_path = f"/home/jovyan/.local/{os.path.basename(download_path)}" !aws --region=us-west-2 --no-sign-request s3 cp $download_path $output_path if os.path.isfile(output_path): !unzip $output_path -d /home/jovyan/.local/ os.remove(output_path) ``` <font face='Calibri' size='3'><b>Run prepxml_SBAS.py and check the output to confirm that your input values are correct:</b></font> ``` # !python $giant_path/prepxml_SBAS.py # this has already been done. data.xml and sbas.xml already exist ``` <font face='Calibri' size='3'><b>Make sure the two requisite xml files (data.xml and sbas.xml) were produced after running prepxml_SBAS.py.</b></font> <br><br> <font face='Calibri' size='3'><b>Display the contents of data.xml:</b></font> ``` if os.path.exists('data.xml'): !cat data.xml ``` <font face='Calibri' size='3'><b>Display the contents of sbas.xml:</b></font> ``` if os.path.exists('sbas.xml'): !cat sbas.xml ``` <font face='Calibri'><font size='4'><b>2.5 Create userfn.py</b></font> <br> <font size='3'>Before running the next piece of code, <b>PrepIgramStack.py</b>, we must create a python file called <b>userfn.py</b>. This file maps the interferogram dates to a physical file on disk. This python file must be in our working directory, <b>/GIAnT</b>. We can create this file from within the notebook using python. 
<br><br> <b>Again, this step has already been performed and is unnecessary, but the code is provided as an example.</b></font>

```
userfnTemplate = """
#!/usr/bin/env python
import os

def makefnames(dates1, dates2, sensor):
    dirname = '{0}'
    root = os.path.join(dirname, dates1+'-'+dates2)
    #unwname = root+'_unw_phase.flt'  # for potentially disruptive default values kept.
    unwname = root+'_unw_phase_no_default.flt'  # for potentially disruptive default values removed.
    corname = root+'_corr.flt'
    return unwname, corname
"""

with open('userfn.py', 'w') as fid:
    fid.write(userfnTemplate.format(path))
```

<font face='Calibri'><font size='5'><b>3. Run GIAnT</b></font> <br> <font size='3'>We have now created all of the necessary files to run GIAnT. The full GIAnT process requires 3 function calls.

- PrepIgramStack.py
  - After PrepIgramStack.py, we will actually start running GIAnT.
- ProcessStack.py
- SBASInvert.py
- SBASxval.py
  - This 4th function call is not necessary and we will skip it, but it provides some error estimation that can be useful.

<font face='Calibri' size='4'> <b>3.1 Run PrepIgramStack.py </b> </font> <br> <font face='Calibri' size='3'> Here we would run <b>PrepIgramStack.py</b> to create the files for GIAnT. This would read in the input data and the files we previously created and output an HDF5 file. As we do not actually need to call this, it is currently set up to display some help information.<br>

Inputs:
- ifg.list
- data.xml
- sbas.xml
- interferograms
- coherence files

Outputs:
- RAW-STACK.h5
- PNG previews under 'GIAnT/Figs/Igrams'

</font> <br> <font size='3'><b>Display some help information for PrepIgramStack.py:</b></font>

```
!python $giant_path/PrepIgramStack.py -h
```

<font size='3'><b>Run PrepIgramStack.py (in our case, this has already been done):</b></font>

```
#!python $giant_path/PrepIgramStack.py
```

<hr> <font face='Calibri'><font size='3'>PrepIgramStack.py creates a file called 'RAW-STACK.h5'.
<br><br> <b>Verify that RAW-STACK.h5 is an HDF5 file as required by the rest of GIAnT.</b></font>

```
raw_h5 = f"{stack_path}/RAW-STACK.h5"

if not h5py.is_hdf5(raw_h5):
    print(f"Not an HDF5 file: {raw_h5}")
else:
    print(f"Confirmed: {raw_h5} is an HDF5 file.")
```

<font face='Calibri' size='4'> <b>3.2 Run ProcessStack.py </b> </font> <br> <font face='Calibri' size='3'> This is an optional step. It performs atmospheric corrections and estimates orbit residuals. <br>

Inputs:
- HDF5 files from PrepIgramStack.py, RAW-STACK.h5
- data.xml
- sbas.xml
- GPS Data (optional; we don't have this)
- Weather models (downloaded automatically)

Outputs:
- HDF5 files, PROC-STACK.h5

These files are then fed into SBAS. </font> <br><br> <font face='Calibri' size='3'><b>Display the help information for ProcessStack.py:</b></font>

```
!python $giant_path/ProcessStack.py -h
```

<font face='Calibri' size='3'><b>Run ProcessStack.py:</b></font>

```
!python $giant_path/ProcessStack.py
```

<hr> <font face='Calibri'><font size='3'>ProcessStack.py creates a file called 'PROC-STACK.h5'. <br><br> <b>Verify that PROC-STACK.h5 is an HDF5 file as required by the rest of GIAnT:</b></font>

```
proc_h5 = f"{stack_path}/PROC-STACK.h5"

if not h5py.is_hdf5(proc_h5):
    print(f"Not an HDF5 file: {proc_h5}")
else:
    print(f"Confirmed: {proc_h5} is an HDF5 file.")
```

<font face='Calibri' size='4'> <b>3.3 Run SBASInvert.py </b></font> <br> <font face='Calibri' size='3'> This step performs the actual time-series inversion.

Inputs:
- HDF5 file, PROC-STACK.h5
- data.xml
- sbas.xml

Outputs:
- HDF5 file: LS-PARAMS.h5

<b>Display the help information for SBASInvert.py:</b> </font>

```
!python $giant_path/SBASInvert.py -h
```

<font face='Calibri' size='3'><b>Run SBASInvert.py:</b></font>

```
!python $giant_path/SBASInvert.py
```

<hr> <font face='Calibri'><font size='3'>SBASInvert.py creates a file called 'LS-PARAMS.h5'.
<br><br> <b>Verify that LS-PARAMS.h5 is an HDF5 file as required by the rest of GIAnT:</b></font>

```
params_h5 = f"{stack_path}/LS-PARAMS.h5"

if not h5py.is_hdf5(params_h5):
    print(f"Not an HDF5 file: {params_h5}")
else:
    print(f"Confirmed: {params_h5} is an HDF5 file.")
```

<font face='Calibri' size='4'> <b>3.4 Run SBASxval.py </b></font> <br> <font face='Calibri' size='3'> Get an uncertainty estimate for each pixel and epoch using a Jackknife test. We are skipping this function as we won't be doing anything with its output and it takes a significant amount of time to run relative to the other GIAnT functions.

Inputs:
- HDF5 files, PROC-STACK.h5
- data.xml
- sbas.xml

Outputs:
- HDF5 file, LS-xval.h5

<br> <b>Display the help information for SBASxval.py:</b></font>

```
#!python $giant_path/SBASxval.py -h
```

<font face='Calibri' size='3'><b>Run SBASxval.py:</b></font>

```
#!python $giant_path/SBASxval.py
```

<hr> <font face='Calibri'><font size='3'>SBASxval.py creates a file called 'LS-xval.h5'. <br><br> <b>Verify that LS-xval.h5 is an HDF5 file as required by the rest of GIAnT:</b></font>

```
'''
xval_h5 = f"{stack_path}/LS-xval.h5"

if not h5py.is_hdf5(xval_h5):
    print(f"Not an HDF5 file: {xval_h5}")
else:
    print(f"Confirmed: {xval_h5} is an HDF5 file.")
'''
```

<font face='Calibri' size='5'><b>4. Data Visualization</b></font> <br> <font face='Calibri' size='3'>Now we visualize the data. This is largely copied from Lab 4.
<br><br> <b>Create a directory in which to store our plots and move into it:</b></font> ``` plot_dir = f"{path}/plots" if not os.path.exists(plot_dir): os.makedirs(plot_dir) if os.path.exists(plot_dir): os.chdir(plot_dir) print(f"Current Working Directory: {os.getcwd()}") ``` <font face='Calibri' size='3'><b>Load the stack produced by GIAnT and read it into an array so we can manipulate and display it:</b></font> ``` f = h5py.File(params_h5, 'r') ``` <font face='Calibri' size='3'><b>List all groups ('key's) within the HDF5 file that has been loaded into the object 'f'</b></font> ``` print("Keys: %s" %f.keys()) ``` <font face='Calibri' size='3'>Details on what each of these keys means can be found in the GIAnT documentation. For now, the only keys with which we are concerned are <b>'recons'</b> (the filtered time series of each pixel) and <b>'dates'</b> (the dates of acquisition). It is important to note that the dates are given in a type of Julian Day number called Rata Die number. This will have to be converted later, but this can easily be done via one of several different methods in Python.</font> <br><br> <font face='Calibri' size='3'><b>Get our data from the stack:</b></font> ``` data_cube = f['recons'][()] ``` <font face='Calibri' size='3'><b>Get the dates for each raster from the stack:</b></font> ``` dates = list(f['dates']) # these dates appear to be given in Rata Die style: floor(Julian Day Number - 1721424.5). 
if data_cube.shape[0] != len(dates):
    print('Problem:')
    print('Number of rasters in data_cube: ', data_cube.shape[0])
    print('Number of dates: ', len(dates))
```

<font face='Calibri' size='3'><b>Plot and save amplitude image with transparency determined by alpha (SierraNegra-dBScaled-AmplitudeImage.png):</b></font>

```
plt.rcParams.update({'font.size': 14})

radar_tiff = f"{path}/20161119-20170106_amp.tiff"
radar = gdal.Open(radar_tiff)
im_radar = radar.GetRasterBand(1).ReadAsArray()
radar = None

dbplot = np.ma.log10(im_radar)
vmin = np.percentile(dbplot, 3)
vmax = np.percentile(dbplot, 97)

fig = plt.figure(figsize=(18,10))  # Initialize figure with a size
ax1 = fig.add_subplot(111)         # 111 determines: 1 row, 1 column, first plot
ax1.imshow(dbplot, cmap='gray', vmin=vmin, vmax=vmax, alpha=1);
plt.title('Example dB-scaled SAR Image for Ifgrm 20161119-20170106')
plt.grid()
plt.savefig('SierraNegra-dBScaled-AmplitudeImage.png', dpi=200, transparent=False)
```

<font face='Calibri' size='3'><b>Display and save an overlay of the clipped deformation map and amplitude image (SierraNegra-DeformationComposite.png):</b></font>

```
# We will define a short function that can plot an overlay of our radar image and deformation map.
def defNradar_plot(deformation, radar):
    fig = plt.figure(figsize=(18, 10))
    ax = fig.add_subplot(111)
    vmin = np.percentile(radar, 3)
    vmax = np.percentile(radar, 97)
    ax.imshow(radar, cmap='gray', vmin=vmin, vmax=vmax)
    fin_plot = ax.imshow(deformation, cmap='RdBu', vmin=-50.0, vmax=50.0, alpha=0.75)
    fig.colorbar(fin_plot, fraction=0.24, pad=0.02)
    ax.set(title="Integrated Defo [mm] Overlain on Clipped db-Scaled Amplitude Image")
    plt.grid()

# Get deformation map and radar image we wish to plot
deformation = data_cube[data_cube.shape[0]-1]

# Call function to plot an overlay of our deformation map and radar image.
defNradar_plot(deformation, dbplot)
plt.savefig('SierraNegra-DeformationComposite.png', dpi=200, transparent=False)
```

<font face='Calibri' size='3'><b>Convert from Rata Die number (similar to Julian Day number) contained in 'dates' to Gregorian date:</b></font>

```
tindex = []
for d in dates:
    tindex.append(date.fromordinal(int(d)))
```

<font face='Calibri' size='3'><b>Create an animation of the deformation</b></font>

```
%%capture
fig = plt.figure(figsize=(14, 8))
ax = fig.add_subplot(111)
ax.axis('off')

vmin = np.percentile(data_cube.flatten(), 5)
vmax = np.percentile(data_cube.flatten(), 95)

im = ax.imshow(data_cube[0], cmap='RdBu', vmin=-50.0, vmax=50.0)
ax.set_title("Animation of Deformation Time Series - Sierra Negra, Galapagos")
fig.colorbar(im)
plt.grid()

def animate(i):
    ax.set_title("Date: {}".format(tindex[i]))
    im.set_data(data_cube[i])

ani = matplotlib.animation.FuncAnimation(fig, animate, frames=data_cube.shape[0], interval=400)
```

<font face="Calibri" size="3"><b>Configure matplotlib's RC settings for the animation:</b></font>

```
rc('animation', embed_limit=10.0**9)
```

<font face="Calibri" size="3"><b>Create a javascript animation of the time-series running inline in the notebook:</b></font>

```
HTML(ani.to_jshtml())
```

<font face="Calibri" size="3"><b>Save the animation as a 'gif' file (SierraNegraDeformationTS.gif):</b></font>

```
ani.save('SierraNegraDeformationTS.gif', writer='pillow', fps=2)
```

<font face='Calibri'><font size='5'><b>5. Alter the time filter parameter</b></font><br> <font size='3'>Looking at the video above, you may notice that the deformation has a very smoothed appearance. This may be because of our time filter, which is currently set to 1 year ('filt=1.0' in the prepxml_SBAS.py code). Let's repeat the lab from there with 2 different time filters. <br>First, using no time filter ('filt=0.0') and then using a 1 month time filter ('filt=0.082').
Change the output file name of anything you want saved (e.g., 'SierraNegraDeformationTS.gif' to 'YourDesiredFileName.gif'). Otherwise, it will be overwritten. <br><br>How did these changes affect the output time series?<br>How might we figure out the right filter length?<br>What does this say about the parameters we select?

<font face='Calibri'><font size='5'><b>6. Clear data (optional)</b></font> <br> <font size='3'>This lab has produced a large quantity of data. If you look at this notebook in your home directory, it should now be ~13 MB. This can take a long time to load in a Jupyter Notebook. It may be useful to clear the cell outputs. <br>To clear the cell outputs, go to Cell -> All Output -> Clear. This will clear the outputs of the Jupyter Notebook and restore it to its original size of ~60 kB. This will not delete any of the files we have created. </font> </font>

<font face="Calibri" size="2"> <i>GEOS 657-Lab9-InSARTimeSeriesAnalysis.ipynb - Version 1.2.0 - April 2021 <br> <b>Version Changes:</b> <ul> <li>from osgeo import gdal</li> <li>namespace asf_notebook</li> </ul> </i> </font>
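As a closing aside on the date handling in section 4: the Rata Die numbers GIAnT stores in 'dates' are the same ordinals Python's `datetime` module uses, so the conversion can be checked round-trip. A quick self-contained sketch using the lab's master date (2016-11-19) as the example value:

```python
from datetime import date

# Rata Die number = days since 0001-01-01 in the proleptic Gregorian
# calendar, which is exactly what date.toordinal()/fromordinal() use.
master = date(2016, 11, 19)   # the lab's master date
rd = master.toordinal()       # the ordinal GIAnT would store
print(rd)                     # 736287
print(date.fromordinal(rd))   # 2016-11-19
```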
# Linear regression

## Problem
Build a model and predict the price of apartment rentals in Graz based on data in the ad

## Goals
- Manually write linear regression algorithm
  - Gradient descent function
  - Cost function implementation
  - Normal equation function
  - Feature enumeration and normalization function
- Use libraries and compare with the manual result

## Data description

| Feature | Variable Type | Variable | Value Type |
|---------|--------------|---------------|------------|
| Area | Objective Feature | area | float (square meters) |
| Rooms number | Objective Feature | rooms | string |
| Zip code | Objective Feature | zip | string |
| District | Objective Feature | district | string |
| Is the ad private| Objective Feature | is_private | boolean |
| Is the flat in the city center | Objective Feature | center | boolean |
| Pricing of the ad | Target Variable | price | float |

## Read data from pickle

```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import matplotlib

%matplotlib inline
%config InlineBackend.figure_format = 'retina'
matplotlib.rcParams.update({'font.size': 16})

df = pd.read_pickle('apartmetns.pkl')
```

## Data selection

```
# We will make a copy of the dataset
X = df.loc[:, ~df.columns.isin(['price', 'advertiser', 'link-href', 'is_private', 'zip', 'district'])]
y = df['price']
X
y.head()
```

## Categorical features enumeration

```
def cats_to_codes(df, feature, ordered=None):
    return dict(df[feature].value_counts().astype('category').cat.codes)

def codes_to_cats(feature, code_dict):
    return {value: key for key, value in code_dict.items()}

# 'zip' and 'district' were excluded from X above, so their encodings are skipped:
# zip_codes = cats_to_codes(X, 'zip')
# X.zip = X.zip.map(zip_codes)
# district_codes = cats_to_codes(X, 'district')
# X.district = X.district.map(district_codes)

rooms_codes = cats_to_codes(X, 'rooms')
X.rooms = X.rooms.map(rooms_codes)
X.head()

#X.is_private = X.is_private.astype('int')
X.center = X.center.astype('int')
X
```

## Feature normalization

```
mean_X = np.mean(X)
std_X = np.std(X)

def normalize(x, mean, std):
    return (x - mean)/std

for feat in X.columns:
    X[feat] = (X[feat] - mean_X[feat])/std_X[feat]
X
```

## First hypothesis

$$ h_\theta(x) = \theta_0 x_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_3 + \theta_4 x_4 + \theta_5 x_5 + \theta_6 x_6 $$

### Adding of intercept term x0

```
X.insert(loc=0, column='x0', value=np.ones(len(X)))
X
```

### Conversion of X and y to numpy arrays

```
X = X.to_numpy()
y = y.to_numpy().reshape((-1, 1))
X.shape, y.shape

def computeCost(X, y, theta):
    m = len(y)
    J = 1/(2*m) * np.sum(np.power(np.subtract(X.dot(theta), y), 2))
    return J

theta = np.zeros((X.shape[1], 1))
theta.shape
computeCost(X, y, theta)

def gradientDescent(X, y, theta, alpha, num_iters):
    m = len(y)
    J_history = np.zeros((num_iters, 1))
    for i in range(num_iters):
        error = X.dot(theta) - y
        theta = theta - (alpha/m) * X.T.dot(error)
        J_history[i] = computeCost(X, y, theta)
    return theta, J_history

new_theta, J_history = gradientDescent(X, y, theta, 0.1, 100)
computeCost(X, y, new_theta)

plt.figure(figsize=(10,7));
alphas = np.linspace(0.001, 0.3, 5)
for a in alphas:
    _, J_history = gradientDescent(X, y, theta, a, 50)
    plt.plot(np.arange(J_history.shape[0]), J_history,
             label=(r'$\alpha$ = {:1.3f}'.format(a) + '\n J = {}'.format(int(J_history[-1][0]))))
plt.legend()
plt.xlabel('Number of iterations');
plt.ylabel(r'Cost function J($\theta$)')
plt.title("Gradient descent");
```

### Quick test for hypothesis

```
new_theta, J_history = gradientDescent(X, y, theta, 0.1, 100)
new_theta
mean_X, std_X

f = [44, 1, 1]
x = []
for i in range(mean_X.shape[0]):
    x.append(normalize(f[i], mean_X[i], std_X[i]))
x = [1,] + x
x = np.array(x)
new_theta.T.dot(x)
```

Looks valid

## Model validation

```
from sklearn.model_selection import train_test_split

train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)

def predict(X, theta):
    return X.dot(theta)

theta
new_theta, _ = gradientDescent(train_X, train_y, theta, 0.1, 100)
val_predictions = predict(val_X, new_theta) print(val_predictions[:10]) print(val_y[:10]) ``` ### Calculation of the Mean Absolute Error in Validation Data ``` from sklearn.metrics import mean_absolute_error val_mae = mean_absolute_error(val_y, val_predictions) val_mae ``` ## Normal equation $$ \theta = (X^T X)^{-1}X^T \bar{y} $$ ``` def normalEqn(X, y): m = len(y) theta = np.linalg.pinv(X.T.dot(X)).dot(X.T).dot(y) return theta norm_theta = normalEqn(X, y) ``` The closed-form solution should be more accurate ``` train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1) val_predictions = predict(val_X, norm_theta) val_mae = mean_absolute_error(val_y, val_predictions) val_mae ``` ## Predicting prices using the scikit-learn linear regressor ``` X X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1) from sklearn.linear_model import LinearRegression regressor = LinearRegression(fit_intercept=False, n_jobs=-1) regressor.fit(X_train, y_train) print('Weight coefficients: ', regressor.coef_) y_pred_test = regressor.predict(X_test) y_pred_test[0:10] val_mae = mean_absolute_error(y_test, y_pred_test) val_mae error = y_pred_test.sum() / y_test.sum() - 1 print("Percentage error: {:.2f}%".format(error*100)) from sklearn.linear_model import RidgeCV model = RidgeCV(fit_intercept=False, cv=5) model.fit(X_train, y_train) model.best_score_ model.alpha_ model.coef_ error = model.predict(X_test).sum() / y_test.sum() - 1 print("Percentage error: {:.2f}%".format(error*100)) from sklearn.linear_model import LassoCV model = LassoCV(fit_intercept=False, cv=5) model.fit(X_train, y_train.reshape(-1)) model.coef_ error = model.predict(X_test).sum() / y_test.sum() - 1 print("Percentage error: {:.2f}%".format(error*100)) ```
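As a sanity check on the two solvers implemented above, here is a small self-contained sketch (synthetic data, not the Graz apartment set — the data, sizes, and learning rate are my own choices): with normalized features and enough iterations, gradient descent should land on essentially the same theta as the closed-form normal equation.

```python
import numpy as np

# Synthetic regression problem: intercept column plus two standardized features
rng = np.random.default_rng(0)
m = 200
X = np.c_[np.ones(m), rng.normal(size=(m, 2))]
true_theta = np.array([[3.0], [1.5], [-2.0]])
y = X @ true_theta + 0.01 * rng.normal(size=(m, 1))

def gradient_descent(X, y, theta, alpha, num_iters):
    # Same batch-gradient update rule as gradientDescent above
    for _ in range(num_iters):
        theta = theta - (alpha / len(y)) * (X.T @ (X @ theta - y))
    return theta

theta_gd = gradient_descent(X, y, np.zeros((3, 1)), 0.1, 500)
theta_ne = np.linalg.pinv(X.T @ X) @ X.T @ y  # normal equation

print(np.max(np.abs(theta_gd - theta_ne)))  # should be tiny
```

With well-scaled features the two estimates agree to many decimal places, which makes this a handy regression test when reworking either solver.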
github_jupyter
# List Comprehensions Complete the following set of exercises to solidify your knowledge of list comprehensions. ``` import os ``` #### 1. Use a list comprehension to create and print a list of consecutive integers starting with 1 and ending with 50. ``` lst = [i for i in range(1,51)] print(lst) ``` #### 2. Use a list comprehension to create and print a list of even numbers starting with 2 and ending with 200. ``` lst = [i for i in range(2, 201) if i % 2 == 0] print(lst) ``` #### 3. Use a list comprehension to create and print a list containing all elements of the 10 x 4 array below. ``` a = [[0.84062117, 0.48006452, 0.7876326 , 0.77109654], [0.44409793, 0.09014516, 0.81835917, 0.87645456], [0.7066597 , 0.09610873, 0.41247947, 0.57433389], [0.29960807, 0.42315023, 0.34452557, 0.4751035 ], [0.17003563, 0.46843998, 0.92796258, 0.69814654], [0.41290051, 0.19561071, 0.16284783, 0.97016248], [0.71725408, 0.87702738, 0.31244595, 0.76615487], [0.20754036, 0.57871812, 0.07214068, 0.40356048], [0.12149553, 0.53222417, 0.9976855 , 0.12536346], [0.80930099, 0.50962849, 0.94555126, 0.33364763]] # done in a long format- lst = [] for row in a: for x in row: lst.append(x) print(lst) # or as a single comprehension- lst = [x for row in a for x in row] print(lst) ``` #### 4. Add a condition to the list comprehension above so that only values greater than or equal to 0.5 are printed.
``` lst = [[0.84062117, 0.44409793, 0.7066597, 0.29960807, 0.17003563, 0.41290051, 0.71725408, 0.20754036, 0.12149553, 0.80930099], [0.48006452, 0.09014516, 0.09610873, 0.42315023, 0.46843998, 0.19561071, 0.87702738, 0.57871812, 0.53222417, 0.50962849], [0.7876326, 0.81835917, 0.41247947, 0.34452557, 0.92796258, 0.16284783, 0.31244595, 0.07214068, 0.9976855, 0.94555126], [0.77109654, 0.87645456, 0.57433389, 0.4751035, 0.69814654, 0.97016248, 0.76615487, 0.40356048, 0.12536346, 0.33364763]] # range() only accepts integers, so iterate over the nested lists directly lst_new = [x for row in lst for x in row if x >= 0.5] print(lst_new) ``` #### 5. Use a list comprehension to create and print a list containing all elements of the 5 x 2 x 3 array below. ``` b = [[[0.55867166, 0.06210792, 0.08147297], [0.82579068, 0.91512478, 0.06833034]], [[0.05440634, 0.65857693, 0.30296619], [0.06769833, 0.96031863, 0.51293743]], [[0.09143215, 0.71893382, 0.45850679], [0.58256464, 0.59005654, 0.56266457]], [[0.71600294, 0.87392666, 0.11434044], [0.8694668 , 0.65669313, 0.10708681]], [[0.07529684, 0.46470767, 0.47984544], [0.65368638, 0.14901286, 0.23760688]]] # done in a long format- lst_new = [] for i in b: for row in i: lst_new.append(row) print(lst_new) # or in a shorthand way- lst_new_2 = [row for i in b for row in i] print(lst_new_2) ``` #### 6. Add a condition to the list comprehension above so that the last value in each subarray is printed, but only if it is less than or equal to 0.5.
``` b = [[[0.55867166, 0.06210792, 0.08147297], [0.82579068, 0.91512478, 0.06833034]], [[0.05440634, 0.65857693, 0.30296619], [0.06769833, 0.96031863, 0.51293743]], [[0.09143215, 0.71893382, 0.45850679], [0.58256464, 0.59005654, 0.56266457]], [[0.71600294, 0.87392666, 0.11434044], [0.8694668 , 0.65669313, 0.10708681]], [[0.07529684, 0.46470767, 0.47984544], [0.65368638, 0.14901286, 0.23760688]]] # as a comprehension, keeping only the last value of each subarray when it is <= 0.5 lst_last = [y[-1] for x in b for y in x if y[-1] <= 0.5] print(lst_last) ``` #### 7. Use a list comprehension to select and print the names of all CSV files in the */data* directory. ``` import os import pandas as pd file_list = [x for x in os.listdir('./data') if x.endswith('.csv')] print(file_list) data_sets = [pd.read_csv(os.path.join('./data', x)) for x in file_list] data = pd.concat(data_sets, axis=0) ``` ### Bonus Try to solve these katas using list comprehensions. **Easy** - [Invert values](https://www.codewars.com/kata/invert-values) - [Sum Square(n)](https://www.codewars.com/kata/square-n-sum) - [Digitize](https://www.codewars.com/kata/digitize) - [List filtering](https://www.codewars.com/kata/list-filtering) - [Arithmetic list](https://www.codewars.com/kata/541da001259d9ca85d000688) **Medium** - [Multiples of 3 or 5](https://www.codewars.com/kata/514b92a657cdc65150000006) - [Count of positives / sum of negatives](https://www.codewars.com/kata/count-of-positives-slash-sum-of-negatives) - [Categorize new member](https://www.codewars.com/kata/5502c9e7b3216ec63c0001aa) **Advanced** - [Queue time counter](https://www.codewars.com/kata/queue-time-counter)
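For a head start on the katas above, here is a hedged sketch of two of the easy ones as one-line comprehensions (function names and signatures are assumed from the kata titles — check the exact spec on the kata page before submitting):

```python
def invert(lst):
    # "Invert values": flip the sign of every number in the list
    return [-x for x in lst]

def digitize(n):
    # "Digitize": the digits of n as ints, in reverse order
    return [int(d) for d in str(n)][::-1]

print(invert([1, -2, 3, -4]))  # [-1, 2, -3, 4]
print(digitize(35231))         # [1, 3, 2, 5, 3]
```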
# [SOLUTION] Attention Basics In this notebook, we look at how attention is implemented. We will focus on implementing attention in isolation from a larger model. That's because when implementing attention in a real-world model, a lot of the focus goes into piping the data and juggling the various vectors rather than the concepts of attention themselves. We will implement attention scoring as well as calculating an attention context vector. ## Attention Scoring ### Inputs to the scoring function Let's start by looking at the inputs we'll give to the scoring function. We will assume we're in the first step in the decoding phase. The first input to the scoring function is the hidden state of the decoder (assuming a toy RNN with three hidden nodes -- not usable in real life, but easier to illustrate): ``` dec_hidden_state = [5,1,20] ``` Let's visualize this vector: ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt import seaborn as sns # Let's visualize our decoder hidden state plt.figure(figsize=(1.5, 4.5)) sns.heatmap(np.transpose(np.matrix(dec_hidden_state)), annot=True, cmap=sns.light_palette("purple", as_cmap=True), linewidths=1) ``` Our first scoring function will score a single annotation (encoder hidden state), which looks like this: ``` annotation = [3,12,45] #e.g. Encoder hidden state # Let's visualize the single annotation plt.figure(figsize=(1.5, 4.5)) sns.heatmap(np.transpose(np.matrix(annotation)), annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1) ``` ### IMPLEMENT: Scoring a Single Annotation Let's calculate the dot product of a single annotation.
NumPy's [dot()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) is a good candidate for this operation ``` def single_dot_attention_score(dec_hidden_state, enc_hidden_state): # TODO: return the dot product of the two vectors return np.dot(dec_hidden_state, enc_hidden_state) single_dot_attention_score(dec_hidden_state, annotation) ``` ### Annotations Matrix Let's now look at scoring all the annotations at once. To do that, here's our annotation matrix: ``` annotations = np.transpose([[3,12,45], [59,2,5], [1,43,5], [4,3,45.3]]) ``` And it can be visualized like this (each column is a hidden state of an encoder time step): ``` # Let's visualize our annotation (each column is an annotation) ax = sns.heatmap(annotations, annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1) ``` ### IMPLEMENT: Scoring All Annotations at Once Let's calculate the scores of all the annotations in one step using matrix multiplication. Let's continue to use the dot scoring method <img src="images/scoring_functions.png" /> To do that, we'll have to transpose `dec_hidden_state` and [matrix multiply](https://docs.scipy.org/doc/numpy/reference/generated/numpy.matmul.html) it with `annotations`. ``` def dot_attention_score(dec_hidden_state, annotations): # TODO: return the product of dec_hidden_state transpose and enc_hidden_states return np.matmul(np.transpose(dec_hidden_state), annotations) attention_weights_raw = dot_attention_score(dec_hidden_state, annotations) attention_weights_raw ``` Looking at these scores, can you guess which of the four vectors will get the most attention from the decoder at this time step?
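Before peeking at the matrix result, we can answer that question with a quick standalone check (same toy vectors as above): each score is just one dot product, so computing the four by hand shows which annotation wins.

```python
import numpy as np

dec_hidden_state = np.array([5, 1, 20])
# One annotation per ROW here, for easy iteration
annotation_list = np.array([[3, 12, 45], [59, 2, 5], [1, 43, 5], [4, 3, 45.3]])

scores = [float(np.dot(dec_hidden_state, a)) for a in annotation_list]
print(scores)                  # ~[927, 397, 148, 929]
print(int(np.argmax(scores)))  # the fourth annotation scores highest
```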
## Softmax Now that we have our scores, let's apply softmax: <img src="images/softmax.png" /> ``` def softmax(x): x = np.array(x, dtype=np.float128) # float128 keeps np.exp from overflowing on scores this large (platform-dependent) e_x = np.exp(x) return e_x / e_x.sum(axis=0) attention_weights = softmax(attention_weights_raw) attention_weights ``` Even when knowing which annotation will get the most focus, it's interesting to see how drastically softmax separates the final scores. The first and last annotations had the respective scores of 927 and 929. But after softmax, the attention they'll get is 0.119 and 0.880 respectively. # Applying the scores back on the annotations Now that we have our scores, let's multiply each annotation by its score to proceed closer to the attention context vector. This is the multiplication part of this formula (we'll tackle the summation part in the latter cells) <img src="images/Context_vector.png" /> ``` def apply_attention_scores(attention_weights, annotations): # TODO: Multiply the annotations by their weights return attention_weights * annotations applied_attention = apply_attention_scores(attention_weights, annotations) applied_attention ``` Let's visualize how the context vector looks now that we've applied the attention scores back on it: ``` # Let's visualize our annotations after applying attention to them ax = sns.heatmap(applied_attention, annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1) ``` Contrast this with the raw annotations visualized earlier in the notebook, and we can see that the second and third annotations (columns) have been nearly wiped out. The first annotation maintains some of its value, and the fourth annotation is the most pronounced.
# Calculating the Attention Context Vector All that remains to produce our attention context vector now is to sum up the four columns to produce a single attention context vector ``` def calculate_attention_vector(applied_attention): return np.sum(applied_attention, axis=1) attention_vector = calculate_attention_vector(applied_attention) attention_vector # Let's visualize the attention context vector plt.figure(figsize=(1.5, 4.5)) sns.heatmap(np.transpose(np.matrix(attention_vector)), annot=True, cmap=sns.light_palette("Blue", as_cmap=True), linewidths=1) ``` Now that we have the context vector, we can concatenate it with the hidden state and pass it through a hidden layer to produce the result of this decoding time step.
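To wrap up, here is a hedged end-to-end sketch of the pipeline this notebook built up piece by piece — dot-product scoring, softmax, weighting, and summation — as one function. One deliberate difference: it subtracts the maximum score before exponentiating (the standard numerically stable softmax) instead of relying on `np.float128` as above.

```python
import numpy as np

def attention_context(dec_hidden_state, annotations):
    # annotations holds one encoder hidden state per COLUMN, as in this notebook
    scores = np.dot(dec_hidden_state, annotations)  # dot-product scoring
    e = np.exp(scores - scores.max())               # max-subtracted, stable softmax
    weights = e / e.sum()
    context = (weights * annotations).sum(axis=1)   # weight the columns, then sum them
    return context, weights

annotations = np.transpose([[3, 12, 45], [59, 2, 5], [1, 43, 5], [4, 3, 45.3]])
context, weights = attention_context(np.array([5, 1, 20]), annotations)
print(weights)  # concentrated on the first and (mostly) the last annotation
print(context)
```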
<a href="https://colab.research.google.com/github/Sudhir22/bert/blob/master/Pre_trained_BERT_contextualized_word_embeddings.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` !rm -rf bert !git clone https://github.com/google-research/bert %tensorflow_version 1.x import sys sys.path.append('bert/') from __future__ import absolute_import from __future__ import division from __future__ import print_function import codecs import collections import json import re import os import pprint import numpy as np import tensorflow as tf import modeling import tokenization tf.__version__ assert 'COLAB_TPU_ADDR' in os.environ, 'ERROR: Not connected to a TPU runtime; please see the first cell in this notebook for instructions!' TPU_ADDRESS = 'grpc://' + os.environ['COLAB_TPU_ADDR'] print('TPU address is', TPU_ADDRESS) from google.colab import auth auth.authenticate_user() with tf.Session(TPU_ADDRESS) as session: print('TPU devices:') pprint.pprint(session.list_devices()) # Upload credentials to TPU. with open('/content/adc.json', 'r') as f: auth_info = json.load(f) tf.contrib.cloud.configure_gcs(session, credentials=auth_info) # Now credentials are set for all future sessions on this TPU. 
# Available pretrained model checkpoints: # uncased_L-12_H-768_A-12: uncased BERT base model # uncased_L-24_H-1024_A-16: uncased BERT large model # cased_L-12_H-768_A-12: cased BERT base model BERT_MODEL = 'multi_cased_L-12_H-768_A-12' #@param {type:"string"} BERT_PRETRAINED_DIR = 'gs://cloud-tpu-checkpoints/bert/' + BERT_MODEL print('***** BERT pretrained directory: {} *****'.format(BERT_PRETRAINED_DIR)) !gsutil ls $BERT_PRETRAINED_DIR LAYERS = [-1,-2,-3,-4] NUM_TPU_CORES = 8 MAX_SEQ_LENGTH = 87 BERT_CONFIG = BERT_PRETRAINED_DIR + '/bert_config.json' CHKPT_DIR = BERT_PRETRAINED_DIR + '/bert_model.ckpt' VOCAB_FILE = BERT_PRETRAINED_DIR + '/vocab.txt' INIT_CHECKPOINT = BERT_PRETRAINED_DIR + '/bert_model.ckpt' BATCH_SIZE = 128 class InputExample(object): def __init__(self, unique_id, text_a, text_b=None): self.unique_id = unique_id self.text_a = text_a self.text_b = text_b class InputFeatures(object): """A single set of features of data.""" def __init__(self, unique_id, tokens, input_ids, input_mask, input_type_ids): self.unique_id = unique_id self.tokens = tokens self.input_ids = input_ids self.input_mask = input_mask self.input_type_ids = input_type_ids def input_fn_builder(features, seq_length): """Creates an `input_fn` closure to be passed to TPUEstimator.""" all_unique_ids = [] all_input_ids = [] all_input_mask = [] all_input_type_ids = [] for feature in features: all_unique_ids.append(feature.unique_id) all_input_ids.append(feature.input_ids) all_input_mask.append(feature.input_mask) all_input_type_ids.append(feature.input_type_ids) def input_fn(params): """The actual input function.""" batch_size = params["batch_size"] num_examples = len(features) # This is for demo purposes and does NOT scale to large data sets. We do # not use Dataset.from_generator() because that uses tf.py_func which is # not TPU compatible. The right way to load data is with TFRecordReader.
d = tf.data.Dataset.from_tensor_slices({ "unique_ids": tf.constant(all_unique_ids, shape=[num_examples], dtype=tf.int32), "input_ids": tf.constant( all_input_ids, shape=[num_examples, seq_length], dtype=tf.int32), "input_mask": tf.constant( all_input_mask, shape=[num_examples, seq_length], dtype=tf.int32), "input_type_ids": tf.constant( all_input_type_ids, shape=[num_examples, seq_length], dtype=tf.int32), }) d = d.batch(batch_size=batch_size, drop_remainder=False) return d return input_fn def model_fn_builder(bert_config, init_checkpoint, layer_indexes, use_tpu, use_one_hot_embeddings): """Returns `model_fn` closure for TPUEstimator.""" def model_fn(features, labels, mode, params): # pylint: disable=unused-argument """The `model_fn` for TPUEstimator.""" unique_ids = features["unique_ids"] input_ids = features["input_ids"] input_mask = features["input_mask"] input_type_ids = features["input_type_ids"] model = modeling.BertModel( config=bert_config, is_training=False, input_ids=input_ids, input_mask=input_mask, token_type_ids=input_type_ids, use_one_hot_embeddings=use_one_hot_embeddings) if mode != tf.estimator.ModeKeys.PREDICT: raise ValueError("Only PREDICT modes are supported: %s" % (mode)) tvars = tf.trainable_variables() scaffold_fn = None (assignment_map, initialized_variable_names) = modeling.get_assignment_map_from_checkpoint( tvars, init_checkpoint) if use_tpu: def tpu_scaffold(): tf.train.init_from_checkpoint(init_checkpoint, assignment_map) return tf.train.Scaffold() scaffold_fn = tpu_scaffold else: tf.train.init_from_checkpoint(init_checkpoint, assignment_map) tf.logging.info("**** Trainable Variables ****") for var in tvars: init_string = "" if var.name in initialized_variable_names: init_string = ", *INIT_FROM_CKPT*" tf.logging.info(" name = %s, shape = %s%s", var.name, var.shape, init_string) all_layers = model.get_all_encoder_layers() predictions = { "unique_id": unique_ids, } for (i, layer_index) in enumerate(layer_indexes): 
predictions["layer_output_%d" % i] = all_layers[layer_index] output_spec = tf.contrib.tpu.TPUEstimatorSpec( mode=mode, predictions=predictions, scaffold_fn=scaffold_fn) return output_spec return model_fn def convert_examples_to_features(examples, seq_length, tokenizer): """Loads a data file into a list of `InputBatch`s.""" features = [] for (ex_index, example) in enumerate(examples): tokens_a = tokenizer.tokenize(example.text_a) tokens_b = None if example.text_b: tokens_b = tokenizer.tokenize(example.text_b) if tokens_b: # Modifies `tokens_a` and `tokens_b` in place so that the total # length is less than the specified length. # Account for [CLS], [SEP], [SEP] with "- 3" _truncate_seq_pair(tokens_a, tokens_b, seq_length - 3) else: # Account for [CLS] and [SEP] with "- 2" if len(tokens_a) > seq_length - 2: tokens_a = tokens_a[0:(seq_length - 2)] # The convention in BERT is: # (a) For sequence pairs: # tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP] # type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1 # (b) For single sequences: # tokens: [CLS] the dog is hairy . [SEP] # type_ids: 0 0 0 0 0 0 0 # # Where "type_ids" are used to indicate whether this is the first # sequence or the second sequence. The embedding vectors for `type=0` and # `type=1` were learned during pre-training and are added to the wordpiece # embedding vector (and position vector). This is not *strictly* necessary # since the [SEP] token unambiguously separates the sequences, but it makes # it easier for the model to learn the concept of sequences. # # For classification tasks, the first vector (corresponding to [CLS]) is # used as as the "sentence vector". Note that this only makes sense because # the entire model is fine-tuned. 
tokens = [] input_type_ids = [] tokens.append("[CLS]") input_type_ids.append(0) for token in tokens_a: tokens.append(token) input_type_ids.append(0) tokens.append("[SEP]") input_type_ids.append(0) if tokens_b: for token in tokens_b: tokens.append(token) input_type_ids.append(1) tokens.append("[SEP]") input_type_ids.append(1) input_ids = tokenizer.convert_tokens_to_ids(tokens) # The mask has 1 for real tokens and 0 for padding tokens. Only real # tokens are attended to. input_mask = [1] * len(input_ids) # Zero-pad up to the sequence length. while len(input_ids) < seq_length: input_ids.append(0) input_mask.append(0) input_type_ids.append(0) assert len(input_ids) == seq_length assert len(input_mask) == seq_length assert len(input_type_ids) == seq_length if ex_index < 5: tf.logging.info("*** Example ***") tf.logging.info("unique_id: %s" % (example.unique_id)) tf.logging.info("tokens: %s" % " ".join( [tokenization.printable_text(x) for x in tokens])) tf.logging.info("input_ids: %s" % " ".join([str(x) for x in input_ids])) tf.logging.info("input_mask: %s" % " ".join([str(x) for x in input_mask])) tf.logging.info( "input_type_ids: %s" % " ".join([str(x) for x in input_type_ids])) features.append( InputFeatures( unique_id=example.unique_id, tokens=tokens, input_ids=input_ids, input_mask=input_mask, input_type_ids=input_type_ids)) return features def _truncate_seq_pair(tokens_a, tokens_b, max_length): """Truncates a sequence pair in place to the maximum length.""" # This is a simple heuristic which will always truncate the longer sequence # one token at a time. This makes more sense than truncating an equal percent # of tokens from each, since if one sequence is very short then each token # that's truncated likely contains more information than a longer sequence. 
while True: total_length = len(tokens_a) + len(tokens_b) if total_length <= max_length: break if len(tokens_a) > len(tokens_b): tokens_a.pop() else: tokens_b.pop() def read_sequence(input_sentences): examples = [] unique_id = 0 for sentence in input_sentences: line = tokenization.convert_to_unicode(sentence) examples.append(InputExample(unique_id=unique_id, text_a=line)) unique_id += 1 return examples def get_features(input_text, dim=768): # tf.logging.set_verbosity(tf.logging.INFO) layer_indexes = LAYERS bert_config = modeling.BertConfig.from_json_file(BERT_CONFIG) # The multi_cased checkpoint is case-sensitive, so do NOT lower-case its input tokenizer = tokenization.FullTokenizer( vocab_file=VOCAB_FILE, do_lower_case=False) is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2 tpu_cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver(TPU_ADDRESS) run_config = tf.contrib.tpu.RunConfig( cluster=tpu_cluster_resolver, tpu_config=tf.contrib.tpu.TPUConfig( num_shards=NUM_TPU_CORES, per_host_input_for_training=is_per_host)) examples = read_sequence(input_text) features = convert_examples_to_features( examples=examples, seq_length=MAX_SEQ_LENGTH, tokenizer=tokenizer) unique_id_to_feature = {} for feature in features: unique_id_to_feature[feature.unique_id] = feature model_fn = model_fn_builder( bert_config=bert_config, init_checkpoint=INIT_CHECKPOINT, layer_indexes=layer_indexes, use_tpu=True, use_one_hot_embeddings=True) # If TPU is not available, this will fall back to normal Estimator on CPU # or GPU.
estimator = tf.contrib.tpu.TPUEstimator( use_tpu=True, model_fn=model_fn, config=run_config, predict_batch_size=BATCH_SIZE, train_batch_size=BATCH_SIZE) input_fn = input_fn_builder( features=features, seq_length=MAX_SEQ_LENGTH) # Get features for result in estimator.predict(input_fn, yield_single_examples=True): unique_id = int(result["unique_id"]) feature = unique_id_to_feature[unique_id] output = collections.OrderedDict() for (i, token) in enumerate(feature.tokens): layers = [] for (j, layer_index) in enumerate(layer_indexes): layer_output = result["layer_output_%d" % j] layer_output_flat = np.array([x for x in layer_output[i:(i + 1)].flat]) layers.append(layer_output_flat) output[token] = sum(layers)[:dim] return output embeddings = get_features(["This is a test"]) print(embeddings) ```
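Stripped of the TPU and estimator plumbing, the heart of `convert_examples_to_features` above is the pad-and-mask step. Here is a minimal pure-Python sketch of just that step (the token ids below are made-up stand-ins for real WordPiece output):

```python
def pad_features(token_ids, seq_length):
    # Mirror the padding loop above: 1 marks a real token, 0 marks padding
    input_ids = list(token_ids)
    input_mask = [1] * len(input_ids)
    while len(input_ids) < seq_length:
        input_ids.append(0)
        input_mask.append(0)
    assert len(input_ids) == len(input_mask) == seq_length
    return input_ids, input_mask

# Hypothetical ids standing in for "[CLS] this is [SEP]"
ids, mask = pad_features([101, 2023, 2003, 102], seq_length=8)
print(ids)   # [101, 2023, 2003, 102, 0, 0, 0, 0]
print(mask)  # [1, 1, 1, 1, 0, 0, 0, 0]
```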
# RDF graph processing against the integrated POIs #### Auxiliary function to format SPARQL query results as a data frame: ``` import pandas as pds def sparql_results_frame(qres): cols = qres.vars out = [] for row in qres: item = [] for c in cols: item.append(row[c]) out.append(item) pds.set_option('display.max_colwidth', 0) return pds.DataFrame(out, columns=cols) ``` #### Create an **RDF graph** with the triples resulting from data integration: ``` from rdflib import Graph,URIRef g = Graph() g.parse('./output/integrated.nt', format="nt") # Get graph size (in number of statements) len(g) ``` #### Number of statements per predicate: ``` # SPARQL query is used to retrieve the results from the graph qres = g.query( """SELECT ?p (COUNT(*) AS ?cnt) { ?s ?p ?o . } GROUP BY ?p ORDER BY DESC(?cnt)""") # display unformatted query results #for row in qres: # print("%s %s" % row) # display formatted query results sparql_results_frame(qres) ``` #### Identify POIs having _**name**_ similar to a user-specified one: ``` # SPARQL query is used to retrieve the results from the graph qres = g.query( """PREFIX slipo: <http://slipo.eu/def#> PREFIX provo: <http://www.w3.org/ns/prov#> SELECT DISTINCT ?poiURI ?title WHERE { ?poiURI slipo:name ?n . ?n slipo:nameValue ?title . FILTER regex(?title, "^Achilleio", "i") } """) # display query results sparql_results_frame(qres) ``` #### **Fusion action** regarding a specific POI: ``` # SPARQL query is used to retrieve the results from the graph qres = g.query( """PREFIX slipo: <http://slipo.eu/def#> PREFIX provo: <http://www.w3.org/ns/prov#> SELECT ?prov ?defaultAction ?conf WHERE { ?poiURI provo:wasDerivedFrom ?prov . ?poiURI slipo:name ?n . ?n slipo:nameValue ?title . ?poiURI slipo:address ?a . ?a slipo:street ?s . ?prov provo:default-fusion-action ?defaultAction . ?prov provo:fusion-confidence ?conf . FILTER regex(?title, "Achilleio", "i") } """) print("Query returned %d results." 
% len(qres) ) # display query results sparql_results_frame(qres) ``` #### **Pair of original POIs** involved in this fusion: ``` # SPARQL query is used to retrieve the results from the graph qres = g.query( """PREFIX slipo: <http://slipo.eu/def#> PREFIX provo: <http://www.w3.org/ns/prov#> SELECT ?leftURI ?rightURI ?conf WHERE { <http://www.provbook.org/d494ddbd-9a98-39b0-bec9-0477636c42f7> provo:left-uri ?leftURI . <http://www.provbook.org/d494ddbd-9a98-39b0-bec9-0477636c42f7> provo:right-uri ?rightURI . <http://www.provbook.org/d494ddbd-9a98-39b0-bec9-0477636c42f7> provo:fusion-confidence ?conf . } """) print("Query returned %d results." % len(qres)) # display pair of POI URIs along with the fusion confidence sparql_results_frame(qres) ``` #### Values per attribute **before and after fusion** regarding this POI: ``` # SPARQL query is used to retrieve the results from the graph qres = g.query( """PREFIX slipo: <http://slipo.eu/def#> PREFIX provo: <http://www.w3.org/ns/prov#> SELECT DISTINCT ?valLeft ?valRight ?valFused WHERE { ?poiURI provo:wasDerivedFrom <http://www.provbook.org/d494ddbd-9a98-39b0-bec9-0477636c42f7> . ?poiURI provo:appliedAction ?action . ?action provo:attribute ?attr . ?action provo:left-value ?valLeft . ?action provo:right-value ?valRight . ?action provo:fused-value ?valFused . } """) print("Query returned %d results." % len(qres)) # print query results sparql_results_frame(qres) ``` # POI Analytics #### Once integrated POI data has been saved locally, analysis can be performed using tools like **pandas** _DataFrames_, **geopandas** _GeoDataFrames_ or other libraries.
#### Unzip exported CSV file with the results of data integration: ``` import os import zipfile with zipfile.ZipFile('./output/corfu-integrated-pois.zip','r') as zip_ref: zip_ref.extractall("./output/") os.rename('./output/points.csv', './output/corfu_pois.csv') ``` #### Load CSV data in a _DataFrame_: ``` import pandas as pd pois = pd.read_csv('./output/corfu_pois.csv', delimiter='|', error_bad_lines=False) # Geometries in the exported CSV file are listed in Extended Well-Known Text (EWKT) # Since shapely does not support EWKT, update the geometry by removing the SRID value from EWKT pois['the_geom'] = pois['the_geom'].apply(lambda x: x.split(';')[1]) pois.head() ``` #### Create a _GeoDataFrame_: ``` import geopandas from shapely import wkt pois['the_geom'] = pois['the_geom'].apply(wkt.loads) gdf = geopandas.GeoDataFrame(pois, geometry='the_geom') ``` #### Display the location of the exported POIs on a **simplified plot** using _matplotlib_: ``` %matplotlib inline import matplotlib.pyplot as plt world = geopandas.read_file(geopandas.datasets.get_path('naturalearth_lowres')) # Restrict focus to Greece: ax = world[world.name == 'Greece'].plot( color='white', edgecolor='black') # Plot the contents of the GeoDataFrame in blue dots: gdf.plot(ax=ax, color='blue') plt.show() ```
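The `split(';')[1]` lambda used when loading the CSV assumes every geometry carries an `SRID=...;` prefix. A slightly more defensive helper (my own sketch, not part of the SLIPO tooling) also passes plain WKT through unchanged:

```python
def ewkt_to_wkt(geom):
    """Drop a leading 'SRID=...;' prefix if present, leaving plain WKT."""
    prefix, sep, rest = geom.partition(';')
    if sep and prefix.upper().startswith('SRID='):
        return rest
    return geom

print(ewkt_to_wkt('SRID=4326;POINT (19.92 39.62)'))  # POINT (19.92 39.62)
print(ewkt_to_wkt('POINT (19.92 39.62)'))            # unchanged
```

`pois['the_geom'].apply(ewkt_to_wkt)` would then replace the lambda.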
# Keras Intro: Shallow Models Keras Documentation: https://keras.io In this notebook we explore how to use Keras to implement 2 traditional Machine Learning models: - **Linear Regression** to predict continuous data - **Logistic Regression** to predict categorical data ## Linear Regression ``` %matplotlib inline import matplotlib.pyplot as plt import pandas as pd import numpy as np ``` ### 0. Load data ``` df = pd.read_csv('../data/weight-height.csv') df.head() df.plot(kind='scatter', x='Height', y='Weight', title='Weight and Height in adults') ``` ### 1. Create Train/Test split ``` from sklearn.model_selection import train_test_split X = df[['Height']].values y = df['Weight'].values X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state=0) ``` ### 2. Train Linear Regression Model ``` from keras.models import Sequential from keras.layers import Dense from keras.optimizers import Adam, SGD model = Sequential() model.add(Dense(1, input_shape=(1,))) model.summary() model.compile(Adam(lr=0.9), 'mean_squared_error') model.fit(X_train, y_train, epochs=40) ``` ### 3. Evaluate Model Performance ``` from sklearn.metrics import r2_score y_train_pred = model.predict(X_train).ravel() y_test_pred = model.predict(X_test).ravel() print("The R2 score on the Train set is:\t{:0.3f}".format(r2_score(y_train, y_train_pred))) print("The R2 score on the Test set is:\t{:0.3f}".format(r2_score(y_test, y_test_pred))) df.plot(kind='scatter', x='Height', y='Weight', title='Weight and Height in adults') plt.plot(X_test, y_test_pred, color='red') W, B = model.get_weights() W B ``` # Classification ### 0. Load Data ``` df = pd.read_csv('../data/user_visit_duration.csv') df.head() df.plot(kind='scatter', x='Time (min)', y='Buy') ``` ### 1. Create Train/Test split ``` X = df[['Time (min)']].values y = df['Buy'].values X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state=0) ``` ### 2. 
Train Logistic Regression Model ``` model = Sequential() model.add(Dense(1, input_shape=(1,), activation='sigmoid')) model.summary() model.compile(SGD(lr=0.5), 'binary_crossentropy', metrics=['accuracy']) model.fit(X_train, y_train, epochs=40) ax = df.plot(kind='scatter', x='Time (min)', y='Buy', title='Purchase behavior VS time spent on site') t = np.linspace(0, 4) ax.plot(t, model.predict(t), color='orange') plt.legend(['model', 'data']) ``` ### 3. Evaluate Model Performance #### Accuracy ``` from sklearn.metrics import accuracy_score y_train_pred = model.predict_classes(X_train) y_test_pred = model.predict_classes(X_test) print("The train accuracy score is {:0.3f}".format(accuracy_score(y_train, y_train_pred))) print("The test accuracy score is {:0.3f}".format(accuracy_score(y_test, y_test_pred))) ``` #### Confusion Matrix & Classification Report ``` from sklearn.metrics import confusion_matrix confusion_matrix(y_test, y_test_pred) def pretty_confusion_matrix(y_true, y_pred, labels=["False", "True"]): cm = confusion_matrix(y_true, y_pred) pred_labels = ['Predicted '+ l for l in labels] df = pd.DataFrame(cm, index=labels, columns=pred_labels) return df pretty_confusion_matrix(y_test, y_test_pred, ['Not Buy', 'Buy']) from sklearn.metrics import classification_report print(classification_report(y_test, y_test_pred)) ``` ## Exercise You've just been hired at a real estate investment firm and they would like you to build a model for pricing houses. You are given a dataset that contains data for house prices and a few features like number of bedrooms, size in square feet and age of the house. Let's see if you can build a model that is able to predict the price. In this exercise we extend what we have learned about linear regression to a dataset with more than one feature. Here are the steps to complete it: 1. 
Load the dataset ../data/housing-data.csv - create 2 variables called X and y: X shall be a matrix with 3 columns (sqft,bdrms,age) and y shall be a vector with 1 column (price) - create a linear regression model in Keras with the appropriate number of inputs and output - split the data into train and test with a 20% test size, use `random_state=0` for consistency with classmates - train the model on the training set and check its accuracy on training and test set - how's your model doing? Is the loss decreasing? - try to improve your model with these experiments: - normalize the input features: - divide sqft by 1000 - divide age by 10 - divide price by 100000 - use a different value for the learning rate of your model - use a different optimizer - once you're satisfied with training, check the R2 score on the test set ``` # Load the dataset ../data/housing-data.csv df = pd.read_csv('../data/housing-data.csv') df.head() df.columns # create 2 variables called X and y: # X shall be a matrix with 3 columns (sqft,bdrms,age) # and y shall be a vector with 1 column (price) X = df[['sqft', 'bdrms', 'age']].values y = df['price'].values # create a linear regression model in Keras # with the appropriate number of inputs and output model = Sequential() model.add(Dense(1, input_shape=(3,))) model.compile(Adam(lr=0.8), 'mean_squared_error') # split the data into train and test with a 20% test size X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) # train the model on the training set and check its accuracy on training and test set # how's your model doing? Is the loss decreasing? 
model.fit(X_train, y_train, epochs=50) # check the R2 score on training and test set (probably very bad) y_train_pred = model.predict(X_train) y_test_pred = model.predict(X_test) print("The R2 score on the Train set is:\t{:0.3f}".format(r2_score(y_train, y_train_pred))) print("The R2 score on the Test set is:\t{:0.3f}".format(r2_score(y_test, y_test_pred))) # try to improve your model with these experiments: # - normalize the input features with one of the rescaling techniques mentioned above # - use a different value for the learning rate of your model # - use a different optimizer df['sqft1000'] = df['sqft']/1000.0 df['age10'] = df['age']/10.0 df['price100k'] = df['price']/1e5 X = df[['sqft1000', 'bdrms', 'age10']].values y = df['price100k'].values X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) model = Sequential() model.add(Dense(1, input_dim=3)) model.compile(Adam(lr=0.1), 'mean_squared_error') model.fit(X_train, y_train, epochs=50) # once you're satisfied with training, check the R2 score on the test set y_train_pred = model.predict(X_train) y_test_pred = model.predict(X_test) print("The R2 score on the Train set is:\t{:0.3f}".format(r2_score(y_train, y_train_pred))) print("The R2 score on the Test set is:\t{:0.3f}".format(r2_score(y_test, y_test_pred))) ```
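As a sanity check on the Keras fit above, the same multi-feature least-squares problem can be solved in closed form. The sketch below is not part of the original exercise: it uses synthetic data (since `../data/housing-data.csv` may not be at hand) with hand-picked "true" coefficients, and solves the normal equations with NumPy.

```python
import numpy as np

# Synthetic stand-in for the (scaled) housing features: sqft/1000, bdrms, age/10.
# The coefficients below are made up for illustration.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(0.8, 4.0, n),   # sqft / 1000
    rng.integers(1, 6, n),      # bdrms
    rng.uniform(0.0, 8.0, n),   # age / 10
])
true_w = np.array([1.5, 0.3, -0.2])
y = X @ true_w + 0.5 + rng.normal(0, 0.05, n)   # price / 100k, small noise

# Append a column of ones for the intercept, then solve the least-squares system.
A = np.column_stack([X, np.ones(n)])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print(w)  # close to [1.5, 0.3, -0.2, 0.5] up to the noise
```

If the Keras model is training well on the same data, its learned weight and bias should land near whatever this direct solve returns.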
# Classification and Logistic Regression Import the scientific-computing and plotting packages: ``` import numpy as np from sklearn import linear_model, datasets import matplotlib.pyplot as plt import seaborn as sns sns.set_style('whitegrid') %matplotlib inline ``` The only difference between classification and regression is that in classification the target variable $y$ we want to predict takes only a small number of discrete values. In this section we will focus on **binary classification**, where $y$ takes only the two values $0, 1$. $0$ is also called the **negative class** and $1$ the **positive class**; they are sometimes denoted by the symbols $-, +$. Given $x^{(i)}$, the corresponding $y^{(i)}$ is called the **label** of the training example. This section covers: 1. Logistic regression 2. The perceptron learning algorithm 3. Newton's method: another algorithm for maximizing $\ell(\theta)$
## 1. Logistic Regression In logistic regression, our hypothesis $h_\theta(x)$ takes the form: $$ h_\theta(x) = g(\theta^Tx) = \frac{1}{1+e^{-\theta^Tx}}, $$ where $$ g(z) = \frac{1}{1+e^{-z}} $$ is called the **logistic function** or **sigmoid function**. $g(z)$ looks like this: ``` x = np.arange(-10., 10., 0.2) y = 1 / (1 + np.e ** (-x)) plt.plot(x, y) plt.title(' Logistic Function ') plt.show() ``` As $z \rightarrow \infty$, $g(z) \rightarrow 1$; as $z \rightarrow -\infty$, $g(z) \rightarrow 0$; the range of $g(z)$ is $(0, 1)$. We keep the convention of setting $x_0 = 1$, so that $\theta^Tx = \theta_0 + \sum_{j=1}^n \theta_jx_j$. Later, when discussing generalized linear models, we will explain where the sigmoid function comes from; for now we simply take it as given. The derivative of the sigmoid has a very useful property that we will need in later derivations: $$ \begin{split} g'(z) &= \frac{d}{dz}\frac{1}{1+e^{-z}} \\ &= \frac{1}{(1+e^{-z})^2}e^{-z} \\ &= \frac{1}{(1+e^{-z})} \cdot (1 - \frac{1}{(1+e^{-z})}) \\ &= g(z) \cdot (1-g(z)) \end{split} $$ In the probabilistic interpretation of linear regression, under certain assumptions, we computed $\theta$ by maximum likelihood estimation. In logistic regression we adopt the same strategy and assume: $$ P(y=1|x;\theta) = h_\theta(x) $$ $$ P(y=0|x;\theta) = 1 - h_\theta(x) $$ These two assumptions can be combined as: $$ P(y|x;\theta) = (h_\theta(x))^y(1-h_\theta(x))^{1-y} $$ Assuming further that the $m$ training examples are independent, the likelihood function can be written as: $$ \begin{split} L(\theta) & = p(y|X; \theta) \\ & = \prod_{i=1}^{m} p(y^{(i)}|x^{(i)}; \theta) \\ & = \prod_{i=1}^{m} (h_\theta(x^{(i)}))^{y^{(i)}}(1-h_\theta(x^{(i)}))^{1-y^{(i)}} \end{split} $$ The corresponding log-likelihood is: $$ \begin{split} \ell(\theta) &= logL(\theta) \\ &= \sum_{i=1}^m (y^{(i)}logh(x^{(i)})+(1-y^{(i)})log(1-h(x^{(i)}))) \end{split} $$ To maximize the log-likelihood we can use gradient ascent, $\theta = \theta + \alpha\nabla_\theta\ell(\theta)$, where the partial derivative is: $$ \begin{split} \frac{\partial}{\partial\theta_j}\ell(\theta) &= \sum(y\frac{1}{g(\theta^Tx)} - (1-y)\frac{1}{1-g(\theta^Tx)})\frac{\partial}{\partial\theta_j}g(\theta^Tx) \\ &= \sum(y\frac{1}{g(\theta^Tx)} - (1-y)\frac{1}{1-g(\theta^Tx)})g(\theta^Tx)(1-g(\theta^Tx))\frac{\partial}{\partial\theta_j}\theta^Tx \\ &= \sum(y(1-g(\theta^Tx))-(1-y)g(\theta^Tx))x_j \\ &= \sum(y-h_\theta(x))x_j \end{split} $$ For stochastic gradient ascent, which uses a single training example per iteration: $$ \theta_j = \theta_j + \alpha(y^{(i)}-h_\theta(x^{(i)}))x^{(i)}_j = \theta_j - \alpha(h_\theta(x^{(i)}) - y^{(i)})x_j^{(i)}$$ As we can see, apart from the hypothesis $h_\theta(x)$ itself being different, the gradient updates of logistic regression and linear regression are remarkably similar. Generalized linear models will explain this "coincidence".
## 2. The Perceptron Learning Algorithm In logistic regression, the sigmoid function maps the target variable into the interval $(0, 1)$, and we interpret its value as the probability of the positive class. Suppose instead we force the output to be exactly $0$ or $1$: $$ g(z) =\left\{ \begin{aligned} 1 & , z \geq 0 \\ 0 & , z < 0 \end{aligned} \right. $$ As before, let $h_\theta(x) = g(\theta^Tx)$ and update according to: $$ \theta_j = \theta_j + \alpha(y^{(i)} - h_\theta(x^{(i)}))x_j^{(i)} $$ This algorithm is called the **perceptron learning algorithm**. In the 1960s, the perceptron was regarded as a rough model of a single neuron in the brain. Note, however, that although the perceptron looks very similar in form to logistic regression, $g(z)$ cannot be described by probabilistic assumptions, so maximum likelihood estimation cannot be used to estimate its parameters. The perceptron is in fact a completely different type of algorithm from linear models: it is the origin of neural networks, a topic we will return to later.
## 3. Newton's Method: Another Algorithm for Maximizing $\ell(\theta)$ Returning to logistic regression: besides gradient ascent, here we introduce Newton's method for maximizing the log-likelihood. Newton's method is primarily a root-finding algorithm. Suppose we have a function $f: \mathbb{R} \rightarrow \mathbb{R}$ and want a value $\theta$ such that $f(\theta)=0$. Newton's method iterates as follows: $$ \theta = \theta - \frac{f(\theta)}{f'(\theta)} $$ ``` f = lambda x: x ** 2 f_prime = lambda x: 2 * x improve_x = lambda x: x - f(x) / f_prime(x) x = np.arange(0, 3, 0.2) x0 = 2 tangent0 = lambda x: f_prime(x0) * (x - x0) + f(x0) x1 = improve_x(x0) tangent1 = lambda x: f_prime(x1) * (x - x1) + f(x1) plt.plot(x, f(x), label="y=x^2") plt.plot(x, np.zeros_like(x), label="x axis") plt.plot(x, tangent0(x), label="y=4x-4") plt.plot(x, tangent1(x), label="y=2x-1") plt.legend(loc="best") plt.show() ``` The iteration can be explained very intuitively with the figure above. We want the root of $y = x^2$, shown by the blue curve. Starting from the initial point $x_0 = 2$, we take the tangent at that point (green line); its intersection with the $x$ axis gives the first iterate, $x_1 = 1$. Continuing, the next tangent (red line) intersects the $x$ axis at the second iterate, $x_2 = 0.5$. Repeated iteration approaches $x=0$. For our log-likelihood, finding the maximum amounts to solving $\ell'(\theta) = 0$: $$ \theta = \theta - \frac{\ell'(\theta)}{\ell''(\theta)} $$ Note the direction of implication here: at a maximum the first derivative is necessarily $0$, but the converse does not hold, so what we find could in fact also be a local or global minimum, or a saddle point. Finally, in logistic regression $\theta$ is a vector, so we need to generalize Newton's method accordingly. Newton's method in higher dimensions is also called the **Newton-Raphson method**: $$ \theta = \theta - H^{-1}\nabla_\theta\ell(\theta) $$ Here $\nabla_\theta\ell(\theta)$ is the vector of partial derivatives of $\ell(\theta)$ with respect to $\theta$, and $H$ is an $n \times n$ matrix (actually $(n+1) \times (n+1)$ including the intercept term) called the **Hessian matrix**: $$ H_{ij} = \frac{\partial^2\ell(\theta)}{\partial\theta_i\partial\theta_j} $$ Newton's method typically converges in fewer iterations than (batch) gradient descent. On the other hand, a single Newton iteration is slower than a single gradient-descent iteration because the Hessian must be inverted. As long as $n$ is not too large, Newton's method is overall much faster than gradient descent. Applying Newton's method to maximize the log-likelihood of logistic regression is also known as **Fisher's scoring**.
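The Newton-Raphson update $\theta = \theta - H^{-1}\nabla_\theta\ell(\theta)$ can be made concrete with a small sketch. The NumPy implementation below is illustrative only (it is not from the original notes): it fits logistic regression on synthetic 1-D data, using the gradient and Hessian of the log-likelihood derived above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic data: labels drawn with true parameters (intercept -1, slope 2).
rng = np.random.default_rng(0)
x = rng.normal(0, 1, 200)
X = np.column_stack([np.ones_like(x), x])   # keep the x_0 = 1 convention
y = (rng.random(200) < sigmoid(-1.0 + 2.0 * x)).astype(float)

theta = np.zeros(2)
for _ in range(10):                          # Newton converges in a few steps
    h = sigmoid(X @ theta)
    grad = X.T @ (y - h)                     # gradient of the log-likelihood
    H = -(X.T * (h * (1 - h))) @ X           # Hessian of the log-likelihood
    theta = theta - np.linalg.solve(H, grad)

print(theta)  # roughly the true values (-1, 2), up to sampling noise
```

Because $H$ is negative definite here, the update is an ascent step on $\ell(\theta)$; this iteration is also known as iteratively reweighted least squares.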
``` import pandas as pd import numpy as np import pickle import matplotlib.pyplot as plt from scipy import stats import tensorflow as tf import seaborn as sns from pylab import rcParams from sklearn import metrics from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler %matplotlib inline sns.set(style='whitegrid', palette='muted', font_scale=1.5) rcParams['figure.figsize'] = 14, 8 RANDOM_SEED = 42 df=pd.read_csv('TreeData.csv') # df.head(22) df.info() N_TIME_STEPS = 250 N_FEATURES = 128 #128 step = 10 # 20 segments = [] for i in range(0, len(df) - N_TIME_STEPS, step): ch = [] for j in range(0, N_FEATURES): ch.append(df.iloc[:, j].values[i: i + N_TIME_STEPS]) segments.append(ch) labels = [] for i in range(0, len(df) - N_TIME_STEPS, step): label = stats.mode(df['Label'][i: i + N_TIME_STEPS])[0][0] labels.append(label) labelsl = np.asarray(pd.get_dummies(labels), dtype = np.float32) #print(labelsl) reshaped_segments = np.asarray(segments, dtype= np.float32).reshape(-1, N_TIME_STEPS, N_FEATURES) X_train, X_test, y_train, y_test = train_test_split( reshaped_segments, labelsl, test_size=0.2, random_state=RANDOM_SEED) print(np.array(segments).shape, reshaped_segments.shape, labelsl[0], len(X_train), len(X_test)) ``` # Building the model ``` N_CLASSES = 2 N_HIDDEN_UNITS = 64 # https://medium.com/@curiousily/human-activity-recognition-using-lstms-on-android-tensorflow-for-hackers-part-vi-492da5adef64 def create_LSTM_model(inputs): W = { 'hidden': tf.Variable(tf.random_normal([N_FEATURES, N_HIDDEN_UNITS])), 'output': tf.Variable(tf.random_normal([N_HIDDEN_UNITS, N_CLASSES])) } biases = { 'hidden': tf.Variable(tf.random_normal([N_HIDDEN_UNITS], mean=1.0)), 'output': tf.Variable(tf.random_normal([N_CLASSES])) } X = tf.transpose(inputs, [1, 0, 2]) X = tf.reshape(X, [-1, N_FEATURES]) hidden = tf.nn.relu(tf.matmul(X, W['hidden']) + biases['hidden']) hidden = tf.split(hidden, N_TIME_STEPS, 0) # Stack 2 LSTM layers lstm_layers = 
[tf.contrib.rnn.BasicLSTMCell(N_HIDDEN_UNITS, forget_bias=1.0) for _ in range(2)] lstm_layers = tf.contrib.rnn.MultiRNNCell(lstm_layers) outputs, _ = tf.contrib.rnn.static_rnn(lstm_layers, hidden, dtype=tf.float32) # Get output for the last time step lstm_last_output = outputs[-1] return tf.matmul(lstm_last_output, W['output']) + biases['output'] tf.reset_default_graph() X = tf.placeholder(tf.float32, [None, N_TIME_STEPS, N_FEATURES], name="input") Y = tf.placeholder(tf.float32, [None, N_CLASSES]) pred_Y = create_LSTM_model(X) pred_softmax = tf.nn.softmax(pred_Y, name="y_") L2_LOSS = 0.0015 l2 = L2_LOSS * \ sum(tf.nn.l2_loss(tf_var) for tf_var in tf.trainable_variables()) loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits = pred_Y, labels = Y)) + l2 LEARNING_RATE = 0.0025 optimizer = tf.train.AdamOptimizer(learning_rate=LEARNING_RATE).minimize(loss) correct_pred = tf.equal(tf.argmax(pred_softmax, 1), tf.argmax(Y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, dtype=tf.float32)) ``` # Training ``` N_EPOCHS = 50 # 50 BATCH_SIZE = 1024 # 1024 # https://medium.com/@curiousily/human-activity-recognition-using-lstms-on-android-tensorflow-for-hackers-part-vi-492da5adef64 saver = tf.train.Saver() history = dict(train_loss=[], train_acc=[], test_loss=[], test_acc=[]) sess=tf.InteractiveSession() sess.run(tf.global_variables_initializer()) train_count = len(X_train) for i in range(1, N_EPOCHS + 1): for start, end in zip(range(0, train_count, BATCH_SIZE), range(BATCH_SIZE, train_count + 1,BATCH_SIZE)): sess.run(optimizer, feed_dict={X: X_train[start:end], Y: y_train[start:end]}) _, acc_train, loss_train = sess.run([pred_softmax, accuracy, loss], feed_dict={ X: X_train, Y: y_train}) _, acc_test, loss_test = sess.run([pred_softmax, accuracy, loss], feed_dict={ X: X_test, Y: y_test}) history['train_loss'].append(loss_train) history['train_acc'].append(acc_train) history['test_loss'].append(loss_test) history['test_acc'].append(acc_test) # if i != 1 and 
i % 10 != 0: # continue print(f'epoch: {i} test accuracy: {acc_test} loss: {loss_test}') predictions, acc_final, loss_final = sess.run([pred_softmax, accuracy, loss], feed_dict={X: X_test, Y: y_test}) print() print(f'final results: accuracy: {acc_final} loss: {loss_final}') ``` # Evaluation ``` # https://medium.com/@curiousily/human-activity-recognition-using-lstms-on-android-tensorflow-for-hackers-part-vi-492da5adef64 plt.figure(figsize=(12, 8)) plt.plot(np.array(history['train_loss']), "r--", label="Train loss") plt.plot(np.array(history['train_acc']), "g--", label="Train accuracy") plt.plot(np.array(history['test_loss']), "r-", label="Test loss") plt.plot(np.array(history['test_acc']), "g-", label="Test accuracy") plt.title("Training session's progress over iterations") plt.legend(loc='upper right', shadow=True) plt.ylabel('Training Progress (Loss or Accuracy values)') plt.xlabel('Training Epoch') plt.ylim(0) plt.show() ``` # Saving Model ``` import os file_info = [N_HIDDEN_UNITS, BATCH_SIZE, N_EPOCHS] dirname = os.path.dirname("nhid-{}_bat-{}_nepoc-{}/dumps/".format(*file_info)) if not os.path.exists(dirname): os.makedirs(dirname) dirname = os.path.dirname("nhid-{}_bat-{}_nepoc-{}/logs/".format(*file_info)) if not os.path.exists(dirname): os.makedirs(dirname) pickle.dump(predictions, open("nhid-{}_bat-{}_nepoc-{}/dumps/predictions.p".format(*file_info), "wb")) pickle.dump(history, open("nhid-{}_bat-{}_nepoc-{}/dumps/history.p".format(*file_info), "wb")) tf.train.write_graph(sess.graph, "nhid-{}_bat-{}_nepoc-{}/logs".format(*file_info), 'har.pbtxt') saver.save(sess, 'nhid-{}_bat-{}_nepoc-{}/logs/har.ckpt'.format(*file_info)) writer = tf.summary.FileWriter('nhid-{}_bat-{}_nepoc-{}/logs'.format(*file_info)) writer.add_graph(sess.graph) ```
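The sliding-window segmentation at the top of this notebook (windows of `N_TIME_STEPS` samples taken every `step` rows) can be sketched on a toy series. This standalone example illustrates the windowing logic only; it is not the notebook's exact code.

```python
import numpy as np

# Take windows of length `window`, starting every `step` samples,
# mirroring the segmentation loop used to build `segments` above.
def make_windows(values, window, step):
    segments = []
    for i in range(0, len(values) - window, step):
        segments.append(values[i:i + window])
    return np.asarray(segments)

series = np.arange(20)
w = make_windows(series, window=5, step=3)
print(w.shape)  # (5, 5): windows start at 0, 3, 6, 9, 12
print(w[0])     # [0 1 2 3 4]
```

Overlapping windows (step < window) give the model many partially shared training examples, which is why `len(segments)` is much larger than `len(df) / N_TIME_STEPS`.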
# VAST 2017 MC-1 ## Task The Boonsong Lekagul nature preserve is used by local residents and tourists for day trips, overnight camping, and occasionally simply to reach the main highways on opposite sides of the preserve. The preserve's entrance booths are monitored both to collect revenue and to track usage. Vehicles entering and leaving the preserve must pay a toll depending on the number of axles (passenger car, recreational trailer, semi-trailer, etc.). This produces a data stream of entry/exit timestamps and vehicle types. There are also other locations in the preserve that record passing traffic. While traveling through various parts of the preserve, Mitch noticed strange vehicle behavior that, in his opinion, does not match the kinds of park visitors he expected. If Mitch could somehow analyze vehicle behavior in the park over time, it could help his investigation. ### Sample source data ### Required imports ``` import sqlite3 import numpy as np import matplotlib.pyplot as plt from sklearn.cluster import KMeans from SOM import SOM import seaborn as sb ``` ## Data preparation Using Python, we parsed the data from the table and then grouped the sensor readings by Car-Id, obtaining for each vehicle the full list of sensors it passed, i.e. its path. ``` data_set = open("Data/Lekagul Sensor Data.csv", "r") data = data_set.readlines() data_set.close() traces = [] gates = set() for line in data: args = line.split(";") gates.add(args[3]) traces.append(args) gates = sorted(gates) groupedTraces = {} for t in traces: if t[1] in groupedTraces: groupedTraces[t[1]].append(t) else: groupedTraces[t[1]] = [t] ``` Extract the vehicle type from the list of paths. ``` target = [] for x in groupedTraces: target.append(groupedTraces[x][1][2]) targets = [] for rec in target: if rec == '2P': targets.append('7') else: targets.append(str(rec)) print(targets) ``` We normalized each sensor count by the vehicle's total number of sensor readings (if a vehicle never interacted with a sensor, its value is 0). The result is a vector whose coordinates are the sensors, with values in [0, 1]. ``` vectors = [] for name, gt in groupedTraces.items(): groupGates = np.zeros(len(gates)) for t in gt: groupGates[gates.index(t[3])] += 1 groupGates = groupGates / groupGates.sum() vector = [] for rec in groupGates: vector.append(str(rec)) vectors.append(vector) for vector in vectors: print(vector) ``` ## Conclusion ### SOM Having worked with the SOM algorithm, we can highlight the following pros and cons. Pros: 1. The algorithm is a tool for dimensionality reduction and data visualization; 2. Clusters are clearly visible on the map of activated neurons together with the data points; 3. Anomalies are clearly visible under any choice of parameters, which allowed us to answer the challenge's main question quickly; 4. Comparing the intuitive partition with the k-means partition, we found only a small discrepancy, which suggests that the SOM map is well suited to identifying clusters in this data. Minor drawbacks: the algorithm is resource-intensive, which implies long runtimes, and the choice of visualization forms for its output is limited (the SOM map and the U-matrix). The most representative visualization was achieved with a 20x20 map, 10000 generations, and PCA initialization. We identified the anomaly by noticing that a 4-axle truck travels along ranger-truck routes even though it has no permission to do so. In the visualization, the coloring of the ranger trucks shows inclusions of the 4-axle-truck color.
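The count-then-normalize preprocessing described above can be sketched on a toy trace. The gate names below are made up for illustration; only the logic (count hits per gate, then divide by the vehicle's total readings) matches the notebook.

```python
# Hypothetical gate names and a single vehicle's trace, for illustration only.
gates = sorted({"entrance1", "camping2", "ranger-stop1"})
trace = ["entrance1", "ranger-stop1", "ranger-stop1", "entrance1"]

# Count how often the vehicle passed each gate.
counts = [0.0] * len(gates)
for gate in trace:
    counts[gates.index(gate)] += 1

# Normalize by the total number of readings: coordinates land in [0, 1].
total = sum(counts)
vector = [c / total for c in counts]
print(vector)  # [0.0, 0.5, 0.5] for gates ['camping2', 'entrance1', 'ranger-stop1']
```

Each vehicle then becomes one such vector, and the collection of vectors is what the SOM is trained on.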
# HOMEWORK NOTES - __Do the assignment or don't.__ - If you run into issues, ask questions. - If you run into issues at the last minute, explain what the issue is in your homework. - __Read the instructions.__ - For example, in this assignment the rules were to load *more than 10 variables, stored across multiple files*, and *Organize/munge the data into a single Pandas DataFrame*. Some of you did not do this. - __If you use a custom dataset, you MUST include it in your submission.__ - The more feedback you can give when editing code, the better off we all are. # It's a good idea to import things at the top of the file. This isn't strictly necessary, but it will make your life better in the long run, as it will help you know what packages you need installed in order to run something. ``` import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import numpy as np import itertools %matplotlib inline ``` # Start by reading in files ``` series_file = 'bls/bls_series.csv' records_file = 'bls/bls_records.csv' series = pd.read_csv(series_file) records = pd.read_csv(records_file, parse_dates=[2]) print( records.head() ) print( type( records['blsid'][0] )) records.sort_values('period', inplace=True) # functionally equal to: records = records.sort_values('period') print( records.head() ) print(series.head()) ``` # Merge Files This is where the magic happens. 
``` df = records.merge(series, on='blsid', how='inner') # df = records.merge(series, left_on='blsid', right_on='blsid') # df = pd.merge( left=records, right=series, left_on='blsid', right_on='blsid') df.sample(n=5)['title'].values print( len( df['title'].str.contains('unemployment') ) ) print( len( df['title'] ) ) df.loc[ df['title'].str.contains('unemployment'),'title' ].unique() selected_columns = df['title'].str.contains('seasonally adjusted - unemployment$') & ~df['title'].str.contains('not seasonally') print ( df.loc[ selected_columns, 'title'].sample(n=5)) df.loc[selected_columns].groupby('period')['value'].mean().plot() # using non-pandas plot function... f, axarr = plt.subplots(1,2) axarr[0].plot( df.loc[selected_columns].groupby('period')['value'].mean(), 'r' ) axarr[1].plot( df.loc[selected_columns].groupby('period')['value'].sum(), 'b') # using seaborn lineplot sns.lineplot( x='period', y='value', data=df.loc[selected_columns]) # using seaborn relplot sns.relplot(x="period", y="value", hue="title", kind="line", legend="full", data=df.loc[selected_columns] ) # relplot sorted by title... sns.relplot(x="period", y="value", hue="title", kind="line", legend="full", data=df.loc[selected_columns].sort_values('title') ) # select unemployment *rate* selected_columns_rate = df['title'].str.contains('seasonally adjusted - unemployment rate$') & ~df['title'].str.contains('not seasonally') sns.relplot(x="period", y="value", hue="title", kind="line", legend="full", data=df.loc[selected_columns_rate].sort_values('title') ) # for s in df.loc[selected_columns_rate, 'title'].unique(): # print( s ) # in order to get matplotlib to deal with dates, correctly. import matplotlib.dates as mdates sns.set_style('white') # get a list of variables to use... 
state_unemployment_vars = df.loc[ df['title'].str.contains(', not seasonally adjusted - unemployment rate$') ] state_unemployment_vars = state_unemployment_vars.sort_values('title', ascending=False) # get a list of blsids for the unemployment rate state_unemployment_ids = state_unemployment_vars['blsid'].unique() # create a pivoted dataframe of unemployment. unemployment_df = df.pivot(index='period', columns='blsid', values='value').copy() # select a colormap my_cmap = sns.cubehelix_palette(n_colors=10, as_cmap=True ) # here is where we use mdates to set the x axis. xlim = mdates.date2num([min(unemployment_df.index), max(unemployment_df.index)]) # create figure f,ax = plt.subplots(1,1,figsize=(10,10)) # # create axis. ax.imshow( unemployment_df[state_unemployment_ids].transpose(), extent=[xlim[0], xlim[1], 0,len(state_unemployment_ids)], aspect='auto', interpolation='none', origin='lower', cmap=my_cmap, vmin=2, vmax=13) # set the x-axis to be a date. ax.xaxis_date() # get the list of titles my_titles = state_unemployment_vars['title'].unique() # turn the list of titles into a list of states. my_states = [title.replace(', not seasonally adjusted - unemployment rate','') for title in my_titles] # # make sure every row has a y-tick. ax.set_yticks([x+.5 for x in range(0,len(state_unemployment_ids))]) # # set the name of the y_ticks to states. ax.set_yticklabels( my_states ) ; # custom legend # x-ticks on big plot ```
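The `records.merge(series, on='blsid', how='inner')` call earlier in this notebook can be demystified with a plain-Python sketch of an inner join. The rows below are made up for illustration; the point is that only keys present in both tables survive, with columns from both sides combined.

```python
# Toy stand-ins for the two tables joined above (illustrative rows only).
records = [
    {"blsid": "A", "period": "2018-01", "value": 4.1},
    {"blsid": "B", "period": "2018-01", "value": 5.0},
    {"blsid": "C", "period": "2018-01", "value": 3.2},
]
series = [
    {"blsid": "A", "title": "state A - unemployment rate"},
    {"blsid": "C", "title": "state C - unemployment rate"},
]

# Inner join on 'blsid': index one side by key, keep only matching rows.
titles = {row["blsid"]: row for row in series}
merged = [{**rec, **titles[rec["blsid"]]}
          for rec in records if rec["blsid"] in titles]
print([m["blsid"] for m in merged])  # ['A', 'C'] -- 'B' has no match and is dropped
```

With `how='inner'`, pandas does the same thing (plus duplicate-key handling): record rows whose `blsid` is missing from `series` simply disappear from the result, which is why checking row counts before and after a merge is a good habit.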
``` import pandas as pd import json import seaborn as sns import matplotlib.pyplot as plt pd.set_option('display.max_columns', 50) ``` Last time we were able to "softly" label ~54% of the dataset based on two features, `male_items` and `female_items`. Other features might also contribute to the gender signal, but they are not interpretable in any obvious way, and we don't even have labels! This is where deep learning shines. Later, we will build a representation model with contrastive learning on the "softly" labeled data, then use it to predict the remaining, uncertain half. For now, we prepare some useful features for the model. # 1. Load data ``` with open("clean_data.csv", "r") as f: df = pd.read_csv(f) with open("partial_labels.csv","r") as f: partial_labels_df = pd.read_csv(f) partial_labels_df # Merge df with labels main_df = pd.merge(df, partial_labels_df, how="right", left_on="customer_id", right_on="customer_id") # Drop the redundant label key after merging main_df.orders main_df.keys() ``` # 2. Feature engineering ``` # skip visualization for categorical/object columns skip_columns = ['is_newsletter_subscriber','customer_id','female_flag'] top_n = 5 i = 0 for column in main_df: if i == top_n: break elif column in skip_columns: continue print(f"Column {column}") plt.figure() main_df.boxplot([column]) i += 1 ``` From a few boxplots we can see that the outliers deviate from the mean a lot. This is due to the nature of cumulative values. For example: - `returns` is a cumulative value, since it rises after a customer returns an order but never decreases. - This kind of feature is not useful because it does not reflect the user's buying habits. - To accurately reflect user habits, we should relate it to a "time span" variable: for example, returns per order or returns per item. 
Here we introduce some features normalized by a "time span" variable (for example `orders` or `items`) to correctly reflect user behavior. ``` from utils.data_composer import feature_engineering main_df = feature_engineering(main_df) # Get engineered data engineered_df = main_df.iloc[:,33:] # Also append the "devices" and "coupon_discount_applied" columns engineered_df = pd.concat([engineered_df, main_df.loc[:,["coupon_discount_applied","devices","customer_id"]]],axis=1) engineered_df.describe() engineered_df # df_ = df.copy() # df_ = df_.drop(["days_since_first_order","customer_id","days_since_last_order"], axis=1) sns.set(rc={'figure.figsize':(11.7,8.27)}) corr = engineered_df.drop("female_flag",axis=1).corr() ax = sns.heatmap( corr, vmin=-1, vmax=1, center=0, cmap=sns.diverging_palette(20, 220, n=200), square=True ) ax.set_xticklabels( ax.get_xticklabels(), rotation=90, horizontalalignment='right' ); c = corr s = c.unstack() so = s.sort_values(kind="quicksort", ascending=False, key=lambda value: value.abs()) so.head(20) # Remove self-correlated pairs for index in so.index: if index[0] == index[1]: so.drop(index,inplace=True) # Remove duplicate pairs already_paired = [] for index in so.index: if index[0] == index[1]: so.drop(index,inplace=True) elif (index[1], index[0]) in already_paired: so.drop(index, inplace=True) already_paired.append(index) so.head(32) s.sort_values(kind="quicksort", ascending=False) s.index[0] ``` Okay, so we have most data scaled between 0 and 1. That's a great sign. For the other columns, outliers might be a problem. But no worries: we will use RobustScaler to scale them down without being distorted by extreme outliers. ``` engineered_df.to_csv("modeling_data.csv",index=False) ```
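RobustScaler, mentioned above, centers each column on its median and scales by the interquartile range, so extreme outliers barely affect the scale. The sketch below illustrates the computation in plain NumPy; it is not sklearn's actual implementation.

```python
import numpy as np

# What RobustScaler computes per column: (x - median) / IQR.
def robust_scale(column):
    q1, med, q3 = np.percentile(column, [25, 50, 75])
    return (column - med) / (q3 - q1)

x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])   # one extreme outlier
scaled = robust_scale(x)
print(scaled)  # [-1.  -0.5  0.   0.5 48.5]
```

Note how the inliers land in a tight range regardless of the outlier: a standard (mean/std) scaler would instead let the single value of 100 inflate the scale and squash all the inliers together.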
``` from __future__ import absolute_import, division, print_function import numpy, os, pandas import tensorflow from tensorflow import keras print(tensorflow.__version__) AmesHousing = pandas.read_excel('../data/AmesHousing.xls') AmesHousing.head(10) cd .. from libpy import NS_dp from sklearn.model_selection import train_test_split ``` We use our own function to clean the data ``` df = NS_dp.clean_Ames_Housing(AmesHousing) data, labels = df.iloc[ : , 2: ].drop( columns=[ "SalePrice" ] ), df[ "SalePrice" ] train_data, test_data, train_labels, test_labels = train_test_split(data, labels, test_size=0.2) from libpy import FS # train_data, train_labels, test_data, test_labels = FS.feature_select(df) print("Training set: {}".format(train_data.shape)) # 1607 examples, ** features print("Testing set: {}".format(test_data.shape)) # 1071 examples, 13 features train_data.sample(10) ``` ### Model building Here we created a neural network of our own, trained it, and evaluated its score to measure performance. Later we plot our results. ### Train a model ``` from libpy import KR ``` Here we use the default 64-node neural network ``` model = KR.build_model(train_data) model.summary() history, model = KR.train_model( model, train_data, train_labels ) ``` ### Plot Here we plot our model's performance. ``` from matplotlib import pyplot from libpy import DNN_plot DNN_plot.plot_history(history) ``` We show our training vs. validation loss. 
Here we used tf.losses.mean_squared_error (mse) as the loss parameter and mean_absolute_error (mae) to plot our training performance. Below, we calculate the mae score to measure test accuracy, i.e. our model's accuracy ``` [loss, mae] = model.evaluate(test_data, test_labels, verbose=0) print("Testing set Mean Abs Error: ${:7.2f}".format( mae )) test_predictions = model.predict(test_data).flatten() ``` Here we can see the regression model ``` DNN_plot.plot_predict( test_labels, test_predictions ) ``` We can check how much error we get ``` DNN_plot.plot_predict_error(test_labels, test_predictions) ``` ### Experiment Depth of Neural Network We want to check: with more hidden layers, does our model perform better? We increased the depth up to 7 hidden layers ``` from libpy import CV depths = [] scores_mae = [] for i in range( 7 ): model = KR.build_model(train_data, depth=i) history, model = KR.train_model( model, train_data, train_labels ) model.summary() DNN_plot.plot_history(history) [loss, mae] = model.evaluate(test_data, test_labels, verbose=0) print("Testing set Mean Abs Error: ${:7.2f}".format( mae )) test_predictions = model.predict(test_data).flatten() DNN_plot.plot_predict( test_labels, test_predictions ) depths.append( i+2 ) scores_mae.append( mae) CV.plot_any( depths, scores_mae, xlabel='Depth', ylabel='Mean Abs Error [1000$]' ) ``` ### Experiment Overfitting In this part, we try multiple neural network models with various node counts to check overfitting vs. underfitting ``` model_16 = KR.build_model(train_data, units=16) history_16, model_16 = KR.train_model( model_16, train_data, train_labels ) model_16.summary() loss, acc = model_16.evaluate( train_data, train_labels ) print("Trained model, accuracy: {:5.2f}%".format( acc )) model_32 = KR.build_model(train_data, units=32) history_32, model_32 = KR.train_model( model_32, train_data, train_labels ) model_32.summary() loss, acc = model_32.evaluate( train_data, train_labels ) print("Trained model, accuracy: 
{:5.2f}%".format( acc)) model_48 = KR.build_model(train_data, units=48) history_48, model_48 = KR.train_model( model_48, train_data, train_labels ) model_48.summary() loss, acc = model_48.evaluate( train_data, train_labels ) print("Trained model, accuracy: {:5.2f}%".format( acc)) model_64 = KR.build_model( train_data, units=64 ) history_64, model_64 = KR.train_model( model_64, train_data, train_labels ) model_64.summary() loss, acc = model_64.evaluate( train_data, train_labels ) print("Trained model, accuracy: {:5.2f}%".format( acc)) model_128 = KR.build_model( train_data, units=128) history_128, model_128 = KR.train_model( model_128, train_data, train_labels ) model_128.summary() loss, acc = model_128.evaluate( train_data, train_labels ) print("Trained model, accuracy: {:5.2f}%".format( acc)) model_512 = KR.build_model(train_data, units=512) history_512, model_512 = KR.train_model( model_512, train_data, train_labels ) model_512.summary() loss, acc = model_512.evaluate( train_data, train_labels ) print("Trained model, accuracy: {:5.2f}%".format( acc)) DNN_plot.plot_compare_history( [ ('history_16', history_16 ), ('history_32', history_32 ), ('history_48', history_48 ), ('history_64', history_64 ), ('history_128', history_128 ), ('history_512', history_512 ) ] ) ``` In our case, the validation and training losses track each other, and none of the models suffers from overfitting or underfitting, as we used EarlyStopping to stop training when val_loss stops improving. At the same time, we used keras.regularizers.l2 to regularize our model.
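The EarlyStopping behavior relied on above can be sketched as plain-Python patience logic. This illustrates the rule ("stop once val_loss has not improved for `patience` consecutive epochs"); it is not Keras' actual callback implementation.

```python
# Return the epoch at which early stopping would halt training,
# given a sequence of per-epoch validation losses.
def early_stop_epoch(val_losses, patience=3):
    best, wait = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, wait = loss, 0      # improvement: reset the patience counter
        else:
            wait += 1
            if wait >= patience:
                return epoch          # training would stop here
    return len(val_losses) - 1        # ran out of epochs before patience did

losses = [1.0, 0.8, 0.7, 0.71, 0.72, 0.73, 0.6]
print(early_stop_epoch(losses, patience=3))  # 5: stops before ever seeing the 0.6
```

This also shows the trade-off: a small patience can stop training just before a late improvement, which is why Keras pairs it with options to restore the best weights seen so far.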
# Elevation indices Here we assume that flow directions are known. We read the flow direction raster data, including meta-data, using [rasterio](https://rasterio.readthedocs.io/en/latest/) and parse it to a pyflwdir `FlwDirRaster` object, see earlier examples for more background. ``` # import pyflwdir, some dependencies and convenience methods import numpy as np import rasterio import pyflwdir # local convenience methods (see utils.py script in notebooks folder) from utils import quickplot, plt # data specific quick plot method # read and parse flow direction data with rasterio.open("rhine_d8.tif", "r") as src: flwdir = src.read(1) crs = src.crs extent = np.array(src.bounds)[[0, 2, 1, 3]] flw = pyflwdir.from_array( flwdir, ftype="d8", transform=src.transform, latlon=crs.is_geographic, cache=True, ) # read elevation data with rasterio.open("rhine_elv0.tif", "r") as src: elevtn = src.read(1) ``` ## height above nearest drain (HAND) The [hand()](reference.rst#pyflwdir.FlwdirRaster.hand) method uses drainage-normalized topography and flowpaths to delineate the relative vertical distances (drop) to the nearest river (drain) as a proxy for the potential extent of flooding ([Nobre et al. 2016](https://doi.org/10.1002/hyp.10581)). The pyflwdir implementation requires a stream mask `drain` and an elevation raster `elevtn`. The stream mask is typically determined based on a threshold on [upstream_area()](reference.rst#pyflwdir.FlwdirRaster.upstream_area) or [stream_order()](reference.rst#pyflwdir.FlwdirRaster.stream_order), but can also be set from rasterizing a vector stream file. 
``` # first we derive the upstream area map uparea = flw.upstream_area("km2") # HAND based on streams defined by a minimal upstream area of 1000 km2 hand = flw.hand(drain=uparea > 1000, elevtn=elevtn) # plot ax = quickplot(title="Height above nearest drain (HAND)") im = ax.imshow( np.ma.masked_equal(hand, -9999), extent=extent, cmap="gist_earth_r", alpha=0.5, vmin=0, vmax=150, ) fig = plt.gcf() cax = fig.add_axes([0.82, 0.37, 0.02, 0.12]) fig.colorbar(im, cax=cax, orientation="vertical") cax.set_ylabel("HAND [m]") plt.savefig("hand.png") ``` ## Floodplains The [floodplains()](reference.rst#pyflwdir.FlwdirRaster.floodplains) method delineates geomorphic floodplain boundaries based on a power-law relation between upstream area and a maximum HAND contour as developed by [Nardi et al (2019)](http://www.doi.org/10.1038/sdata.2018.309). Here, streams are defined based on a minimum upstream area threshold `upa_min` and floodplains on the scaling parameter `b` of the power-law relationship. ``` floodplains = flw.floodplains(elevtn=elevtn, uparea=uparea, upa_min=1000) # plot floodmap = (floodplains, -1, dict(cmap="Blues", alpha=0.5, vmin=0)) ax = quickplot( raster=floodmap, title="Geomorphic floodplains", filename="flw_floodplain" ) ```
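Once a HAND raster is available, one direct use is mapping a static flood extent for a given water depth: cells whose HAND value lies below the depth are flagged as inundated. A minimal numpy sketch under that assumption (the 10 m depth is illustrative; the -9999 nodata value matches the masking used in the plot above):

```python
import numpy as np

def flood_extent(hand, depth, nodata=-9999):
    """Boolean mask of cells inundated when water rises `depth` metres
    above the nearest drain; nodata cells are never flagged."""
    valid = hand != nodata
    return valid & (hand >= 0) & (hand < depth)

hand = np.array([[0.0, 2.5, 12.0],
                 [5.0, -9999, 50.0]])
print(flood_extent(hand, depth=10.0))
```

This is a coarse "bathtub" style approximation: it ignores flow volumes and connectivity, but is a common first look at HAND output.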
# Pure Python evaluation of vector norms Generate a list of random floats of a given dimension (dim), and store its result in the variable `vec`. ``` # This is used for plots and numpy %pylab inline import random # stdlib random module, so that entries are plain Python floats dim = 1000 # YOUR CODE HERE vec = [random.random() for _ in range(dim)] vec ``` ``` from numpy.testing import * assert_equal(type(vec), list) assert_equal(len(vec), dim) for ob in vec: assert_equal(type(ob), float) ``` Write a function that evaluates the $l_p$ norm of a vector in $R^d$. Recall: $$ \|v \|_{p} := \left(\sum_i |v_i|^p\right)^{1/p} $$ The function should take as arguments a `list`, containing your $R^d$ vector, and a number `p` in the range $[1, \infty]$, indicating the exponent of the norm. **Note:** an infinite float number is given by `float("inf")`. Raise an assertion error (look it up on google!) if the exponent is not in the range you expect. ``` def p_norm(vector,p): # YOUR CODE HERE raise NotImplementedError() assert_equal(p_norm(range(10),1), 45.0) assert_equal(p_norm([3,4], 2), 5.0) assert_equal(p_norm([-1,-.5,.5], float("inf")), 1) assert_raises(AssertionError, p_norm, [2,3], 0) assert_raises(AssertionError, p_norm, [2,3], -1) ``` # Playing with condition numbers In this exercise you will have to figure out the optimal values of the stepping interval when approximating derivatives using the finite difference method. See here_ for a short introduction on how to run these programs on SISSA machines. ## 1. Finite differences Write a program to compute the finite difference (`FD`) approximation of the derivative of a function `f`, computed at point `x`, using a stepping of size `h`. Recall the definition of approximate derivative: $$ FD(f,x,h) := \frac{f(x+h)-f(x)}{h} $$ ``` def FD(f, x, h): # YOUR CODE HERE raise NotImplementedError() assert_equal(FD(lambda x: x, 0, .125), 1.0) ``` ## 2. 
Compute FD Evaluate this function for the derivative of `sin(x)` at `x=1`, for values of `h` equal to `1e-i`, with `i=0,...,20`. Store the values of the finite differences in the list `fd1`. ``` # YOUR CODE HERE raise NotImplementedError() assert_equal(len(fd1), 21) expected = [0.067826442017785205, 0.49736375253538911, 0.53608598101186899, 0.5398814803603269, 0.54026023141862112, 0.54029809850586474, 0.54030188512133037, 0.54030226404044868, 0.54030229179602429, 0.54030235840940577, 0.54030224738710331, 0.54030113716407868, 0.54034554608506369, 0.53956838996782608, 0.53290705182007514, 0.55511151231257827, 0.0, 0.0, 0.0, 0.0, 0.0] assert_almost_equal(fd1,expected,decimal=4) ``` ## 3. Error plots Plot the error, defined as `abs(FD-cos(1.0))` where `FD` is your approximation, in `loglog` format and explain what you see. A good way to emphasize the result is to give the option `'-o'` to the plot command. ``` # YOUR CODE HERE raise NotImplementedError() ``` YOUR ANSWER HERE ## 4. Error plots base 2 Repeat steps 2 and 3 above, but using powers of `2` instead of powers of `10`, i.e., using `h` equal to `2**(-i)` for `i=1,...,60`. Do you see differences? How do you explain these differences? Comment briefly. A good way to emphasize the result is to give the option `'-o'` to the plot command. YOUR ANSWER HERE YOUR ANSWER HERE ## 5. Central Finite Differences Write a function that computes the central finite difference approximation (`CFD`), defined as $$ CFD(f,x,h) := \frac{f(x+h)-f(x-h)}{2h} $$ ``` def CFD(f, x, h): # YOUR CODE HERE raise NotImplementedError() assert_equal(CFD(lambda x: x**2, 0.0, .5), 0.0) assert_equal(CFD(lambda x: x**2, 1.0, .5), 2.0) ``` ## 6. Error plots for CFD Repeat steps 2., 3. and 4. and explain what you see. What is the *order* of the approximation 1. and what is the order of the approximation 5.? What's the order of the cancellation errors? 
``` # YOUR CODE HERE raise NotImplementedError() ``` YOUR ANSWER HERE # Numpy Numpy provides a very powerful array container. The first line of this ipython notebook has imported all of numpy's functionality into your notebook, just as if you typed:: from numpy import * Create a numpy array with entries that range from 0 to 63. Use the correct numpy function to do so. Call it `x`. ``` # YOUR CODE HERE raise NotImplementedError() assert_equal(type(x), ndarray) assert_equal(len(x), 64) for i in xrange(64): assert_equal(x[i], float(i)) ``` Reshape the one-dimensional array into a two-dimensional array with 4 rows, letting numpy evaluate the correct number of columns. Call it `y`. ``` # YOUR CODE HERE raise NotImplementedError() assert_equal(shape(y), (4,16)) ``` Get the following *slices* of `y`: * All the rows and the first three columns. Name it `sl1`. * All the columns and the first three rows. Name it `sl2`. * Third to sixth (included) columns and all the rows. Name it `sl3`. * The last three columns and all the rows. Name it `sl4`. ``` # YOUR CODE HERE raise NotImplementedError() assert_equal(sl1,[[0,1,2],[16,17,18],[32,33,34],[48,49,50]]) assert_equal(sl2,[[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15],[16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31],[32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47]]) assert_equal(sl3,[[3,4,5,6],[19,20,21,22],[35,36,37,38],[51,52,53,54]]) assert_equal(sl4,[[13,14,15],[29,30,31],[45,46,47],[61,62,63]]) ``` Now reshape the array, as if you wanted to feed it to a fortran routine. Call it `z`. ``` # YOUR CODE HERE raise NotImplementedError() ``` Comment on the result: what has changed with respect to `y`? YOUR ANSWER HERE Set the fourth element of `x` to 666666, and print `x`, `y`, `z`. Comment on the result ``` # YOUR CODE HERE raise NotImplementedError() ``` YOUR ANSWER HERE ## Arrays and Matrices Define 2 arrays, `A` of dimensions (2,3) and `B` of dimension (3,4). * Perform the operation `C = A.dot(B)`. 
Comment on the result, or the error you get. ``` # YOUR CODE HERE raise NotImplementedError() assert_equal(A.shape,(2,3)) assert_equal(B.shape,(3,4)) assert_equal(C.shape,(2,4)) expected = sum(A[1,:]*B[:,2]) assert_equal(C[1,2],expected) ``` YOUR ANSWER HERE * Perform the operation `C = A*(B)`. Comment on the result, or the error you get. ``` C = A*B ``` YOUR ANSWER HERE * Convert A and B from arrays to matrices and perform `A*B`. Comment on the result. ``` # YOUR CODE HERE raise NotImplementedError() assert_equal(type(A),numpy.matrixlib.defmatrix.matrix) assert_equal(type(B),numpy.matrixlib.defmatrix.matrix) assert_equal(type(C),numpy.matrixlib.defmatrix.matrix) assert_equal(A.shape,(2,3)) assert_equal(B.shape,(3,4)) assert_equal(C.shape,(2,4)) expected = sum(A[1,:]*B[:,2]) assert_equal(C[1,2],expected) ``` YOUR ANSWER HERE # Playing with polynomials The polynomial `(1-x)^6` can be expanded to:: x^6 - 6*x^5 + 15*x^4 - 20*x^3 + 15*x^2 - 6*x + 1 The two forms above are equivalent from a mathematical point of view, but may yield different results on a computer. Compute and plot the values of this polynomial, using each of the two forms, for 101 equally spaced points in the interval `[0.995,1.005]`, i.e., with a spacing of 0.0001 (use linspace). Can you explain this behavior? ``` # YOUR CODE HERE raise NotImplementedError() ``` YOUR ANSWER HERE **Playing with interpolation in python** 1. Given a set of $n+1$ points $x_i$ as input (either a list of floats, or a numpy array of floats), construct a function `lagrange_basis(xi,i,x)` that returns the $i$-th Lagrange polynomial associated to $x_i$, evaluated at $x$. The $i$-th Lagrange polynomial is defined as the polynomial of degree $n$ such that $l_i(x_j) = \delta_{ij}$, where $\delta$ is one if $i == j$ and zero otherwise. 
Recall the mathematical definition of the $l_i(x)$ polynomials: $$ l_i(x) := \prod_{j=0, j\neq i}^{n} \frac{x-x_j}{x_i-x_j} $$ ``` def lagrange_basis(xi, i, x): # YOUR CODE HERE raise NotImplementedError() x = linspace(0,1,5) d = 3 xi = linspace(0,1,d) assert_equal(list(lagrange_basis(xi, 0, x)),[1.0, 0.375, -0.0, -0.125, 0.0]) assert_equal(list(lagrange_basis(xi, 1, x)),[0.0, 0.75, 1.0, 0.75, -0.0]) assert_equal(list(lagrange_basis(xi, 2, x)),[-0.0, -0.125, 0.0, 0.375, 1.0]) assert_raises(AssertionError, lagrange_basis, xi, -1, x) assert_raises(AssertionError, lagrange_basis, xi, 10, x) ``` Construct the function `lagrange_interpolation(xi,g)` that, given the set of interpolation points `xi` and a function `g`, returns **another function** which, when evaluated at **x**, returns the Lagrange interpolation polynomial of `g` defined as $$ \mathcal{L} g(x) := \sum_{i=0}^n g(x_i) l_i(x) $$ You could use this function as follows:: Lg = lagrange_interpolation(xi, g) x = linspace(0,1,101) plot(x, g(x)) plot(x, Lg(x)) plot(xi, g(xi), 'or') ``` def lagrange_interpolation(xi,f): # YOUR CODE HERE raise NotImplementedError() # Check for polynomials. This should be **exact** g = lambda x: x**3+x**2 xi = linspace(0,1,4) Lg = lagrange_interpolation(xi, g) x = linspace(0,1,1001) assert p_norm(g(x) - Lg(x),float('inf')) < 1e-15, 'This should be zero...' ```
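As a quick numerical illustration of the cancellation-error question from the finite-difference sections above (a throwaway sketch of the phenomenon, not the assignment answers — `FD` is redefined locally): the error first shrinks as `h` decreases because truncation error is O(h), then grows again once `f(x+h)-f(x)` loses most of its significant digits.

```python
import math

def FD(f, x, h):
    # one-sided finite difference, as defined in the exercise text
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)
# truncation error dominates at h=1e-1, cancellation dominates at h=1e-15
errs = {i: abs(FD(math.sin, 1.0, 10.0 ** -i) - exact) for i in (1, 8, 15)}
print(errs)
```

The sweet spot near `h ≈ 1e-8` (roughly the square root of machine epsilon) is exactly what the error plots in steps 3 and 4 should reveal.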
# Numpy ``` import numpy as np ``` ## Create numpy arrays ``` np.array([1, 2]).shape np.array([ [1, 2], [3, 4] ]).shape np.array([ [1, 2], [3, 4], [5, 6] ]).shape np.zeros((2, 2)) np.ones((2, 2)) np.full((2, 2), 5) np.eye(3) ``` ## Generate data ``` np.random.random() np.random.randint(0, 10) lower_bound_value = 0 upper_bound_value = 100 num_rows = 1000 num_cols = 50 A = np.random.randint(lower_bound_value, upper_bound_value, size=(num_rows, num_cols)) A A.shape A.min() A.max() v = np.random.uniform(size=4) v np.random.choice(v) np.random.choice(10, size=(3, 3)) np.random.normal(size=4) # gaussian (normal) distribution, mean = 0 and variance = 1 np.random.randn(2, 3) ``` ## Numpy operations ``` array = np.array([ [1, 2, 3], [4, 5, 6], [7, 8, 9] ]) array array[:] array[0] array[2] array[:, 0] array[:, 2:] array[1, 1] array[-1, -1] array_2 = np.concatenate([array, np.array([ [10, 11, 12] ])]) array_2 array_2[0, 0] = 0 array_2 ``` ## Vectors, Matrices arithmetic and linear systems ``` array_1 = np.array([1, 2]) array_1 array_2 = np.array([3, 4]) array_2 array_1 + array_2 array_1 - array_2 array_1 * 2 array_1 ** 3 array_1 * array_2 np.dot(array_1, array_2) mat_1 = np.array([ [1, 2, 3], [4, 5, 6] ]) mat_1 mat_1.T mat_2 = np.array([ [1, 2, 3], [4, 5, 6] ]) mat_2 mat_1 * 10 mat_1 * mat_2 np.dot(mat_1, mat_2.T) mat_1 @ mat_2.T np.linalg.inv(np.eye(3)) mat_3 = np.matrix([ [1, 2], [3, 4] ]) mat_3 np.linalg.det(mat_3) np.linalg.inv(mat_3) np.linalg.inv(mat_3).dot(mat_3) np.trace(mat_3) np.diag(mat_3) np.diag([1, 4]) ``` $$ a^T b = \vert\vert a \vert\vert \vert\vert b \vert\vert \cos(\theta) $$ $$ \cos \theta_{ab} = \frac{a^T b}{ \vert\vert a \vert\vert \vert\vert b \vert\vert} $$ $$ \vert\vert a \vert\vert = \sqrt{ \sum_{d=1}^{D} a^2_{d} } $$ ``` a = np.array([1, 2]) b = np.array([3, 4]) a_mag = np.sqrt((a * a).sum()) a_mag np.linalg.norm(a) cos_theta = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b)) cos_theta angle = np.arccos(cos_theta) angle ``` ## Eigen vectors and 
eigen values ``` A = np.matrix([ [1, 2], [3, 4] ]) eig_values, eig_vectors = np.linalg.eig(A) eig_values eig_vectors eig_vectors[:, 0] * eig_values[0] A @ eig_vectors[:, 0] # not exactly True because of numerical precision; need to use np.allclose eig_vectors[:, 0] * eig_values[0] == A @ eig_vectors[:, 0] np.allclose(eig_vectors[:, 0] * eig_values[0], A @ eig_vectors[:, 0]) # check all np.allclose(eig_vectors @ np.diag(eig_values), A @ eig_vectors) ``` ## Broadcasting Broadcasting performs arithmetic operations on arrays of different shapes: the smaller array is broadcast across the larger array to ensure shape consistency. Rules: * one dimension (either column or row) should have the same size in both arrays * the lower-dimensional array should be a 1d array ``` mat_1 = np.arange(20).reshape(5, 4) mat_1 mat_2 = np.arange(5) # 1x5 cannot be added to mat_1 mat_3 = mat_2.reshape(5, 1) mat_3 mat_1 + mat_3 mat_1 * mat_3 ``` ## Solve equations ``` mat_1 = np.array([ [2, 1], [1, -1] ]) mat_1 array = np.array([4, -1]) array %%time np.linalg.inv(mat_1).dot(array) %%time np.linalg.solve(mat_1, array) inv_mat_1 = np.linalg.inv(mat_1) inv_mat_1 inv_mat_1.dot(array) mat_1 = np.array([ [1, 2, 3], [4, 5, 2], [2, 8, 5] ]) mat_1 array = np.array([5, 10, 15]) array np.linalg.solve(mat_1, array) ``` ## Statistical operations ``` mat_1 = np.array([ [1, 2, 3, 4], [3, 4, 5, 6], [7, 8, 9, 6], [12, 7, 10, 9], [2, 11, 8, 10] ]) mat_1 mat_1.sum() np.sum(mat_1) mat_1.sum(axis=0) # column wise sum mat_1.sum(axis=1) # row wise sum mat_1.mean() mat_1.mean(axis=0) mat_1.mean(axis=1) np.median(mat_1) np.median(mat_1, axis=0) np.std(mat_1, axis=1) np.std(mat_1) # percentile: value below which a given percentage of observations can be found percentile = [25, 50, 75] for p in percentile: print(f'Percentile {p}: {np.percentile(mat_1, p, axis=1)}') R = np.random.randn(10_000) R.mean() R.var() R.std() np.sqrt(R.var()) R = np.random.randn(10_000, 3) R.mean(axis=0).shape R.mean(axis=1).shape np.cov(R).shape np.cov(R.T) 
np.cov(R, rowvar=False) ```
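One detail worth calling out from the last two cells: `np.cov` treats *rows* as variables by default (`rowvar=True`), so for an `(n_samples, n_features)` matrix you either transpose the input or pass `rowvar=False` — the two calls above are equivalent:

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(10_000, 3))   # 10k samples of 3 variables

C1 = np.cov(R.T)                   # transpose so variables are in rows
C2 = np.cov(R, rowvar=False)       # or tell cov that variables are in columns
print(C1.shape, np.allclose(C1, C2))
```

Forgetting this and calling `np.cov(R)` directly (as in `np.cov(R).shape` above) produces a 10,000×10,000 matrix of sample-by-sample covariances, which is almost never what you want.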
# Visualize Urban Heat Islands (UHI) in Toulouse - France #### <br> Data from meteo stations can be downloaded from the French open data portal https://www.data.gouv.fr/ ``` %matplotlib inline import matplotlib.pyplot as plt import pandas as pd import string from glob import glob from matplotlib.dates import DateFormatter from ipyleaflet import Map, Marker, basemaps, basemap_to_tiles from ipywidgets import Layout met_files_folder = 'station-meteo' # Folder containing the meteo data files legend_file = 'stations-meteo-en-place.csv' # File listing all meteo stations start_date = '2019-06-27' end_date = '2019-06-27' toulouse_center = (43.60426, 1.44367) default_zoom = 12 ``` #### <br> Parse file listing all met stations ``` leg = pd.read_csv(legend_file, sep=';') def get_legend(id): return leg.loc[leg['FID']==id]['Nom_Station'].values[0] def get_lon(id): return leg.loc[leg['FID']==id]['x'].values[0] def get_lat(id): return leg.loc[leg['FID']==id]['y'].values[0] ``` #### <br> Build a Pandas dataframe from a met file ``` def get_table(file): df = pd.read_csv(file, sep=';') df.columns = list(string.ascii_lowercase)[:17] df['id'] = df['b'] df['annee'] = df['e'] + 2019 df['heure'] = (df['f'] - 1) * 15 // 60 df['minute'] = 1 + (df['f'] - 1) * 15 % 60 df = df.loc[df['g'] > 0] # drop rows with a null temperature field df['temperature'] = df['g'] - 50 + df['h'] / 10 df['pluie'] = df['j'] * 0.2 # tipping-bucket counts to mm df['vent_dir'] = df['k'] * 2 df['vent_force'] = df['l'] # below 80 as-is, above 80 divided by 2? df['pression'] = df['m'] + 900 df['vent_max_dir'] = df['n'] * 22.5 df['vent_max_force'] = df['o'] # below 80 as-is, above 80 divided by 2? 
df['pluie_plus_intense'] = df['p'] * 0.2 df['date'] = df['annee'].map(str) + '-' + df['c'].map(str) + '-' + df['d'].map(str) \ + ':' + df['heure'].map(str) + '-' + df['minute'].map(str) df['date'] = pd.to_datetime(df['date'], format='%Y-%m-%d:%H-%M') df = df[['date','id','temperature','pression','pluie','pluie_plus_intense','vent_dir', \ 'vent_force','vent_max_dir','vent_max_force']] df.set_index('date', inplace=True) df = df.loc[start_date:end_date] return df ``` #### <br> Parse met files (in met_files_folder) ``` table_list = [] for file in glob(met_files_folder + '/*.csv'): table_list.append(get_table(file)) tables = [table for table in table_list if not table.empty] legs = [get_legend(table['id'].iloc[0]) for table in tables] lats = [get_lat(table['id'].iloc[0]) for table in tables] longs = [get_lon(table['id'].iloc[0]) for table in tables] print('Number of meteo stations with available recordings for this time period: {}'.format(len(legs))) print(legs) ``` #### <br> Plot all met stations around Toulouse ``` m = Map(center=toulouse_center, zoom=default_zoom, layout=Layout(width='100%', height='500px')) for i in range(len(legs)): m.add_layer(Marker(location=(lats[i], longs[i]), draggable=False, title=legs[i])) m ``` #### <br> Plot temperature chart for all met stations ``` ax = tables[0]['temperature'].plot(grid=True, figsize=[25,17]) for i in range(1, len(tables)): tables[i]['temperature'].plot(grid=True, ax=ax) ax.legend(legs) ax.xaxis.set_major_formatter(DateFormatter('%H:%M')) ax.set_xlabel('Temperatures of ' + start_date) plt.savefig('temperatures.png') ```
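With per-station temperature series like the ones plotted above, urban heat island intensity is usually summarised as the temperature difference between an urban station and a rural reference station. A minimal pandas sketch of that comparison (the timestamps and temperatures here are made-up illustrations, not values from the Toulouse data):

```python
import pandas as pd

idx = pd.to_datetime(["2019-06-27 00:00", "2019-06-27 06:00",
                      "2019-06-27 12:00", "2019-06-27 18:00"])
urban = pd.Series([24.0, 22.0, 31.0, 29.0], index=idx, name="urban")
rural = pd.Series([22.0, 19.0, 30.0, 26.0], index=idx, name="rural")

# positive values mean the city is warmer than the countryside
uhi = urban - rural
print(uhi.max(), uhi.idxmax())
```

The pattern typically peaks at night, which is why aligning the two series on a shared datetime index (as `get_table` does) matters before differencing.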
Note: range sliders and range selectors are available in version 1.9.7+. Run `pip install plotly --upgrade` to update your Plotly version ``` import plotly plotly.__version__ ``` ## Basic Range Slider and Range Selectors ``` from cswd import query_adjusted_pricing OHLCV = ['open','high','low','close','volume'] df = query_adjusted_pricing('000001','2007-10-1','2009-4-1',OHLCV,True) import plotly.plotly as py import plotly.graph_objs as go from datetime import datetime trace = go.Scatter(x=df.index, y=df.high) data = [trace] layout = dict( title='Time series with range slider and selectors', xaxis=dict( rangeselector=dict( buttons=list([ dict(count=1, label='1w', step='week', stepmode='backward'), dict(count=6, label='6m', step='month', stepmode='backward'), dict(count=1, label='YTD', step='year', stepmode='todate'), dict(count=1, label='1y', step='year', stepmode='backward'), dict(step='all') ]) ), rangeslider=dict(), type='date' ) ) fig = dict(data=data, layout=layout) py.iplot(fig) ``` ## Range Slider with Vertically Stacked Subplots ``` import plotly.plotly as py import plotly.graph_objs as go trace1 = go.Scatter( x = ["2013-01-15", "2013-01-29", "2013-02-26", "2013-04-19", "2013-07-02", "2013-08-27", "2013-10-22", "2014-01-20", "2014-05-05", "2014-07-01", "2015-02-09", "2015-04-13", "2015-05-13", "2015-06-08", "2015-08-05", "2016-02-25"], y = ["8", "3", "2", "10", "5", "5", "6", "8", "3", "3", "7", "5", "10", "10", "9", "14"], name = "var0", text = ["8", "3", "2", "10", "5", "5", "6", "8", "3", "3", "7", "5", "10", "10", "9", "14"], yaxis = "y", ) trace2 = go.Scatter( x = ["2015-04-13", "2015-05-13", "2015-06-08", "2015-08-05", "2016-02-25"], y = ["53.0", "69.0", "89.0", "41.0", "41.0"], name = "var1", text = ["53.0", "69.0", "89.0", "41.0", "41.0"], yaxis = "y2", ) trace3 = go.Scatter( x = ["2013-01-29", "2013-02-26", "2013-04-19", "2013-07-02", "2013-08-27", "2013-10-22", "2014-01-20", "2014-04-09", "2014-05-05", "2014-07-01", "2014-09-30", "2015-02-09", "2015-04-13", "2015-06-08", 
"2016-02-25"], y = ["9.6", "4.6", "2.7", "8.3", "18", "7.3", "3", "7.5", "1.0", "0.5", "2.8", "9.2", "13", "5.8", "6.9"], name = "var2", text = ["9.6", "4.6", "2.7", "8.3", "18", "7.3", "3", "7.5", "1.0", "0.5", "2.8", "9.2", "13", "5.8", "6.9"], yaxis = "y3", ) trace4 = go.Scatter( x = ["2013-01-29", "2013-02-26", "2013-04-19", "2013-07-02", "2013-08-27", "2013-10-22", "2014-01-20", "2014-04-09", "2014-05-05", "2014-07-01", "2014-09-30", "2015-02-09", "2015-04-13", "2015-06-08", "2016-02-25"], y = ["6.9", "7.5", "7.3", "7.3", "6.9", "7.1", "8", "7.8", "7.4", "7.9", "7.9", "7.6", "7.2", "7.2", "8.0"], name = "var3", text = ["6.9", "7.5", "7.3", "7.3", "6.9", "7.1", "8", "7.8", "7.4", "7.9", "7.9", "7.6", "7.2", "7.2", "8.0"], yaxis = "y4", ) trace5 = go.Scatter( x = ["2013-02-26", "2013-07-02", "2013-09-26", "2013-10-22", "2013-12-04", "2014-01-02", "2014-01-20", "2014-05-05", "2014-07-01", "2015-02-09", "2015-05-05"], y = ["290", "1078", "263", "407", "660", "740", "33", "374", "95", "734", "3000"], name = "var4", text = ["290", "1078", "263", "407", "660", "740", "33", "374", "95", "734", "3000"], yaxis = "y5", ) data = go.Data([trace1, trace2, trace3, trace4, trace5]) # style all the traces for k in range(len(data)): data[k].update( { "type": "scatter", "hoverinfo": "name+x+text", "line": {"width": 0.5}, "marker": {"size": 8}, "mode": "lines+markers", "showlegend": False } ) layout = { "annotations": [ { "x": "2013-06-01", "y": 0, "arrowcolor": "rgba(63, 81, 181, 0.2)", "arrowsize": 0.3, "ax": 0, "ay": 30, "text": "state1", "xref": "x", "yanchor": "bottom", "yref": "y" }, { "x": "2014-09-13", "y": 0, "arrowcolor": "rgba(76, 175, 80, 0.1)", "arrowsize": 0.3, "ax": 0, "ay": 30, "text": "state2", "xref": "x", "yanchor": "bottom", "yref": "y" } ], "dragmode": "zoom", "hovermode": "x", "legend": {"traceorder": "reversed"}, "margin": { "t": 100, "b": 100 }, "shapes": [ { "fillcolor": "rgba(63, 81, 181, 0.2)", "line": {"width": 0}, "type": "rect", "x0": "2013-01-15", 
"x1": "2013-10-17", "xref": "x", "y0": 0, "y1": 0.95, "yref": "paper" }, { "fillcolor": "rgba(76, 175, 80, 0.1)", "line": {"width": 0}, "type": "rect", "x0": "2013-10-22", "x1": "2015-08-05", "xref": "x", "y0": 0, "y1": 0.95, "yref": "paper" } ], "xaxis": { "autorange": True, "range": ["2012-10-31 18:36:37.3129", "2016-05-10 05:23:22.6871"], "rangeslider": { "autorange": True, "range": ["2012-10-31 18:36:37.3129", "2016-05-10 05:23:22.6871"] }, "type": "date" }, "yaxis": { "anchor": "x", "autorange": True, "domain": [0, 0.2], "linecolor": "#673ab7", "mirror": True, "range": [-60.0858369099, 28.4406294707], "showline": True, "side": "right", "tickfont": {"color": "#673ab7"}, "tickmode": "auto", "ticks": "", "titlefont": {"color": "#673ab7"}, "type": "linear", "zeroline": False }, "yaxis2": { "anchor": "x", "autorange": True, "domain": [0.2, 0.4], "linecolor": "#E91E63", "mirror": True, "range": [29.3787777032, 100.621222297], "showline": True, "side": "right", "tickfont": {"color": "#E91E63"}, "tickmode": "auto", "ticks": "", "titlefont": {"color": "#E91E63"}, "type": "linear", "zeroline": False }, "yaxis3": { "anchor": "x", "autorange": True, "domain": [0.4, 0.6], "linecolor": "#795548", "mirror": True, "range": [-3.73690396239, 22.2369039624], "showline": True, "side": "right", "tickfont": {"color": "#795548"}, "tickmode": "auto", "ticks": "", "title": "mg/L", "titlefont": {"color": "#795548"}, "type": "linear", "zeroline": False }, "yaxis4": { "anchor": "x", "autorange": True, "domain": [0.6, 0.8], "linecolor": "#607d8b", "mirror": True, "range": [6.63368032236, 8.26631967764], "showline": True, "side": "right", "tickfont": {"color": "#607d8b"}, "tickmode": "auto", "ticks": "", "title": "mmol/L", "titlefont": {"color": "#607d8b"}, "type": "linear", "zeroline": False }, "yaxis5": { "anchor": "x", "autorange": True, "domain": [0.8, 1], "linecolor": "#2196F3", "mirror": True, "range": [-685.336803224, 3718.33680322], "showline": True, "side": "right", "tickfont": 
{"color": "#2196F3"}, "tickmode": "auto", "ticks": "", "title": "mg/Kg", "titlefont": {"color": "#2196F3"}, "type": "linear", "zeroline": False } } fig = go.Figure(data=data, layout=layout) py.iplot(fig) ```
``` import rainbowhat as rh from enum import Enum import subprocess import re import time import itertools class RGBColors(Enum): RED = (50, 0, 0) YELLOW = (50, 50, 0) PINK = (50, 10, 12) GREEN = (0, 50, 0) PURPLE = (50, 0, 50) ORANGE = (50, 22, 0) BLUE = (0, 0, 50) def run_rainbow(it): for pixel, color in it: rh.rainbow.clear() rh.rainbow.set_pixel(pixel, *(color.value)) rh.rainbow.show() time.sleep(0.1) def open_rainbow(): colours = list(RGBColors) for pixel in itertools.chain(reversed(range(4)),range(4)): rh.rainbow.clear() rh.rainbow.set_pixel(pixel, *(colours[pixel].value)) rh.rainbow.set_pixel(-pixel+6, *(colours[-pixel+6].value)) rh.rainbow.show() time.sleep(0.4) rh.rainbow.clear() rh.rainbow.show() def right_to_left(): run_rainbow(zip(range(7), RGBColors)) def left_to_right(): run_rainbow(reversed(tuple(zip(range(7), RGBColors)))) rh.rainbow.clear() rh.display.clear() for idx in range(1, 11): rh.display.print_str(f'{idx}') rh.display.show() if (idx % 2) == 1: left_to_right() else: right_to_left() rh.rainbow.clear() rh.rainbow.show() rh.display.print_str('DONE') rh.display.show() TEMP_EXTRACTOR = re.compile(r"[^=]+=(\d+(?:\.\d+))'C\n$") def get_temp(real_temp): cpu_temp_str = subprocess.check_output("vcgencmd measure_temp", shell=True).decode("utf-8") temp = rh.weather.temperature() cpu_temp = float(TEMP_EXTRACTOR.match(cpu_temp_str)[1]) # print(temp, cpu_temp) FACTOR = -(cpu_temp - temp)/(real_temp - temp) print('FACTOR', FACTOR) FACTOR = 0.6976542896954581 return temp - ((cpu_temp - temp)/FACTOR) @rh.touch.A.press() def touch_a(channel): rh.lights.rgb(1, 0, 0) rh.display.clear() rh.display.print_str("TEMP") rh.display.show() time.sleep(1) rh.display.print_float(get_temp(20.5)) rh.display.show() open_rainbow() def release(channel): rh.display.clear() rh.display.show() rh.rainbow.clear() rh.rainbow.show() rh.lights.rgb(0, 0, 0) @rh.touch.B.press() def touch_b(channel): rh.lights.rgb(0, 1, 0) rh.display.clear() rh.display.print_str(" B ") rh.display.show() 
time.sleep(0.5) right_to_left() @rh.touch.C.press() def touch_c(channel): rh.lights.rgb(0, 0, 1) rh.display.clear() rh.display.print_str(" C") rh.display.show() time.sleep(0.5) left_to_right() rh.touch.A.release()(release) rh.touch.B.release()(release) rh.touch.C.release()(release) ```
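The correction inside `get_temp()` above assumes the on-board sensor reads high because of nearby CPU heat, and subtracts a fraction of the CPU/sensor gap. Factored out as a pure function it is easier to test (this is a sketch of the same formula, with the notebook's hard-coded calibration factor as the default):

```python
def compensate_temp(sensor_temp, cpu_temp, factor=0.6976542896954581):
    """Estimate ambient temperature from a sensor reading that is
    biased upward by CPU heat: subtract (gap / factor) from the reading."""
    return sensor_temp - (cpu_temp - sensor_temp) / factor

# with factor=2.0, half of the 10-degree CPU/sensor gap is subtracted
print(compensate_temp(30.0, 40.0, factor=2.0))
```

The factor itself was calibrated in the notebook by comparing a reading against a known `real_temp`; when CPU and sensor temperatures agree, no correction is applied.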
## Agenda 1. Make sure everyone has the necessary programs: - Python - Jupyter - Github 2. Intro to Git 3. Clone Titanic book 4. Start cleaning data ## 5 basic steps to Machine Learning: #### 1. Data gathering/cleaning/feature development 2. Model Selection 3. Fitting the model 4. Make predictions 5. Validate the model ``` # just sticking with Pandas today to manipulate and clean the Titanic dataset import pandas as pd # you will need to map these paths to where your files are located train = pd.read_csv('path/to/train.csv') test = pd.read_csv('path/to/test.csv') train.head() test.head() # create a dataframe to hold passenger ID and eventually our submission Submission = pd.DataFrame() Submission['PassengerId'] = test['PassengerId'] # we are going to remove the target (Survived) from train and combine train and test so that our features are consistent # must be careful when doing this to not drop rows from either set during cleaning y = train.Survived combined = pd.concat([train.drop(['Survived'], axis=1),test]) print(y.shape) print(combined.shape) ``` #### We are going to go through each feature one by one and develop our final train and test features ``` combined.info() combined.head() # passenger ID is just an incremented count of passengers, no value, so let's drop it combined = combined.drop('PassengerId', axis= 1) combined.head(20) # at first glance Name appears to have no value, but notice the title within the name ex. 
Mrs, Mr, Master, etc # these may be valuable as they may signify crew or socioeconomic class # We need to split out the Title within the Name column, we can name it Title combined['Title'] = combined['Name'].map(lambda name:name.split(',')[1].split('.')[0].strip()) # and we can drop Name combined = combined.drop('Name', axis=1) combined.head(20) # one last action to perform on the Name, or now Title: we need to create dummy variables so our value is numeric combined = pd.get_dummies(combined, columns=['Title'], drop_first=False) combined.head() # and we must do the same thing to Sex combined = pd.get_dummies(combined, columns=['Sex'], drop_first=True) combined.head() # overall Age is fine but we are missing values: we need 1309 and we only have 1046. We will do a simple imputation # We are also missing 1 Fare value, and 2 Embarked values # We will do a slightly different imputation for each combined['Age'] = combined.Age.fillna(combined.Age.mean()) combined['Fare'] = combined.Fare.fillna(combined.Fare.median()) combined['Embarked'] = combined.Embarked.fillna(combined.Embarked.ffill()) # and we need to create dummy variables for Embarked combined = pd.get_dummies(combined, columns=['Embarked'], drop_first=False) combined.head() combined.info() # we just have Ticket and Cabin left # let's first look at the first 20 tickets and see if there may be any value there: combined.Ticket[:20] # there doesn't appear to be anything worth extracting so let's drop Ticket combined = combined.drop('Ticket', axis=1) combined.head() # Finally we are left with Cabin, let's take a look at the first 50 combined.Cabin[:50] # looks like there are a lot of missing values, but they might mean they didn't have a cabin and the significance of A, B, C # might be something of value. 
Let's parse it out and assign the NaNs a U for unknown combined['Cabin'] = combined.Cabin.fillna('U') combined['Cabin'] = combined['Cabin'].map(lambda c : c[0]) # Finally apply dummy variables to Cabin combined = pd.get_dummies(combined, columns=['Cabin'], drop_first=False) combined.head() combined.info() # we are good to go. No missing values and everything is numeric # let's put the test and train set back together # remembering the length of the train set y.shape train = combined.iloc[:891] test = combined.iloc[891:] print(train.shape) print(test.shape) ``` ### We will pick up from here at our next meeting with model selection and introduction to pipelines ``` from sklearn.svm import LinearSVC from sklearn.ensemble import RandomForestClassifier X_train = train.values X_test = test.values y = y.values clf = RandomForestClassifier() clf.fit(X_train, y) Submission['Survived'] = clf.predict(X_test) Submission.head() # you will need to map the submission file path to your computer Submission.to_csv('path/to/submission.csv', index=False, header=True) ```
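The concat-then-split pattern used above is easy to get wrong if cleaning adds or drops rows. A small self-check on synthetic data (illustrative column names, not the Titanic schema) confirms the round trip preserves both shapes:

```python
import pandas as pd

train = pd.DataFrame({"a": range(5), "target": [0, 1, 0, 1, 0]})
test = pd.DataFrame({"a": range(5, 8)})

y = train["target"]
combined = pd.concat([train.drop(columns="target"), test])

# ...feature engineering on `combined` must not add or drop rows...

train_X = combined.iloc[:len(y)]   # first len(y) rows are the train set
test_X = combined.iloc[len(y):]    # the rest belong to the test set
print(train_X.shape, test_X.shape)
```

Using `len(y)` instead of a hard-coded 891 makes the split robust if the training set ever changes size.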
# Kernel density estimation ``` # Import all libraries needed for the exploration # General syntax to import specific functions in a library: ##from (library) import (specific library function) from pandas import DataFrame, read_csv # General syntax to import a library but no functions: ##import (library) as (give the library a nickname/alias) import matplotlib.pyplot as plt import seaborn as sns import pandas as pd #this is how we usually import pandas import numpy as np #this is how we usually import numpy import sys #only needed to determine Python version number import matplotlib #only needed to determine Matplotlib version number import tables # pytables is needed to read and write hdf5 files import openpyxl # is used to read and write MS Excel files import xgboost import math from scipy.stats import pearsonr from sklearn.linear_model import LinearRegression from sklearn import tree, linear_model from sklearn.model_selection import cross_validate, cross_val_score from sklearn.model_selection import train_test_split from sklearn.metrics import explained_variance_score from sklearn.neighbors import KernelDensity from scipy.stats import gaussian_kde from statsmodels.nonparametric.kde import KDEUnivariate from statsmodels.nonparametric.kernel_density import KDEMultivariate # Enable inline plotting %matplotlib inline # Suppress some warnings: import warnings warnings.filterwarnings('ignore') print('Python version ' + sys.version) print('Pandas version ' + pd.__version__) print('Numpy version ' + np.__version__) print('Matplotlib version ' + matplotlib.__version__) print('Seaborn version ' + sns.__version__) ``` ## Training data ``` data = pd.read_csv('../data/train.csv') ``` ### Explore the data ``` # Check the number of data points in the data set print('No observations:', len(data)) # Check the number of features in the data set print('No variables:', len(data.columns)) # Check the data types print(data.dtypes.unique()) data.shape data.columns for i, col in 
enumerate(data.columns, start=0): print(i, col) # We may have some categorical features, let's check them data.select_dtypes(include=['O']).columns.tolist() # Check any number of columns with NaN print(data.isnull().any().sum(), ' / ', len(data.columns)) # Check number of data points with any NaN print(data.isnull().any(axis=1).sum(), ' / ', len(data)) ``` ### Select features and targets ``` features = data.iloc[:,9:-1].columns.tolist() target = data.iloc[:,-1].name all_lh_features = [ 'CSF', 'CC_Posterior', 'CC_Mid_Posterior', 'CC_Central', 'CC_Mid_Anterior', 'CC_Anterior', 'EstimatedTotalIntraCranialVol', 'Left-Lateral-Ventricle', 'Left-Inf-Lat-Vent', 'Left-Cerebellum-White-Matter', 'Left-Cerebellum-Cortex', 'Left-Thalamus-Proper', 'Left-Caudate', 'Left-Putamen', 'Left-Pallidum', 'Left-Hippocampus', 'Left-Amygdala', 'Left-Accumbens-area', 'Left-VentralDC', 'Left-vessel', 'Left-choroid-plexus', 'Left-WM-hypointensities', 'Left-non-WM-hypointensities', 'lhCortexVol', 'lhCerebralWhiteMatterVol', 'lhSurfaceHoles', 'lh.aparc.thickness', 'lh_bankssts_thickness', 'lh_caudalanteriorcingulate_thickness', 'lh_caudalmiddlefrontal_thickness', 'lh_cuneus_thickness', 'lh_entorhinal_thickness', 'lh_fusiform_thickness', 'lh_inferiorparietal_thickness', 'lh_inferiortemporal_thickness', 'lh_isthmuscingulate_thickness', 'lh_lateraloccipital_thickness', 'lh_lateralorbitofrontal_thickness', 'lh_lingual_thickness', 'lh_medialorbitofrontal_thickness', 'lh_middletemporal_thickness', 'lh_parahippocampal_thickness', 'lh_paracentral_thickness', 'lh_parsopercularis_thickness', 'lh_parsorbitalis_thickness', 'lh_parstriangularis_thickness', 'lh_pericalcarine_thickness', 'lh_postcentral_thickness', 'lh_posteriorcingulate_thickness', 'lh_precentral_thickness', 'lh_precuneus_thickness', 'lh_rostralanteriorcingulate_thickness', 'lh_rostralmiddlefrontal_thickness', 'lh_superiorfrontal_thickness', 'lh_superiorparietal_thickness', 'lh_superiortemporal_thickness', 'lh_supramarginal_thickness', 
'lh_frontalpole_thickness', 'lh_temporalpole_thickness', 'lh_transversetemporal_thickness',
 'lh_insula_thickness', 'lh_MeanThickness_thickness' ]

data_lh = data[all_lh_features]
data_lh.describe().T

dropcolumns = [ 'EstimatedTotalIntraCranialVol', 'CSF', 'CC_Posterior', 'CC_Mid_Posterior',
 'CC_Central', 'CC_Mid_Anterior', 'CC_Anterior' ]
df_lh = data_lh.drop(dropcolumns, axis=1)
df_lh

target

from sklearn.neighbors import KernelDensity
from scipy.stats import gaussian_kde
from statsmodels.nonparametric.kde import KDEUnivariate
from statsmodels.nonparametric.kernel_density import KDEMultivariate

def kde_scipy(x, x_grid, bandwidth=0.2, **kwargs):
    """Kernel Density Estimation with Scipy"""
    # Note that scipy weights its bandwidth by the covariance of the
    # input data. To make the results comparable to the other methods,
    # we divide the bandwidth by the sample standard deviation here.
    kde = gaussian_kde(x, bw_method=bandwidth / x.std(ddof=1), **kwargs)
    return kde.evaluate(x_grid)

def kde_statsmodels_u(x, x_grid, bandwidth=0.2, **kwargs):
    """Univariate Kernel Density Estimation with Statsmodels"""
    kde = KDEUnivariate(x)
    kde.fit(bw=bandwidth, **kwargs)
    return kde.evaluate(x_grid)

def kde_statsmodels_m(x, x_grid, bandwidth=0.2, **kwargs):
    """Multivariate Kernel Density Estimation with Statsmodels"""
    kde = KDEMultivariate(x, bw=bandwidth * np.ones_like(x), var_type='c', **kwargs)
    return kde.pdf(x_grid)

def kde_sklearn(x, x_grid, bandwidth=0.2, **kwargs):
    """Kernel Density Estimation with Scikit-learn"""
    kde_skl = KernelDensity(bandwidth=bandwidth, **kwargs)
    kde_skl.fit(x[:, np.newaxis])
    # score_samples() returns the log-likelihood of the samples
    log_pdf = kde_skl.score_samples(x_grid[:, np.newaxis])
    return np.exp(log_pdf)

kde_funcs = [kde_statsmodels_u, kde_statsmodels_m, kde_scipy, kde_sklearn]
kde_funcnames = ['Statsmodels-U', 'Statsmodels-M', 'Scipy', 'Scikit-learn']

# Python 3 print functions (the original used Python 2 print statements)
print("Package Versions:")
import sklearn; print("  scikit-learn:", sklearn.__version__)
import scipy; print("  scipy:", scipy.__version__)
import statsmodels; print("  statsmodels:", statsmodels.__version__)
```

### Discretization of Age variable

Quantile-based discretization function. Discretize a variable into equal-sized buckets based on rank or on sample quantiles. For example, 1000 values for 10 quantiles would produce a Categorical object indicating quantile membership for each data point.

```
pd.qcut(data['Age'], 8).head(1)
```

#### Columns with missing values

```
def missing(dff):
    print(round((dff.isnull().sum() * 100 / len(dff)), 4).sort_values(ascending=False))

missing(df_lh)
```

#### How to remove columns with too many missing values in Python

https://stackoverflow.com/questions/45515031/how-to-remove-columns-with-too-many-missing-values-in-python

```
def rmissingvaluecol(dff, threshold):
    l = []
    l = list(dff.drop(dff.loc[:, list((100 * (dff.isnull().sum() / len(dff.index)) >= threshold))].columns, axis=1).columns.values)
    print("# Columns having more than %s percent missing values:" % threshold, (dff.shape[1] - len(l)))
    print("Columns:\n", list(set(list((dff.columns.values))) - set(l)))
    return l

# Here the threshold is 10%, which means we are going to drop columns having more than 10% missing values
rmissingvaluecol(data, 10)

# Now create a new dataframe excluding these columns
l = rmissingvaluecol(data, 10)
data1 = data[l]
# missing(data[features])
```

#### Correlations between features and target

```
correlations = {}
for f in features:
    data_temp = data1[[f, target]]
    x1 = data_temp[f].values
    x2 = data_temp[target].values
    key = f + ' vs ' + target
    correlations[key] = pearsonr(x1, x2)[0]

data_correlations = pd.DataFrame(correlations, index=['Value']).T
data_correlations.loc[data_correlations['Value'].abs().sort_values(ascending=False).index]
```

#### The top features above are the ones most correlated with the target "Age"

```
y = data.loc[:, ['lh_insula_thickness', 'rh_insula_thickness', target]].sort_values(target, ascending=True).values
x = np.arange(y.shape[0])
%matplotlib inline
plt.subplot(3,1,1)
plt.plot(x, y[:,0])
plt.title('lh_insula_thickness and rh_insula_thickness vs Age')
plt.ylabel('lh_insula_thickness')
plt.subplot(3,1,2)
plt.plot(x, y[:,1])
plt.ylabel('rh_insula_thickness')
plt.subplot(3,1,3)
plt.plot(x, y[:,2], 'r')
plt.ylabel("Age")
plt.show()
```

### Predicting Age

```
# Train a simple linear regression model
regr = linear_model.LinearRegression()
new_data = data[features]
X = new_data.values
y = data.Age.values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
regr.fit(X_train, y_train)
print(regr.predict(X_test))
regr.score(X_test, y_test)

# Calculate the Root Mean Squared Error
print("RMSE: %.2f" % math.sqrt(np.mean((regr.predict(X_test) - y_test) ** 2)))

# Let's try the XGBoost algorithm to see if we can get better results
xgb = xgboost.XGBRegressor(n_estimators=100, learning_rate=0.08, gamma=0, subsample=0.75,
                           colsample_bytree=1, max_depth=7)
# traindf, testdf = train_test_split(X_train, test_size=0.3)  # unused
xgb.fit(X_train, y_train)
predictions = xgb.predict(X_test)
# explained_variance_score expects (y_true, y_pred)
print(explained_variance_score(y_test, predictions))
```

### This is worse than a simple regression model

We can use `.describe()` to calculate simple **descriptive statistics** for the dataset (rounding to 3 decimals):

```
new_data.describe().round(3).T
```

Computing the **pairwise correlation of columns** (features). The method can be 'pearson' (default), 'kendall', or 'spearman'.
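As a toy illustration of how the `method` argument changes the result (synthetic numbers, not the study data): for a monotonic but nonlinear pair, Spearman (rank) correlation is exactly 1 while Pearson is slightly below 1.

```python
import pandas as pd

# Synthetic monotonic-but-nonlinear pair (illustration only)
toy = pd.DataFrame({'x': [1, 2, 3, 4], 'y': [1, 4, 9, 16]})
pearson = toy['x'].corr(toy['y'], method='pearson')    # ~0.984: linear association
spearman = toy['x'].corr(toy['y'], method='spearman')  # 1.0: rank association is perfect
print(pearson, spearman)
```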
```
new_data.corr().round(2)
new_data.describe()
```

Splitting the `data` DataFrame **into groups** by `Sex`:

```
grouped = data.groupby('Sex')
grouped.groups
```

Describe the group-wise `Age` summary statistics:

```
print('Age:')
grouped['Age'].describe()
```

Iterating through the grouped data is very natural:

```
for name, group in grouped:
    print(name, ':')
    print(group.describe().round(2).head(3))
```

**Group-wise feature correlations**

```
data.groupby('Age').corr().round(3)
```

DataFrame has an `assign()` method that allows you to easily create new columns that are potentially derived from existing columns. (The examples below assume the classic iris dataset is loaded as `iris`.)

```
iris.assign(sepal_ratio = iris['SepalWidth'] / iris['SepalLength']).head().round(3)
```

In the example above, we inserted a precomputed value. <br> We can also pass in a function of one argument to be evaluated on the DataFrame being assigned to.

```
iris.assign(sepal_ratio = lambda x: (x['SepalWidth'] / x['SepalLength'])).head().round(3)
```

`assign` always returns a copy of the data, leaving the original DataFrame untouched, e.g.

```
iris.head(2)
```

Passing a callable, as opposed to an actual value to be inserted, is useful when you don’t have a reference to the DataFrame at hand. This is common when using `assign` in a chain of operations.
For example, we can limit the DataFrame to just those observations with a SepalLength greater than 5, calculate the ratio, and plot:

```
(iris.query('SepalLength > 5')
     .assign(SepalRatio = lambda x: x.SepalWidth / x.SepalLength,
             PetalRatio = lambda x: x.PetalWidth / x.PetalLength)
     .plot(kind='scatter', x='SepalRatio', y='PetalRatio'))
```

### Classification

*Organizing data as X and y before classification*

```
from sklearn.preprocessing import LabelEncoder

# dfX5Y = pd.read_csv('../results/02_X5Y.csv', sep=',')
# print(dfX5Y.info())
# print(dfX5Y.describe())
# dfX5Y

# Feature importance XGBoost:
# X = df.loc[:, ['CC_Mid_Anterior_w3', 'BrainSegVol-to-eTIV_w3', 'CSF_w2']] # Top three important features
# Feature importance RF (Stroop_3):
X = df.loc[:, ['BrainSegVol-to-eTIV_w3', 'CC_Mid_Anterior_w3', 'ic04-ic02']] # Top three important features
# Feature importance RF predicting Stroop_1_R_3:
# X = df.loc[:, ['ic09-ic06', 'ic10-ic01', 'ic05-ic03']] # Top three important features
# Feature importance RF predicting Stroop_2_R_3:
# X = df.loc[:, ['WM-hypointensities_w3', 'ic17-ic04', 'Left-vessel_w3']] # Top three important features
# X = df.loc[:, ['BrainSegVol-to-eTIV_w3', 'ic04-ic02']] # Two important features
# X = df.loc[:, ['BrainSegVol-to-eTIV_w3', 'CC_Mid_Anterior_w3']] # Top two important features

Y = df.loc[:, ['Stroop_3_cat']]
y = Y.to_numpy().ravel()  # .as_matrix() was removed in pandas 1.0
np.unique(y)
X.columns

from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier
from sklearn.model_selection import LeaveOneOut
from sklearn.model_selection import KFold
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn import preprocessing

# X = dfX5Y.loc[:, dfX5Y.columns != 'grp'] # Top five important connections
# X = dfX5Y.loc[:, ['ic09-ic02', 'ic04-ic01']] # Top two important connections
# X = df.loc[:, ['LatVent_w2', 'LatVent_w3', 'ic09-ic02', 'ic04-ic01']]
# X = df.loc[:, ['LatVent_w3', 'ic09-ic02']]
# X = df.loc[:, ['LatVent_w2', 'LatVent_w3']]
# Y = df.loc[:, ['Stroop_3_cat']]
# X = df.loc[:, ['BrainSegVol-to-eTIV_w3', 'CC_Mid_Anterior_w3', 'ic04-ic02']]
# Y = df.loc[:, ['Stroop_3_cat']]
# y = Y.to_numpy().ravel()

rs = 42 # random_state (42)
hls = 3 # MLP hidden layer size (3 or 4)

# https://stackoverflow.com/questions/37659970/how-does-sklearn-compute-the-precision-score-metric
myaverage = 'weighted' # For multilabel classification: 'micro', 'macro', 'samples', 'weighted'
# see: https://stackoverflow.com/questions/37659970/how-does-sklearn-compute-the-precision-score-metric
# http://scikit-learn.org/stable/modules/neural_networks_supervised.html

# Class MLPClassifier implements a multi-layer perceptron (MLP) algorithm that
# trains using backpropagation.
# So what about the size of the hidden layer(s) -- how many neurons?
# There are some empirically derived rules of thumb; of these, the most
# commonly relied on is 'the optimal size of the hidden layer is usually
# between the size of the input and size of the output layers'.
# Jeff Heaton, author of Introduction to Neural Networks in Java, offers a few more.
#
# In sum, for most problems, one could probably get decent performance (even without
# a second optimization step) by setting the hidden layer configuration using
# just two rules:
# (i) the number of hidden layers equals one; and
# (ii) the number of neurons in that layer is the mean of the neurons in the
# input and output layers.
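# As a worked example of the two rules above (hypothetical sizes, not taken from
# the data): with 3 input features and 3 outcome classes, the suggested hidden
# width is the mean of the input and output sizes.
_n_in, _n_out = 3, 3
_hls_rule_of_thumb = (_n_in + _n_out) // 2  # = 3, i.e. hidden_layer_sizes=(3,), matching hls above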
# Compute the precision # The precision is the ratio tp / (tp + fp) where tp is the number of true positives and # fp the number of false positives. The precision is intuitively the ability of the # classifier not to label as positive a sample that is negative. # Compute the recall # The recall is the ratio tp / (tp + fn) where tp is the number of true positives and # fn the number of false negatives. The recall is intuitively the ability of the # classifier to find all the positive samples. # Compute the F1 score, also known as balanced F-score or F-measure # The F1 score can be interpreted as a weighted average of the precision and recall, # where an F1 score reaches its best value at 1 and worst score at 0. # The relative contribution of precision and recall to the F1 score are equal. # The formula for the F1 score is: # F1 = 2 * (precision * recall) / (precision + recall) # In the multi-class and multi-label case, this is the weighted average of the F1 score of each class. pipe_clf1 = Pipeline([ ('scl', StandardScaler()), #('pca', PCA(n_components=2)), ('clf1', LogisticRegression(C=1., solver='saga', n_jobs=1, multi_class='multinomial', random_state=rs))]) pipe_clf2 = Pipeline([ ('scl', StandardScaler()), #('pca', PCA(n_components=2)), ('clf2', MLPClassifier(hidden_layer_sizes=(hls, ), # =(100, ) ; =(4, ) activation='relu', solver='adam', alpha=0.0001, batch_size='auto', learning_rate='constant', learning_rate_init=0.001, power_t=0.5, max_iter=5000, shuffle=True, random_state=rs, tol=0.0001, verbose=False, warm_start=False, momentum=0.9, nesterovs_momentum=True, early_stopping=False, validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-08))]) # pipe_clf3 = Pipeline([ # ('scl', StandardScaler()), # #('pca', PCA(n_components=2)), # ('clf3', RandomForestClassifier(n_estimators=100, criterion='gini', max_depth=None, # min_samples_split=2, min_samples_leaf=1, # min_weight_fraction_leaf=0.0, max_features='auto', # max_leaf_nodes=None, # 
min_impurity_split=1e-07, # bootstrap=True, oob_score=False, n_jobs=1, # random_state=rs, verbose=0, warm_start=False, # class_weight=None))]) # pipe_clf3 = Pipeline([ # ('scl', StandardScaler()), # #('pca', PCA(n_components=2)), # ('clf3', GradientBoostingClassifier(init=None, learning_rate=0.05, loss='deviance', # max_depth=None, max_features=None, max_leaf_nodes=None, # min_samples_leaf=1, min_samples_split=2, # min_weight_fraction_leaf=0.0, n_estimators=100, # presort='auto', random_state=rs, subsample=1.0, verbose=0, # warm_start=False) pipe_clf3 = Pipeline([ ('scl', StandardScaler()), #('pca', PCA(n_components=2)), ('clf3', XGBClassifier(base_score=0.5, colsample_bylevel=1, colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0, max_depth=3, min_child_weight=1, missing=None, n_estimators=1000, nthread=-1, objective='multi:softprob', reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=rs, silent=True, subsample=1))]) pipe_clf4 = Pipeline([ ('scl', StandardScaler()), #('pca', PCA(n_components=2)), ('clf4', SVC(C=1.0, probability=True, random_state=rs))]) # ('clf4', SVC(C=1.0, random_state=rs))]) pipe_clf5 = Pipeline([ ('scl', StandardScaler()), #('pca', PCA(n_components=2)), ('clf5', KNeighborsClassifier(n_neighbors=5, weights='uniform', algorithm='kd_tree', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=1))]) pipe_clf_vote = Pipeline([ # ('scl', StandardScaler()), ('clf_vote', VotingClassifier( estimators=[('lr', pipe_clf1), ('mlp', pipe_clf2), ('rf', pipe_clf3), ('svc', pipe_clf4), ('knn', pipe_clf5)], voting = 'soft'))]) # voting = 'hard'))]) scores1_acc, scores2_acc, scores3_acc, scores4_acc, scores5_acc, scores_vote_acc = [], [], [], [], [], [] scores1_pre, scores2_pre, scores3_pre, scores4_pre, scores5_pre, scores_vote_pre = [], [], [], [], [], [] scores1_rec, scores2_rec, scores3_rec, scores4_rec, scores5_rec, scores_vote_rec = [], [], [], [], [], [] scores1_f1, scores2_f1, scores3_f1, scores4_f1, scores5_f1, scores_vote_f1 = 
[], [], [], [], [], [] n_splits = 10 # k=10 # n_splits = X.shape[0] # i.e. Leave One Out strategy # for train_index, test_index in LeaveOneOut.split(X): k=1 for train_index, test_index in \ StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=rs).split(X,y): print("Fold number:", k) #print("\nTRUE class:\n", list(y[test_index])) X_train, X_test = X.iloc[train_index], X.iloc[test_index] y_train, y_test = y[train_index], y[test_index] #clf1 = LogisticRegression print(" - LogisticRegression") pipe_clf1.fit(X_train, y_train) scores1_acc.append(accuracy_score(y_test, pipe_clf1.predict(X_test))) scores1_pre.append(precision_score(y_test, pipe_clf1.predict(X_test), average=myaverage)) scores1_rec.append(recall_score(y_test, pipe_clf1.predict(X_test), average=myaverage)) scores1_f1.append(f1_score(y_test, pipe_clf1.predict(X_test), average=myaverage)) print(' Precision: %.2f' % (precision_score(y_test, pipe_clf1.predict(X_test), average=myaverage))) print(' Recall: %.2f' % (recall_score(y_test, pipe_clf1.predict(X_test), average=myaverage))) #print("LR predicted:\n", list(pipe_clf1.predict(X_test))) #clf2 = MLPClassifier print(" - MLPClassifier") pipe_clf2.fit(X_train, y_train) scores2_acc.append(accuracy_score(y_test, pipe_clf2.predict(X_test))) scores2_pre.append(precision_score(y_test, pipe_clf2.predict(X_test), average=myaverage)) scores2_rec.append(recall_score(y_test, pipe_clf2.predict(X_test), average=myaverage)) scores2_f1.append(f1_score(y_test, pipe_clf2.predict(X_test), average=myaverage)) print(' Precision: %.2f' % (precision_score(y_test, pipe_clf2.predict(X_test), average=myaverage))) print(' Recall: %.2f' % (recall_score(y_test, pipe_clf2.predict(X_test), average=myaverage))) #print("MLP predicted:\n", list(pipe_clf2.predict(X_test))) #clf3 = RandomForestClassifier #print(" - RandomForestClassifier") #clf3 = XGBoost print(" - XGBoost") pipe_clf3.fit(X_train, y_train) scores3_acc.append(accuracy_score(y_test, pipe_clf3.predict(X_test))) 
scores3_pre.append(precision_score(y_test, pipe_clf3.predict(X_test), average=myaverage)) scores3_rec.append(recall_score(y_test, pipe_clf3.predict(X_test), average=myaverage)) scores3_f1.append(f1_score(y_test, pipe_clf3.predict(X_test), average=myaverage)) print(' Precision: %.2f' % (precision_score(y_test, pipe_clf3.predict(X_test), average=myaverage))) print(' Recall: %.2f' % (recall_score(y_test, pipe_clf3.predict(X_test), average=myaverage))) #print("RF predicted:\n", list(pipe_clf3.predict(X_test))) #print("XGB predicted:\n", list(pipe_clf3.predict(X_test))) #clf4 = svm.SVC() print(" - svm/SVC") pipe_clf4.fit(X_train, y_train) scores4_acc.append(accuracy_score(y_test, pipe_clf4.predict(X_test))) scores4_pre.append(precision_score(y_test, pipe_clf4.predict(X_test), average=myaverage)) scores4_rec.append(recall_score(y_test, pipe_clf4.predict(X_test), average=myaverage)) scores4_f1.append(f1_score(y_test, pipe_clf4.predict(X_test), average=myaverage)) print(' Precision: %.2f' % (precision_score(y_test, pipe_clf4.predict(X_test), average=myaverage))) print(' Recall: %.2f' % (recall_score(y_test, pipe_clf4.predict(X_test), average=myaverage))) #print("SVM predicted:\n", list(pipe_clf4.predict(X_test))) #clf5 = KNeighborsClassifier print(" - KNN") pipe_clf5.fit(X_train, y_train) scores5_acc.append(accuracy_score(y_test, pipe_clf5.predict(X_test))) scores5_pre.append(precision_score(y_test, pipe_clf5.predict(X_test), average=myaverage)) scores5_rec.append(recall_score(y_test, pipe_clf5.predict(X_test), average=myaverage)) scores5_f1.append(f1_score(y_test, pipe_clf5.predict(X_test), average=myaverage)) #print("KNN predicted:\n", list(pipe_clf5.predict(X_test))) #clf_vote = VotingClassifier print(" - VotingClassifier") pipe_clf_vote.fit(X_train, y_train) scores_vote_acc.append(accuracy_score(y_test, pipe_clf_vote.predict(X_test))) scores_vote_pre.append(precision_score(y_test, pipe_clf_vote.predict(X_test), average=myaverage)) 
scores_vote_rec.append(recall_score(y_test, pipe_clf_vote.predict(X_test), average=myaverage)) scores_vote_f1.append(f1_score(y_test, pipe_clf_vote.predict(X_test), average=myaverage)) print(' Precision: %.2f' % (precision_score(y_test, pipe_clf_vote.predict(X_test), average=myaverage))) print(' Recall: %.2f' % (recall_score(y_test, pipe_clf_vote.predict(X_test), average=myaverage))) k=k+1 print('\nPredictors:') print('X.columns = %s' % list(X.columns)) print('\nOutcome:') print(pd.qcut(df['Stroop_3_R_3'], 3).head(0)) print(np.unique(y)) print('\nSome hyperparameters:') print("MLP hidden_layer_size = %d" % (hls)) print("random_state = %d" % (rs)) print("score average = '%s'" % (myaverage)) print("\nLR : CV accuracy = %.3f +-%.3f (k=%d)" % (np.mean(scores1_acc), np.std(scores1_acc), n_splits)) print("MLP: CV accuracy = %.3f +-%.3f (k=%d)" % (np.mean(scores2_acc), np.std(scores2_acc), n_splits)) # print("RF : CV accuracy = %.3f +-%.3f (k=%d)" % (np.mean(scores3_acc), np.std(scores3_acc), n_splits)) print("XGB : CV accuracy = %.3f +-%.3f (k=%d)" % (np.mean(scores3_acc), np.std(scores3_acc), n_splits)) print("SVM: CV accuracy = %.3f +-%.3f (k=%d)" % (np.mean(scores4_acc), np.std(scores4_acc), n_splits)) print("KNN: CV accuracy = %.3f +-%.3f (k=%d)" % (np.mean(scores5_acc), np.std(scores5_acc), n_splits)) print("Voting: CV accuracy = %.3f +-%.3f (k=%d)" % (np.mean(scores_vote_acc), np.std(scores_vote_acc), n_splits)) print("\nLR : CV precision = %.3f +-%.3f (k=%d)" % (np.mean(scores1_pre), np.std(scores1_pre), n_splits)) print("MLP: CV precision = %.3f +-%.3f (k=%d)" % (np.mean(scores2_pre), np.std(scores2_pre), n_splits)) print("XGB : CV precision = %.3f +-%.3f (k=%d)" % (np.mean(scores3_pre), np.std(scores3_pre), n_splits)) print("SVM: CV precision = %.3f +-%.3f (k=%d)" % (np.mean(scores4_pre), np.std(scores4_pre), n_splits)) print("KNN: CV precision = %.3f +-%.3f (k=%d)" % (np.mean(scores5_pre), np.std(scores5_pre), n_splits)) print("Voting: CV precision = %.3f 
+-%.3f (k=%d)" % (np.mean(scores_vote_pre), np.std(scores_vote_pre), n_splits)) print("\nLR : CV recall = %.3f +-%.3f (k=%d)" % (np.mean(scores1_rec), np.std(scores1_rec), n_splits)) print("MLP: CV recall = %.3f +-%.3f (k=%d)" % (np.mean(scores2_rec), np.std(scores2_rec), n_splits)) print("XGB : CV recall = %.3f +-%.3f (k=%d)" % (np.mean(scores3_rec), np.std(scores3_rec), n_splits)) print("SVM: CV recall = %.3f +-%.3f (k=%d)" % (np.mean(scores4_rec), np.std(scores4_rec), n_splits)) print("KNN: CV recall = %.3f +-%.3f (k=%d)" % (np.mean(scores5_rec), np.std(scores5_rec), n_splits)) print("Voting: CV recall = %.3f +-%.3f (k=%d)" % (np.mean(scores_vote_rec), np.std(scores_vote_rec), n_splits)) print("\nLR : CV F1-score = %.3f +-%.3f (k=%d)" % (np.mean(scores1_f1), np.std(scores1_f1), n_splits)) print("MLP: CV F1-score = %.3f +-%.3f (k=%d)" % (np.mean(scores2_f1), np.std(scores2_f1), n_splits)) print("XGB : CV F1-score = %.3f +-%.3f (k=%d)" % (np.mean(scores3_f1), np.std(scores3_f1), n_splits)) print("SVM: CV F1-score = %.3f +-%.3f (k=%d)" % (np.mean(scores4_f1), np.std(scores4_f1), n_splits)) print("KNN: CV F1-score = %.3f +-%.3f (k=%d)" % (np.mean(scores5_f1), np.std(scores5_f1), n_splits)) print("Voting: CV F1-score = %.3f +-%.3f (k=%d)" % (np.mean(scores_vote_f1), np.std(scores_vote_f1), n_splits)) ```
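The fold loop above can be condensed; here is a minimal self-contained sketch of the same pattern — a soft `VotingClassifier` over scaled pipelines scored with `StratifiedKFold` — on synthetic data (`make_classification` is a stand-in for the selected brain features, not the study variables):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for (X, y): 3 features, 3 outcome classes
X, y = make_classification(n_samples=120, n_features=3, n_informative=3,
                           n_redundant=0, n_classes=3, random_state=42)

# Soft voting averages predict_proba across the member pipelines
vote = VotingClassifier(
    estimators=[('lr', Pipeline([('scl', StandardScaler()),
                                 ('clf', LogisticRegression(max_iter=1000))])),
                ('knn', Pipeline([('scl', StandardScaler()),
                                  ('clf', KNeighborsClassifier())]))],
    voting='soft')

accs = []
for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=42).split(X, y):
    vote.fit(X[tr], y[tr])
    accs.append(accuracy_score(y[te], vote.predict(X[te])))
print("Voting: CV accuracy = %.3f +-%.3f (k=5)" % (np.mean(accs), np.std(accs)))
```

Collecting per-fold scores and reporting mean ± std, as in the cell above, gives a more honest picture than a single train/test split.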
``` import gevent import random import pandas as pd import numpy as np import math import time import functools as ft import glob, os, sys import operator as op import shelve import ipywidgets as widgets from ipywidgets import interact, interact_manual #from pandas.api.types import is_numeric_dtypen() from pathlib import Path from itertools import combinations, product, permutations from sqlalchemy.engine import create_engine from datetime import datetime from ast import literal_eval from scipy import stats from scipy.stats.mstats import gmean from pythonds.basic.stack import Stack from pythonds.trees.binaryTree import BinaryTree from collections import defaultdict import collections from typing import List, Set, Tuple from sklearn.metrics import classification_report, confusion_matrix from scipy import sparse #!pip install pythonds ``` ``` # STEP-1: CHOOSE YOUR CORPUS # TODO: get working with list of corpora #corpora = ['mipacq','i2b2','fairview'] #options for concept extraction include 'fairview', 'mipacq' OR 'i2b2' # cross-system semantic union merge filter for cross system aggregations using custom system annotations file with corpus name and system name using 'ray_test': # need to add semantic type filrering when reading in sys_data #corpus = 'ray_test' #corpus = 'clinical_trial2' corpus = 'fairview' #corpora = ['i2b2','fairview'] # STEP-2: CHOOSE YOUR DATA DIRECTORY; this is where output data will be saved on your machine data_directory = '/mnt/DataResearch/gsilver1/output/' # STEP-3: CHOOSE WHICH SYSTEMS YOU'D LIKE TO EVALUATE AGAINST THE CORPUS REFERENCE SET #systems = ['biomedicus', 'clamp', 'ctakes', 'metamap', 'quick_umls'] #systems = ['biomedicus', 'clamp', 'metamap', 'quick_umls'] #systems = ['biomedicus', 'quick_umls'] #systems = ['biomedicus', 'ctakes', 'quick_umls'] systems = ['biomedicus', 'clamp', 'ctakes', 'metamap'] #systems = ['biomedicus', 'clamp'] #systems = ['ctakes', 'quick_umls', 'biomedicus', 'metamap'] #systems = ['biomedicus', 
'metamap'] #systems = ['ray_test'] #systems = ['metamap'] # STEP-4: CHOOSE TYPE OF RUN rtype = 6 # OPTIONS INCLUDE: 1->Single systems; 2->Ensemble; 3->Tests; 4 -> majority vote # The Ensemble can include the max system set ['ctakes','biomedicus','clamp','metamap','quick_umls'] # STEP-5: CHOOSE WHAT TYPE OF ANALYSIS YOU'D LIKE TO RUN ON THE CORPUS analysis_type = 'full' #options include 'entity', 'cui' OR 'full' # STEP-(6A): ENTER DETAILS FOR ACCESSING MANUAL ANNOTATION DATA database_type = 'postgresql+psycopg2' # We use mysql+pymql as default database_username = 'gsilver1' database_password = 'nej123' database_url = 'd0pconcourse001' # HINT: use localhost if you're running database on your local machine #database_name = 'clinical_trial' # Enter database name database_name = 'covid-19' # Enter database name def ref_data(corpus): return corpus + '_all' # Enter the table within the database where your reference data is stored table_name = ref_data(corpus) # STEP-(6B): ENTER DETAILS FOR ACCESSING SYSTEM ANNOTATION DATA def sys_data(corpus, analysis_type): if analysis_type == 'entity': return 'analytical_'+corpus+'.csv' # OPTIONS include 'analytical_cui_mipacq_concepts.csv' OR 'analytical_cui_i2b2_concepts.csv' elif analysis_type in ('cui', 'full'): return 'analytical_'+corpus+'_cui.csv' # OPTIONS include 'analytical_cui_mipacq_concepts.csv' OR 'analytical_cui_i2b2_concepts.csv' system_annotation = sys_data(corpus, analysis_type) # STEP-7: CREATE A DB CONNECTION POOL engine_request = str(database_type)+'://'+database_username+':'+database_password+"@"+database_url+'/'+database_name engine = create_engine(engine_request, pool_pre_ping=True, pool_size=20, max_overflow=30) # STEP-(8A): FILTER BY SEMTYPE filter_semtype = True #False # STEP-(8B): IF STEP-(8A) == True -> GET REFERENCE SEMTYPES def ref_semtypes(filter_semtype, corpus): if filter_semtype: if corpus == 'fairview': semtypes = ['Disorders'] else: pass return semtypes semtypes = ref_semtypes(filter_semtype, corpus) 
# STEP-9: Set data directory/table for source documents for vectorization
src_table = 'sofa'

# STEP-10: Specify match type from {'exact', 'overlap', 'cui' -> kludge for majority}
run_type = 'overlap'
# for clinical trial, measurement/temporal are single system since there is no overlap for intersect

# STEP-11: Specify expression type for run (TODO: run all at once; make less kludgey)
expression_type = 'nested' #'nested_with_singleton'
# type of merge expression: nested ((A&B)|C), paired ((A&B)|(C&D)),
# nested_with_singleton ((A&B)|((C&D)|E))
# -> NB: len(systems) for pair must be >= 4, and for nested_with_singleton == 5; single -> skip merges

# STEP-12: Specify type of ensemble: merge or vote
ensemble_type = 'merge'

# STEP-13: run on negation modifier (TODO: negated entity)
modification = None #'negation'
```

**TODO**

- add majority vote to union for analysis_type = 'full'
- case for multiple labels on same/overlapping span/same system; disambiguate (order by score if it exists and select at random for ties): done!
- port to command line
- still need to validate that all semtypes are in the corpus!
- handle case where intersect merges are empty/any confusion matrix values are 0; specifically on empty df in evaluate method: done!
- case when system annotations are empty from semtype filter; print as 0: done!
- trim whitespace on CSV import -> done for semtypes
- eliminate rtype = 1 for expression_type = 'single'
- cross-system semantic union merge on aggregation
- negation: testing
- other modification, such as 'present'
- clean up configuration process
- allow iteration through all corpora and semtypes
- optimize vectorization (remove confusion?)
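As a toy illustration of the merge expressions named in STEP-11 (hypothetical `(begin, end)` spans, not real system output; exact-match sets for simplicity, whereas the `'overlap'` run_type relaxes the match), the nested form `((A&B)|C)` intersects two systems and unions in a third:

```python
# Hypothetical (begin, end) annotation spans for three systems; illustration only
A = {(0, 5), (10, 15), (20, 25)}
B = {(0, 5), (10, 15)}
C = {(30, 35)}

nested = (A & B) | C  # ((A&B)|C): spans found by both A and B, plus everything from C
print(sorted(nested))  # [(0, 5), (10, 15), (30, 35)]
```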
```
# Imports assumed by this cell; the original notebook defines most of these
# (and globals such as `systems`, `data_directory`, `engine`, `corpus`,
# `modification`, `run_type`) in earlier cells.
import math
import time
import operator as op
import functools as ft
from datetime import datetime
from itertools import combinations, permutations, product
from typing import List

import gevent
import numpy as np
import pandas as pd
from scipy import sparse
from sklearn.metrics import classification_report, confusion_matrix

# config class for analysis
class AnalysisConfig():
    """
    Configuration object:
    systems to use
    notes by corpus
    paths by output, gold and system location
    """
    def __init__(self):
        self.systems = systems
        self.data_dir = data_directory

    def corpus_config(self):
        usys_data = system_annotation
        ref_data = database_name + '.' + table_name
        return usys_data, ref_data

analysisConf = AnalysisConfig()
#usys, ref = analysisConf.corpus_config()

class SemanticTypes(object):
    '''
    Filter semantic types based on: https://metamap.nlm.nih.gov/SemanticTypesAndGroups.shtml
    :params: semtypes list from corpus, system to query
    :return: list of equivalent system semtypes
    '''
    def __init__(self, semtypes, corpus):
        # if corpus == 'clinical_trial2':
        #     corpus = 'clinical_trial' # kludge!!
        # sql = "SELECT st.tui, abbreviation, clamp_name, ctakes_name, biomedicus_name FROM clinical_trial.semantic_groups sg join semantic_types st on sg.tui = st.tui where " + corpus + "_name in ({})"\
        #     .format(', '.join(['%s' for _ in semtypes]))
        sql = "SELECT st.tui, abbreviation, clamp_name, ctakes_name FROM semantic_groups sg join semantic_types st on sg.tui = st.tui where group_name in ({})"\
            .format(', '.join(['%s' for _ in semtypes]))
        stypes = pd.read_sql(sql, params=[semtypes], con=engine)

        if len(stypes['tui'].tolist()) > 0:
            self.biomedicus_types = set(stypes['tui'].tolist())
            self.qumls_types = set(stypes['tui'].tolist())
        else:
            self.biomedicus_types = None
            self.qumls_types = None

        # NB: `Series.dropna(inplace=True)` returns None, so it cannot be used
        # inside a condition; drop NaNs first, then test for emptiness.
        clamp_names = stypes['clamp_name'].dropna()
        if len(clamp_names) == 0:
            self.clamp_types = None
        else:
            self.clamp_types = set(clamp_names.tolist()[0].split(','))

        ctakes_names = stypes['ctakes_name'].dropna()
        if len(ctakes_names) == 0:
            self.ctakes_types = None
        else:
            self.ctakes_types = set(ctakes_names.tolist()[0].split(','))

        # # Kludge for b9 temporal
        # if len(stypes['biomedicus_name'].dropna()) > 0:
        #     self.biomedicus_types.update(set(stypes['biomedicus_name'].tolist()[0].split(',')))
        # else:
        #     self.biomedicus_type = None

        if len(stypes['abbreviation'].tolist()) > 0:
            self.metamap_types = set(stypes['abbreviation'].tolist())
        else:
            self.metamap_types = None

        self.reference_types = set(semtypes)

    def get_system_type(self, system):
        if system == 'biomedicus':
            semtypes = self.biomedicus_types
        elif system == 'ctakes':
            semtypes = self.ctakes_types
        elif system == 'clamp':
            semtypes = self.clamp_types
        elif system == 'metamap':
            semtypes = self.metamap_types
        elif system == 'quick_umls':
            semtypes = self.qumls_types
        elif system == 'reference':
            semtypes = self.reference_types
        return semtypes

#print(SemanticTypes(['Drug'], corpus).get_system_type('biomedicus'))
#print(SemanticTypes(['Drug'], corpus).get_system_type('quick_umls'))
#print(SemanticTypes(['drug'], corpus).get_system_type('clamp'))
#print(SemanticTypes(['Disorders'], 'fairview').get_system_type('clamp'))

#semtypes = ['test,treatment']
#semtypes = 'drug,drug::drug_name,drug::drug_dose,dietary_supplement::dietary_supplement_name,dietary_supplement::dietary_supplement_dose'
#semtypes = 'demographics::age,demographics::sex,demographics::race_ethnicity,demographics::bmi,demographics::weight'
#corpus = 'clinical_trial'
#sys = 'quick_umls'

# is semantic type in particular system
def system_semtype_check(sys, semtype, corpus):
    st = SemanticTypes([semtype], corpus).get_system_type(sys)
    if st:
        return sys
    else:
        return None

#print(system_semtype_check(sys, semtypes, corpus))

# annotation class for systems
class AnnotationSystems():
    """
    System annotations of interest for UMLS concept extraction
    NB: ctakes combines all "mentions" annotation types
    """
    def __init__(self):
        """ annotation base types """
        self.biomedicus_types = ["biomedicus.v2.UmlsConcept"]
        self.clamp_types = ["edu.uth.clamp.nlp.typesystem.ClampNameEntityUIMA"]
        self.ctakes_types = ["ctakes_mentions"]
        self.metamap_types = ["org.metamap.uima.ts.Candidate"]
        self.qumls_types = ["concept_jaccard_score_False"]

    def get_system_type(self, system):
        """ return system types """
        if system == "biomedicus":
            view = "Analysis"
        else:
            view = "_InitialView"
        if system == 'biomedicus':
            types = self.biomedicus_types
        elif system == 'clamp':
            types = self.clamp_types
        elif system == 'ctakes':
            types = self.ctakes_types
        elif system == 'metamap':
            types = self.metamap_types
        elif system == "quick_umls":
            types = self.qumls_types
        return types, view

annSys = AnnotationSystems()

%reload_ext Cython

class Metrics(object):
    """
    metrics class:
    returns an instance with confusion matrix metrics
    """
    def __init__(self, system_only, gold_only, gold_system_match, system_n, neither = 0):
        # neither: no sys or manual annotation
        self.system_only = system_only
        self.gold_only = gold_only
        self.gold_system_match = gold_system_match
        self.system_n = system_n
        self.neither = neither

    def get_confusion_metrics(self, corpus = None, test = False):
        """
        compute confusion matrix measures, as per
        https://stats.stackexchange.com/questions/51296/how-do-you-calculate-precision-and-recall-for-multiclass-classification-using-co
        """
        TP = self.gold_system_match
        FP = self.system_only
        FN = self.gold_only
        TM = TP/math.sqrt(self.system_n)  # TigMetric

        if not test:
            if corpus == 'casi':
                recall = TP/(TP + FN)
                precision = TP/(TP + FP)
                F = 2*(precision*recall)/(precision + recall)
            else:
                if self.neither == 0:
                    confusion = [[0, self.system_only], [self.gold_only, self.gold_system_match]]
                else:
                    confusion = [[self.neither, self.system_only], [self.gold_only, self.gold_system_match]]
                c = np.asarray(confusion)
                if TP != 0 or FP != 0:
                    precision = TP/(TP+FP)
                else:
                    precision = 0
                if TP != 0 or FN != 0:
                    recall = TP/(TP+FN)
                else:
                    recall = 0
                if precision + recall != 0:
                    F = 2*(precision*recall)/(precision + recall)
                else:
                    F = 0
        else:
            precision = TP/(TP+FP)
            recall = TP/(TP+FN)
            F = 2*(precision*recall)/(precision + recall)

        # Tignanelli Metric
        if FN == 0:
            TP_FN_R = TP
        elif FN > 0:
            TP_FN_R = TP/FN

        return F, recall, precision, TP, FP, FN, TP_FN_R, TM

def df_to_set(df, analysis_type = 'entity', df_type = 'sys', corpus = None):
    # get values for creation of series of type tuple
    if 'entity' in analysis_type:
        if corpus == 'casi':
            arg = df.case, df.overlap
        else:
            arg = df.begin, df.end, df.case
    elif 'cui' in analysis_type:
        arg = df.value, df.case
    elif 'full' in analysis_type:
        arg = df.begin, df.end, df.value, df.case
    return set(list(zip(*arg)))

def get_cooccurences(ref, sys, analysis_type: str, corpus: str):
    """
    get cooccurences between system and reference; exact match;
    TODO: add relaxed -> done in single system evals during ensemble run
    """
    # cooccurences
    class Cooccurences(object):
        def __init__(self):
            self.ref_system_match = 0
            self.ref_only = 0
            self.system_only = 0
            self.system_n = 0
            self.ref_n = 0
            self.matches = set()
            self.false_negatives = set()
            self.corpus = corpus

    c = Cooccurences()

    if c.corpus != 'casi':
        if analysis_type in ['cui', 'full']:
            sys = sys.rename(index=str, columns={"note_id": "case", "cui": "value"})
            # do not overestimate FP
            sys = sys[~sys['value'].isnull()]
            ref = ref[~ref['value'].isnull()]
        if 'entity' in analysis_type:
            sys = sys.rename(index=str, columns={"note_id": "case"})
            cols_to_keep = ['begin', 'end', 'case']
        elif 'cui' in analysis_type:
            cols_to_keep = ['value', 'case']
        elif 'full' in analysis_type:
            cols_to_keep = ['begin', 'end', 'value', 'case']
        sys = sys[cols_to_keep].drop_duplicates()
        ref = ref[cols_to_keep].drop_duplicates()
        # matches via inner join
        tp = pd.merge(sys, ref, how='inner', left_on=cols_to_keep, right_on=cols_to_keep)
        # reference-only via left outer join
        fn = pd.merge(ref, sys, how='left', left_on=cols_to_keep, right_on=cols_to_keep, indicator=True)
        fn = fn[fn["_merge"] == 'left_only']
        tp = tp[cols_to_keep]
        fn = fn[cols_to_keep]
        # use for metrics
        c.matches = c.matches.union(df_to_set(tp, analysis_type, 'ref'))
        c.false_negatives = c.false_negatives.union(df_to_set(fn, analysis_type, 'ref'))
        c.ref_system_match = len(c.matches)
        c.system_only = len(sys) - len(c.matches)  # fp
        c.system_n = len(sys)
        c.ref_n = len(ref)
        c.ref_only = len(c.false_negatives)
    else:
        sql = "select `case` from test.amia_2019_analytical_v where overlap = 1 and `system` = %(sys.name)s"
        tp = pd.read_sql(sql, params={"sys.name": sys.name}, con=engine)
        sql = "select `case` from test.amia_2019_analytical_v where (overlap = 0 or overlap is null) and `system` = %(sys.name)s"
        fn = pd.read_sql(sql, params={"sys.name": sys.name}, con=engine)
        c.matches = df_to_set(tp, 'entity', 'sys', 'casi')
        c.fn = df_to_set(fn, 'entity', 'sys', 'casi')
        c.ref_system_match = len(c.matches)
        c.system_only = len(sys) - len(c.matches)
        c.system_n = len(tp) + len(fn)
        c.ref_n = len(tp) + len(fn)
        c.ref_only = len(fn)

    # sanity check
    if len(ref) - c.ref_system_match < 0:
        print('Error: ref_system_match > len(ref)!')
    if len(ref) != c.ref_system_match + c.ref_only:
        print('Error: ref count mismatch!', len(ref), c.ref_system_match, c.ref_only)

    return c

def label_vector(doc_len: int, ann: list, labels: List[str]) -> np.array:
    # NB: the first argument is the document length in characters, not the text itself
    v = np.zeros(doc_len)
    labels = list(labels)
    for (i, lab) in enumerate(labels):
        i += 1  # 0 is reserved for no label
        idxs = [np.arange(a.begin, a.end) for a in ann if a.label == lab]
        idxs = [j for mask in idxs for j in mask]
        v[idxs] = i
    return v

# confusion matrix elements for vectorized annotation set; includes TN
# https://kawahara.ca/how-to-compute-truefalse-positives-and-truefalse-negatives-in-python-for-binary-classification-problems/
def confused(sys1, ann1):
    TP = np.sum(np.logical_and(ann1 > 0, sys1 == ann1))
    # True Negative (TN): we predict a label of 0 (negative), and the true label is 0.
    TN = np.sum(np.logical_and(ann1 == 0, sys1 == ann1))
    # False Positive (FP): we predict a label of 1 (positive), but the true label is 0.
    FP = np.sum(np.logical_and(sys1 > 0, sys1 != ann1))
    # False Negative (FN): we predict a label of 0 (negative), but the true label is 1.
    FN = np.sum(np.logical_and(ann1 > 0, sys1 == 0))
    return TP, TN, FP, FN

@ft.lru_cache(maxsize=None)
def vectorized_cooccurences(r: object, analysis_type: str, corpus: str, filter_semtype, semtype = None) -> tuple:
    docs = get_docs(corpus)
    if filter_semtype:
        ann = get_ref_ann(analysis_type, corpus, filter_semtype, semtype)
    else:
        ann = get_ref_ann(analysis_type, corpus, filter_semtype)
    sys = get_sys_ann(analysis_type, r)

    if analysis_type == 'entity':
        labels = ["concept"]
    elif analysis_type in ['cui', 'full']:
        labels = list(set(ann["value"].tolist()))

    sys2 = list()
    ann2 = list()
    s2 = list()
    a2 = list()

    for n in range(len(docs)):
        if analysis_type != 'cui':
            a1 = list(ann.loc[ann["case"] == docs[n][0]].itertuples(index=False))
            s1 = list(sys.loc[sys["case"] == docs[n][0]].itertuples(index=False))
            ann1 = label_vector(docs[n][1], a1, labels)
            sys1 = label_vector(docs[n][1], s1, labels)
            sys2.append(list(sys1))
            ann2.append(list(ann1))
        else:
            a = ann.loc[ann["case"] == docs[n][0]]['label'].tolist()
            s = sys.loc[sys["case"] == docs[n][0]]['label'].tolist()
            x = [1 if lab in a else 0 for lab in labels]
            y = [1 if lab in s else 0 for lab in labels]
            s2.append(y)
            a2.append(x)

    if analysis_type != 'cui':
        a2 = [item for sublist in ann2 for item in sublist]
        s2 = [item for sublist in sys2 for item in sublist]
        report = classification_report(a2, s2, output_dict=True)
        macro_precision = report['macro avg']['precision']
        macro_recall = report['macro avg']['recall']
        macro_f1 = report['macro avg']['f1-score']
        TN, FP, FN, TP = confusion_matrix(a2, s2).ravel()
        return ((TP, TN, FP, FN), (macro_precision, macro_recall, macro_f1))
    else:
        x_sparse = sparse.csr_matrix(a2)
        y_sparse = sparse.csr_matrix(s2)
        report = classification_report(x_sparse, y_sparse, output_dict=True)
        macro_precision = report['macro avg']['precision']
        macro_recall = report['macro avg']['recall']
        macro_f1 = report['macro avg']['f1-score']
        return ((0, 0, 0, 0), (macro_precision, macro_recall, macro_f1))

def cm_dict(ref_only: int, system_only: int, ref_system_match: int, system_n: int, ref_n: int) -> dict:
    """
    Generate dictionary of confusion matrix params and measures
    :params: ref_only, system_only, reference_system_match -> sets matches,
             system_n, reference_n -> counts
    :return: dictionary object
    """
    if ref_only + ref_system_match != ref_n:
        print('ERROR!')

    # get evaluation metrics
    F, recall, precision, TP, FP, FN, TP_FN_R, TM = Metrics(system_only, ref_only, ref_system_match, system_n).get_confusion_metrics()

    d = {
        'F1': F,
        'precision': precision,
        'recall': recall,
        'TP': TP,
        'FN': FN,
        'FP': FP,
        'TP/FN': TP_FN_R,
        'n_gold': ref_n,
        'n_sys': system_n,
        'TM': TM
    }

    if system_n - FP != TP:
        print('inconsistent system n!')

    return d

@ft.lru_cache(maxsize=None)
def get_metric_data(analysis_type: str, corpus: str):
    usys_file, ref_table = AnalysisConfig().corpus_config()
    systems = AnalysisConfig().systems
    sys_ann = pd.read_csv(analysisConf.data_dir + usys_file, dtype={'note_id': str})
    # NB: the reference read was commented out upstream, but get_ref_ann/get_ref_n
    # below depend on it; restored here.
    sql = "SELECT * FROM " + ref_table  #+ " where semtype in('Anatomy', 'Chemicals_and_drugs')"
    ref_ann = pd.read_sql(sql, con=engine)
    sys_ann = sys_ann.drop_duplicates()
    return ref_ann, sys_ann

import pandas as pd
from scipy import stats
from scipy.stats.mstats import gmean

def geometric_mean(metrics):
    """
    1. Get rank average of F1, TP/FN, TM
       http://www.datasciencemadesimple.com/rank-dataframe-python-pandas-min-max-dense-rank-group/
       https://stackoverflow.com/questions/46686315/in-pandas-how-to-create-a-new-column-with-a-rank-according-to-the-mean-values-o?rq=1
    2. Take geomean of rank averages
       https://stackoverflow.com/questions/42436577/geometric-mean-applied-on-row
    """
    metrics['F1 rank'] = metrics['F1'].rank(ascending=0, method='average')
    metrics['TP/FN rank'] = metrics['TP/FN'].rank(ascending=0, method='average')
    metrics['TM rank'] = metrics['TM'].rank(ascending=0, method='average')
    metrics['Gmean'] = gmean(metrics.iloc[:, -3:], axis=1)
    return metrics

def generate_metrics(analysis_type: str, corpus: str, filter_semtype, semtype = None):
    start = time.time()
    systems = AnalysisConfig().systems
    metrics = pd.DataFrame()
    __, sys_ann = get_metric_data(analysis_type, corpus)
    c = None

    for sys in systems:
        if filter_semtype and semtype:
            ref_ann = get_ref_ann(analysis_type, corpus, filter_semtype, semtype)
        else:
            ref_ann = get_ref_ann(analysis_type, corpus, filter_semtype)

        system_annotations = sys_ann[sys_ann['system'] == sys].copy()

        if filter_semtype:
            st = SemanticTypes([semtype], corpus).get_system_type(sys)
            if st:
                # NB: filter the already system-filtered frame, not sys_ann,
                # so the per-system selection above is preserved.
                system_annotations = system_annotations[system_annotations['semtypes'].isin(st)].copy()
        else:
            system_annotations = sys_ann.copy()

        if (filter_semtype and st) or filter_semtype is False:
            system = system_annotations.copy()

            if sys == 'quick_umls':
                system = system[system.score.astype(float) >= .8]

            if sys == 'metamap' and modification is None:
                system = system.fillna(0)
                system = system[system.score.abs().astype(int) >= 800]

            system = system.drop_duplicates()
            ref_ann = ref_ann.rename(index=str, columns={"start": "begin", "file": "case"})

            c = get_cooccurences(ref_ann, system, analysis_type, corpus)  # get matches, FN, etc.

            if c.ref_system_match > 0:
                # compute confusion matrix metrics and write to dictionary -> df
                # get dictionary of confusion matrix metrics
                d = cm_dict(c.ref_only, c.system_only, c.ref_system_match, c.system_n, c.ref_n)
                d['system'] = sys
                data = pd.DataFrame(d, index=[0])
                metrics = pd.concat([metrics, data], ignore_index=True)
                metrics.drop_duplicates(keep='last', inplace=True)
            else:
                print("NO EXACT MATCHES FOR", sys)

            elapsed = (time.time() - start)
            print("elapsed:", sys, elapsed)

    if c:
        elapsed = (time.time() - start)
        print(geometric_mean(metrics))
        now = datetime.now()
        timestamp = datetime.timestamp(now)
        file_name = 'metrics_'
        metrics.to_csv(analysisConf.data_dir + corpus + '_' + file_name + analysis_type + '_' + str(timestamp) + '.csv')
        print("total elapsed time:", elapsed)

@ft.lru_cache(maxsize=None)
def get_ref_n(analysis_type: str, corpus: str, filter_semtype: str) -> int:
    ref_ann, _ = get_metric_data(analysis_type, corpus)
    if filter_semtype:
        ref_ann = ref_ann[ref_ann['semtype'].isin(SemanticTypes(semtypes, corpus).get_system_type('reference'))]

    if corpus == 'casi':
        return len(ref_ann)
    else:
        # do not overestimate fn
        if 'entity' in analysis_type:
            ref_ann = ref_ann[['start', 'end', 'file']].drop_duplicates()
        elif 'cui' in analysis_type:
            ref_ann = ref_ann[['value', 'file']].drop_duplicates()
        elif 'full' in analysis_type:
            ref_ann = ref_ann[['start', 'end', 'value', 'file']].drop_duplicates()
        else:
            pass
    ref_n = len(ref_ann.drop_duplicates())
    return ref_n

@ft.lru_cache(maxsize=None)
def get_sys_data(system: str, analysis_type: str, corpus: str, filter_semtype, semtype = None) -> pd.DataFrame:
    _, data = get_metric_data(analysis_type, corpus)
    out = data[data['system'] == system].copy()

    if filter_semtype:
        st = SemanticTypes([semtype], corpus).get_system_type(system)
        print(system, 'st', st)

    if corpus == 'casi':
        cols_to_keep = ['case', 'overlap']
        out = out[cols_to_keep].drop_duplicates()
        return out
    else:
        if filter_semtype:
            out = out[out['semtype'].isin(st)].copy()
        else:
            out = out[out['system'] == system].copy()

        if modification == 'negation':
            out = out[out['modification'] == 'negation'].copy()

        if system == 'quick_umls':
            out = out[(out.score.astype(float) >= 0.8) & (out["type"] == 'concept_jaccard_score_False')]
            # fix for leading space on semantic type field
            out = out.apply(lambda x: x.str.strip() if x.dtype == "object" else x)
            out['semtypes'] = out['semtypes'].str.strip()

        if system == 'metamap' and modification is None:
            out = out[out.score.abs().astype(int) >= 800]

        if 'entity' in analysis_type:
            cols_to_keep = ['begin', 'end', 'note_id']
        elif 'cui' in analysis_type:
            cols_to_keep = ['cui', 'note_id']
        elif 'full' in analysis_type:
            cols_to_keep = ['begin', 'end', 'cui', 'note_id', 'polarity']

        out = out[cols_to_keep]
        return out.drop_duplicates()
```

```
class SetTotals(object):
    """
    returns an instance with merged match set numbers using either union or
    intersection of elements in set
    """
    def __init__(self, ref_n, sys_n, match_set):
        self.ref_ann = ref_n
        self.sys_n = sys_n
        self.match_set = match_set

    def get_ref_sys(self):
        ref_only = self.ref_ann - len(self.match_set)
        sys_only = self.sys_n - len(self.match_set)
        return ref_only, sys_only, len(self.match_set), self.match_set

def union_vote(arg):
    arg['length'] = (arg.end - arg.begin).abs()
    df = arg[['begin', 'end', 'note_id', 'cui', 'length', 'polarity']].copy()
    df.sort_values(by=['note_id', 'begin'], inplace=True)
    df = df.drop_duplicates(['begin', 'end', 'note_id', 'cui', 'polarity'])

    cases = set(df['note_id'].tolist())
    data = []
    out = pd.DataFrame()

    for case in cases:
        print(case)
        test = df[df['note_id'] == case].copy()
        for row in test.itertuples():
            iix = pd.IntervalIndex.from_arrays(test.begin, test.end, closed='neither')
            span_range = pd.Interval(row.begin, row.end)
            fx = test[iix.overlaps(span_range)].copy()
            maxLength = fx['length'].max()
            minLength = fx['length'].min()
            if len(fx) > 1:
                # if longer span exists use as tie-breaker
                if maxLength > minLength:
                    fx = fx[fx['length'] == fx['length'].max()]
            data.append(fx)

    out = pd.concat(data, axis=0)
    # Remaining ties on span with same or different CUIs
    # randomly reindex to keep random selected row when dropping duplicates:
    # https://gist.github.com/cadrev/6b91985a1660f26c2742
    out.reset_index(inplace=True)
    out = out.reindex(np.random.permutation(out.index))
    return out.drop_duplicates(['begin', 'end', 'note_id', 'polarity'])

@ft.lru_cache(maxsize=None)
def process_sentence(pt, sentence, analysis_type, corpus, filter_semtype, semtype = None):
    """
    Recursively evaluate parse tree, with check for existence before build
    :param sentence: to process
    :return class of merged annotations, boolean operated system df
    """
    class Results(object):
        def __init__(self):
            self.results = set()
            self.system_merges = pd.DataFrame()

    r = Results()

    if 'entity' in analysis_type and corpus != 'casi':
        cols_to_keep = ['begin', 'end', 'note_id', 'polarity']
        # NB: join_cols added here so the '&' branch below also works for
        # entity runs; the original only defined it for 'full'.
        join_cols = ['begin', 'end', 'note_id']
    elif 'full' in analysis_type:
        cols_to_keep = ['cui', 'begin', 'end', 'note_id', 'polarity']
        join_cols = ['cui', 'begin', 'end', 'note_id']
    elif 'cui' in analysis_type:
        cols_to_keep = ['cui', 'note_id', 'polarity']
    elif corpus == 'casi':
        cols_to_keep = ['case', 'overlap']

    def evaluate(parseTree):
        oper = {'&': op.and_, '|': op.or_}
        if parseTree:
            leftC = gevent.spawn(evaluate, parseTree.getLeftChild())
            rightC = gevent.spawn(evaluate, parseTree.getRightChild())
            if leftC.get() is not None and rightC.get() is not None:
                system_query = pd.DataFrame()
                fn = oper[parseTree.getRootVal()]

                if isinstance(leftC.get(), str):
                    # get system as leaf node
                    if filter_semtype:
                        left_sys = get_sys_data(leftC.get(), analysis_type, corpus, filter_semtype, semtype)
                    else:
                        left_sys = get_sys_data(leftC.get(), analysis_type, corpus, filter_semtype)
                elif isinstance(leftC.get(), pd.DataFrame):
                    l_sys = leftC.get()

                if isinstance(rightC.get(), str):
                    # get system as leaf node
                    if filter_semtype:
                        right_sys = get_sys_data(rightC.get(), analysis_type, corpus, filter_semtype, semtype)
                    else:
                        right_sys = get_sys_data(rightC.get(), analysis_type, corpus, filter_semtype)
                elif isinstance(rightC.get(), pd.DataFrame):
                    r_sys = rightC.get()

                if fn == op.or_:
                    if isinstance(leftC.get(), str) and isinstance(rightC.get(), str):
                        frames = [left_sys, right_sys]
                    elif isinstance(leftC.get(), str) and isinstance(rightC.get(), pd.DataFrame):
                        frames = [left_sys, r_sys]
                    elif isinstance(leftC.get(), pd.DataFrame) and isinstance(rightC.get(), str):
                        frames = [l_sys, right_sys]
                    elif isinstance(leftC.get(), pd.DataFrame) and isinstance(rightC.get(), pd.DataFrame):
                        frames = [l_sys, r_sys]
                    df = pd.concat(frames, ignore_index=True)
                    if analysis_type == 'full':
                        df = union_vote(df)

                if fn == op.and_:
                    if isinstance(leftC.get(), str) and isinstance(rightC.get(), str):
                        if not left_sys.empty and not right_sys.empty:
                            df = left_sys.merge(right_sys, on=join_cols, how='inner')
                            df = df[cols_to_keep].drop_duplicates(subset=cols_to_keep)
                        else:
                            df = pd.DataFrame(columns=cols_to_keep)
                    elif isinstance(leftC.get(), str) and isinstance(rightC.get(), pd.DataFrame):
                        if not left_sys.empty and not r_sys.empty:
                            df = left_sys.merge(r_sys, on=join_cols, how='inner')
                            df = df[cols_to_keep].drop_duplicates(subset=cols_to_keep)
                        else:
                            df = pd.DataFrame(columns=cols_to_keep)
                    elif isinstance(leftC.get(), pd.DataFrame) and isinstance(rightC.get(), str):
                        if not l_sys.empty and not right_sys.empty:
                            df = l_sys.merge(right_sys, on=join_cols, how='inner')
                            df = df[cols_to_keep].drop_duplicates(subset=cols_to_keep)
                        else:
                            df = pd.DataFrame(columns=cols_to_keep)
                    elif isinstance(leftC.get(), pd.DataFrame) and isinstance(rightC.get(), pd.DataFrame):
                        if not l_sys.empty and not r_sys.empty:
                            df = l_sys.merge(r_sys, on=join_cols, how='inner')
                            df = df[cols_to_keep].drop_duplicates(subset=cols_to_keep)
                        else:
                            df = pd.DataFrame(columns=cols_to_keep)

                # get combined system results
                r.system_merges = df
                if len(df) > 0:
                    # DataFrame.append is deprecated; use pd.concat
                    system_query = pd.concat([system_query, df])
                else:
                    print('Warning: empty merge result')
                return system_query
            else:
                return parseTree.getRootVal()

    if sentence.n_or > 0 or sentence.n_and > 0:
        evaluate(pt)
    # trivial case
    elif sentence.n_or == 0 and sentence.n_and == 0:
        if filter_semtype:
            r.system_merges = get_sys_data(sentence.sentence, analysis_type, corpus, filter_semtype, semtype)
        else:
            r.system_merges = get_sys_data(sentence.sentence, analysis_type, corpus, filter_semtype)

    return r

"""
Incoming Boolean sentences are parsed into a binary tree.

Test expressions to parse:

sentence = '((((A&B)|C)|D)&E)'
sentence = '(E&(D|(C|(A&B))))'
sentence = '(((A|(B&C))|(D&(E&F)))|(H&I))'
"""

# Stack and BinaryTree are assumed to come from the pythonds package,
# which the original notebook imports elsewhere.
from pythonds.basic import Stack
from pythonds.trees import BinaryTree

# build parse tree from passed sentence using grammatical rules of Boolean logic
def buildParseTree(fpexp):
    """
    Iteratively build parse tree from passed sentence using grammatical rules
    of Boolean logic
    :param fpexp: sentence to parse
    :return eTree: parse tree representation

    Incoming Boolean sentences are parsed into a binary tree.
    Test expressions to parse:
    sentence = '(A&B)'
    sentence = '(A|B)'
    sentence = '((A|B)&C)'
    """
    fplist = fpexp.split()
    pStack = Stack()
    eTree = BinaryTree('')
    pStack.push(eTree)
    currentTree = eTree
    for i in fplist:
        if i == '(':
            currentTree.insertLeft('')
            pStack.push(currentTree)
            currentTree = currentTree.getLeftChild()
        elif i not in ['&', '|', ')']:
            currentTree.setRootVal(i)
            parent = pStack.pop()
            currentTree = parent
        elif i in ['&', '|']:
            currentTree.setRootVal(i)
            currentTree.insertRight('')
            pStack.push(currentTree)
            currentTree = currentTree.getRightChild()
        elif i == ')':
            currentTree = pStack.pop()
        else:
            raise ValueError
    return eTree

def make_parse_tree(payload):
    """
    Ensure data to create tree are in correct form
    :param sentence: sentence to preprocess
    :return pt, parse tree graph
            sentence, processed sentence to build tree
            a: order
    """
    def preprocess_sentence(sentence):
        # prepare statement for case when a boolean AND/OR is given
        sentence = payload.replace('(', ' ( '). \
            replace(')', ' ) '). \
            replace('&', ' & '). \
            replace('|', ' | '). \
            replace('  ', ' ')
        return sentence

    sentence = preprocess_sentence(payload)
    print('Processing sentence:', sentence)
    pt = buildParseTree(sentence)
    return pt

class Sentence(object):
    '''
    Details about boolean expression -> number of operators and expression
    '''
    def __init__(self, sentence):
        self.n_and = sentence.count('&')
        self.n_or = sentence.count('|')
        self.sentence = sentence

@ft.lru_cache(maxsize=None)
def get_docs(corpus):
    # KLUDGE!!!
    if corpus == 'ray_test':
        corpus = 'fairview'

    sql = 'select distinct note_id, sofa from sofas where corpus = %(corpus)s order by note_id'
    df = pd.read_sql(sql, params={"corpus": corpus}, con=engine)
    # NB: drop_duplicates returns a new frame; the original discarded the result
    df = df.drop_duplicates()
    df['len_doc'] = df['sofa'].apply(len)

    subset = df[['note_id', 'len_doc']]
    docs = [tuple(x) for x in subset.to_numpy()]
    return docs

@ft.lru_cache(maxsize=None)
def get_ref_ann(analysis_type, corpus, filter_semtype, semtype = None):
    if filter_semtype:
        if ',' in semtype:
            semtype = semtype.split(',')
        else:
            semtype = [semtype]

    ann, _ = get_metric_data(analysis_type, corpus)
    ann = ann.rename(index=str, columns={"start": "begin", "file": "case"})

    if filter_semtype:
        ann = ann[ann['semtype'].isin(semtype)]

    if analysis_type == 'entity':
        ann["label"] = 'concept'
    elif analysis_type in ['cui', 'full']:
        ann["label"] = ann["value"]

    if modification == 'negation':
        ann = ann[ann['semtype'] == 'negation']

    if analysis_type == 'entity':
        cols_to_keep = ['begin', 'end', 'case', 'label']
    elif analysis_type == 'cui':
        cols_to_keep = ['value', 'case', 'label']
    elif analysis_type == 'full':
        cols_to_keep = ['begin', 'end', 'value', 'case', 'label']

    ann = ann[cols_to_keep]
    return ann

@ft.lru_cache(maxsize=None)
def get_sys_ann(analysis_type, r):
    sys = r.system_merges
    sys = sys.rename(index=str, columns={"note_id": "case"})

    if analysis_type == 'entity':
        sys["label"] = 'concept'
        cols_to_keep = ['begin', 'end', 'case', 'label']
    elif analysis_type == 'full':
        sys["label"] = sys["cui"]
        cols_to_keep = ['begin', 'end', 'case', 'value', 'label']
    elif analysis_type == 'cui':
        sys["label"] = sys["cui"]
        cols_to_keep = ['case', 'cui', 'label']

    sys = sys[cols_to_keep]
    return sys

@ft.lru_cache(maxsize=None)
def get_metrics(boolean_expression: str, analysis_type: str, corpus: str, run_type: str, filter_semtype, semtype = None):
    """
    Traverse binary parse tree representation of Boolean sentence
    :params: boolean expression in form of '(<annotator_engine_name1><boolean operator><annotator_engine_name2>)'
        analysis_type (string value of: 'entity', 'cui', 'full') used to filter set of reference and system annotations
    :return: dictionary with values needed for confusion matrix
    """
    sentence = Sentence(boolean_expression)
    pt = make_parse_tree(sentence.sentence)

    if filter_semtype:
        r = process_sentence(pt, sentence, analysis_type, corpus, filter_semtype, semtype)
    else:
        r = process_sentence(pt, sentence, analysis_type, corpus, filter_semtype)

    # vectorize merges using i-o labeling
    if run_type == 'overlap':
        # NB: unpack macro recall into `rec`, not `r`, to avoid shadowing the
        # Results object returned by process_sentence.
        if filter_semtype:
            ((TP, TN, FP, FN), (p, rec, f1)) = vectorized_cooccurences(r, analysis_type, corpus, filter_semtype, semtype)
        else:
            ((TP, TN, FP, FN), (p, rec, f1)) = vectorized_cooccurences(r, analysis_type, corpus, filter_semtype)
        print('results:', ((TP, TN, FP, FN), (p, rec, f1)))

        # TODO: validate against ann1/sys1 where val = 1
        # total by number chars
        system_n = TP + FP
        reference_n = TP + FN

        if analysis_type != 'cui':
            d = cm_dict(FN, FP, TP, system_n, reference_n)
        else:
            d = dict()
            d['F1'] = 0
            d['precision'] = 0
            d['recall'] = 0
            d['TP/FN'] = 0
            d['TM'] = 0

        d['TN'] = TN
        d['macro_p'] = p
        d['macro_r'] = rec
        d['macro_f1'] = f1

        # return full metrics
        return d

    elif run_type == 'exact':
        # total by number spans
        if filter_semtype:
            ann = get_ref_ann(analysis_type, corpus, filter_semtype, semtype)
        else:
            ann = get_ref_ann(analysis_type, corpus, filter_semtype)

        c = get_cooccurences(ann, r.system_merges, analysis_type, corpus)  # get matches, FN, etc.
        if c.ref_system_match > 0:
            # compute confusion matrix metrics and write to dictionary -> df
            # get dictionary of confusion matrix metrics
            d = cm_dict(c.ref_only, c.system_only, c.ref_system_match, c.system_n, c.ref_n)
        else:
            d = None
        return d

#get_valid_systems(['biomedicus'], 'Anatomy')

# generate all combinations of given list of annotators:
def partly_unordered_permutations(lst, k):
    elems = set(lst)
    for c in combinations(lst, k):
        for d in permutations(elems - set(c)):
            yield c + d

def expressions(l, n):
    for (operations, *operands), operators in product(
            combinations(l, n), product(('&', '|'), repeat=n - 1)):
        for operation in zip(operators, operands):
            operations = [operations, *operation]
        yield operations

# get list of systems with a semantic type in grouping
def get_valid_systems(systems, semtype):
    test = []
    for sys in systems:
        st = system_semtype_check(sys, semtype, corpus)
        if st:
            test.append(sys)
    return test

# permute system combinations and evaluate system merges for performance
def run_ensemble(systems, analysis_type, corpus, filter_semtype, expression_type, semtype = None):
    metrics = pd.DataFrame()

    # pass single system to evaluate
    if expression_type == 'single':
        for system in systems:
            if filter_semtype:
                d = get_metrics(system, analysis_type, corpus, run_type, filter_semtype, semtype)
            else:
                d = get_metrics(system, analysis_type, corpus, run_type, filter_semtype)
            d['merge'] = system
            d['n_terms'] = 1
            frames = [metrics, pd.DataFrame(d, index=[0])]
            metrics = pd.concat(frames, ignore_index=True, sort=False)

    elif expression_type == 'nested':
        for l in partly_unordered_permutations(systems, 2):
            print('processing merge combo:', l)
            for i in range(1, len(l) + 1):
                test = list(expressions(l, i))
                for t in test:
                    if i > 1:
                        # format Boolean sentence for parse tree
                        t = '(' + " ".join(str(x) for x in t).replace('[', '(').replace(']', ')').replace("'", "").replace(",", "").replace(" ", "") + ')'
                    if filter_semtype:
                        d = get_metrics(t, analysis_type, corpus, run_type, filter_semtype, semtype)
                    else:
                        d = get_metrics(t, analysis_type, corpus, run_type, filter_semtype)
                    d['merge'] = t
                    d['n_terms'] = i
                    frames = [metrics, pd.DataFrame(d, index=[0])]
                    metrics = pd.concat(frames, ignore_index=True, sort=False)

    elif expression_type == 'nested_with_singleton' and len(systems) == 5:
        # form (((a&b)|c)&(d|e))
        nested = list(expressions(systems, 3))
        test = list(expressions(systems, 2))

        to_do_terms = []
        for n in nested:
            # format Boolean sentence for parse tree
            n = '(' + " ".join(str(x) for x in n).replace('[', '(').replace(']', ')').replace("'", "").replace(",", "").replace(" ", "") + ')'
            for t in test:
                t = '(' + " ".join(str(x) for x in t).replace('[', '(').replace(']', ')').replace("'", "").replace(",", "").replace(" ", "") + ')'
                new_and = '(' + n + '&' + t + ')'
                new_or = '(' + n + '|' + t + ')'
                if new_and.count('biomedicus') != 2 and new_and.count('clamp') != 2 and new_and.count('ctakes') != 2 and new_and.count('metamap') != 2 and new_and.count('quick_umls') != 2:
                    if new_and.count('&') != 4 and new_or.count('|') != 4:
                        to_do_terms.append(new_or)
                        to_do_terms.append(new_and)

        print('nested_with_singleton', len(to_do_terms))
        for term in to_do_terms:
            if filter_semtype:
                d = get_metrics(term, analysis_type, corpus, run_type, filter_semtype, semtype)
            else:
                d = get_metrics(term, analysis_type, corpus, run_type, filter_semtype)
            n = term.count('&')
            m = term.count('|')
            d['merge'] = term
            d['n_terms'] = m + n + 1
            frames = [metrics, pd.DataFrame(d, index=[0])]
            metrics = pd.concat(frames, ignore_index=True, sort=False)

    elif expression_type == 'paired':
        m = list(expressions(systems, 2))
        test = list(expressions(m, 2))

        to_do_terms = []
        for t in test:
            # format Boolean sentence for parse tree
            t = '(' + " ".join(str(x) for x in t).replace('[', '(').replace(']', ')').replace("'", "").replace(",", "").replace(" ", "") + ')'
            if t.count('biomedicus') != 2 and t.count('clamp') != 2 and t.count('ctakes') != 2 and t.count('metamap') != 2 and t.count('quick_umls') != 2:
                if t.count('&') != 3 and t.count('|') != 3:
                    to_do_terms.append(t)
                    if len(systems) == 5:
                        for i in systems:
                            if i not in t:
                                new_and = '(' + t + '&' + i + ')'
                                new_or = '(' + t + '|' + i + ')'
                                to_do_terms.append(new_and)
                                to_do_terms.append(new_or)

        print('paired', len(to_do_terms))
        for term in to_do_terms:
            if filter_semtype:
                d = get_metrics(term, analysis_type, corpus, run_type, filter_semtype, semtype)
            else:
                d = get_metrics(term, analysis_type, corpus, run_type, filter_semtype)
            n = term.count('&')
            m = term.count('|')
            d['merge'] = term
            d['n_terms'] = m + n + 1
            frames = [metrics, pd.DataFrame(d, index=[0])]
            metrics = pd.concat(frames, ignore_index=True, sort=False)

    return metrics

# write to file
def generate_ensemble_metrics(metrics, analysis_type, corpus, ensemble_type, filter_semtype, semtype = None):
    now = datetime.now()
    timestamp = datetime.timestamp(now)
    file_name = corpus + '_all_'

    # drop exact matches:
    metrics = metrics.drop_duplicates()

    if ensemble_type == 'merge':
        metrics = metrics.sort_values(by=['n_terms', 'merge'])
        file_name += 'merge_'
    elif ensemble_type == 'vote':
        file_name += '_'

    file = file_name + analysis_type + '_' + run_type + '_'
    if filter_semtype:
        file += semtype

    geometric_mean(metrics).to_csv(analysisConf.data_dir + file + str(timestamp) + '.csv')
    print(geometric_mean(metrics))

# control ensemble run
def ensemble_control(systems, analysis_type, corpus, run_type, filter_semtype, semtypes = None):
    if filter_semtype:
        for semtype in semtypes:
            test = get_valid_systems(systems, semtype)
            print('SYSTEMS FOR SEMTYPE', semtype, 'ARE', test)
            metrics = run_ensemble(test, analysis_type, corpus, filter_semtype, expression_type, semtype)
            if (expression_type == 'nested_with_singleton' and len(test) == 5) or expression_type in ['nested', 'paired', 'single']:
                generate_ensemble_metrics(metrics, analysis_type, corpus, ensemble_type, filter_semtype, semtype)
    else:
        metrics = run_ensemble(systems, analysis_type, corpus, filter_semtype, expression_type)
        generate_ensemble_metrics(metrics, analysis_type, corpus, ensemble_type, filter_semtype)

# ad hoc query for performance evaluation
def get_merge_data(boolean_expression: str, analysis_type: str, corpus: str, run_type: str, filter_semtype, semtype = None):
    """
    Traverse binary parse tree representation of Boolean sentence
    :params: boolean expression in form of '(<annotator_engine_name1><boolean operator><annotator_engine_name2>)'
        analysis_type (string value of: 'entity', 'cui', 'full') used to filter set of reference and system annotations
    :return: dictionary with values needed for confusion matrix
    """
    if filter_semtype:
        ann = get_ref_ann(analysis_type, corpus, filter_semtype, semtype)
    else:
        ann = get_ref_ann(analysis_type, corpus, filter_semtype)

    sentence = Sentence(boolean_expression)
    pt = make_parse_tree(sentence.sentence)
    r = process_sentence(pt, sentence, analysis_type, corpus, filter_semtype, semtype)

    if run_type == 'overlap' and rtype != 6:
        # NB: unpack macro recall into `rec`, not `r`; `r.system_merges` is
        # still needed below and must not be overwritten.
        if filter_semtype:
            ((TP, TN, FP, FN), (p, rec, f1)) = vectorized_cooccurences(r, analysis_type, corpus, filter_semtype, semtype)
        else:
            ((TP, TN, FP, FN), (p, rec, f1)) = vectorized_cooccurences(r, analysis_type, corpus, filter_semtype)

        # TODO: validate against ann1/sys1 where val = 1
        # total by number chars
        system_n = TP + FP
        reference_n = TP + FN

        d = cm_dict(FN, FP, TP, system_n, reference_n)
        print(d)

    elif run_type == 'exact':
        c = get_cooccurences(ann, r.system_merges, analysis_type, corpus)  # get matches, FN, etc.
if c.ref_system_match > 0: # compute confusion matrix metrics and write to dictionary -> df # get dictionary of confusion matrix metrics d = cm_dict(c.ref_only, c.system_only, c.ref_system_match, c.system_n, c.ref_n) print('cm', d) else: pass # get matched data from merge return r.system_merges # merge_eval(reference_only, system_only, reference_system_match, system_n, reference_n) # ad hoc query for performance evaluation def get_sys_merge(boolean_expression: str, analysis_type: str, corpus: str, run_type: str, filter_semtype, semtype = None): """ Traverse binary parse tree representation of Boolean sentence :params: boolean expression in form of '(<annotator_engine_name1><boolean operator><annotator_engine_name2>)' analysis_type (string value of: 'entity', 'cui', 'full') used to filter set of reference and system annotations :return: dictionary with values needed for confusion matrix """ # if filter_semtype: # ann = get_ref_ann(analysis_type, corpus, filter_semtype, semtype) # else: # ann = get_ref_ann(analysis_type, corpus, filter_semtype) sentence = Sentence(boolean_expression) pt = make_parse_tree(sentence.sentence) for semtype in semtypes: test = get_valid_systems(systems, semtype) r = process_sentence(pt, sentence, analysis_type, corpus, filter_semtype, semtype) # if run_type == 'overlap' and rtype != 6: # if filter_semtype: # ((TP, TN, FP, FN),(p,r,f1)) = vectorized_cooccurences(r, analysis_type, corpus, filter_semtype, semtype) # else: # ((TP, TN, FP, FN),(p,r,f1)) = vectorized_cooccurences(r, analysis_type, corpus, filter_semtype) # # TODO: validate against ann1/sys1 where val = 1 # # total by number chars # system_n = TP + FP # reference_n = TP + FN # d = cm_dict(FN, FP, TP, system_n, reference_n) # print(d) # elif run_type == 'exact': # c = get_cooccurences(ann, r.system_merges, analysis_type, corpus) # get matches, FN, etc. 
# if c.ref_system_match > 0: # compute confusion matrix metrics and write to dictionary -> df # # get dictionary of confusion matrix metrics # d = cm_dict(c.ref_only, c.system_only, c.ref_system_match, c.system_n, c.ref_n) # print('cm', d) # else: # pass # get matched data from merge return r.system_merges # merge_eval(reference_only, system_only, reference_system_match, system_n, reference_n) # majority vote def vectorized_annotations(ann): docs = get_docs(corpus) labels = ["concept"] out= [] for n in range(len(docs)): a1 = list(ann.loc[ann["case"] == docs[n][0]].itertuples(index=False)) a = label_vector(docs[n][1], a1, labels) out.append(a) return out def flatten_list(l): return [item for sublist in l for item in sublist] def get_reference_vector(analysis_type, corpus, filter_semtype, semtype = None): ref_ann = get_ref_ann(analysis_type, corpus, filter_semtype, semtype) df = ref_ann.copy() df = df.drop_duplicates(subset=['begin','end','case']) df['label'] = 'concept' cols_to_keep = ['begin', 'end', 'case', 'label'] ref = df[cols_to_keep].copy() test = vectorized_annotations(ref) ref = np.asarray(flatten_list(test), dtype=np.int32) return ref def majority_overlap_sys(systems, analysis_type, corpus, filter_semtype, semtype = None): d = {} cols_to_keep = ['begin', 'end', 'case', 'label'] sys_test = [] for system in systems: sys_ann = get_sys_data(system, analysis_type, corpus, filter_semtype, semtype) df = sys_ann.copy() df['label'] = 'concept' df = df.rename(index=str, columns={"note_id": "case"}) sys = df[df['system']==system][cols_to_keep].copy() test = vectorized_annotations(sys) d[system] = flatten_list(test) sys_test.append(d[system]) output = sum(np.array(sys_test)) n = int(len(systems) / 2) #print(n) if ((len(systems) % 2) != 0): vote = np.where(output > n, 1, 0) else: vote = np.where(output > n, 1, (np.where(output == n, random.randint(0, 1), 0))) return vote def majority_overlap_vote_out(ref, vote, corpus): TP, TN, FP, FN = confused(ref, vote) print(TP, 
TN, FP, FN) system_n = TP + FP reference_n = TP + FN d = cm_dict(FN, FP, TP, system_n, reference_n) d['TN'] = TN d['corpus'] = corpus print(d) metrics = pd.DataFrame(d, index=[0]) return metrics # control vote run def majority_vote(systems, analysis_type, corpus, run_type, filter_semtype, semtypes = None): print(semtypes, systems) if filter_semtype: for semtype in semtypes: test = get_valid_systems(systems, semtype) print('SYSYEMS FOR SEMTYPE', semtype, 'ARE', test) if run_type == 'overlap': ref = get_reference_vector(analysis_type, corpus, filter_semtype, semtype) vote = majority_overlap_sys(test, analysis_type, corpus, filter_semtype, semtype) metrics = majority_overlap_vote_out(ref, vote, corpus) #generate_ensemble_metrics(metrics, analysis_type, corpus, ensemble_type, filter_semtype, semtype) elif run_type == 'exact': sys = majority_exact_sys(test, analysis_type, corpus, filter_semtype, semtype) d = majority_exact_vote_out(sys, filter_semtype, semtype) metrics = pd.DataFrame(d, index=[0]) elif run_type == 'cui': sys = majority_cui_sys(test, analysis_type, corpus, filter_semtype, semtype) d = majority_cui_vote_out(sys, filter_semtype, semtype) metrics = pd.DataFrame(d, index=[0]) metrics['systems'] = ','.join(test) generate_ensemble_metrics(metrics, analysis_type, corpus, ensemble_type, filter_semtype, semtype) else: if run_type == 'overlap': ref = get_reference_vector(analysis_type, corpus, filter_semtype) vote = majority_overlap_sys(systems, analysis_type, corpus, filter_semtype) metrics = majority_overlap_vote_out(ref, vote, corpus) elif run_type == 'exact': sys = majority_exact_sys(systems, analysis_type, corpus, filter_semtype) d = majority_exact_vote_out(sys, filter_semtype) metrics = pd.DataFrame(d, index=[0]) elif run_type == 'cui': sys = majority_cui_sys(systems, analysis_type, corpus, filter_semtype) d = majority_cui_vote_out(sys, filter_semtype) metrics = pd.DataFrame(d, index=[0]) metrics['systems'] = ','.join(systems) 
generate_ensemble_metrics(metrics, analysis_type, corpus, ensemble_type, filter_semtype) print(metrics) def majority_cui_sys(systems, analysis_type, corpus, filter_semtype, semtype = None): cols_to_keep = ['cui', 'note_id', 'system'] df = pd.DataFrame() for system in systems: if filter_semtype: sys = get_sys_data(system, analysis_type, corpus, filter_semtype, semtype) else: sys = get_sys_data(system, analysis_type, corpus, filter_semtype) sys = sys[sys['system'] == system][cols_to_keep].drop_duplicates() frames = [df, sys] df = pd.concat(frames) return df def majority_cui_vote_out(sys, filter_semtype, semtype = None): sys = sys.astype(str) sys['value_cui'] = list(zip(sys.cui, sys.note_id.astype(str))) sys['count'] = sys.groupby(['value_cui'])['value_cui'].transform('count') n = int(len(systems) / 2) if ((len(systems) % 2) != 0): sys = sys[sys['count'] > n] else: # https://stackoverflow.com/questions/23330654/update-a-dataframe-in-pandas-while-iterating-row-by-row for i in sys.index: if sys.at[i, 'count'] == n: sys.at[i, 'count'] = random.choice([1,len(systems)]) sys = sys[sys['count'] > n] sys = sys.drop_duplicates(subset=['value_cui', 'cui', 'note_id']) ref = get_ref_ann(analysis_type, corpus, filter_semtype, semtype) c = get_cooccurences(ref, sys, analysis_type, corpus) # get matches, FN, etc. 
if c.ref_system_match > 0: # compute confusion matrix metrics and write to dictionary -> df # get dictionary of confusion matrix metrics print(cm_dict(c.ref_only, c.system_only, c.ref_system_match, c.system_n, c.ref_n)) return cm_dict(c.ref_only, c.system_only, c.ref_system_match, c.system_n, c.ref_n) def majority_exact_sys(systems, analysis_type, corpus, filter_semtype, semtype = None): cols_to_keep = ['begin', 'end', 'note_id', 'system'] df = pd.DataFrame() for system in systems: if filter_semtype: sys = get_sys_data(system, analysis_type, corpus, filter_semtype, semtype) else: sys = get_sys_data(system, analysis_type, corpus, filter_semtype) sys = sys[sys['system'] == system][cols_to_keep].drop_duplicates() frames = [df, sys] df = pd.concat(frames) return df def majority_exact_vote_out(sys, filter_semtype, semtype = None): sys['span'] = list(zip(sys.begin, sys.end, sys.note_id.astype(str))) sys['count'] = sys.groupby(['span'])['span'].transform('count') n = int(len(systems) / 2) if ((len(systems) % 2) != 0): sys = sys[sys['count'] > n] else: # https://stackoverflow.com/questions/23330654/update-a-dataframe-in-pandas-while-iterating-row-by-row for i in sys.index: if sys.at[i, 'count'] == n: sys.at[i, 'count'] = random.choice([1,len(systems)]) sys = sys[sys['count'] > n] sys = sys.drop_duplicates(subset=['span', 'begin', 'end', 'note_id']) ref = get_ref_ann(analysis_type, corpus, filter_semtype, semtype) c = get_cooccurences(ref, sys, analysis_type, corpus) # get matches, FN, etc. 
if c.ref_system_match > 0: # compute confusion matrix metrics and write to dictionary -> df # get dictionary of confusion matrix metrics print(cm_dict(c.ref_only, c.system_only, c.ref_system_match, c.system_n, c.ref_n)) return cm_dict(c.ref_only, c.system_only, c.ref_system_match, c.system_n, c.ref_n) #ensemble_type = 'vote' #filter_semtype = False #majority_vote(systems, analysis_type, corpus, run_type, filter_semtype, semtypes) #%%time def main(): ''' corpora: i2b2, mipacq, fv017 analyses: entity only (exact span), cui by document, full (aka (entity and cui on exaact span/exact cui) systems: ctakes, biomedicus, clamp, metamap, quick_umls TODO -> Vectorization (entity only) -> done: add switch for use of TN on single system performance evaluations -> done add switch for overlap matching versus exact span -> done -> Other tasks besides concept extraction ''' analysisConf = AnalysisConfig() print(analysisConf.systems, analysisConf.corpus_config()) if (rtype == 1): print(semtypes, systems) if filter_semtype: for semtype in semtypes: test = get_valid_systems(systems, semtype) print('SYSYEMS FOR SEMTYPE', semtype, 'ARE', test) generate_metrics(analysis_type, corpus, filter_semtype, semtype) else: generate_metrics(analysis_type, corpus, filter_semtype) elif (rtype == 2): print('run_type:', run_type) if filter_semtype: print(semtypes) ensemble_control(analysisConf.systems, analysis_type, corpus, run_type, filter_semtype, semtypes) else: ensemble_control(analysisConf.systems, analysis_type, corpus, run_type, filter_semtype) elif (rtype == 3): t = ['concept_jaccard_score_false'] test_systems(analysis_type, analysisConf.systems, corpus) test_count(analysis_type, corpus) test_ensemble(analysis_type, corpus) elif (rtype == 4): if filter_semtype: majority_vote(systems, analysis_type, corpus, run_type, filter_semtype, semtypes) else: majority_vote(systems, analysis_type, corpus, run_type, filter_semtype) elif (rtype == 5): # control filter_semtype in get_sys_data, get_ref_n and 
generate_metrics. TODO consolidate. # # run single ad hoc statement statement = '((ctakes&biomedicus)|metamap)' def ad_hoc(analysis_type, corpus, statement): sys = get_merge_data(statement, analysis_type, corpus, run_type, filter_semtype) sys = sys.rename(index=str, columns={"note_id": "case"}) sys['label'] = 'concept' ref = get_reference_vector(analysis_type, corpus, filter_semtype) sys = vectorized_annotations(sys) sys = np.asarray(flatten_list(list(sys)), dtype=np.int32) return ref, sys ref, sys = ad_hoc(analysis_type, corpus, statement) elif (rtype == 6): # 5 w/o evaluation statement = '(ctakes|biomedicus)' #((((A∧C)∧D)∧E)∨B)->for covid pipeline def ad_hoc(analysis_type, corpus, statement): print(semtypes) for semtype in semtypes: sys = get_sys_merge(statement, analysis_type, corpus, run_type, filter_semtype, semtype) sys = sys.rename(index=str, columns={"note_id": "case"}) return sys sys = ad_hoc(analysis_type, corpus, statement).sort_values(by=['case', 'begin']) sys.drop_duplicates(['cui', 'case', 'polarity'],inplace=True) sys.to_csv(data_directory + 'test_new.csv') test = sys.copy() test.drop(['begin','end','case','polarity'], axis=1, inplace=True) test.to_csv(data_directory + 'test_dedup_new.csv') if __name__ == '__main__': #%prun main() main() print('done!') pass ```
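The heart of the overlap majority vote implemented above in `majority_overlap_sys` — sum the per-system binary label vectors and keep the positions that more than half the systems annotated — can be sketched in isolation. This is a minimal illustration only; the notebook's version additionally breaks ties at random when the number of systems is even:

```python
import numpy as np

def majority_vote_vectors(label_vectors):
    """Combine equal-length 0/1 vectors (one per system) by strict majority rule."""
    votes = np.sum(np.array(label_vectors), axis=0)  # per-position vote count
    n = len(label_vectors) // 2                      # strict-majority threshold
    return np.where(votes > n, 1, 0)

# three systems labelling five tokens
per_system_labels = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1],
    [0, 1, 0, 0, 1],
]
print(majority_vote_vectors(per_system_labels))  # -> [1 1 0 0 1]
```

For an even number of systems the notebook resolves exact ties (`votes == n`) with a coin flip; this sketch deliberately omits that step.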
``` %pylab notebook import numpy as np import numpy.linalg as la np.set_printoptions(suppress=True) ``` Let's imagine a micro-internet, with just 6 websites (**A**vocado, **B**ullseye, **C**atBabel, **D**romeda, **e**Tings, and **F**aceSpace). Each website links to some of the others, and this forms a network like this. ![Micro-Network](Examples/mini-network.png "A Micro-Network") We have 100 *Procrastinating Pat*s (see [README](https://github.com/rgaezsd/pagerank/blob/main/README.md)) on our micro-internet, each viewing a single website at a time. Each minute the Pats follow a link on their website to another site on the micro-internet. After a while, the websites that are most linked to will have more Pats visiting them, and in the long run, each minute for every Pat that leaves a website, another will enter keeping the total numbers of Pats on each website constant. We represent the number of Pats on each website with the vector, $$\mathbf{r} = \begin{bmatrix} r_A \\ r_B \\ r_C \\ r_D \\ r_E \\ r_F \end{bmatrix}$$ And say that the number of Pats on each website in minute $i+1$ is related to those at minute $i$ by the matrix transformation $$ \mathbf{r}^{(i+1)} = L \,\mathbf{r}^{(i)}$$ with the matrix $L$ taking the form, $$ L = \begin{bmatrix} L_{A→A} & L_{B→A} & L_{C→A} & L_{D→A} & L_{E→A} & L_{F→A} \\ L_{A→B} & L_{B→B} & L_{C→B} & L_{D→B} & L_{E→B} & L_{F→B} \\ L_{A→C} & L_{B→C} & L_{C→C} & L_{D→C} & L_{E→C} & L_{F→C} \\ L_{A→D} & L_{B→D} & L_{C→D} & L_{D→D} & L_{E→D} & L_{F→D} \\ L_{A→E} & L_{B→E} & L_{C→E} & L_{D→E} & L_{E→E} & L_{F→E} \\ L_{A→F} & L_{B→F} & L_{C→F} & L_{D→F} & L_{E→F} & L_{F→F} \\ \end{bmatrix} $$ where the columns represent the probability of leaving a website for any other website, and sum to one. The rows determine how likely you are to enter a website from any other, though these need not add to one. 
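Before building the full 6-site matrix, a tiny 3-site version can sanity-check the update rule: each column of $L$ must sum to one, and one minute of browsing is a single matrix–vector product. The 3-site matrix below is made up purely for illustration:

```python
import numpy as np

# hypothetical 3-site link matrix: column j says where site j's visitors go
L = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

# each column is a probability distribution over destinations
assert np.allclose(L.sum(axis=0), 1)

r = np.array([100.0, 0.0, 0.0])  # all 100 Pats start on site A
r_next = L @ r                   # one minute later
print(r_next)                    # half move to B, half to C
```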
The long time behaviour of this system is when $ \mathbf{r}^{(i+1)} = \mathbf{r}^{(i)}$, so we'll drop the superscripts here, and that allows us to write,
$$ L \,\mathbf{r} = \mathbf{r}$$
which is an eigenvalue equation for the matrix $L$, with eigenvalue 1 (this is guaranteed by the probabilistic structure of the matrix $L$).

```
L = np.array([[0,   1/2, 1/3, 0, 0,   0 ],
              [1/3, 0,   0,   0, 1/2, 0 ],
              [1/3, 1/2, 0,   1, 0,   1/2 ],
              [1/3, 0,   1/3, 0, 1/2, 1/2 ],
              [0,   0,   0,   0, 0,   0 ],
              [0,   0,   1/3, 0, 0,   0 ]])

eVals, eVecs = la.eig(L)
order = np.absolute(eVals).argsort()[::-1]
eVals = eVals[order]
eVecs = eVecs[:, order]

r = eVecs[:, 0]
100 * np.real(r / np.sum(r))
```

We can see from this list the number of Procrastinating Pats that we expect to find on each website after long times. Putting them in order of *popularity* (based on this metric), the PageRank of this micro-internet is:

**C**atBabel, **D**romeda, **A**vocado, **F**aceSpace, **B**ullseye, **e**Tings

Referring back to the micro-internet diagram, is this what you would have expected? Convince yourself that, based on which pages seem important given which others link to them, this is a sensible ranking.

Let's now try to get the same result using the Power-Iteration method.

```
r = 100 * np.ones(6) / 6
r
```

Next, let's update the vector to the next minute with the matrix $L$, applying the update repeatedly until the answer stabilises.

```
r = 100 * np.ones(6) / 6
for i in np.arange(100):
    r = L @ r
r
```

Or even better, we can keep running until we get to the required tolerance.

```
r = 100 * np.ones(6) / 6
lastR = r
r = L @ r
i = 0
while la.norm(lastR - r) > 0.01:
    lastR = r
    r = L @ r
    i += 1
print(str(i) + " iterations to convergence.")
r
```

### Damping Parameter Case

Let's consider an extension to our micro-internet where things start to go wrong. Say a new website is added to the micro-internet: *Geoff's* Website. This website is linked to by *FaceSpace* and only links to itself.
![An Expanded Micro-Internet](Examples/network-extended.png "An Expanded Micro-Internet")

Intuitively, *Geoff's* site is linked to only by *FaceSpace*, which is itself in the bottom half of the PageRank, so we might expect it to have a correspondingly low PageRank score.

```
L2 = np.array([[0,   1/2, 1/3, 0, 0,   0,   0 ],
               [1/3, 0,   0,   0, 1/2, 0,   0 ],
               [1/3, 1/2, 0,   1, 0,   1/3, 0 ],
               [1/3, 0,   1/3, 0, 1/2, 1/3, 0 ],
               [0,   0,   0,   0, 0,   0,   0 ],
               [0,   0,   1/3, 0, 0,   0,   0 ],
               [0,   0,   0,   0, 0,   1/3, 1 ]])

r = 100 * np.ones(7) / 7
lastR = r
r = L2 @ r
i = 0
while la.norm(lastR - r) > 0.01:
    lastR = r
    r = L2 @ r
    i += 1
print(str(i) + " iterations to convergence.")
r
```

That's no good! *Geoff* seems to be taking all the traffic on the micro-internet, and somehow coming out at the top of the PageRank. This behaviour can be understood, because once a Pat gets to *Geoff's* Website, they can't leave, as all links head back to Geoff.

To combat this, we can add a small probability that the Procrastinating Pats don't follow any link on a webpage, but instead visit a website on the micro-internet at random. We'll say the probability of them following a link is $d$ and the probability of choosing a random website is therefore $1-d$. We can use a new matrix to work out where the Pats visit each minute.

$$ M = d \, L + \frac{1-d}{n} \, J $$

where $J$ is an $n\times n$ matrix where every element is one. If $d$ is one, we have the case we had previously, whereas if $d$ is zero, we will always visit a random webpage and therefore all webpages will be equally likely and equally ranked. For this extension to work best, $1-d$ should be somewhat small - though we won't go into a discussion about exactly how small.
```
d = 0.5
M = d * L2 + (1-d)/7 * np.ones([7, 7])

r = 100 * np.ones(7) / 7
lastR = r
r = M @ r
i = 0
while la.norm(lastR - r) > 0.01:
    lastR = r
    r = M @ r
    i += 1
print(str(i) + " iterations to convergence.")
r
```

This is certainly better: the PageRank now gives sensible numbers for the Procrastinating Pats that end up on each webpage. However, this method still predicts a high ranking for Geoff's webpage, which could be seen as a consequence of using such a small network.
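The damping-plus-power-iteration recipe can be collected into one reusable function. This is a sketch following the cells above; the name `pageRank` and the ring-network example are assumptions for illustration, not something defined elsewhere in the notebook:

```python
import numpy as np
import numpy.linalg as la

def pageRank(linkMatrix, d=0.85, tol=0.01):
    """Power-iterate M = d*L + (1-d)/n * J until r stops changing."""
    n = linkMatrix.shape[0]
    M = d * linkMatrix + (1 - d) / n * np.ones([n, n])
    r = 100 * np.ones(n) / n  # start with the Pats spread evenly
    lastR = r
    r = M @ r
    while la.norm(lastR - r) > tol:
        lastR = r
        r = M @ r
    return r

# a uniform ring: every site links only to the next one,
# so by symmetry every site should end up equally ranked
ring = np.roll(np.eye(4), 1, axis=0)
print(pageRank(ring))  # -> [25. 25. 25. 25.]
```

Note the columns of `ring` each sum to one, as the method requires; any column-stochastic link matrix can be passed in the same way.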
``` import numpy as np import scipy as sp import pandas as pd import urllib.request import os import shutil import tarfile import matplotlib.pyplot as plt from sklearn import datasets, cross_validation, metrics from sklearn.preprocessing import KernelCenterer %matplotlib notebook ``` First we need to download the Caltech256 dataset. ``` DATASET_URL = r"http://homes.esat.kuleuven.be/~tuytelaa/"\ "unsup/unsup_caltech256_dense_sift_1000_bow.tar.gz" DATASET_DIR = "../../../projects/weiyen/data" filename = os.path.split(DATASET_URL)[1] dest_path = os.path.join(DATASET_DIR, filename) if os.path.exists(dest_path): print("{} exists. Skipping download...".format(dest_path)) else: with urllib.request.urlopen(DATASET_URL) as response, open(dest_path, 'wb') as out_file: shutil.copyfileobj(response, out_file) print("Dataset downloaded. Extracting files...") tar = tarfile.open(dest_path) tar.extractall(path=DATASET_DIR) print("Files extracted.") tar.close() path = os.path.join(DATASET_DIR, "bow_1000_dense/") ``` Calculate multi-class KNFST model for multi-class novelty detection INPUT K: NxN kernel matrix containing similarities of n training samples labels: Nx1 column vector containing multi-class labels of N training samples OUTPUT proj: Projection of KNFST target_points: The projections of training data into the null space Load the dataset into memory ``` ds = datasets.load_files(path) ds.data = np.vstack([np.fromstring(txt, sep='\t') for txt in ds.data]) data = ds.data target = ds.target ``` Select a few "known" classes ``` classes = np.unique(target) num_class = len(classes) num_known = 5 known = np.random.choice(classes, num_known) mask = np.array([y in known for y in target]) X_train = data[mask] y_train = target[mask] idx = y_train.argsort() X_train = X_train[idx] y_train = y_train[idx] print(X_train.shape) print(y_train.shape) def _hik(x, y): ''' Implements the histogram intersection kernel. 
''' return np.minimum(x, y).sum() from scipy.linalg import svd def nullspace(A, eps=1e-12): u, s, vh = svd(A) null_mask = (s <= eps) null_space = sp.compress(null_mask, vh, axis=0) return sp.transpose(null_space) A = np.array([[2,3,5],[-4,2,3],[0,0,0]]) np.array([-4,2,3]).dot(nullspace(A)) ``` Train the model, and obtain the projection and class target points. ``` def learn(K, labels): classes = np.unique(labels) if len(classes) < 2: raise Exception("KNFST requires 2 or more classes") n, m = K.shape if n != m: raise Exception("Kernel matrix must be quadratic") centered_k = KernelCenterer().fit_transform(K) basis_values, basis_vecs = np.linalg.eigh(centered_k) basis_vecs = basis_vecs[:,basis_values > 1e-12] basis_values = basis_values[basis_values > 1e-12] basis_values = np.diag(1.0/np.sqrt(basis_values)) basis_vecs = basis_vecs.dot(basis_values) L = np.zeros([n,n]) for cl in classes: for idx1, x in enumerate(labels == cl): for idx2, y in enumerate(labels == cl): if x and y: L[idx1, idx2] = 1.0/np.sum(labels==cl) M = np.ones([m,m])/m H = (((np.eye(m,m)-M).dot(basis_vecs)).T).dot(K).dot(np.eye(n,m)-L) t_sw = H.dot(H.T) eigenvecs = nullspace(t_sw) if eigenvecs.shape[1] < 1: eigenvals, eigenvecs = np.linalg.eigh(t_sw) eigenvals = np.diag(eigenvals) min_idx = eigenvals.argsort()[0] eigenvecs = eigenvecs[:, min_idx] proj = ((np.eye(m,m)-M).dot(basis_vecs)).dot(eigenvecs) target_points = [] for cl in classes: k_cl = K[labels==cl, :] pt = np.mean(k_cl.dot(proj), axis=0) target_points.append(pt) return proj, np.array(target_points) kernel_mat = metrics.pairwise_kernels(X_train, metric=_hik) proj, target_points = learn(kernel_mat, y_train) def squared_euclidean_distances(x, y): n = np.shape(x)[0] m = np.shape(y)[0] distmat = np.zeros((n,m)) for i in range(n): for j in range(m): buff = x[i,:] - y[j,:] distmat[i,j] = buff.dot(buff.T) return distmat def assign_score(proj, target_points, ks): projection_vectors = ks.T.dot(proj) sq_dist = 
squared_euclidean_distances(projection_vectors, target_points) scores = np.sqrt(np.amin(sq_dist, 1)) return scores auc_scores = [] classes = np.unique(target) num_known = 5 for n in range(20): num_class = len(classes) known = np.random.choice(classes, num_known) mask = np.array([y in known for y in target]) X_train = data[mask] y_train = target[mask] idx = y_train.argsort() X_train = X_train[idx] y_train = y_train[idx] sample_idx = np.random.randint(0, len(data), size=1000) X_test = data[sample_idx,:] y_labels = target[sample_idx] # Test labels are 1 if novel, otherwise 0. y_test = np.array([1 if cl not in known else 0 for cl in y_labels]) # Train model kernel_mat = metrics.pairwise_kernels(X_train, metric=_hik) proj, target_points = learn(kernel_mat, y_train) # Test ks = metrics.pairwise_kernels(X_train, X_test, metric=_hik) scores = assign_score(proj, target_points, ks) auc = metrics.roc_auc_score(y_test, scores) print("AUC:", auc) auc_scores.append(auc) fpr, tpr, thresholds = metrics.roc_curve(y_test, scores) plt.figure() plt.plot(fpr, tpr, label='ROC curve') plt.plot([0, 1], [0, 1], 'k--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('ROC Curve of the KNFST Novelty Classifier') plt.legend(loc="lower right") plt.show() ```
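A note on the `nullspace` helper defined earlier: it calls `sp.compress` and `sp.transpose`, deprecated NumPy aliases that newer SciPy releases no longer provide. The same eps-thresholded SVD idea can be written with plain NumPy boolean indexing; this is a drop-in sketch, not the notebook's exact code:

```python
import numpy as np

def nullspace(A, eps=1e-12):
    """Columns spanning the null space of A, found via SVD."""
    u, s, vh = np.linalg.svd(A)
    # rows of vh whose singular value is numerically zero span the null space
    null_mask = s <= eps
    return vh[null_mask].T

# the rank-2 example matrix used in the notebook
A = np.array([[2., 3., 5.],
              [-4., 2., 3.],
              [0., 0., 0.]])
ns = nullspace(A)
print(np.allclose(A @ ns, 0))  # -> True
```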
## KNN imputation The missing values are estimated as the average value from the closest K neighbours. [KNNImputer from sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.impute.KNNImputer.html#sklearn.impute.KNNImputer) - Same K will be used to impute all variables - Can't really optimise K to better predict the missing values - Could optimise K to better predict the target **Note** If what we want is to predict, as accurately as possible the values of the missing data, then, we would not use the KNN imputer, we would build individual KNN algorithms to predict 1 variable from the remaining ones. This is a common regression problem. ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt # to split the datasets from sklearn.model_selection import train_test_split # multivariate imputation from sklearn.impute import KNNImputer ``` ## Load data ``` # list with numerical varables cols_to_use = [ 'MSSubClass', 'LotFrontage', 'LotArea', 'OverallQual', 'OverallCond', 'YearBuilt', 'YearRemodAdd', 'MasVnrArea', 'BsmtFinSF1', 'BsmtFinSF2', 'BsmtUnfSF', 'TotalBsmtSF', '1stFlrSF', '2ndFlrSF', 'LowQualFinSF', 'GrLivArea', 'BsmtFullBath', 'BsmtHalfBath', 'FullBath', 'HalfBath', 'BedroomAbvGr', 'KitchenAbvGr', 'TotRmsAbvGrd', 'Fireplaces', 'GarageYrBlt', 'GarageCars', 'GarageArea', 'WoodDeckSF', 'OpenPorchSF', 'EnclosedPorch', '3SsnPorch', 'ScreenPorch', 'PoolArea', 'MiscVal', 'MoSold', 'YrSold', 'SalePrice' ] # let's load the dataset with a selected variables data = pd.read_csv('../houseprice.csv', usecols=cols_to_use) # find variables with missing data for var in data.columns: if data[var].isnull().sum() > 1: print(var, data[var].isnull().sum()) # let's separate into training and testing set # first drop the target from the feature list cols_to_use.remove('SalePrice') X_train, X_test, y_train, y_test = train_test_split( data[cols_to_use], data['SalePrice'], test_size=0.3, random_state=0) X_train.shape, X_test.shape # reset index, so we can 
compare values later on # in the demo X_train.reset_index(inplace=True, drop=True) X_test.reset_index(inplace=True, drop=True) ``` ## KNN imputation ``` imputer = KNNImputer( n_neighbors=5, # the number of neighbours K weights='distance', # the weighting factor metric='nan_euclidean', # the metric to find the neighbours add_indicator=False, # whether to add a missing indicator ) imputer.fit(X_train) train_t = imputer.transform(X_train) test_t = imputer.transform(X_test) # sklearn returns a Numpy array # lets make a dataframe train_t = pd.DataFrame(train_t, columns=X_train.columns) test_t = pd.DataFrame(test_t, columns=X_test.columns) train_t.head() # variables without NA after the imputation train_t[['LotFrontage', 'MasVnrArea', 'GarageYrBlt']].isnull().sum() # the obseravtions with NA in the original train set X_train[X_train['MasVnrArea'].isnull()]['MasVnrArea'] # the replacement values in the transformed dataset train_t[X_train['MasVnrArea'].isnull()]['MasVnrArea'] # the mean value of the variable (i.e., for mean imputation) X_train['MasVnrArea'].mean() ``` In some cases, the imputation values are very different from the mean value we would have used in MeanMedianImputation. ## Imputing a slice of the dataframe We can use Feature-engine to apply the KNNImputer to a slice of the dataframe. 
``` from feature_engine.wrappers import SklearnTransformerWrapper data = pd.read_csv('../houseprice.csv') X_train, X_test, y_train, y_test = train_test_split( data.drop('SalePrice', axis=1), data['SalePrice'], test_size=0.3, random_state=0) X_train.shape, X_test.shape # start the KNNimputer inside the SKlearnTransformerWrapper imputer = SklearnTransformerWrapper( transformer = KNNImputer(weights='distance'), variables = cols_to_use, ) # fit the wrapper + KNNImputer imputer.fit(X_train) # transform the data train_t = imputer.transform(X_train) test_t = imputer.transform(X_test) # feature-engine returns a dataframe train_t.head() # no NA after the imputation train_t['MasVnrArea'].isnull().sum() # same imputation values as previously train_t[X_train['MasVnrArea'].isnull()]['MasVnrArea'] ``` ## Automatically find best imputation parameters We can optimise the parameters of the KNN imputation to better predict our outcome. ``` # import extra classes for modelling from sklearn.preprocessing import StandardScaler from sklearn.linear_model import Lasso from sklearn.pipeline import Pipeline from sklearn.model_selection import GridSearchCV # separate intro train and test set X_train, X_test, y_train, y_test = train_test_split( data[cols_to_use], # just the features data['SalePrice'], # the target test_size=0.3, # the percentage of obs in the test set random_state=0) # for reproducibility X_train.shape, X_test.shape pipe = Pipeline(steps=[ ('imputer', KNNImputer( n_neighbors=5, weights='distance', add_indicator=False)), ('scaler', StandardScaler()), ('regressor', Lasso(max_iter=2000)), ]) # now we create the grid with all the parameters that we would like to test param_grid = { 'imputer__n_neighbors': [3,5,10], 'imputer__weights': ['uniform', 'distance'], 'imputer__add_indicator': [True, False], 'regressor__alpha': [10, 100, 200], } grid_search = GridSearchCV(pipe, param_grid, cv=5, n_jobs=-1, scoring='r2') # cv=3 is the cross-validation # no_jobs =-1 indicates to use all 
available cpus # scoring='r2' indicates to evaluate using the r squared # for more details in the grid parameters visit: #https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html # and now we train over all the possible combinations # of the parameters above grid_search.fit(X_train, y_train) # and we print the best score over the train set print(("best linear regression from grid search: %.3f" % grid_search.score(X_train, y_train))) # let's check the performance over the test set print(("best linear regression from grid search: %.3f" % grid_search.score(X_test, y_test))) # and find the best parameters grid_search.best_params_ ``` ## Compare with univariate imputation ``` from sklearn.impute import SimpleImputer # separate intro train and test set X_train, X_test, y_train, y_test = train_test_split( data[cols_to_use], # just the features data['SalePrice'], # the target test_size=0.3, # the percentage of obs in the test set random_state=0) # for reproducibility X_train.shape, X_test.shape pipe = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='mean', fill_value=-1)), ('scaler', StandardScaler()), ('regressor', Lasso(max_iter=2000)), ]) param_grid = { 'imputer__strategy': ['mean', 'median', 'constant'], 'imputer__add_indicator': [True, False], 'regressor__alpha': [10, 100, 200], } grid_search = GridSearchCV(pipe, param_grid, cv=5, n_jobs=-1, scoring='r2') # and now we train over all the possible combinations of the parameters above grid_search.fit(X_train, y_train) # and we print the best score over the train set print(("best linear regression from grid search: %.3f" % grid_search.score(X_train, y_train))) # and finally let's check the performance over the test set print(("best linear regression from grid search: %.3f" % grid_search.score(X_test, y_test))) # and find the best fit parameters like this grid_search.best_params_ ``` We see that imputing the values with an arbitrary value of -1, returns approximately the same 
performance as the KNN imputation, so it may not be worth adding the complexity of training models to impute the missing values before predicting the target we are actually interested in.
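The comparison above can be reproduced end-to-end on synthetic data. The sketch below uses a small random matrix rather than the houseprice dataset, and shows that both imputers remove every missing value; they only disagree on the fill values, which is why the cross-validated score of the downstream model, not the imputed values themselves, is the right way to choose between them.

```python
import numpy as np
from sklearn.impute import KNNImputer, SimpleImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X[::10, 0] = np.nan  # punch holes in the first column

# distance-weighted KNN imputation vs. a constant fill
knn = KNNImputer(n_neighbors=5, weights="distance")
const = SimpleImputer(strategy="constant", fill_value=-1)

X_knn = knn.fit_transform(X)
X_const = const.fit_transform(X)

# both imputers remove every NaN; they just disagree on the fill values
print(np.isnan(X_knn).sum(), np.isnan(X_const).sum())  # 0 0
print(X_const[::10, 0][:3])  # the constant imputer fills every hole with -1.0
```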
---
``` import os import matplotlib.pyplot as plt import numpy as np import pandas as pd from random import randint from numpy import array from numpy import argmax from numpy import array_equal import tensorflow as tf from tensorflow.keras.utils import to_categorical from tensorflow.keras.models import Model from tensorflow.keras.layers import Input from tensorflow.keras.layers import LSTM from tensorflow.keras.layers import Dense from tensorflow.keras.preprocessing.sequence import pad_sequences from sklearn.model_selection import train_test_split # from google.colab import drive # drive.mount('/content/drive') # os.chdir("drive/My Drive/Colab Notebooks/Structural/Project") ``` Dataset Preparation and Split ``` dataset = pd.read_csv('./data/dataset.ultrafltr.csv') print(dataset) ``` Lengths of sequences ``` data = dataset['sequence'].str.len() counts, bins = np.histogram(data) plt.hist(bins[:-1], bins, weights=counts) df_filtered = dataset[dataset['sequence'].str.len() <= 1000] print(df_filtered.shape) data = df_filtered['sequence'].str.len() counts, bins = np.histogram(data) plt.hist(bins[:-1], bins, weights=counts) dataset = df_filtered measurer = np.vectorize(len) res1 = measurer(dataset.values.astype(str)).max(axis=0)[0] print(res1) df, df_test = train_test_split(dataset, test_size=0.1) print(df) ``` Encoding of Amino Acids ``` codes = ['A', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'V', 'W', 'Y'] def create_dict(codes): char_dict = {} for index, val in enumerate(codes): char_dict[val] = index+1 return char_dict char_dict = create_dict(codes) def integer_encoding(data): """ - Encodes a code sequence to integer values. - The 20 common amino acids are mapped to 1-20 and any other character is categorized as 0.
""" row_encode = [] for code in list(data): row_encode.append(char_dict.get(code, 0)) return row_encode ``` Model ``` # prepare data for the LSTM def get_dataset(df): X1, X2, y = list(), list(), list() for index, row in df.iterrows(): # generate source sequence source = row['sequence'] # source = source.ljust(res1, '0') source = integer_encoding(source) # define padded target sequence target = row['opm_class'] # target = target.ljust(res1, '0') target = list(map(int, target)) # create padded input target sequence target_in = [0] + target[:-1] # encode src_encoded = to_categorical(source, num_classes=20+1) tar_encoded = to_categorical(target, num_classes=2) tar2_encoded = to_categorical(target_in, num_classes=2) # store X1.append(src_encoded) X2.append(tar2_encoded) y.append(tar_encoded) return array(X1), array(X2), array(y)#, temp_df # Creating the first Dataframe using dictionary X1, X2, y = get_dataset(df) X1 = pad_sequences(X1, maxlen=res1, padding='post', truncating='post') X2 = pad_sequences(X2, maxlen=res1, padding='post', truncating='post') y = pad_sequences(y, maxlen=res1, padding='post', truncating='post') # returns train, inference_encoder and inference_decoder models def define_models(n_input, n_output, n_units): # define training encoder encoder_inputs = Input(shape=(None, n_input)) encoder = LSTM(n_units, return_state=True) encoder_outputs, state_h, state_c = encoder(encoder_inputs) encoder_states = [state_h, state_c] # define training decoder decoder_inputs = Input(shape=(None, n_output)) decoder_lstm = LSTM(n_units, return_sequences=True, return_state=True) decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states) decoder_dense = Dense(n_output, activation='softmax') decoder_outputs = decoder_dense(decoder_outputs) model = Model([encoder_inputs, decoder_inputs], decoder_outputs) # define inference encoder encoder_model = Model(encoder_inputs, encoder_states) # define inference decoder decoder_state_input_h = 
Input(shape=(n_units,)) decoder_state_input_c = Input(shape=(n_units,)) decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c] decoder_outputs, state_h, state_c = decoder_lstm(decoder_inputs, initial_state=decoder_states_inputs) decoder_states = [state_h, state_c] decoder_outputs = decoder_dense(decoder_outputs) decoder_model = Model([decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states) # return all models return model, encoder_model, decoder_model train, infenc, infdec = define_models(20+1, 2, 128) train.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) train.summary() # train model train.fit([X1, X2], y, epochs=10) ``` Prediction ``` # decode a one hot encoded string def one_hot_decode(encoded_seq): return [argmax(vector) for vector in encoded_seq] def compare_seqs(source, target): correct = 0 for i in range(len(source)): if source[i] == target[i]: correct += 1 return correct # generate target given source sequence def predict_sequence(infenc, infdec, source, n_steps, cardinality): # encode state = infenc.predict(source) # start of sequence input target_seq = array([0.0 for _ in range(cardinality)]).reshape(1, 1, cardinality) # collect predictions output = list() for t in range(n_steps): # predict next char yhat, h, c = infdec.predict([target_seq] + state) # store prediction output.append(yhat[0,0,:]) # update state state = [h, c] # update target sequence target_seq = yhat return array(output) # evaluate LSTM X1, X2, y = get_dataset(df_test) X1 = pad_sequences(X1, maxlen=res1, padding='post', truncating='post') X2 = pad_sequences(X2, maxlen=res1, padding='post', truncating='post') y = pad_sequences(y, maxlen=res1, padding='post', truncating='post') accuracies = [] for i in range(len(X1)): row = X1[i] row = row.reshape((1, row.shape[0], row.shape[1])) target = predict_sequence(infenc, infdec, row, res1, 2) curr_acc = compare_seqs(one_hot_decode(target), one_hot_decode(y[i]))/res1 
accuracies.append(curr_acc) print(f'Sequence{i} Accuracy: {curr_acc}') total_acc = 0 for i in range(len(accuracies)): total_acc += accuracies[i] print('Total Accuracy: %.2f%%' % (float(total_acc)/float(len(X1))*100.0)) ```
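The encoding scheme used throughout this notebook can be checked in isolation. The standalone sketch below mirrors the `char_dict`/`integer_encoding` logic and uses a small NumPy stand-in for Keras' `to_categorical`; the helper names here are re-implementations for illustration, not part of the notebook above.

```python
import numpy as np

codes = "ACDEFGHIKLMNPQRSTVWY"
char_dict = {aa: i + 1 for i, aa in enumerate(codes)}  # 0 is reserved for unknown/padding

def integer_encoding(seq):
    # map each residue to its index; unknown residues (e.g. 'X') become 0
    return [char_dict.get(aa, 0) for aa in seq]

def one_hot(indices, num_classes=21):
    # numpy stand-in for keras' to_categorical
    return np.eye(num_classes)[indices]

enc = integer_encoding("ACXW")
print(enc)                  # [1, 2, 0, 19]
print(one_hot(enc).shape)   # (4, 21)
```

The `num_classes=21` matches the `to_categorical(source, num_classes=20+1)` call in the notebook: 20 amino-acid classes plus the shared unknown/padding class at index 0.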
---
# R Bootcamp Part 5 ## stargazer, xtable, robust standard errors, and fixed effects regressions This bootcamp will help us get more comfortable using **stargazer** and **xtable** to produce high-quality results and summary statistics tables, and using `felm()` from the **lfe** package for regressions (both fixed effects and regular OLS). For today, let's load a few packages and read in a dataset on residential water use for residents in Alameda and Contra Costa Counties. ## Preamble Here we'll load in our necessary packages and the data file ``` library(tidyverse) library(haven) library(lfe) library(stargazer) library(xtable) library(lmtest) library(sandwich) # load in wateruse data, add in measure of gallons per day "gpd" waterdata <- read_dta("wateruse.dta") %>% mutate(gpd = (unit*748)/num_days) head(waterdata) ``` # Summary Statistics Tables with xtable `xtable` is a useful package for producing custom summary statistics tables. Let's say we're interested in summarizing water use ($gpd$) and degree days ($degree\_days$) according to whether a lot is less than or greater than one acre ($lotsize_1$) or more than 4 acres ($lotsize_4$): `homesize <- waterdata %>% select(hh, billingcycle, gpd, degree_days, lotsize) %>% drop_na() %>% mutate(lotsize_1 = ifelse((lotsize < 1), "< 1", ">= 1"), lotsize_4 = ifelse((lotsize > 4), "> 4", "<= 4")) head(homesize)` We know how to create summary statistics for these two variables for both levels of $lotsize\_1$ and $lotsize\_4$ using `summarise()`: `sumstat_1 <- homesize %>% group_by(lotsize_1) %>% summarise(mean_gpd = mean(gpd), mean_degdays = mean(degree_days)) sumstat_1` `sumstat_4 <- homesize %>% group_by(lotsize_4) %>% summarise(mean_gpd = mean(gpd), mean_degdays = mean(degree_days)) sumstat_4` And now we can use `xtable()` to put them into the same table!
`full <- xtable(cbind(t(sumstat_1), t(sumstat_4))) rownames(full)[1] <- "Lotsize Group" colnames(full) <- c("lotsize_1 = 1", "lotsize_1 = 0", "lotsize_4 = 0", "lotsize_4 = 1") full` We now have a table `full` that is an xtable object. We can also spit this table out in html or latex form if needed by using the `print.xtable()` function on our xtable `full`, specifying `type = "html"`: `print.xtable(full, type = "html")` Copy and paste the html code here to see how it appears # Regression Tables in Stargazer `stargazer` is a super useful package for producing professional-quality regression tables. While it defaults to producing LaTeX format tables (a typesetting language a lot of economists use), for use in our class we can also produce html code that can easily be copied into text cells and formatted perfectly. If we run the following three regressions: \begin{align*} GPD_{it} &= \beta_0 + \beta_1 degree\_days_{it} + \beta_2 precip_{it} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(1)\\ GPD_{it} &= \beta_0 + \beta_1 degree\_days_{it} + \beta_2 precip_{it} + \beta_3 lotsize_{i}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(2)\\ GPD_{it} &= \beta_0 + \beta_1 degree\_days_{it} + \beta_2 precip_{it} + \beta_3 lotsize_{i} + \beta_4 Homeval_i~~~~~~~~~~~~~~~~~~(3) \end{align*} We might want to present the results side by side in the same table so that we can easily compare coefficients from one column to the other. To do that with `stargazer`, we can 1. Run each regression, storing them in memory 2.
Run `stargazer(reg1, reg2, reg3, ..., type = ...)` where the first arguments are all the regression objects we want in the table, and the `type` argument tells R what kind of output we want. If we specify `type = "text"`, we'll get the table displayed directly in the output window: `reg_a <- lm(gpd ~ degree_days + precip, waterdata) reg_b <- lm(gpd ~ degree_days + precip + lotsize, waterdata) reg_c <- lm(gpd ~ degree_days + precip + lotsize + homeval, waterdata)` `stargazer(reg_a, reg_b, reg_c, type = "text")` And if we specify `type = "html"`, we'll get html code that we need to copy and paste into a text/markdown cell: `stargazer(reg_a, reg_b, reg_c, type = "html")` Now all we need to do is copy and paste that html code from the output into a text cell and we've got our table! (copy your code here) And we get a nice looking regression table with all three models side by side! This makes it easy to see how the coefficient on lot size falls when we add in home value, letting us quickly figure out the sign of the correlation between the two. ## Table Options Stargazer has a ton of different options for customizing the look of our table with optional arguments, including * `title` lets us add a custom title * `column.labels` lets you add text labels to the columns * `covariate.labels` lets us specify custom labels for all our variables other than the variable names.
Specify each label in quotations in the form of a vector with `c()` * `ci = TRUE` adds in confidence intervals (95\% intervals by default, but you can switch to 99\% intervals with `ci.level = 0.99`) * `intercept.bottom = FALSE` will move the constant to the top of the table * `digits` lets you choose the number of decimal places to display * `notes` lets you add some notes at the bottom For example, we could customize the above table as `stargazer(reg_a, reg_b, reg_c, type = "text", title = "Water Use, Weather, and Home Characteristics", column.labels = c("Weather", "With Lotsize", "With HomeVal"), covariate.labels = c("Intercept", "Degree Days", "Precipitation (mm)", "Lot Size (Acres)", "Home Value (USD)"), intercept.bottom = FALSE, digits = 2, notes = "Isn't stargazer neat?" )` # Summary Statistics Tables in Stargazer Can we use Stargazer for summary statistics tables too? You bet we can! Stargazer especially comes in handy if we have a lot of variables we want to summarize and one or no variables we want to group them on. This approach works especially well with `across()` within `summarise()`. For example, let's say we wanted to summarise the median and variance of `gpd`, `precip`, and `degree_days` by whether the home was built before 1980 or not. Rather than create separate tables for all of the variables and merge them together like with xtable, we can just summarise across with `ss_acr <- mutate(waterdata, pre_80 = ifelse(yearbuilt < 1980, "1. Pre-1980", "2. 1980+")) %>% group_by(pre_80) %>% summarise(across(.cols = c(gpd, precip, degree_days), .fns = list(Median = median, Variance = var))) ss_acr` Note that `ifelse()` is a function that follows the format `ifelse(Condition, Value if True, Value if False)` Here our condition is that the $yearbuilt$ variable is less than 1980. If it's true, we want this new variable to take on the label "1. Pre-1980", and otherwise be "2. 1980+".
This table then contains everything we want, but having it displayed "wide" like this is a bit tough to see. If we wanted to display it "long" where there is one column for each of pre-1980 and post-1980 homes, we can just use the transpose function `t()`. Placing that within the `stargazer()` call and specifying that we want html code then gets us `stargazer(t(ss_acr), type = "html")` (copy your html code here) ## Heteroskedasticity-Robust Standard Errors There are often times when you want to use heteroskedasticity-robust standard errors in place of the normal kind to account for situations where we might be worried about violating our homoskedasticity assumption. To add robust standard errors to our table, we'll take advantage of the `lmtest` and `sandwich` packages (that we already loaded in the preamble). If we want to see the coefficient table from Regression B with robust standard errors, we can use the `coeftest()` function as specified below: `coeftest(reg_b, vcov = vcovHC(reg_b, type = "HC1"))` What the `vcovHC(reg_b, type = "HC1")` part is doing is telling R we want to calculate standard errors using the heteroskedasticity-robust approach (i.e. telling it a specific form of the variance-covariance matrix between our residuals). `coeftest()` then prints the nice output table. While this is a nice way to view the robust standard errors in a summary-style table, sometimes we want to extract the robust standard errors so we can use them elsewhere - like in stargazer! To get a vector of robust standard errors from Regression B, we can use the following: `robust_b <- sqrt(diag(vcovHC(reg_b, type = "HC1")))` `robust_b` Which matches the robust standard errors from `coeftest()` earlier. But woah there, that's a function nested in a function nested in *another function*! Let's break this down step-by-step: `vcov_b <- vcovHC(reg_b, type = "HC1")` This first `vcov_b` object is getting the entire variance-covariance matrix for our regression coefficients.
Since we again specified `type = "HC1"`, we ensure we get the heteroskedasticity-robust version of this matrix (if we had instead specified `type = "const"` we would be assuming homoskedasticity and would get our usual variance estimates). What this looks like is $$VCOV_b = \begin{pmatrix} \widehat{Var}(\hat \beta_0) & \widehat{Cov}(\hat \beta_0, \hat\beta_1) & \widehat{Cov}(\hat \beta_0, \hat\beta_2) \\ \widehat{Cov}(\hat \beta_1, \hat\beta_0) & \widehat{Var}(\hat \beta_1) & \widehat{Cov}(\hat \beta_1, \hat\beta_2) \\ \widehat{Cov}(\hat \beta_2, \hat\beta_0) & \widehat{Cov}(\hat \beta_2, \hat\beta_1) & \widehat{Var}(\hat \beta_2) \end{pmatrix}$$ where the element in row $i$ and column $j$ is the estimated covariance between $\hat\beta_i$ and $\hat\beta_j$. Note that when $i = j$ on the main diagonal, we get the variance estimate for $\hat \beta_i$! You can check this by running the following lines: `vcov_b <- vcovHC(reg_b, type = "HC1") vcov_b` `var_b <- diag(vcov_b)` The `diag()` function extracts this main diagonal, giving us a vector of our robust estimated variances `robust_b <- sqrt(var_b)` And taking the square root gets us our standard error estimates for our $\hat\beta$'s! See the process by running the following lines: `var_b <- diag(vcov_b) var_b` `robust_b <- sqrt(var_b) robust_b` ## Stargazer and Heteroskedasticity-Robust Standard Errors Now that we know how to get our robust standard errors, we can grab them for all three of our regressions and add them to our beautiful stargazer table: `robust_a <- sqrt(diag(vcovHC(reg_a, type = "HC1"))) robust_b <- sqrt(diag(vcovHC(reg_b, type = "HC1"))) robust_c <- sqrt(diag(vcovHC(reg_c, type = "HC1")))` `stargazer(reg_a, reg_b, reg_c, type = "html", se = list(robust_a, robust_b, robust_c), omit.stat = "f")` Here we're adding the robust standard errors to `stargazer()` with the `se =` argument (combining them together in the right order as a list).
I'm also omitting the overall F test at the bottom with `omit.stat = "f"` since we'd need to correct that too for heteroskedasticity. Try running this code below to see how the standard errors change when we use robust standard errors: Copy and paste the table code here and run the cell to see it formatted. Now that looks pretty good, though note that the less than signs in the note for significance labels don't appear right. This is because html is reading the < symbol as a piece of code and not the math symbol. To get around this, you can add dollar signs around the < signs in the html code for the note to have the signs display properly: `<sup>*</sup>p $<$ 0.1; <sup>**</sup>p $<$ 0.05; <sup>***</sup>p $<$ 0.01</td>` # Fixed Effects Regression Today we will practice with fixed effects regressions in __R__. We have two different ways to estimate the model, and we will see how to do both and the situations in which we might favor one versus the other. Let's give this a try using the dataset `wateruse.dta`. This subset of households consists of high water users: people who used over 1,000 gallons per billing cycle. We have information on their water use, weather during the period, as well as information on the city and zipcode of where the home is located, and information on the size and value of the house. Suppose we are interested in running the following panel regression of residential water use: $$ GPD_{it} = \beta_0 + \beta_1 degree\_days_{it} + \beta_2 precip_{it} ~~~~~~~~~~~~~~~~~~~~~~~(1)$$ Where $GPD$ is the gallons used per day by household $i$ in billing cycle $t$, $degree\_days$ the count of degree days experienced by the household in that billing cycle (degree days are a measure of cumulative time spent above a certain temperature threshold), and $precip$ the amount of precipitation in millimeters.
`reg1 <- lm(gpd ~ degree_days + precip, data = waterdata) summary(reg1)` Here we obtain an estimate of $\hat\beta_1 = 0.777$, telling us that an additional degree day per billing cycle is associated with an additional $0.7769$ gallon used per day. These billing cycles are roughly two months long, so this suggests an increase of roughly 47 gallons per billing cycle. Our estimate is statistically significant at all conventional levels, suggesting residential water use does respond to increased exposure to high heat. We estimate a statistically insignificant coefficient on additional precipitation, which tells us that on average household water use in our sample doesn't adjust to how much it rains. We might think that characteristics of the home impact how much water is used there, so we add in some home controls: $$ GPD_{it} = \beta_0 + \beta_1 degree\_days_{it} + \beta_2 precip_{it} + \beta_3 lotsize_{i} + \beta_4 homesize_i + \beta_5 num\_baths_i + \beta_6 num\_beds_i + \beta_7 homeval_i~~~~~~~~~~~~~~~~~~~~~~~(2)$$ `reg2 <- lm(gpd ~ degree_days + precip + lotsize + homesize + num_baths + num_beds + homeval, data = waterdata) summary(reg2)` Our coefficient on $degree\_days$ remains statistically significant and doesn't change much, so we find that $\hat\beta_1$ is robust to the addition of home characteristics. Of these characteristics, we obtain statistically significant coefficients on the size of the lot in acres ($lotsize$), the size of the home in square feet ($homesize$), and the number of bedrooms in the home ($num_beds$). We get a curious result for $\hat\beta_6$: for each additional bedroom in the home we predict that water use will *fall* by 48 gallons per day. ### Discussion: what might be driving this effect? Since there are likely a number of sources of omitted variable bias in the previous model, we think it might be worth including some fixed effects in our model. 
These will allow us to control for some of the unobserved sources of OVB without having to measure them directly! ## Method 1: Fixed Effects with lm() Up to this point we have been running our regressions using the `lm()` function. We can still use `lm()` for our fixed effects models, but it takes some more work and gets increasingly time-intensive as datasets get large. Recall that we can write our general panel fixed effects model as $$ y_{it} = \beta x_{it} + \mathbf{a}_i + \mathbf{d}_t + u_{it} $$ * $y$ our outcome of interest, which varies in both the time and cross-sectional dimensions * $x_{it}$ our set of time-varying unit characteristics * $\mathbf{a}_i$ our set of unit fixed effects * $\mathbf{d}_t$ our time fixed effects We can estimate this model in `lm()` provided we have variables in our dataframe that correspond to each level of $a_i$ and $d_t$. This means we'll have to generate them before we can run any regression. ### Generating Dummy Variables In order to include fixed effects for our regression, we can first generate the set of dummy variables that we want. For example, if we want to include a set of city fixed effects in our model, we need to generate them. We can do this in a few ways. 1. First, we can use `mutate()` and add a separate line for each individual city: `fe_1 <- waterdata %>% mutate(city_1 = as.numeric((city==1)), city_2 = as.numeric((city ==2)), city_3 = as.numeric((city ==3))) %>% select(n, hh, city, city_1, city_2, city_3) head(fe_1)` This can be super tedious though when we have a bunch of different levels of our variable that we want to make fixed effects for. In this case, we have 27 different cities. 2. Alternatively, we can use the `spread()` function to help us out. Here we add in a constant variable `v` that is equal to one in all rows, and a copy of city that adds "city_" to the front of the city number. 
Then we pass the data to `spread`, telling it to split the variable `cty` into dummy variables for all its levels, with all the "false" cases filled with zeros. `fe_2 <- waterdata %>% select(n, city, billingcycle)` `fe_2 %>% mutate(v = 1, cty = paste0("city_", city)) %>% spread(cty, v, fill = 0)` That is much easier! This is a useful approach if you want to produce summary statistics for the fixed effects (i.e. what share of the sample lives in each city), but isn't truly necessary. Alternatively, we can tell R to read our fixed effects variables as factors: `lm(gpd ~ degree_days + precip + factor(city), data = waterdata)` `factor()` around $city$ tells R to split city into dummy variables for each unique value it takes. R will then drop the first level when we run the regression - in our case making the first city our omitted group. `reg3 <- lm(gpd ~ degree_days + precip + factor(city), data = waterdata) summary(reg3)` Now we have everything we need to run the regression $$ GPD_{it} = \beta_0 + \beta_1 degree\_days_{it} + \beta_2 precip_{it} + \mathbf{a}_i + \mathbf{d}_t~~~~~~~~~~~~~~~~~~~~~~~(3)$$ Where $\mathbf{a}_i$ are our city fixed effects, and $\mathbf{d}_t$ our billing cycle fixed effects: `fe_reg1 <- lm(gpd ~ degree_days + precip + factor(city) + factor(billingcycle), data = waterdata) summary(fe_reg1)` __R__ automatically chose the first dummy variable for each set of fixed effects (city 1 and billing cycle 1) to leave out as our omitted group. Now that we account for which billing cycle we're in (i.e. whether we're in the winter or whether we're in the summer), we find that the coefficient on $degree\_days$ is now much smaller and statistically insignificant. This makes sense, as we were falsely attributing the extra water use that comes from seasonality to temperature on its own.
Now that we control for the season we're in via billing cycle fixed effects, we find that deviations in temperature exposure during a billing cycle don't result in dramatically higher water use within the sample. ### Discussion: Why did we drop the home characteristics from our model? ## Method 2: Fixed Effects with felm() Alternatively, we could do everything way faster using the `felm()` function from the package __lfe__. This package doesn't require us to produce all the dummy variables by hand. Further, it performs the background math way faster so will be much quicker to estimate models using large datasets and many variables. The syntax we use is now `felm(y ~ x1 + x2 + ... + xk | FE_1 + FE_2 + ..., data = df)` * The first section $y \sim x1 + x2 +... xk$ is our formula, written the same way as with `lm()` - but omitting the fixed effects * We now add a `|` and in the second section we specify our fixed effects. Here we say $FE\_1 + FE\_2$ which tells __R__ to include fixed effects for each level of the variables $FE\_1$ and $FE\_2$. * we add the data source after the comma, as before. Let's go ahead and try this now with our water data model: `fe_reg2 <- felm(gpd ~ degree_days + precip | city + billingcycle, data = waterdata) summary(fe_reg2)` And we estimate the exact same coefficients on $degree\_days$ and $precip$ as in the case where we specified everything by hand! We didn't have to mutate our data or add any variables. The one potential downside is that this approach doesn't report the fixed effects themselves by default. The tradeoff is that `felm` runs a lot faster than `lm`, especially with large datasets. We can also recover the fixed effects with getfe(): `getfe(fe_reg2, se = TRUE, robust = TRUE)` the argument `se = TRUE` tells it to produce standard errors too, and `robust = TRUE` further indicates that we want heteroskedasticity-robust standard errors. 
Note that this approach doesn't give you the same reference groups as before, but we get the same relative values. Note that before the coefficient on $city2$ was 301.7 and now is 73.9. But the coefficient on $city1$ is -227.8, and if we subtract $city1$ from $city2$ to get the difference in averages for city 2 relative to city 1 we get $73.9 - (-227.8) = 301.7$, the same as before! # Fixed Effects Practice Question #1 #### From a random sample of agricultural yields Y (1000 dollars per acre) for region $i$ in year $t$ for the US, we have estimated the following equation: \begin{align*} \widehat{\log(Y)}_{it} &= 0.49 + .01 GE_{it} ~~~~ R^2 = .32\\ &~~~~~(.11) ~~~~ (.01) ~~~~ n = 1526 \end{align*} #### (a) Interpret the results of genetically engineered ($GE$) technology on yields. (follow SSS = Sign, Size, Significance) #### (b) Suppose $GE$ is used more on the West Coast, where crop yields are also higher. How would the estimated effect of GE change if we include a West Coast region dummy variable in the equation? Justify your answer. #### (c) If we include region fixed effects, would they control for the factors in (b)? Justify your answer. #### (d) If yields have been generally improving over time and GE adoption was only recently introduced in the USA, what would happen to the coefficient of GE if we included year fixed effects?
They conduct a panel data analysis and estimate the following model: $$ Births_{it} = \beta_0 + \beta_1 Ads + \beta_2 Ads^2 + Z_i + M_t + u_{it}$$ #### Where $Z_i$ are zipcode fixed effects and $M_t$ monthly fixed effects. #### (a) Why do the authors include Zip Code Fixed Effects? In particular, what would be a variable that they are controlling for when adding Zip Code fixed effects that could cause a problem when interpreting the marginal effect of ad spending on birth rates? What would that (solved) problem be? #### (b) Why do they add month fixed effects?
---
``` %%javascript MathJax.Hub.Config({ TeX: { equationNumbers: { autoNumber: "AMS" } } }); from IPython.display import HTML HTML('''<script> code_show=true; function code_toggle() { if (code_show){ $('div.input').hide(); $('div.prompt').hide(); } else { $('div.input').show(); $('div.prompt').show(); } code_show = !code_show } $( document ).ready(code_toggle); </script> <form action="javascript:code_toggle()"><input type="submit" value="Code Toggle"></form>''') from IPython.display import HTML HTML(''' <a href="{{ site.links.github }}/raw/nist-pages/benchmarks/benchmark7.ipynb" download> <button type="submit">Download Notebook</button> </a> ''') ``` # Benchmark Problem 7: MMS Allen-Cahn ``` from IPython.display import HTML HTML('''{% include jupyter_benchmark_table.html num="[7]" revision=0 %}''') ``` * [Overview](#Overview) * [Governing equation and manufactured solution](#Governing-equation-and-manufactured-solution) * [Domain geometry, boundary conditions, initial conditions, and stopping condition](#Domain-geometry,-boundary-conditions,-initial-conditions,-and-stopping-condition) * [Parameter values](#Parameter-values) * [Benchmark simulation instructions](#Benchmark-simulation-instructions) * [Part (a)](#Part-%28a%29) * [Part (b)](#Part-%28b%29) * [Part (c)](#Part-%28c%29) * [Results](#Results) * [Feedback](#Feedback) * [Appendix](#Appendix) * [Computer algebra systems](#Computer-algebra-systems) * [Source equation](#Source-equation) * [Code](#Code) See the journal publication entitled ["Benchmark problems for numerical implementations of phase field models"][benchmark_paper] for more details about the benchmark problems. Furthermore, read [the extended essay][benchmarks] for a discussion about the need for benchmark problems. [benchmarks]: ../ [benchmark_paper]: http://dx.doi.org/10.1016/j.commatsci.2016.09.022 # Overview The Method of Manufactured Solutions (MMS) is a powerful technique for verifying the accuracy of a simulation code. 
In the MMS, one picks a desired solution to the problem at the outset, the "manufactured solution", and then determines the governing equation that will result in that solution. With the exact analytical form of the solution in hand, when the governing equation is solved using a particular simulation code, the deviation from the expected solution can be determined exactly. This deviation can be converted into an error metric to rigorously quantify the error for a calculation. This error can be used to determine the order of accuracy of the simulation results to verify simulation codes. It can also be used to compare the computational efficiency of different codes or different approaches for a particular code at a certain level of error. Furthermore, the spatial/temporal distribution of the error can give insight into the conditions resulting in the largest error (high gradients, changes in mesh resolution, etc.). After choosing a manufactured solution, the governing equation must be modified to force the solution to equal the manufactured solution. This is accomplished by taking the nominal equation that is to be solved (e.g. Allen-Cahn equation, Cahn-Hilliard equation, Fick's second law, Laplace equation) and adding a source term. This source term is determined by plugging the manufactured solution into the nominal governing equation and setting the source term equal to the residual. Thus, the manufactured solution satisfies the MMS governing equation (the nominal governing equation plus the source term). A more detailed discussion of MMS can be found in [the report by Salari and Knupp][mms_report]. In this benchmark problem, the objective is to use the MMS to rigorously verify phase field simulation codes and then provide a basis of comparison for the computational performance between codes and for various settings for a single code, as discussed above.
To this end, the benchmark problem was chosen as a balance between two factors: simplicity, to minimize the development effort required to solve the benchmark, and transferability to a real phase field system of physical interest.

[mms_report]: http://prod.sandia.gov/techlib/access-control.cgi/2000/001444.pdf

# Governing equation and manufactured solution

For this benchmark problem, we use a simple Allen-Cahn equation as the governing equation

$$\begin{equation} \frac{\partial \eta}{\partial t} = - \left[ 4 \eta \left(\eta - 1 \right) \left(\eta-\frac{1}{2} \right) - \kappa \nabla^2 \eta \right] + S(x,y,t) \end{equation}$$

where $S(x,y,t)$ is the MMS source term and $\kappa$ is a constant parameter (the gradient energy coefficient).

The manufactured solution, $\eta_{sol}$, is a hyperbolic tangent function, shifted to vary between 0 and 1, with the $y$ position of the middle of the interface ($\eta_{sol}=0.5$) given by the function $\alpha(x,t)$:

$$\begin{equation} \eta_{sol}(x,y,t) = \frac{1}{2}\left[ 1 - \tanh\left( \frac{y-\alpha(x,t)}{\sqrt{2 \kappa}} \right) \right] \end{equation}$$

$$\begin{equation} \alpha(x,t) = \frac{1}{4} + A_1 t \sin\left(B_1 x \right) + A_2 \sin \left(B_2 x + C_2 t \right) \end{equation}$$

where $A_1$, $B_1$, $A_2$, $B_2$, and $C_2$ are constant parameters.

This manufactured solution is an equilibrium solution of the governing equation when $S(x,y,t)=0$ and $\alpha(x,t)$ is constant. The closeness of this manufactured solution to a solution of the nominal governing equation increases the likelihood that the behavior of simulation codes when solving this benchmark problem is representative of their behavior when solving the regular Allen-Cahn equation (i.e. without the source term).

The form of $\alpha(x,t)$ was chosen to yield complex behavior while still retaining a (somewhat) simple functional form. The two spatial sinusoidal terms introduce two controllable length scales to the interfacial shape.
Summing them gives a "beat" pattern with a period longer than the period of either individual term, permitting a domain size that is larger than the wavelength of the sinusoids without a repeating pattern. The temporal sinusoidal term introduces a controllable time scale to the interfacial shape in addition to the phase transformation time scale, while the linear temporal dependence of the other term ensures that the sinusoidal term can go through multiple periods without $\eta_{sol}$ repeating itself. Inserting the manufactured solution into the governing equation and solving for $S(x,y,t)$ yields: $$\begin{equation} S(x,y,t) = \frac{\text{sech}^2 \left[ \frac{y-\alpha(x,t)}{\sqrt{2 \kappa}} \right]}{4 \sqrt{\kappa}} \left[-2\sqrt{\kappa} \tanh \left[\frac{y-\alpha(x,t)}{\sqrt{2 \kappa}} \right] \left(\frac{\partial \alpha(x,t)}{\partial x} \right)^2+\sqrt{2} \left[ \frac{\partial \alpha(x,t)}{\partial t}-\kappa \frac{\partial^2 \alpha(x,t)}{\partial x^2} \right] \right] \end{equation}$$ where $\alpha(x,t)$ is given above and where: $$\begin{equation} \frac{\partial \alpha(x,t)}{\partial x} = A_1 B_1 t \cos\left(B_1 x\right) + A_2 B_2 \cos \left(B_2 x + C_2 t \right) \end{equation}$$ $$\begin{equation} \frac{\partial^2 \alpha(x,t)}{\partial x^2} = -A_1 B_1^2 t \sin\left(B_1 x\right) - A_2 B_2^2 \sin \left(B_2 x + C_2 t \right) \end{equation}$$ $$\begin{equation} \frac{\partial \alpha(x,t)}{\partial t} = A_1 \sin\left(B_1 x\right) + A_2 C_2 \cos \left(B_2 x + C_2 t \right) \end{equation}$$ #### *N.B.*: Don't transcribe these equations. Please download the appropriate files from the [Appendix](#Appendix). # Domain geometry, boundary conditions, initial conditions, and stopping condition The domain geometry is a rectangle that spans [0, 1] in $x$ and [0, 0.5] in $y$. 
This elongated domain was chosen to allow multiple peaks and valleys in $\eta_{sol}$ without stretching the interface too much in the $y$ direction (which causes the thickness of the interface to change) or having large regions where $\eta_{sol}$ never deviates from 0 or 1. Periodic boundary conditions are applied along the $x = 0$ and the $x = 1$ boundaries to accommodate the periodicity of $\alpha(x,t)$. Dirichlet boundary conditions of $\eta = 1$ and $\eta = 0$ are applied along the $y = 0$ and the $y = 0.5$ boundaries, respectively. These boundary conditions are chosen to be consistent with $\eta_{sol}(x,y,t)$.

The initial condition is the manufactured solution at $t = 0$:

$$ \begin{equation} \eta_{sol}(x,y,0) = \frac{1}{2}\left[ 1 - \tanh\left( \frac{y-\left(\frac{1}{4}+A_2 \sin(B_2 x) \right)}{\sqrt{2 \kappa}} \right) \right] \end{equation} $$

The stopping condition for all calculations is when $t = 8$ time units, which was chosen to let $\alpha(x,t)$ evolve substantially, while still being slower than the characteristic time for the phase evolution (determined by the CFL condition for a uniform mesh with a reasonable level of resolution of $\eta_{sol}$).

# Parameter values

The nominal parameter values for the governing equation and manufactured solution are given below. The value of $\kappa$ will change in Part (b) in the following section, and the values of $\kappa$ and $C_2$ will change in Part (c).

| Parameter | Value |
|-----------|-------|
| $\kappa$ | 0.0004 |
| $A_1$ | 0.0075 |
| $B_1$ | 8.0 |
| $A_2$ | 0.03 |
| $B_2$ | 22.0 |
| $C_2$ | 0.0625 |

# Benchmark simulation instructions

This section describes three sets of tests to conduct using the MMS problem specified above. The primary purpose of the first test is to provide a computationally inexpensive problem to verify a simulation code. The second and third tests are more computationally demanding and are primarily designed to serve as a basis for performance comparisons.
## Part (a)

The objective of this test is to verify the accuracy of your simulation code in both time and space. Here, we make use of convergence tests, where either the mesh size (or grid point spacing) or the time step size is systematically changed to determine the response of the error to these quantities. Once a convergence test is completed, the order of accuracy can be calculated from the result. The order of accuracy can be compared to the theoretical order of accuracy for the numerical method employed in the simulation. If the two match (to a reasonable degree), then one can be confident that the simulation code is working as expected. The remainder of this subsection gives instructions for convergence tests for this MMS problem.

Implement the MMS problem specified above using the simulation code of your choice. Perform a spatial convergence test by running the simulation for a variety of mesh sizes. For each simulation, determine the discrete $L_2$ norm of the error at $t=8$:

$$\begin{equation} L_2 = \sqrt{\sum\limits_{x,y}\left(\eta^{t=8}_{x,y} - \eta_{sol}(x,y,8)\right)^2 \Delta x \Delta y} \end{equation}$$

For all of these simulations, verify that the time step is small enough that any temporal error is much smaller than the total error. This can be accomplished by decreasing the time step until it has minimal effect on the error. Ensure that at least three simulation results have $L_2$ errors in the range $[5\times10^{-3}, 1\times10^{-4}]$, attempting to cover as much of that range as possible/practical. The maximum and minimum errors in this range roughly represent a poorly resolved simulation and a very well-resolved simulation, respectively.

For at least three simulations that have $L_2$ errors in the range $[5\times10^{-3}, 1\times10^{-4}]$, save the effective mesh size and $L_2$ error in a CSV or JSON file. Upload this file to the PFHub website as a 2D data set with the effective mesh size as the x-axis column and the $L_2$ error as the y-axis column.
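The discrete $L_2$ norm defined above can be computed directly with NumPy. This is a sketch that assumes the numerical and manufactured solutions are sampled on the same uniform grid; the array names are hypothetical:

```python
import numpy as np

def l2_error(eta_num, eta_exact, dx, dy):
    """Discrete L2 norm of the error on a uniform grid with cell area dx*dy."""
    return np.sqrt(np.sum((eta_num - eta_exact) ** 2) * dx * dy)

# Toy check (hypothetical 2x2 fields): a constant error of 1 on a unit-area
# grid gives an L2 norm of exactly 1.
eta_num = np.ones((2, 2))
eta_exact = np.zeros((2, 2))
print(l2_error(eta_num, eta_exact, dx=0.5, dy=0.5))  # 1.0
```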
Calculate the effective element size as the square root of the area of the finest part of the mesh for nonuniform meshes. For irregular meshes with continuous distributions of element sizes, approximate the effective mesh size as the average of the square root of the area of the smallest 5% of the elements.

Next, confirm that the observed order of accuracy is approximately equal to the expected value. Calculate the order of accuracy, $p$, with a least squares fit of the following function:

$$\begin{equation} \log(E)=p \log(R) + b \end{equation}$$

where $E$ is the $L_2$ error, $R$ is the effective element size, and $b$ is the intercept of the fit. Deviations of ±0.2 or more from the theoretical value are to be expected (depending on the range of errors considered and other factors).

Finally, perform a similar convergence test, but for the time step, systematically changing the time step and recording the $L_2$ error. Use a time step that does not vary over the course of any single simulation. Verify that the spatial discretization error is small enough that it does not substantially contribute to the total error. Once again, ensure that at least three simulations have $L_2$ errors in the range $[5\times10^{-3}, 1\times10^{-4}]$, attempting to cover as much of that range as possible/practical. Save the time step size and $L_2$ error for each individual simulation in a CSV or JSON file. [Upload this file to the PFHub website](https://pages.nist.gov/pfhub/simulations/upload_form/) as a 2D data set with the time step size as the x-axis column and the $L_2$ error as the y-axis column. Confirm that the observed order of accuracy is approximately equal to the expected value.

## Part (b)

Now that your code has been verified in (a), the objective of this part is to determine the computational performance of your code at various levels of error. These results can then be used to objectively compare the performance between codes or settings within the same code.
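The least-squares fit for the observed order of accuracy described in part (a) can be done with `numpy.polyfit` on the log-transformed data. The example values below are synthetic, chosen so that the recovered order is exactly 2:

```python
import numpy as np

def observed_order(element_sizes, l2_errors):
    """Least-squares fit of log(E) = p*log(R) + b; returns (p, b)."""
    p, b = np.polyfit(np.log(element_sizes), np.log(l2_errors), 1)
    return p, b

# Synthetic example: errors generated exactly as E = 3 * R**2 recover p = 2
R = np.array([0.1, 0.05, 0.025, 0.0125])
E = 3.0 * R**2
p, b = observed_order(R, E)
print(round(p, 3))  # 2.0
```

With real convergence data the fitted $p$ will only approximate the theoretical order, as noted above.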
To make the problem more computationally demanding and stress solvers more than in (a), decrease $\kappa$ by a factor of $256$ to $1.5625\times10^{-6}$. This change will reduce the interfacial thickness by a factor of $16$. Run a series of simulations, attempting to optimize solver parameters (mesh, time step, tolerances, etc.) to minimize the required computational resources for at least three levels of $L_2$ error in the range $[5\times10^{-3}, 1\times10^{-5}]$. Use the same CPU and processor type for all simulations. For the best of these simulations, save the wall time, number of computing cores, maximum memory usage, and $L_2$ error for each individual simulation in a CSV or JSON file. [Upload this to the PFHub website](https://pages.nist.gov/pfhub/simulations/upload_form/) as a 3D data set with the wall time as the x-axis column, the number of computing cores as the y-axis column, and the $L_2$ error as the z-axis column. (The PFHub upload system is currently limited to three columns of data. Once this constraint is relaxed, the maximum memory usage data will be incorporated as well.)

<!---For the best of these simulations, submit the wall time, number of computing cores, processor speed, maximum memory usage, and $L_2$ error at $t=8$ to the CHiMaD website.--->

## Part (c)

This final part is designed to stress time integrators even further by increasing the rate of change of $\alpha(x,t)$. Increase $C_2$ to $0.5$ and keep $\kappa = 1.5625\times10^{-6}$ from (b). Repeat the process from (b), uploading the wall time, number of computing cores, processor speed, maximum memory usage, and $L_2$ error at $t=8$ to the PFHub website.

# Results

Results from this benchmark problem are displayed on the [simulation result page]({{ site.baseurl }}/simulations) for different codes.

# Feedback

Feedback on this benchmark problem is appreciated.
If you have questions, comments, or seek clarification, please contact the [CHiMaD phase field community](https://pages.nist.gov/chimad-phase-field/community/) through the [Gitter chat channel](https://gitter.im/usnistgov/chimad-phase-field) or by [email](https://pages.nist.gov/chimad-phase-field/mailing_list/). If you found an error, please file an [issue on GitHub](https://github.com/usnistgov/chimad-phase-field/issues/new).

# Appendix

## Computer algebra systems

Rigorous verification of software frameworks using MMS requires posing the equation and manufacturing the solution with as much complexity as possible. This can be straightforward, but interesting equations produce complicated source terms. To streamline the MMS workflow, it is strongly recommended that you use a CAS such as SymPy, Maple, or Mathematica to generate the source term and turn it into executable code automatically. For accessibility, we will use [SymPy](http://www.sympy.org/), but so long as vector calculus is supported, any CAS will do.
## Source equation

```
from sympy import Symbol, symbols, simplify
from sympy import Eq, sin, cos, cosh, sinh, tanh, sqrt
from sympy.physics.vector import divergence, gradient, ReferenceFrame, time_derivative
from sympy.printing import pprint
from sympy.abc import kappa, S, t, x, y

# Spatial coordinates: x=R[0], y=R[1], z=R[2]
R = ReferenceFrame('R')

# sinusoid amplitudes
A1, A2 = symbols('A1 A2')
B1, B2 = symbols('B1 B2')
C1, C2 = symbols('C1 C2')

# Define interface offset (alpha)
alpha = (1/4 + A1 * t * sin(B1 * R[0])
             + A2 * sin(B2 * R[0] + C2 * t)).subs({R[0]: x, R[1]: y})

# Define the solution equation (eta)
eta = (1/2 * (1 - tanh((R[1] - alpha) / sqrt(2*kappa)))).subs({R[0]: x, R[1]: y})

# Compute the initial condition
eta0 = eta.subs({t: 0, R[0]: x, R[1]: y})

# Compute the source term from the equation of motion
S = simplify(time_derivative(eta, R)
             + 4 * eta * (eta - 1) * (eta - 1/2)
             - divergence(kappa * gradient(eta, R), R)).subs({R[0]: x, R[1]: y})

pprint(Eq(symbols('alpha'), alpha))
pprint(Eq(symbols('eta'), eta))
pprint(Eq(symbols('eta0'), eta0))
pprint(Eq(symbols('S'), S))
```

## Code

### Python

Copy the first cell under Source equation directly into your program. For a performance boost, convert the expressions into lambda functions:

```python
from sympy.utilities.lambdify import lambdify

apy = lambdify([x, y], alpha, modules='sympy')
epy = lambdify([x, y], eta, modules='sympy')
ipy = lambdify([x, y], eta0, modules='sympy')
Spy = lambdify([x, y], S, modules='sympy')
```

#### *N.B.*: You may need to add coefficients to the variables list.
### C

```
from sympy.utilities.codegen import codegen

[(c_name, c_code), (h_name, c_header)] = codegen(
    [('alpha', alpha), ('eta', eta), ('eta0', eta0), ('S', S)],
    language='C', prefix='MMS', project='PFHub')
print(c_code)
```

### C++

```
from sympy.printing.cxxcode import cxxcode

print("α:")
cxxcode(alpha)
print("η:")
cxxcode(eta)
print("η₀:")
cxxcode(eta0)
print("S:")
cxxcode(S)
```

### Fortran

```
from sympy.printing import fcode

print("α:")
fcode(alpha)
print("η:")
fcode(eta)
print("η₀:")
fcode(eta0)
print("S:")
fcode(S)
```

### Julia

```
from sympy.printing import julia_code

print("α:")
julia_code(alpha)
print("η:")
julia_code(eta)
print("η₀:")
julia_code(eta0)
print("S:")
julia_code(S)
```

### Mathematica

```
from sympy.printing import mathematica_code

print("α:")
mathematica_code(alpha)
print("η:")
mathematica_code(eta)
print("η₀:")
mathematica_code(eta0)
print("S:")
mathematica_code(S)
```
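As a quick numerical sanity check on the manufactured solution: by construction, $\eta_{sol}=1/2$ exactly on the interface $y=\alpha(x,t)$, regardless of the parameter values. A sketch with illustrative (not benchmark-mandated) values:

```python
import numpy as np

# Illustrative parameter values; the interface check below holds for any choice
kappa, A1, B1, A2, B2, C2 = 0.0004, 0.0075, 8.0, 0.03, 22.0, 0.0625

def alpha(x, t):
    return 0.25 + A1 * t * np.sin(B1 * x) + A2 * np.sin(B2 * x + C2 * t)

def eta_sol(x, y, t):
    return 0.5 * (1.0 - np.tanh((y - alpha(x, t)) / np.sqrt(2.0 * kappa)))

# On the interface y = alpha(x, t), tanh(0) = 0, so eta_sol = 1/2 exactly
print(eta_sol(0.3, alpha(0.3, 8.0), 8.0))  # 0.5
```

Far below the interface $\eta_{sol}$ approaches 1 and far above it approaches 0, matching the Dirichlet boundary conditions.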
# EXTRACTION, CLEANING, AND LOADING OF CRIME DATA

VERSION 0.2 — DATE: 16/10/2020 — ANALYZES OFFENSE CODES BOTH IN FORCE AND NO LONGER IN FORCE

```
import os
import pandas as pd
import numpy as np
from pyarrow import feather
from tqdm import tqdm
from unicodedata import normalize
from src.data import clean_data

tqdm.pandas()

codigos_delitos = pd.read_excel("../data/external/codigos_penal_2020.xlsx",
                                sheet_name="codigos vigentes")
codigos_delitos = codigos_delitos.drop_duplicates()  # drop duplicate rows
codigos_delitos = codigos_delitos.drop([0, 1, 2], axis=0)  # drop the first three rows, which are titles

# drop columns containing only NaN
variables = range(2, 248)
columnas = []
for variable in variables:
    columnas.append("Unnamed: " + str(variable))
codigos_delitos = codigos_delitos.drop(columns=columnas, axis=1)

# rename the columns
codigos_delitos = codigos_delitos.rename(columns={'VERSION AL 01/01/2018': 'COD. MATERIA',
                                                  'Unnamed: 1': 'MATERIA'})
codigos_delitos

delitos_vigentes = []
for item in codigos_delitos.index:
    if str(codigos_delitos['COD. MATERIA'][item]).isupper():
        tipologia_delito = str(codigos_delitos['COD. MATERIA'][item])
    else:
        delitos_vigentes.append([codigos_delitos['COD. MATERIA'][item],
                                 str(codigos_delitos['MATERIA'][item]).upper().rstrip(),
                                 tipologia_delito, 'VIGENTE'])

df_delitos_vigentes = pd.DataFrame(delitos_vigentes,
                                   columns=['COD. MATERIA', 'MATERIA',
                                            'TIPOLOGIA MATERIA', 'VIGENCIA MATERIA'])
df_delitos_vigentes
df_delitos_vigentes.dtypes
```

## Cleaning the MATERIA variable

```
# Remove accents from the object columns
cols = df_delitos_vigentes.select_dtypes(include=["object"]).columns
df_delitos_vigentes[cols] = df_delitos_vigentes[cols].progress_apply(clean_data.elimina_tilde)
df_delitos_vigentes[cols] = df_delitos_vigentes[cols].progress_apply(clean_data.limpieza_caracteres)
df_delitos_vigentes.tail(100)

df_delitos_vigentes['COD. MATERIA'] = df_delitos_vigentes['COD. MATERIA'].fillna(0).astype('int16')
df_delitos_vigentes.info()
```

## LOADING AND CLEANING DATA ON OFFENSES NO LONGER IN FORCE

```
codigos_delitos_novigentes = pd.read_excel("../data/external/codigos_penal_2020.xlsx",
                                           sheet_name="Codigos no vigentes")

# rename the columns
codigos_delitos_novigentes = codigos_delitos_novigentes.rename(
    columns={'MATERIAS PENALES NO VIGENTES': 'TIPOLOGIA MATERIA',
             'Unnamed: 1': 'COD. MATERIA',
             'Unnamed: 2': 'MATERIA'})
codigos_delitos_novigentes = codigos_delitos_novigentes.drop([0], axis=0)  # drop the first row, which holds titles
codigos_delitos_novigentes = codigos_delitos_novigentes.fillna('ST')  # replace NaN with 'ST'
codigos_delitos_novigentes

delitos_no_vigentes = []
for item in codigos_delitos_novigentes.index:
    tipologia_delito = codigos_delitos_novigentes['TIPOLOGIA MATERIA'][item]
    if tipologia_delito != 'ST':
        tipologia = codigos_delitos_novigentes['TIPOLOGIA MATERIA'][item]
    else:
        tipologia_delito = tipologia
        delitos_no_vigentes.append([codigos_delitos_novigentes['COD. MATERIA'][item],
                                    codigos_delitos_novigentes['MATERIA'][item].rstrip(),
                                    tipologia_delito, 'NO VIGENTE'])

df_delitos_no_vigentes = pd.DataFrame(delitos_no_vigentes,
                                      columns=['COD. MATERIA', 'MATERIA',
                                               'TIPOLOGIA MATERIA', 'VIGENCIA MATERIA'])
df_delitos_no_vigentes

# Remove accents from the object columns
cols = df_delitos_no_vigentes.select_dtypes(include=["object"]).columns
df_delitos_no_vigentes[cols] = df_delitos_no_vigentes[cols].progress_apply(clean_data.elimina_tilde)
df_delitos_no_vigentes['COD. MATERIA'] = df_delitos_no_vigentes['COD. MATERIA'].astype('int16')
df_delitos_no_vigentes.dtypes
```

# MERGING THE DATASETS OF OFFENSES IN FORCE AND NO LONGER IN FORCE

```
df_delitos = pd.concat([df_delitos_vigentes, df_delitos_no_vigentes])
df_delitos
df_delitos.info()
df_delitos.sort_values("COD. MATERIA")
```

## Saving the DataFrame with to_feather

```
# Reset the index so the DataFrame can be written with feather
df_delitos.reset_index(inplace=True)

# Save the dataset as a feather file
df_delitos.to_feather('../data/processed/Delitos_feather')
```
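The `clean_data.elimina_tilde` helper used above lives in this project's own `src` package and is not shown here. A minimal stand-alone sketch of accent removal using `unicodedata` (an assumption of how such a helper might work, not the project's actual implementation):

```python
import unicodedata

def remove_accents(text: str) -> str:
    """Decompose to NFD and drop combining marks, keeping the base letters."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(remove_accents("APROPIACIÓN INDEBIDA"))  # APROPIACION INDEBIDA
```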
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eirasf/GCED-AA2/blob/main/lab4/lab4_parte1.ipynb)

# Lab 4: Neural networks in Keras with regularization

## Part 1. Early stopping

### Overfitting

Overfitting occurs when the learned solution fits the training data very well but does not generalize adequately when new data appear.

# Regularization

Once overfitting has been diagnosed, it is time to try different techniques that attempt to reduce the variance without increasing the bias too much, so that the model generalizes better. The regularization techniques we will cover in this lab are:

1. *Early stopping*: stops training the network when the error starts to increase.
1. Penalties based on the norm of the parameters (both the L1 and the L2 norm).
1. *Dropout*: widely used in deep learning, it "switches off" some neurons to avoid overfitting.

In this first part of Lab 4 we will focus on **early stopping**.

## Prerequisites. Installing packages

For this first part of Lab 4 we will need TensorFlow, TensorFlow-Datasets, and a few other packages to set the random seed so that the results are reproducible.

```
import tensorflow as tf
import tensorflow_datasets as tfds
import os
import numpy as np
import random

# Set the seed so that the results are reproducible
seed = 1234
os.environ['PYTHONHASHSEED'] = str(seed)
tf.random.set_seed(seed)
np.random.seed(seed)
random.seed(seed)
```

We also load some APIs that we will use to keep the code readable.

```
# Keras API, the Sequential model, and the layers used in our model
from tensorflow import keras
from keras.models import Sequential
from keras.layers import InputLayer
from keras.layers import Dense

# For plotting
from matplotlib import pyplot

# Needed for EarlyStopping
from keras.callbacks import EarlyStopping
```

## Loading the dataset

Once again we use the *german_credit_numeric* dataset from the previous labs, but this time we split it so that we have a training subset, a validation subset (which will be used to stop training), and a test subset to evaluate the performance of the model.

```
# Load the dataset
ds_train = tfds.load('german_credit_numeric', split='train[:40%]', as_supervised=True).batch(128)
ds_val = tfds.load('german_credit_numeric', split='train[40%:50%]', as_supervised=True).batch(128)
ds_test = tfds.load('german_credit_numeric', split='train[50%:]', as_supervised=True).batch(128)
```

We also set the loss function, the algorithm used for training, and the metric used to evaluate the performance of the trained model.

```
# Loss function, optimization algorithm, and performance metric
fn_perdida = tf.keras.losses.BinaryCrossentropy()
optimizador = tf.keras.optimizers.Adam(0.001)
metrica = tf.keras.metrics.AUC()
```

## Creating a *Sequential* model

Create a *Sequential* model just as in Lab 3, Part 2.

```
tamano_entrada = 24
h0_size = 20
h1_size = 10
h2_size = 5

#TODO - define the Sequential model
model = ...

#TODO - add the input layer and the 4 Dense layers of the model
......

# Build the model and show a summary
model.build()
print(model.summary())
```

Complete the *compile* call.

```
#TODO - set the arguments of the compile method
model.compile(loss=fn_perdida,
              optimizer=optimizador,
              metrics=[metrica])
```

We call the *fit* method with the training set as input, specifying the number of epochs and also passing the *validation_data* argument, which allows a subset of the data to be used for validation. The differences between training and validation can be seen in the plot.

**NOTE**: Observe the differences in the results between training, validation, and test.

```
# Set the number of epochs
num_epochs = 700

# Save the weights before training, so that we can later reset the model for comparisons
pesos_preentrenamiento = model.get_weights()

#TODO - train the model with the training set as input, specifying the number of
# epochs and the validation set
history = model.fit(....)

# plot training history
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='val')
pyplot.legend()
pyplot.show()

#TODO - call evaluate on the test set, storing the result
result = model.evaluate(.....)
print(model.metrics_names)
print(result)
```

## Using early stopping during training

Keras provides a *Callback* for early stopping (*keras.callbacks.EarlyStopping*). With it, we can stop training when a chosen measure (specified in the *monitor* argument) stops improving (the *mode* argument states whether that measure is expected to be minimized, *min*, or maximized, *max*). Optionally, the *patience* argument specifies how many *epochs* training should wait before stopping.

**TO-DO**: Run the training several times, changing the different parameters to see the differences in learning. Does it always stop at the same *epoch*? Check the performance on the test set.

```
# simple early stopping
#TODO - set the measure to monitor, the mode, and the patience
es = EarlyStopping(
    monitor=....
    mode=...
    patience=...
)

# Before training, forget the previous run by restoring the initial weights
model.set_weights(pesos_preentrenamiento)

#TODO - train the model with the training set as input, specifying the number of
# epochs, the validation set, and the EarlyStopping callback
history = model.fit(...., callbacks=[es])

# plot training history
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='val')
pyplot.legend()
pyplot.show()
```

Evaluation on the test set (not used during training).

```
#TODO - call evaluate on the test set, storing the result
result = model.evaluate(....)
print(model.metrics_names)
print(result)
```
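The patience mechanism that `EarlyStopping` implements can be sketched in plain Python. This is an illustrative sketch of the idea, not the Keras implementation:

```python
def early_stopping_epoch(val_losses, patience=3):
    """Return the epoch index at which training would stop, or None if it never stops."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:   # improvement: remember it and reset the counter
            best = loss
            wait = 0
        else:             # no improvement: count epochs since the best loss
            wait += 1
            if wait >= patience:
                return epoch
    return None

# Validation loss improves for two epochs, then worsens for three -> stop at epoch 4
print(early_stopping_epoch([1.0, 0.9, 0.95, 0.96, 0.97], patience=3))  # 4
```

A larger `patience` tolerates longer plateaus before stopping, at the cost of extra training epochs.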
# DSCI 525: Web and Cloud Computing

## Milestone 1: Tackling Big Data on Computer

### Group 13

Authors: Ivy Zhang, Mike Lynch, Selma Duric, William Xu

## Table of contents

- [Download the data](#1)
- [Combining data CSVs](#2)
- [Load the combined CSV to memory and perform a simple EDA](#3)
- [Perform a simple EDA in R](#4)
- [Reflection](#5)

### Imports

```
import re
import os
import glob
import zipfile
import requests
from urllib.request import urlretrieve
import json
import pandas as pd
import numpy as np
import pyarrow.feather as feather
from memory_profiler import memory_usage
import pyarrow.dataset as ds
import pyarrow as pa
import pyarrow.parquet as pq
import dask.dataframe as dd

%load_ext rpy2.ipython
%load_ext memory_profiler
```

## 1. Download the data <a name="1"></a>

1. Download the data from figshare to the local computer using the figshare API.
2. Extract the zip file programmatically.

```
# Attribution: DSCI 525 lecture notebook
# Necessary metadata
article_id = 14096681  # unique identifier of the article on figshare
url = f"https://api.figshare.com/v2/articles/{article_id}"
headers = {"Content-Type": "application/json"}
output_directory = "figsharerainfall/"

response = requests.request("GET", url, headers=headers)
data = json.loads(response.text)
files = data["files"]

%%time
files_to_dl = ["data.zip"]
for file in files:
    if file["name"] in files_to_dl:
        os.makedirs(output_directory, exist_ok=True)
        urlretrieve(file["download_url"], output_directory + file["name"])

%%time
with zipfile.ZipFile(os.path.join(output_directory, "data.zip"), 'r') as f:
    f.extractall(output_directory)
```

## 2. Combining data CSVs <a name="2"></a>

1. Use one of the following options to combine data CSVs into a single CSV (Pandas, Dask). **We used the Pandas option.**
2.
When combining the CSV files, we added an extra column called "model" that identifies the model. This column is populated from the file name; e.g., for the file name "SAM0-UNICON_daily_rainfall_NSW.csv", the model name is SAM0-UNICON.
3. Compare run times and memory usages of these options on different machines within the team, and summarize observations.

```
%%time
%memit
# Shows the time that regular Python takes to merge the files
# Join all the data together; here we are using a normal Python way of merging the data
# use_cols = ["time", "lat_min", "lat_max", "lon_min", "lon_max", "rain (mm/day)"]
files = glob.glob('figsharerainfall/*.csv')
df = pd.concat((pd.read_csv(file, index_col=0)
                  .assign(model=re.findall(r'[^\/]+(?=\_d)', file)[0])
                for file in files))
df.to_csv("figsharerainfall/combined_data.csv")

%%time
df = pd.read_csv("figsharerainfall/combined_data.csv")

%%sh
du -sh figsharerainfall/combined_data.csv

print(df.shape)
df.head()
```

**Summary of run times and memory usages:**

***William***
- Combining files:
    - peak memory: 95.41 MiB, increment: 0.26 MiB
    - CPU times: user 7min 28s, sys: 31 s, total: 7min 59s
    - Wall time: 9min 17s
- Reading the combined file:
    - Wall time: 1min 51s

***Mike***
- Combining files:
    - peak memory: 168.59 MiB, increment: 0.12 MiB
    - CPU times: user 3min 29s, sys: 5.09 s, total: 3min 34s
    - Wall time: 3min 34s
- Reading the combined file:
    - Wall time: 37.1 s

***Selma***
- Combining files:
    - peak memory: 150.54 MiB, increment: 0.23 MiB
    - CPU times: user 6min 46s, sys: 23.1 s, total: 7min 9s
    - Wall time: 7min 29s
- Reading the combined file:
    - Wall time: 1min 19s

***Ivy***
- Combining files:
    - peak memory: 156.23 MiB, increment: 0.00 MiB
    - CPU times: user 5min 14s, sys: 18.2 s, total: 5min 32s
    - Wall time: 5min 45s
- Reading the combined file:
    - Wall time: 1min 30s

## 3.
Load the combined CSV to memory and perform a simple EDA <a name="3"></a>

### Establish a baseline for memory usage

```
# First load in the dataset using default settings for dtypes
df_eda = pd.read_csv("figsharerainfall/combined_data.csv", parse_dates=True, index_col='time')
df_eda.head()

# As we can see below, dtypes are float64 and object
df_eda.dtypes

# Measure the memory usage when representing numbers using float64 dtype
print(f"Memory usage with float64: {df_eda.memory_usage().sum() / 1e6:.2f} MB")

%%time
%memit
# Now perform a simple EDA with pandas describe function
df_eda.describe()
```

Baseline memory and time data:
- Memory usage with float64: 3500.78 MB
- peak memory: 698.22 MiB, increment: 0.35 MiB
- CPU times: user 16.2 s, sys: 13.8 s, total: 30 s
- Wall time: 36.5 s

### Effects of changing dtypes on memory usage

```
# Now load in the dataset using float32 dtype to represent numbers
colum_dtypes = {'lat_min': np.float32, 'lat_max': np.float32,
                'lon_min': np.float32, 'lon_max': np.float32,
                'rain (mm/day)': np.float32, 'model': str}
df_eda = pd.read_csv("figsharerainfall/combined_data.csv", parse_dates=True,
                     index_col='time', dtype=colum_dtypes)
df_eda.head()

# As we can see below, dtypes are float32 and object
df_eda.dtypes

print(f"Memory usage with float32: {df_eda.memory_usage().sum() / 1e6:.2f} MB")

%%time
%memit
# Now perform a simple EDA with pandas describe function
df_eda.describe()
```

Time and memory data when using different dtypes:
- Memory usage with float32: 2250.50 MB
- peak memory: 609.06 MiB, increment: 0.36 MiB
- CPU times: user 11.3 s, sys: 5.72 s, total: 17 s
- Wall time: 22.7 s

### Effects of loading a smaller subset of columns on memory usage

```
# Now load only a subset of columns from the dataset
df_eda = pd.read_csv("figsharerainfall/combined_data.csv", parse_dates=True,
                     index_col='time', usecols=['time', 'lat_min', 'rain (mm/day)'])
df_eda.head()

# As we can see below, dtypes are float64 by default
df_eda.dtypes

print(f"Memory usage with reduced number of columns: {df_eda.memory_usage().sum() / 1e6:.2f} MB")

%%time
%memit
# Now perform a simple EDA with pandas describe function
df_eda.describe()
```

Time and memory data when using a column subset:
- Memory usage with reduced number of columns: 1500.33 MB
- peak memory: 340.50 MiB, increment: 0.40 MiB
- CPU times: user 7.13 s, sys: 5.6 s, total: 12.7 s
- Wall time: 18.2 s

### Summary

#### Using float32 vs. the baseline float64 dtype to perform a simple EDA:
- The memory usage decreased from 3500.78 MB to 2250.50 MB when representing numbers using float32 instead of float64.
- When using the pandas describe function to perform a simple EDA, we found that the peak memory decreased when using the float32 dtype for the numerical columns.
- The wall time taken to perform the EDA also decreased substantially, to 22.7 s from the baseline of 36.5 s.

#### Using a reduced number of columns compared to the baseline to perform a simple EDA:
- The memory usage decreased from 3500.78 MB to 1500.33 MB when using a subset of columns from the dataset.
- When using the pandas describe function to perform a simple EDA, we found that the peak memory decreased when using fewer columns.
- The wall time taken to perform the EDA also decreased substantially, to 18.2 s from the baseline of 36.5 s.

## 4. Perform a simple EDA in R <a name="4"></a>

We will transform our dataframe into different formats before loading it into R.

#### I. Default memory format + feather file format

```
%%time
feather.write_feather(df, "figsharerainfall/combined_data.feather")
```

#### II. dask + parquet file format

```
ddf = dd.read_csv("figsharerainfall/combined_data.csv")

%%time
dd.to_parquet(ddf, 'figsharerainfall/combined_data.parquet')
```

#### III. Arrow memory format + parquet file format

```
%%time
%%memit
dataset = ds.dataset("figsharerainfall/combined_data.csv", format="csv")
table = dataset.to_table()

%%time
pq.write_to_dataset(table, 'figsharerainfall/rainfall.parquet')
```

#### IV.
Arrow memory format + feather file format ``` %%time feather.write_feather(table, 'figsharerainfall/rainfall.feather') %%sh du -sh figsharerainfall/combined_data.csv du -sh figsharerainfall/combined_data.parquet du -sh figsharerainfall/rainfall.parquet du -sh figsharerainfall/rainfall.feather ``` ### Transfer different formats of data from Python to R It is usually not efficient to directly transfer Python dataframe to R due to serialization and deserialization involved in the process. Also, we observe Arrow memory format performs better than the default memory default. Thus, our next step is to further compare the performance of transferring Arrow-feather file and Arrow-parquet file to R. #### I. Read Arrow-parquet file to R ```python %%time %%R library(arrow) start_time <- Sys.time() r_table <- arrow::read_parquet("figsharerainfall/rainfall.parquet/e5a0076fe71f4bdead893e20a935897b.parquet") print(class(r_table)) library(dplyr) result <- r_table %>% count(model) end_time <- Sys.time() print(result) print(end_time - start_time) ``` ![](../img/output.png) *Note that the code above has been commented out to ensure the workbook is reproducible. Please check Reflection for more details.* #### II. Read Arrow-feather file to R ``` %%time %%R library(arrow) start_time <- Sys.time() r_table <- arrow::read_feather("figsharerainfall/rainfall.feather") print(class(r_table)) library(dplyr) result <- r_table %>% count(model) end_time <- Sys.time() print(result) print(end_time - start_time) ``` #### Summary of format selection Based on the data storage and processing time comparison from above, our preferred format among all is **parquet using Arrow package**. The file with this format takes much less space to store it. Also, it takes less time to write to this format and read it in R. 
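The float64-to-float32 saving reported above can be sanity-checked on a small synthetic frame; this is a minimal sketch, with a made-up column that merely mirrors one numeric column of the rainfall data:

```python
import numpy as np
import pandas as pd

# A synthetic stand-in for one numeric column of the rainfall data.
n = 1_000_000
df64 = pd.DataFrame({"rain (mm/day)": np.random.rand(n)})   # float64 by default
df32 = df64.astype({"rain (mm/day)": np.float32})           # downcast to float32

mb64 = df64.memory_usage(deep=True).sum() / 1e6
mb32 = df32.memory_usage(deep=True).sum() / 1e6
print(f"float64: {mb64:.1f} MB, float32: {mb32:.1f} MB")
```

The downcast column takes exactly half the bytes; the full dataset shrinks by less than half because the string `model` column is unaffected by the numeric dtype change.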
## Reflection <a name="5"></a>

After some trial and error, all team members were individually able to run the analysis successfully from start to finish; however, during the process we experienced some problems, which included the following:

- William had an issue with `%load_ext rpy2.ipython` despite a successful environment installation on his macOS. After many hours of debugging, rpy2 finally worked after specifying the Python version in the course yml file. The solution is to add `python=3.8.6` to the 525.yml file under `dependencies:` and reinstall the environment.
- Even though the file sizes were only 5 GB, we actually required 10 GB of disk space, since we needed to download and unzip the data.
- We got some confusing results by accidentally re-downloading the dataset without first deleting it, since we were then combining twice as many files in the next step.
- We noticed that the parquet file name under the parquet folder is generated differently every time we re-run the workbook. If we keep the current file name and re-run all cells, the `arrow::read_parquet` function returns an error message indicating that the file "e5a0076fe71f4bdead893e20a935897b.parquet" does not exist in the directory. For reproducibility reasons, we decided to comment out the code but record the output for further comparison.
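One way to sidestep the hashed-parquet-filename problem described in the last bullet is to glob for the part files instead of hardcoding a name. A sketch (the helper name `parquet_parts` is ours, not part of any library):

```python
import glob
import os

def parquet_parts(directory):
    """Return the parquet part files inside a dataset directory,
    sorted, whatever their auto-generated names happen to be."""
    return sorted(glob.glob(os.path.join(directory, "*.parquet")))

# e.g. parts = parquet_parts("figsharerainfall/rainfall.parquet")
# then pass parts[0] (or the whole directory) on to R instead of a fixed name.
```

Alternatively, R's arrow package can be pointed at the dataset directory itself rather than at an individual part file.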
# Computer Vision Nanodegree

## Project: Image Captioning

---

In this notebook, you will use your trained model to generate captions for images in the test dataset.

This notebook **will be graded**.

Feel free to use the links below to navigate the notebook:
- [Step 1](#step1): Get Data Loader for Test Dataset
- [Step 2](#step2): Load Trained Models
- [Step 3](#step3): Finish the Sampler
- [Step 4](#step4): Clean up Captions
- [Step 5](#step5): Generate Predictions!

<a id='step1'></a>
## Step 1: Get Data Loader for Test Dataset

Before running the code cell below, define the transform in `transform_test` that you would like to use to pre-process the test images.

Make sure that the transform that you define here agrees with the transform that you used to pre-process the training images (in **2_Training.ipynb**). For instance, if you normalized the training images, you should also apply the same normalization procedure to the test images.

```
import sys
sys.path.append('/opt/cocoapi/PythonAPI')
from pycocotools.coco import COCO
from data_loader import get_loader
from torchvision import transforms

# TODO #1: Define a transform to pre-process the testing images.
transform_test = transforms.Compose([
    transforms.Resize(256),                     # smaller edge of image resized to 256
    transforms.RandomCrop(224),                 # get 224x224 crop from random location
    transforms.ToTensor(),                      # convert the PIL Image to a tensor
    transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
                         (0.229, 0.224, 0.225))])

#-#-#-# Do NOT modify the code below this line. #-#-#-#

# Create the data loader.
data_loader = get_loader(transform=transform_test, mode='test')
```

Run the code cell below to visualize an example test image, before pre-processing is applied.

```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

# Obtain sample image before and after pre-processing.
orig_image, image = next(iter(data_loader))

# Visualize sample image, before pre-processing.
plt.imshow(np.squeeze(orig_image))
plt.title('example image')
plt.show()
```

<a id='step2'></a>
## Step 2: Load Trained Models

In the next code cell we define a `device` that you will use to move PyTorch tensors to the GPU (if CUDA is available). Run this code cell before continuing.

```
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```

Before running the code cell below, complete the following tasks.

### Task #1

In the next code cell, you will load the trained encoder and decoder from the previous notebook (**2_Training.ipynb**). To accomplish this, you must specify the names of the saved encoder and decoder files in the `models/` folder (e.g., these names should be `encoder-5.pkl` and `decoder-5.pkl`, if you trained the model for 5 epochs and saved the weights after each epoch).

### Task #2

Plug in both the embedding size and the size of the hidden layer of the decoder corresponding to the selected pickle file in `decoder_file`.

```
# Watch for any changes in model.py, and re-load it automatically.
%load_ext autoreload
%autoreload 2

import os
import torch
from model import EncoderCNN, DecoderRNN

# TODO #2: Specify the saved models to load.
encoder_file = 'encoder-3.pkl'
decoder_file = 'decoder-3.pkl'

# TODO #3: Select appropriate values for the Python variables below.
embed_size = 250
hidden_size = 125

# The size of the vocabulary.
vocab_size = len(data_loader.dataset.vocab)

# Initialize the encoder and decoder, and set each to inference mode.
encoder = EncoderCNN(embed_size)
encoder.eval()
decoder = DecoderRNN(embed_size, hidden_size, vocab_size)
decoder.eval()

# Load the trained weights.
encoder.load_state_dict(torch.load(os.path.join('./models', encoder_file)))
decoder.load_state_dict(torch.load(os.path.join('./models', decoder_file)))

# Move models to GPU if CUDA is available.
encoder.to(device)
decoder.to(device)
```

<a id='step3'></a>
## Step 3: Finish the Sampler

Before executing the next code cell, you must write the `sample` method in the `DecoderRNN` class in **model.py**. This method should accept as input a PyTorch tensor `features` containing the embedded input features corresponding to a single image.

It should return as output a Python list `output`, indicating the predicted sentence. `output[i]` is a nonnegative integer that identifies the predicted `i`-th token in the sentence. The correspondence between integers and tokens can be explored by examining `data_loader.dataset.vocab.word2idx` or `data_loader.dataset.vocab.idx2word`.

After implementing the `sample` method, run the code cell below. If the cell returns an assertion error, then please follow the instructions to modify your code before proceeding. Do **not** modify the code in the cell below.

```
# Move image Pytorch Tensor to GPU if CUDA is available.
image = image.to(device)

# Obtain the embedded image features.
features = encoder(image).unsqueeze(1)

# Pass the embedded image features through the model to get a predicted caption.
output = decoder.sample(features)
print('example output:', output)

assert (type(output)==list), "Output needs to be a Python list"
assert all([type(x)==int for x in output]), "Output should be a list of integers."
assert all([x in data_loader.dataset.vocab.idx2word for x in output]), "Each entry in the output needs to correspond to an integer that indicates a token in the vocabulary."
```

<a id='step4'></a>
## Step 4: Clean up the Captions

In the code cell below, complete the `clean_sentence` function. It should take a list of integers (corresponding to the variable `output` in **Step 3**) as input and return the corresponding predicted sentence (as a single Python string).

```
# TODO #4: Complete the function.
def clean_sentence(output):
    sentence = ''
    for x in output:
        sentence = sentence + ' ' + data_loader.dataset.vocab.idx2word[x]
    sentence = sentence.strip()
    return sentence
```

After completing the `clean_sentence` function above, run the code cell below. If the cell returns an assertion error, then please follow the instructions to modify your code before proceeding.

```
sentence = clean_sentence(output)
print('example sentence:', sentence)

assert type(sentence)==str, 'Sentence needs to be a Python string!'
```

<a id='step5'></a>
## Step 5: Generate Predictions!

In the code cell below, we have written a function (`get_prediction`) that you can use to loop over images in the test dataset and print your model's predicted caption.

```
def get_prediction():
    orig_image, image = next(iter(data_loader))
    plt.imshow(np.squeeze(orig_image))
    plt.title('Sample Image')
    plt.show()
    image = image.to(device)
    features = encoder(image).unsqueeze(1)
    output = decoder.sample(features)
    sentence = clean_sentence(output)
    print(sentence)
```

Run the code cell below (multiple times, if you like!) to test how this function works.

```
get_prediction()
```

As the last task in this project, you will loop over the images until you find four image-caption pairs of interest:
- Two should show instances where the model performed well.
- Two should highlight instances where the model did not perform well.

Use the four code cells below to complete this task.

### The model performed well!

Use the next two code cells to loop over captions. Save the notebook when you encounter two images with relatively accurate captions.

```
get_prediction()
get_prediction()
```

### The model could have performed better ...

Use the next two code cells to loop over captions. Save the notebook when you encounter two images with relatively inaccurate captions.

```
get_prediction()
get_prediction()
```
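If the generated captions come back wrapped in special markers, a variant of `clean_sentence` can drop them while joining. This is a sketch, assuming the vocabulary names its markers `<start>`, `<end>` and `<unk>` (check `data_loader.dataset.vocab.word2idx` for the actual names); the toy vocabulary is purely illustrative:

```python
def clean_sentence_v2(output, idx2word, skip=("<start>", "<end>", "<unk>")):
    """Join predicted token ids into a sentence, dropping special markers.
    `skip` is an assumption about how this vocabulary names its markers."""
    words = [idx2word[i] for i in output]
    return " ".join(w for w in words if w not in skip)

# Toy vocabulary for illustration only.
toy_idx2word = {0: "<start>", 1: "a", 2: "dog", 3: "runs", 4: "<end>"}
print(clean_sentence_v2([0, 1, 2, 3, 4], toy_idx2word))  # a dog runs
```

Passing `data_loader.dataset.vocab.idx2word` as the second argument would plug this straight into `get_prediction`.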
# FIFA Transfer Market Analysis of European Football Leagues

## Data 512 Project
## Tharun Sikhinam

## I. Introduction

Transfer windows are as busy a time as any other in the football world. The game attracts so much attention that even when the ball is not rolling, the eyes of the entire world are on football. Transfers occur all year round and in every corner of the globe, but activity on the transfer market is typically at its peak during the summer and winter seasons.

The purpose of this project is to study the economics at play from 1990-2019 in the transfer market of the top European leagues. It would be interesting to know how the trends have changed over the years, and by looking closely at the data one hopes to uncover any hidden trends and explain the rise of the elite clubs on the European continent. With each club bringing in vast amounts of money through TV rights and other sources, it would be interesting to know how the clubs have put this money to use. Understanding these macroeconomics can help us build a fair game and find areas of improvement. Through this analysis we want to address the wealth inequality of the top elite European clubs and how football has almost become a ‘money-game’.

## II. Background

The best resource on transfer markets is the official FIFA TMS[1]. This link holds summary reports for each transfer window in the European leagues dating back to 2013. That analysis concentrates on a particular transfer window, not on trends over the years. While the reports discuss how each country is represented, the players are not classified by their position (Striker, Midfield, etc.). Through this analysis we wish to answer questions such as "Which country produces the best strikers?" KPMG also publishes reports on club valuations and their spending[2]. These reports also serve as useful guidelines so that I don't replicate any existing work.

Most of these papers focus on a short duration of time, and this motivated me to explore the trends dating back to 1990 and to correlate some of the trends found with real-life events. The wealth gap between the top clubs and lower-level clubs has been evident for a while [3], but it would be interesting to know what the exact numbers are, and how this wealth gap has been changing over the years. This can inform football policy makers to design better laws to safeguard the values and spirit of the game.

## III. Data

The data for this analysis is scraped from transfermarkt.co.uk. The scraping was done in accordance with the Terms of Use at this link: https://www.transfermarkt.co.uk/intern/anb

The Terms of Use clearly state that

```
4.8. With the transmission of Content, Transfermarkt grants you the irrevocable, temporally and spatially unlimited and transferable right to reproduce, distribute, publish, issue, make publicly accessible, modify, translate and store the Content. This includes the right to edit, design, adapt to the file formats required for use, and to change and / or improve the presentation quality.
```

Furthermore, looking at https://www.transfermarkt.co.uk/robots.txt , the pages we want to scrape are not disallowed and are open to web crawlers and scrapers.

This dataset consists of all transfers in the European football market from 1991-2018. The data consists of a player name, club-out, club-in and the transfer value, grouped by league. The dataset includes free transfers and loans as well. Most of the data is updated and contributed by the users of the website, and there might be a few factual inaccuracies in the stated transfer figures.

<b>Ethical considerations:</b> Player names and ages have been removed from the dataset and will not be used as part of the analysis. This analysis doesn't aim to disrespect any player or country. The results of the analysis are aimed at the governing bodies of different leagues and sport lawmakers. Individual clubs are singled out over the course of the analysis.

## IV. Research Questions

The following research questions are posed and answered methodically following best practices:

- <b>Q1. How has the transfer spending increased in the top 5 European Leagues?</b>
- <b>Q2. Which clubs spent the most and received the most transfer fees on players from 2010-2018?</b>
- <b>Q3. How have transfer fees moved between the leagues from 2010-2018?</b>
- <b>Q4. How has the wealth gap changed amongst the European elite?</b>
- <b>Q5. Investigating the spending trends of Manchester City, Chelsea and Paris Saint-Germain</b>
- <b>Q6. Which country produces the best footballing talent?</b>

## V. Reproducibility

Each individual section of the analysis can be reproduced in its entirety. To reproduce the analysis, the following software has to be installed

- R
- python
- pandas
- numpy
- matplotlib
- sqlite3
- javascript/codepen
- Tableau

For questions 1-5 of the analysis, sqlite3 is used as the database to query results from. Sqlite3 is an in-memory/local database that is easy to set up and use. An installation of sqlite3 is recommended to reproduce all parts of the analysis: https://www.sqlite.org/download.html

Javascript is used to create a Sankey diagram for research question 3. The code for the javascript visualization can be found at https://codepen.io/tharunsikhinam/pen/QWwbzKj.

To create the annotations and vertical stacked bar chart for question 5, Tableau was used. The Tableau .twb file is stored in the images directory of the project, called q5Viz.twb. Load the file into Tableau Desktop/Online to view the visualization.

A copy of the database used for the analysis is also stored in the cleanData directory, called transfers.db. It holds all the tables necessary for the analysis and can be directly imported into sqlite3 or any other SQL database.

## VI. Analysis and Code

The analysis and code section of this document is broken down into 3 main parts

1. Data Collection
2. Data Pre-Processing
3. Data Analysis & Results

## 1. Data Collection

#### Data is scraped from transfermarkt.co.uk in accordance with their Terms of Use.

#### The R scripts to scrape data are stored under scrapingScripts/; run the following cell to generate raw data

```
R < scrape.R --no-save
```

### 1.1 Run R scripts to scrape data

```
!R < ./scrapingScripts/scrape.R --no-save
```

#### scrape.R can be modified to include more seasons and leagues; by default we are considering the years 1991-2018 and the European leagues from England, Spain, Italy, France, Germany (the Top 5), Portugal and the Netherlands

### 1.2 List first 5 files in data directory

```
from os import walk

f = []
for (dirpath, dirnames, filenames) in walk("./rawData"):
    f.extend(filenames)
    break
f[0:5]
```

### 1.3 Combining data into one file

```
import pandas as pd
import glob

interesting_files = glob.glob("./rawData/*.csv")

# Combine data frames inside rawData directory
combinedDf = pd.concat((pd.read_csv(f, header = 0) for f in interesting_files))
combinedDf.to_csv("./cleanData/allSeasons.csv")
combinedDf.head(5)
```

## 2. Data Pre-Processing

```
import pandas as pd
import numpy as np

raw = pd.read_csv("./cleanData/allSeasons.csv")
```

#### Shape of the dataset

```
raw.shape
```

#### Columns

```
raw.columns
```

### 2.1 Dropping unnecessary columns

```
raw.head(5)
raw = raw.drop(["player_name","age","fee"],axis=1)
```

### 2.2 Clean up nationality and position

#### For players belonging to more than one nation, only the first country is used
#### Positions are generalized into Forwards, Midfield, Defense, Wingers and Goalkeepers

```
# Consider only the first country that the player belongs to
def cleanCountry(row):
    if isinstance(row["nat"],str):
        return row["nat"].split(";")[0]
    else:
        return row["nat"]

# Replace positions with a general category
def cleanPosition(row):
    if isinstance(row["position"],str):
        if "Midfield" in row["position"]:
            return "Midfield"
        elif row["position"].find("Back")>-1:
            return "Defense"
        elif "Forward" in row["position"]:
            return "Forward"
        elif "Striker" in row["position"]:
            return "Forward"
        elif "Winger" in row["position"]:
            return "Winger"
        else:
            return row["position"]
    else:
        return row["position"]
```

<b>Clean up country; clean up position and add a new posNew column; remove incoming transfers (duplicates); remove rows from the Championship, since we're only considering top-flight leagues</b>

```
# Clean up country
raw["nat"] = raw.apply(cleanCountry,axis=1)

# Clean up position and add new posNew column
raw["posNew"] = raw.apply(cleanPosition,axis=1)

# Remove incoming transfers (duplicates)
raw = raw[raw['transfer_movement']=='in']

# Remove rows from Championship, we're only considering top-flight leagues
raw = raw[raw["league_name"]!="Championship"]
```

### 2.4 Replace zero's with NA's and drop all rows with NA's

#### For the purposes of this analysis we will only be considering publicly stated transfers. Transfers involving loans, free transfers and end-of-contract signings are not considered.
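On a toy frame, the replace-then-drop idea applied to the full dataset in the next cell looks like this (a minimal sketch; the made-up rows stand in for the transfer data, with a 0 fee marking a loan or free transfer):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the transfer data: a 0 fee marks an unstated transfer.
demo = pd.DataFrame({"club_name": ["A", "B", "C"],
                     "fee_cleaned": [5.0, 0.0, 12.5]})
demo = demo.replace(0, np.nan).dropna()   # 0 -> NaN, then drop those rows
print(demo)  # only the rows with a real, non-zero fee remain
```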
```
# Replace 0's with NA's and drop those rows
raw = raw.replace(0,np.nan)
raw = raw.dropna()

leagues = raw.league_name.unique()
raw.to_csv("./cleanData/allSeasonsClean.csv")
raw.shape
```

### 2.5. Load data into SQLite

<b>For all future parts of this analysis a local installation of sqlite3 is recommended. Download sqlite3 from the following link and unzip the file: https://www.sqlite.org/download.html</b>

<b>We will be using an in-memory database to run SQL queries against. If you would like to create a persistent copy of the database, replace :memory: with a path in the filesystem</b>

```
import sqlite3
import pandas as pd

# Create an in-memory database (use a path such as "transfers.db" to persist it)
raw = pd.read_csv("./cleanData/allSeasonsClean.csv")
cnx = sqlite3.connect(":memory:")

# create the table from the dataframe
raw.to_sql("transfer", cnx, if_exists='append', index=False)
```

#### Verifying counts in database

```
df = pd.read_sql_query("SELECT count(*) FROM transfer limit 1", cnx)
df.head()
```

#### Loading inflation dataset

<b>British inflation data is obtained from http://inflation.iamkate.com/ under the CC0 1.0 Universal License https://creativecommons.org/publicdomain/zero/1.0/legalcode</b>

<b>The multiplier indicates by how much the price in a particular year should be multiplied to get an inflation-adjusted value</b>

```
inflation = pd.read_csv("./cleanData/inflation.csv")
inflation.head()
inflation.to_sql("inflation", cnx, if_exists='append', index=False)
```

### 2.6. Adjust transfer fee for inflation

<b>Adjust transfer fees for inflation and store this data as the transfers table</b>

```
inflation_adjusted = pd.read_sql_query("select transfer.*,(fee_cleaned*multiplier) as `fee_inflation_adjusted` from transfer join inflation on transfer.year = inflation.year",cnx)
inflation_adjusted.to_sql("transfers", cnx, if_exists='append', index=False)
```

## 3. Data Analysis

#### Verifying data in the table

```
pd.read_sql_query("SELECT count(*) FROM transfers limit 1", cnx).head()
```

## Q1. How has the transfer spending increased in the top 5 European Leagues?

Transfer records are being broken every summer, with player values reaching upwards of £100M. The English Premier League in particular seems to be setting the trend for high transfer spending, with some of the richest clubs belonging to England. Through this question we want to analyze how transfer spending has increased amongst the top-flight leagues in Europe. Actual British Pound inflation from the 1990s to 2018 was over 2.5% per year[4], while the inflation in transfer spending appears to be extreme. We want to be able to quantify this increase in the value of players.

To observe the trends over the years we write a SQL query that sums the transfer fee spending for each league over the years.

SQL query - sum(transfer_fee) and group by league_name and year

```
df = pd.read_sql_query("SELECT league_name,year,sum(fee_inflation_adjusted) as `total_spending in £million` from transfers where\
                        league_name in ('Serie A', '1 Bundesliga', 'Premier League', 'Ligue 1','Primera Division') \
                        group by league_name,year",cnx)
df.head(5)
```

### Plot

```
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (20,10)

for l in ['Serie A', '1 Bundesliga', 'Premier League', 'Ligue 1','Primera Division']:
    x = df[df["league_name"]==l]
    plt.plot(x["year"], x["total_spending in £million"],label=l)

plt.legend(prop={'size': 15})
plt.xlabel("Years",fontsize = 20)
plt.ylabel("Transfer Spending in £Million ",fontsize = 20)
plt.title("Transfer Spending in the top 5 European Leagues",fontsize = 20)
```

### Results

- Transfer spending has been steadily increasing in all European leagues since 1991
- The spending gap between the Premier League and the others shows a steep increase since 2010
- The percentage change in the median price of a player from the 1990s to 2018 is 521%
- The rise in TV rights revenue is one explanation for the rise in Premier League transfer spending (https://www.usatoday.com/story/sports/soccer/2019/05/21/english-premier-league-broadcast-rights-rise-to-12-billion/39500789/)

## Q 2.1. Which clubs spent the most on players from 2010-2018?

After observing that transfer spending increases by the year, it would be interesting to know who the top spenders are and which clubs receive the most in transfer fees. You would expect the top spenders to also produce the best footballing talent. The results from this question paint a different picture.

### SQL Query - sum(transfer_fee) and group by club_name and order by descending

```
topSpenders = pd.read_sql_query("SELECT club_name,league_name,sum(fee_inflation_adjusted) as `total_spending in £million` from transfers where\
                                 year>2010 and year<2019\
                                 group by club_name,league_name order by `total_spending in £million` desc",cnx)
topSpenders = topSpenders.head(10)
topSpenders.head(10)
```

### Plot

```
plt.rcParams["figure.figsize"] = (10,5)
plt.barh(topSpenders["club_name"],topSpenders["total_spending in £million"])
plt.xlabel("Transfer Spending in £Million",fontsize = 15)
plt.ylabel("Club Names",fontsize = 15)
plt.title("Clubs with highest transfer fee spending (2010-2018)",fontsize = 15)
```

## Q 2.2: Which clubs receive the highest transfer fees for their players?
### SQL Query - sum(transfer_fee) over outgoing transfers and group by club_name and order by descending

```
highestReceivers = pd.read_sql_query("SELECT club_involved_name,sum(fee_inflation_adjusted) as `total_spending in £million` from transfers where\
                                      year>=2010 and year<=2019\
                                      group by club_involved_name order by `total_spending in £million` desc",cnx).head(10)
highestReceivers = highestReceivers.head(10)
highestReceivers.head(10)
```

### Plot

```
plt.rcParams["figure.figsize"] = (10,5)
plt.barh(highestReceivers["club_involved_name"],highestReceivers["total_spending in £million"])
plt.xlabel("Transfer Spending in £Million",fontsize = 15)
plt.ylabel("Club Names",fontsize = 15)
plt.title("Clubs receiving the highest transfer fee (2010-2018)",fontsize = 15)
```

### Results

- The highest-spending clubs are Manchester City, Chelsea and PSG. It would be interesting to know how the transfer trends have changed for these three clubs (explored in Q5)
- The club receiving the highest transfer fees is Monaco, a relatively small club from Ligue 1 (France). We also notice Benfica, another club from Liga Nos (Portugal), receiving high transfer fees. This goes to show that the clubs spending the most don't necessarily sell their players for high values
- 4 of the top 10 highest-spending clubs are from the English Premier League, which leads us into the next question.

## Q3. How have the transfers flowed between the leagues from 2010-2018?
### We create a new temporary table 'movements' with the following columns: from_league, fee_spent and to_league

```
### To find out the movement of money across the leagues, we create a temporary table
df = pd.read_sql_query("select t1.league_name as 'to_league',fee_inflation_adjusted,b.league_name as 'from_league' from transfers t1 left outer join (select club_name,league_name from transfers \
                        where league_name!='Championship' and year>=2010 \
                        group by club_name,league_name) as b on t1.club_involved_name = b.club_name where t1.year>=2010",cnx)
df.to_sql("movements", cnx, if_exists='append', index=False)

movements = pd.read_sql_query("select to_league,sum(fee_inflation_adjusted) as fee_spent,from_league from movements where \
                               from_league is not null and to_league!=from_league group by to_league,from_league",cnx)
movements.head(5)
```

### The above data is loaded into a Sankey diagram written in JavaScript; the visualization is displayed below (only visible when using the notebook).

### Code for the visualization can be found at https://codepen.io/tharunsikhinam/pen/QWwbzKj
### Link to the visualization: https://codepen.io/tharunsikhinam/full/QWwbzKj

```
# Display the associated webpage in a new window
import IPython
url = 'https://codepen.io/tharunsikhinam/full/QWwbzKj'
iframe = '<iframe src=' + url + ' width=1000 height=700></iframe>'
IPython.display.HTML(iframe)
```

### Results

- The league importing the most talent is the English Premier League. This also explains the high transfer spending in that league.
- The league exporting the most talent is La Liga (Spain). The Spanish league exports players to nearly all leagues, with the largest flow being to the English Premier League.

## Q 4: How has the wealth gap changed amongst the European elite?
### For this part of the analysis we will only be focussing on the top 5 European leagues

```
leagues = ['Primera Division', 'Serie A', '1 Bundesliga', 'Premier League','Ligue 1']
```

### To evaluate the wealth gap

- We collect the top 5 spenders in each league and calculate their transfer spending over the years
- We collect the bottom 15 in each league and calculate their transfer spending over the years
- The above two measures are plotted on a graph

```
for l in leagues:
    plt.figure()

    # Query to get spendings of top 5 clubs in a league
    df = pd.read_sql_query("select '"+l+"' ,year,sum(fee_inflation_adjusted) from transfers where club_name in\
                            (select club_name from transfers where league_name='"+l+"'\
                            and year>=2010 \
                            group by club_name order by sum(fee_cleaned) desc limit 5) and league_name='"+l+"'\
                            and year>=2010 group by year",cnx)
    plt.plot(df["year"], df["sum(fee_inflation_adjusted)"],label="Top 5")

    # Query to get spendings of bottom 15 clubs in a league
    df = pd.read_sql_query("select '"+l+"' ,year,sum(fee_inflation_adjusted) from transfers where club_name not in\
                            (select club_name from transfers where league_name='"+l+"'\
                            and year>=2010 \
                            group by club_name order by sum(fee_cleaned) desc limit 5) and league_name='"+l+"'\
                            and year>=2010 group by year",cnx)
    plt.title("Transfer spendings in " + l)
    plt.plot(df["year"], df["sum(fee_inflation_adjusted)"],label="Bottom 15")
    plt.xlabel("Years",fontsize = 10)
    plt.ylabel("Transfer Spending in £Million ",fontsize = 10)
    plt.legend()
```

### Results

- We observe a huge wealth inequality between the top and bottom clubs in Ligue 1, Serie A and the Primera Division
- The difference is not as significant for the English Premier League and the Bundesliga
- This is still a cause for concern, since the top 5 clubs hold a disproportionate share of the wealth among the top-flight clubs
- These top 5 clubs in their respective leagues have won the domestic or international titles since 2010 (except for Leicester City in 2016)
- High transfer spending for domestic and international performance can lead to inequality between leagues and clubs.

## Q5: Investigating the spending trends of Manchester City, Chelsea and Paris Saint-Germain

We are particularly interested in the spending trends of the above 3 clubs. They have arrived on the footballing scene relatively recently and have gone on to challenge the European elite.

#### Calculate transfer spending over the years for the above clubs

```
df = pd.read_sql_query("select club_name,year,sum(fee_inflation_adjusted) as `transfer_fee_total` from transfers\
                        where club_name in ('Manchester City','Chelsea FC','Paris Saint-Germain') and year<=2017\
                        group by club_name,year",cnx)
df.head(5)
```

### For this question, Tableau was used to create the visualization. The Tableau file for the visualization can be found at clubsSpending.twb.

Steps to reproduce the visualization

- Run the above SQL query
- Dump the dataframe into a csv file
- Open Tableau Desktop/Online
- Load the csv file into Tableau
- Move transfer_fee_total to rows
- Move year to columns
- Add club_name to color
- Choose vertical stacked chart from the top right corner

```
from IPython.display import Image
Image(filename='./images/q5.png')
```

- Chelsea, Manchester City and PSG have challenged the European elite in the past decade partly due to their huge spending
- Chelsea’s investment grew by over 234%, Paris Saint-Germain’s by 477% and Manchester City’s by 621%
- The huge transfer spending can be attributed to the massive amounts of foreign investment into these clubs

<b>Amount & time to first title</b>

- Chelsea - £470.5 million (2 yrs)
- Manchester City - £761 million (5 yrs)
- Paris Saint-Germain - £421 million (2 yrs)

## Q 6: Which country produces the best footballing talent?
### The following steps are performed to decide which country produces the best footballing talent

- Iterate over the years 2000-2018
- Iterate over the chosen positions
- Sum the transfer fees by nation
- Rank countries in descending order of transfer spending
- Compute the median rank over the years
- Sort by median rank and display the result for each position
- Disregard countries that appear in the rankings fewer than 10 times over the 2000-2018 span

```
# `raw` is the cleaned transfers DataFrame loaded earlier
clean = raw
positions = ['Forward', 'Midfield', 'Winger', 'Defense', 'Goalkeeper']
final = {}

# Iterate over the years
for i in range(2000, 2019):
    # Iterate over positions
    for pos in positions:
        if pos not in final:
            final[pos] = pd.DataFrame()
        # Sum the fees by nation and rank (rank 1 = highest spend)
        x = clean[(clean["posNew"]==pos) & (clean["year"]==i)].groupby(['nat'])['fee_cleaned'].agg('sum').rank(ascending=False).sort_values(ascending=True)
        x = x.to_frame()
        x["year"] = i
        final[pos] = pd.concat([final[pos], x])

# Add a column to maintain counts
for pos in positions:
    final[pos]["count"] = 1

# Compute the median rank and display
for pos in positions:
    z1 = final[pos].groupby(['nat']).agg({"count": "count", "fee_cleaned": "median"})
    print(pos)
    z1 = z1.rename(columns={"fee_cleaned": "median_rank", "nat": "Country"})
    print(z1[z1["count"]>=10].drop(['count'], axis=1).sort_values(by="median_rank").head(5))
    print("\n\n")
```

## VII. Limitations

- All of the data is collected and maintained by users of the transfermarkt.co.uk website, so there may be inaccuracies in the stated transfer figures.
- These inaccuracies are likely more frequent for the earlier years (1990-2000).
- Players on loan and free transfers are not considered in this analysis, which could change the results.
- The median rank may not be the best metric for measuring which country produces the best footballing talent; player ratings or yearly performances would be a better measure.

## VIII.
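The ranking-and-median procedure above can be sketched on toy data; `pd.concat` stands in for the now-removed `DataFrame.append` used in the original cell.

```python
import pandas as pd

# Toy transfer records: nationality, position, year, fee.
data = pd.DataFrame({
    "nat":  ["Brazil", "France", "Brazil", "France", "Spain", "Brazil"],
    "posNew": ["Forward"] * 6,
    "year": [2000, 2000, 2001, 2001, 2001, 2001],
    "fee_cleaned": [50.0, 30.0, 20.0, 60.0, 10.0, 25.0],
})

frames = []
for year in sorted(data["year"].unique()):
    sub = data[data["year"] == year]
    # Rank nations by total fee within the year; highest spend = rank 1.
    ranks = (sub.groupby("nat")["fee_cleaned"].sum()
                .rank(ascending=False).to_frame("rank"))
    ranks["year"] = year
    frames.append(ranks)

ranked = pd.concat(frames)
# Median rank per nation across all years.
median_rank = ranked.groupby("nat")["rank"].median().sort_values()
print(median_rank)
```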
Conclusion

- By analyzing the transfer market we are now aware of some of the big spenders in the European leagues and the hyper-inflation of transfer fees in the English Premier League
- With clubs raking in huge amounts of revenue, checks and balances need to be put in place to prevent the sport from being dominated by a few elite European clubs, which could lead to a European Super League
- High transfer spending for domestic and international performance can lead to wealth inequality between leagues and clubs
- The increase in foreign investment into European clubs has led to the rise of super-rich clubs
- Clubs chasing success are spending more and more on players, which creates an unequal playing field for all the clubs. Although it might not be feasible to completely curb the spending of these clubs, regulations need to be put in place to prevent them from taking over.

## IX. References

1. FIFA TMS reports https://www.fifatms.com/data-reports/reports/
2. Valuation of European football clubs - a report by KPMG https://www.footballbenchmark.com/library/football_clubs_valuation_the_european_elite_2019
3. Wealth gap among the top European clubs https://www.usatoday.com/story/sports/soccer/2018/01/16/uefa-warns-of-growing-wealth-gap-in-top-clubs-finance-study/109521284/
4. British inflation calculator http://inflation.iamkate.com/
## The 1cycle policy ``` from fastai.gen_doc.nbdoc import * from fastai.vision import * from fastai.callbacks import * ``` ## What is 1cycle? This Callback allows us to easily train a network using Leslie Smith's 1cycle policy. To learn more about the 1cycle technique for training neural networks check out [Leslie Smith's paper](https://arxiv.org/pdf/1803.09820.pdf) and for a more graphical and intuitive explanation check out [Sylvain Gugger's post](https://sgugger.github.io/the-1cycle-policy.html). To use our 1cycle policy we will need an [optimum learning rate](https://sgugger.github.io/how-do-you-find-a-good-learning-rate.html). We can find this learning rate by using a learning rate finder which can be called by using [`lr_finder`](/callbacks.lr_finder.html#callbacks.lr_finder). It will do a mock training by going over a large range of learning rates, then plot them against the losses. We will pick a value a bit before the minimum, where the loss still improves. Our graph would look something like this: ![onecycle_finder](imgs/onecycle_finder.png) Here anything between `3x10^-2` and `10^-2` is a good idea. Next we will apply the 1cycle policy with the chosen learning rate as the maximum learning rate. The original 1cycle policy has three steps: 1. We progressively increase our learning rate from lr_max/div_factor to lr_max and at the same time we progressively decrease our momentum from mom_max to mom_min. 2. We do the exact opposite: we progressively decrease our learning rate from lr_max to lr_max/div_factor and at the same time we progressively increase our momentum from mom_min to mom_max. 3. We further decrease our learning rate from lr_max/div_factor to lr_max/(div_factor x 100) and we keep momentum steady at mom_max. 
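As a rough sketch (not fastai's implementation), the three steps can be written as a plain Python schedule, assuming linear interpolation within each phase and illustrative phase lengths and default values:

```python
def one_cycle(step, total, lr_max, div_factor=25.0,
              mom_min=0.85, mom_max=0.95, pct_phase=0.45):
    """Sketch of the original three-step 1cycle schedule.

    Phase 1: lr rises lr_max/div_factor -> lr_max, momentum falls
             mom_max -> mom_min.
    Phase 2: the exact opposite.
    Phase 3: lr falls further to lr_max/(div_factor*100), momentum
             stays at mom_max.
    All constants here are illustrative assumptions, not fastai defaults.
    """
    def lerp(a, b, t):
        return a + (b - a) * t

    p1, p2 = pct_phase * total, 2 * pct_phase * total
    if step < p1:                      # phase 1
        t = step / p1
        return lerp(lr_max / div_factor, lr_max, t), lerp(mom_max, mom_min, t)
    if step < p2:                      # phase 2
        t = (step - p1) / (p2 - p1)
        return lerp(lr_max, lr_max / div_factor, t), lerp(mom_min, mom_max, t)
    t = (step - p2) / (total - p2)     # phase 3
    return lerp(lr_max / div_factor, lr_max / (div_factor * 100), t), mom_max
```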
This gives the following form:

<img src="imgs/onecycle_params.png" alt="1cycle parameters" width="500">

Unpublished work has shown even better results by using only two phases: the same phase 1, followed by a second phase where we do a cosine annealing from lr_max to 0. The momentum goes from mom_min to mom_max by following the symmetric cosine (see the graph a bit below).

## Basic Training

The one cycle policy allows us to train very quickly, a phenomenon termed [_superconvergence_](https://arxiv.org/abs/1708.07120). To see this in practice, we will first train a CNN and see how our results compare when we use the [`OneCycleScheduler`](/callbacks.one_cycle.html#OneCycleScheduler) with [`fit_one_cycle`](/train.html#fit_one_cycle).

```
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
model = simple_cnn((3,16,16,2))
learn = Learner(data, model, metrics=[accuracy])
```

First let's find the optimum learning rate for our comparison by doing an LR range test.

```
learn.lr_find()
learn.recorder.plot()
```

Here 5e-2 looks like a good value, a tenth of the minimum of the curve. That's going to be the highest learning rate in 1cycle, so let's try constant training at that value.

```
learn.fit(2, 5e-2)
```

We can also see what happens when we train at a lower learning rate.

```
model = simple_cnn((3,16,16,2))
learn = Learner(data, model, metrics=[accuracy])
learn.fit(2, 5e-3)
```

## Training with the 1cycle policy

Now to do the same thing with 1cycle, we use [`fit_one_cycle`](/train.html#fit_one_cycle).

```
model = simple_cnn((3,16,16,2))
learn = Learner(data, model, metrics=[accuracy])
learn.fit_one_cycle(2, 5e-2)
```

This gets the best of both worlds: we get a far better accuracy and a far lower loss in the same number of epochs. It's possible to reach similar results by training at constant learning rates that we progressively decrease, but it takes far longer.
Here is the schedule of the learning rates (left) and momentum (right) that the new 1cycle policy uses.

```
learn.recorder.plot_lr(show_moms=True)
show_doc(OneCycleScheduler)
```

Create a [`Callback`](/callback.html#Callback) that handles the hyperparameter settings following the 1cycle policy for `learn`. `lr_max` should be picked with the [`lr_find`](/train.html#lr_find) test. In phase 1, the learning rate goes from `lr_max/div_factor` to `lr_max` linearly while the momentum goes from `moms[0]` to `moms[1]` linearly. In phase 2, the learning rate follows a cosine annealing from `lr_max` to 0, as the momentum goes from `moms[1]` to `moms[0]` with the same annealing.

```
show_doc(OneCycleScheduler.steps, doc_string=False)
```

Build the [`Scheduler`](/callback.html#Scheduler) for the [`Callback`](/callback.html#Callback) according to `steps_cfg`.

### Callback methods

You don't call these yourself - they're called by fastai's [`Callback`](/callback.html#Callback) system automatically to enable the class's functionality.

```
show_doc(OneCycleScheduler.on_train_begin, doc_string=False)
```

Initialize the parameters for training over `n_epochs`.

```
show_doc(OneCycleScheduler.on_batch_end, doc_string=False)
```

Prepare the hyperparameters for the next batch.

## Undocumented Methods - Methods moved below this line will intentionally be hidden

## New Methods - Please document or move to the undocumented section

```
show_doc(OneCycleScheduler.on_epoch_end)
```
# BIG DATA ANALYTICS PROGRAMMING : PySpark

### A First Taste of PySpark

---

```
import sys
!{sys.executable} -m pip install pyspark

# Environment settings needed to use PySpark
import os
import sys
os.environ["PYSPARK_PYTHON"] = sys.executable
os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable
```

## Working with RDDs - Resilient Distributed Datasets

```
# Import pyspark
from pyspark import SparkContext

# RDDs can be created through the Spark context
sc = SparkContext()
```

### Create a test file

```
%%writefile example.txt
first line
second line
third line
fourth line
```

### Basic RDD operations

```
textFile = sc.textFile('example.txt')
textFile
```

### Count the lines

```
textFile.count()
```

### Print the first line

```
textFile.first()
```

## Filter lines containing a specific string

```
secfind = textFile.filter(lambda line: 'second' in line)

# Still an RDD - no computation has happened yet (lazy evaluation)!
secfind

# Computation is triggered here
secfind.collect()

# Computation is triggered here
secfind.count()
```

## Preprocessing with RDDs

```
%%writefile example2.txt
first
second line
the third line
then a fourth line
```

```
text_rdd = sc.textFile('example2.txt')
text_rdd.collect()
```

### The difference between map and flatMap

```
text_rdd.map(lambda line: line.split()).collect()

# Collect everything as a single flat list
text_rdd.flatMap(lambda line: line.split()).collect()
```

### Preprocessing a CSV file

```
rdd = sc.textFile('data.csv')
rdd.take(2)
rdd.map(lambda x: x.split(",")).take(3)
rdd.map(lambda x: x.replace(" ","_")).collect()
rdd.map(lambda x: x.replace(" ","_")).map(lambda x: x.replace("'","_")).collect()
rdd.map(lambda x: x.replace(" ","_")).map(lambda x: x.replace("'","_")).map(lambda x: x.replace("/","_")).collect()
clean_rdd = rdd.map(lambda x: x.replace(" ","_").replace("'","_").replace("/","_").replace('"',""))
clean_rdd.collect()
clean_rdd = clean_rdd.map(lambda x: x.split(","))
clean_rdd.collect()
```

### Implementing a group-by

```
clean_rdd.map(lambda lst: (lst[0],lst[-1])).collect()

# The first element (lst[0]) is treated as the key
clean_rdd.map(lambda lst: (lst[0],lst[-1]))\
         .reduceByKey(lambda amt1,amt2 : amt1+amt2)\
         .collect()

# Cast to float so the sums are numeric
clean_rdd.map(lambda lst: (lst[0],lst[-1]))\
.reduceByKey(lambda amt1,amt2 : float(amt1)+float(amt2))\
         .collect()

# Final pipeline
clean_rdd.map(lambda lst: (lst[0],lst[-1]))\
         .reduceByKey(lambda amt1,amt2 : float(amt1)+float(amt2))\
         .filter(lambda x: not x[0]=='gender')\
         .sortBy(lambda stateAmount: stateAmount[1], ascending=False)\
         .collect()
```

## Working with DataFrames

```
from pyspark.sql import SparkSession

appName = "Python Example - PySpark Read CSV"
master = 'local'

# Create the Spark session
spark = SparkSession.builder \
    .master(master) \
    .appName(appName) \
    .getOrCreate()

# Read the CSV file into a DataFrame
df = spark.read.format('csv') \
    .option('header', True) \
    .option('multiLine', True) \
    .load('data.csv')
df.show()

print(f'Record count is: {df.count()}')
df.columns
df.describe()
df.select('gender').show()
df.select('gender').distinct().show()
df.select('race/ethnicity').distinct().show()

from pyspark.sql import functions as F
df.groupBy("gender").agg(F.mean('writing score'), F.mean('math score')).show()
```
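The RDD operations above have simple pure-Python analogues, which makes their semantics easy to see without a running Spark cluster. A sketch of `map` vs `flatMap` and a `reduceByKey`-style aggregation:

```python
from collections import defaultdict
from itertools import chain

lines = ["first", "second line", "the third line"]

# map: one output element per input element (here, a list of token lists).
mapped = [line.split() for line in lines]

# flatMap: the same, but flattened into a single list of tokens.
flat = list(chain.from_iterable(line.split() for line in lines))

# reduceByKey: merge values that share a key, like the per-gender totals above.
pairs = [("female", 70.0), ("male", 60.0), ("female", 80.0)]
totals = defaultdict(float)
for key, amount in pairs:
    totals[key] += amount

print(mapped)   # [['first'], ['second', 'line'], ['the', 'third', 'line']]
print(flat)     # ['first', 'second', 'line', 'the', 'third', 'line']
print(dict(totals))
```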
# A Simple Autoencoder

We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.

![Autoencoder](assets/autoencoder_1.png)

In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.

```
%matplotlib inline

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
```

Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.

```
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
```

We'll train an autoencoder with these images by flattening them into 784-length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a **single ReLU hidden layer**. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a **sigmoid activation on the output layer** to get values matching the input.

![Autoencoder architecture](assets/simple_autoencoder.png)

> **Exercise:** Build the graph for the autoencoder in the cell below. The input images will be flattened into 784-length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation.
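As a shape-level warm-up for the exercise, the encoder/decoder forward pass can be sketched in plain NumPy: a hypothetical 784 → 32 → 784 network with a ReLU hidden layer and a sigmoid output, using random untrained weights (this is not the TensorFlow graph the exercise asks for).

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_pixels, encoding_dim = 784, 32
W_enc = rng.normal(scale=0.01, size=(n_pixels, encoding_dim))
b_enc = np.zeros(encoding_dim)
W_dec = rng.normal(scale=0.01, size=(encoding_dim, n_pixels))
b_dec = np.zeros(n_pixels)

x = rng.random((5, n_pixels))            # batch of 5 fake "images" in [0, 1)
code = relu(x @ W_enc + b_enc)           # compressed representation
recon = sigmoid(code @ W_dec + b_dec)    # reconstruction, back in (0, 1)

print(code.shape, recon.shape)           # (5, 32) (5, 784)
```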
Feel free to use TensorFlow's higher level API, `tf.layers`. For instance, you would use [`tf.layers.dense(inputs, units, activation=tf.nn.relu)`](https://www.tensorflow.org/api_docs/python/tf/layers/dense) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss; there is a convenient TensorFlow function for this, `tf.nn.sigmoid_cross_entropy_with_logits` ([documentation](https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits)). You should note that `tf.nn.sigmoid_cross_entropy_with_logits` takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.

```
# Size of the encoding layer (the hidden layer)
encoding_dim = 32  # feel free to change this value
total_pixels = mnist.train.images.shape[1]

# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, total_pixels), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, total_pixels), name='targets')

# Output of hidden layer, a single fully connected layer with a ReLU activation
encoded = tf.layers.dense(inputs=inputs_, units=encoding_dim, activation=tf.nn.relu)

# Output layer logits, fully connected layer with no activation
logits = tf.layers.dense(inputs=encoded, units=total_pixels, activation=None)

# Sigmoid output from logits
decoded = tf.sigmoid(logits, name='output')

# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)

# Mean of the loss
cost = tf.reduce_mean(loss)

# Adam optimizer
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
```

## Training

```
# Create the session
sess = tf.Session()
```

Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss. Calling `mnist.train.next_batch(batch_size)` will return a tuple of `(images, labels)`. We're not concerned with the labels here, we just need the images.
Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with `sess.run(tf.global_variables_initializer())`. Then, run the optimizer and get the loss with `batch_cost, _ = sess.run([cost, opt], feed_dict=feed)`.

```
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
    for ii in range(mnist.train.num_examples//batch_size):
        batch = mnist.train.next_batch(batch_size)
        feed = {inputs_: batch[0], targets_: batch[0]}
        batch_cost, _ = sess.run([cost, opt], feed_dict=feed)

        print("Epoch: {}/{}: Training loss: {:.4f}".format(e+1, epochs, batch_cost))
```

## Checking out the results

Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.

```
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})

for images, row in zip([in_imgs, reconstructed], axes):
    for img, ax in zip(images, row):
        ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)

fig.tight_layout(pad=0.1)
sess.close()
```

## Up Next

We're dealing with images here, so we can (usually) get better performance using convolution layers. So, next we'll build a better autoencoder with convolutional layers. In practice, autoencoders aren't actually better at compression than typical methods like JPEGs and MP3s. But they are being used for noise reduction, which you'll also build.
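A side note on the loss used above: `tf.nn.sigmoid_cross_entropy_with_logits` computes the numerically stable form `max(x, 0) - x*z + log(1 + exp(-|x|))` of the cross-entropy `-z*log(sigmoid(x)) - (1-z)*log(1-sigmoid(x))`. A quick NumPy check that the two forms agree:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stable_sigmoid_xent(logits, labels):
    # Numerically stable form, as documented for
    # tf.nn.sigmoid_cross_entropy_with_logits.
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

def naive_sigmoid_xent(logits, labels):
    # Direct cross-entropy on sigmoid probabilities (unstable for large |x|).
    p = sigmoid(logits)
    return -labels * np.log(p) - (1 - labels) * np.log(1 - p)

logits = np.array([-3.0, -0.5, 0.0, 2.0])
labels = np.array([0.0, 1.0, 0.5, 1.0])
print(np.allclose(stable_sigmoid_xent(logits, labels),
                  naive_sigmoid_xent(logits, labels)))  # True
```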
```
# Observations and insights
# Over the timepoints, mice treated with Capomulin saw their tumor volume decrease.
# The study appears balanced - the male/female split sits at roughly 49-50%.
# There may be a correlation between weight and tumor size - possibly obesity plays a role?
```

## Dependencies and starter code

```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
from scipy.stats import linregress

# Study data files
mouse_metadata = "Data/Mouse_metadata.csv"
study_results = "Data/Study_results.csv"

# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata)
study_results = pd.read_csv(study_results)

# Combine the data into a single dataset
merged = pd.merge(mouse_metadata, study_results, on="Mouse ID", how="left")
merged.head()
```

## Summary statistics

```
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
mean = merged.groupby('Drug Regimen')['Tumor Volume (mm3)'].mean()
median = merged.groupby('Drug Regimen')['Tumor Volume (mm3)'].median()
variance = merged.groupby('Drug Regimen')['Tumor Volume (mm3)'].var()
stdv = merged.groupby('Drug Regimen')['Tumor Volume (mm3)'].std()
sem = merged.groupby('Drug Regimen')['Tumor Volume (mm3)'].sem()

# create the summary dataframe
stats_df = pd.DataFrame({"Mean": mean, "Median": median, "Variance": variance,
                         "Standard Deviation": stdv, "SEM": sem})

# display the dataframe
stats_df
```

## Bar plots

```
# Generate a bar plot showing the number of data points for each treatment regimen using pandas
group_df = pd.DataFrame(merged.groupby(["Drug Regimen"]).count()).reset_index()

# keep only these columns
drugRegimen = group_df[["Drug Regimen", "Mouse ID"]]
drugRegimen = drugRegimen.rename(columns={"Mouse ID": "Amount"})
drugRegimen = drugRegimen.set_index("Drug Regimen")

# create the bar graph
drugRegimen.plot(kind="bar")

# set the bar-graph title
plt.title("Amount per Drug Regimen")

# display the bar graph
plt.show()

# Generate a bar plot showing the number of data points for each treatment regimen using pyplot
# convert the data into lists
drugRegimenplt = stats_df.index.tolist()
regCount = (merged.groupby(["Drug Regimen"])["Age_months"].count()).tolist()
x_axis = np.arange(len(regCount))
x_axis = drugRegimenplt

# create a bar graph from the above data
plt.figure(figsize=(10,4))
plt.bar(x_axis, regCount)
```

## Pie plots

```
# Generate a pie plot showing the distribution of female versus male mice using pandas
# build a dataframe of the sex data
sex_df = pd.DataFrame(merged.groupby(["Sex"]).count()).reset_index()

# keep only these columns
sex_df = sex_df[["Sex", "Mouse ID"]]
sex_df = sex_df.rename(columns={"Mouse ID": "Sex Ratio"})

# generate the pie plot with equal aspect ratio
plt.figure(figsize=(10,12))
ax1 = plt.subplot(121, aspect='equal')

# format values as percentages and set the title
sex_df.plot(kind='pie', y="Sex Ratio", ax=ax1, autopct='%1.1f%%')

# Generate a pie plot showing the distribution of female versus male mice using pyplot
sexPie = (merged.groupby(["Sex"])["Age_months"].count()).tolist()

# labels and colors for the pie chart
labels = ["Females", "Males"]
colors = ["orange", "blue"]

# generate the pie chart with percentage labels
plt.pie(sexPie, labels=labels, colors=colors, autopct="%1.1f%%", shadow=True, startangle=90)
```

## Quartiles, outliers and boxplots

```
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens.
# keep the four regimens and sort for the IQR check and boxplot
topReg = merged[merged["Drug Regimen"].isin(["Capomulin", "Ramicane", "Infubinol", "Ceftamin"])]
topReg = topReg.sort_values(["Timepoint"], ascending=True)

# keep only these columns
topRegData = topReg[["Drug Regimen", "Mouse ID", "Timepoint", "Tumor Volume (mm3)"]]
topRegData.head()

# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Group data for tumor volume
topRegList = topRegData.groupby(['Drug Regimen', 'Mouse ID']).last()['Tumor Volume (mm3)']

# create a dataframe with assigned labels
topRegData_df = topRegList.to_frame()
drugs = ['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin']

# create and format the box plot
box_df = topRegData_df.reset_index()
tumors = box_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].apply(list)
tumors_df = pd.DataFrame(tumors)
tumors_df = tumors_df.reindex(drugs)
tumorVolumesBox = [vol for vol in tumors_df['Tumor Volume (mm3)']]
plt.boxplot(tumorVolumesBox, labels=drugs)
plt.ylim(0, 100)

# display the box plot
plt.show()
```

## Line and scatter plots

```
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# create a dataframe for Capomulin
capLine_df = merged.loc[merged["Drug Regimen"] == "Capomulin"]
capLine_df = capLine_df.reset_index()

# pull the data for mouse m601
capMouse_df = capLine_df.loc[capLine_df["Mouse ID"] == "m601"]

# keep the columns needed for the line chart
capMouse_df = capMouse_df.loc[:, ["Timepoint", "Tumor Volume (mm3)"]]

# create a line graph of timepoint vs tumor volume
capMouse_df.set_index('Timepoint').plot(figsize=(10, 4), linewidth=4, color='blue')

# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
# keep the columns needed for the scatter plot
scatt_df = capLine_df.loc[:, ["Mouse ID", "Weight (g)", "Tumor Volume (mm3)"]]

# average tumor volume per mouse
tumAvg = pd.DataFrame(scatt_df.groupby(["Mouse ID", "Weight (g)"])["Tumor Volume (mm3)"].mean()).reset_index()

# rename the tumor-volume column for the chart title
tumAvg = tumAvg.rename(columns={"Tumor Volume (mm3)": "Mouse Tumor Average Volume"})

# create the scatter plot
tumAvg.plot(kind="scatter", x="Weight (g)", y="Mouse Tumor Average Volume", grid=True,
            figsize=(10,4), title="Weight (g) VS Mouse Tumor Average Volume")

# Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for the Capomulin regimen
x_values = tumAvg['Weight (g)']
y_values = tumAvg['Mouse Tumor Average Volume']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x_values, y_values)
plt.plot(x_values, regress_values, "r-")

# place the regression equation inside the plotted range
plt.annotate(line_eq, (20, 36), color="orange")

# label the axes
plt.xlabel('Mouse Weight')
plt.ylabel('Mouse Tumor Average Volume')
plt.show()
```
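The quartiles cell above announces an IQR check but never actually computes one. A minimal sketch of quartile-based outlier detection with NumPy, on made-up tumor volumes rather than the study data:

```python
import numpy as np

def iqr_outliers(values):
    """Return (lower_bound, upper_bound, outliers) using the 1.5*IQR rule."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = [v for v in values if v < lower or v > upper]
    return lower, upper, outliers

tumor_volumes = [38.0, 40.5, 41.2, 43.8, 45.0, 72.3]   # toy numbers
lower, upper, outliers = iqr_outliers(tumor_volumes)
print(outliers)
```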
```
from keras.layers import Input, Dense, Activation
from keras.layers import Maximum, Concatenate
from keras.models import Model
from keras.optimizers import adam_v2
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
from Ensemble_Classifiers import Ensemble_Classifier
from sklearn.model_selection import train_test_split
import numpy as np

seed = 0  # global random seed

class MalGAN():
    def __init__(self, blackbox, X, Y, threshold):
        self.apifeature_dims = 69
        self.z_dims = 30
        self.generator_layers = [self.apifeature_dims+self.z_dims, 32, 32, 64, self.apifeature_dims]
        # self.generator_layers = [self.apifeature_dims+self.z_dims, 64, 64, 128, self.apifeature_dims]
        self.substitute_detector_layers = [self.apifeature_dims, 64, 64, 1]
        # self.substitute_detector_layers = [self.apifeature_dims, 128, 128, 1]
        self.blackbox = blackbox
        optimizer = adam_v2.Adam(learning_rate=0.0002, beta_1=0.5)
        self.X = X
        self.Y = Y
        self.threshold = threshold

        # Build the blackbox detector (trained later in train())
        self.blackbox_detector = self.build_blackbox_detector()

        # Build and compile the substitute detector
        self.substitute_detector = self.build_substitute_detector()
        self.substitute_detector.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])

        # Build the generator
        self.generator = self.build_generator()

        # The generator takes malware features and noise as input and generates adversarial malware examples
        example = Input(shape=(self.apifeature_dims,))
        noise = Input(shape=(self.z_dims,))
        input = [example, noise]
        malware_examples = self.generator(input)

        # For the combined model we only train the generator, so freeze the
        # substitute detector before building and compiling the combined model
        # (setting trainable after compiling would have no effect on it)
        self.substitute_detector.trainable = False

        # The substitute detector takes generated examples as input and determines validity
        validity = self.substitute_detector(malware_examples)

        # The combined model (stacked generator and substitute detector)
        # trains the generator to fool the substitute detector
        self.combined = Model(input, validity)
        self.combined.compile(loss='binary_crossentropy', optimizer=optimizer)

    def build_blackbox_detector(self):
        if self.blackbox in ['SVM']:
            blackbox_detector = SVC(kernel='linear')
        elif self.blackbox in ['GB']:
            blackbox_detector = GradientBoostingClassifier(random_state=seed)
        elif self.blackbox in ['SGD']:
            blackbox_detector = SGDClassifier(random_state=seed)
        elif self.blackbox in ['DT']:
            blackbox_detector = DecisionTreeClassifier(random_state=seed)
        elif self.blackbox in ['Ensem']:
            blackbox_detector = Ensemble_Classifier()
        return blackbox_detector

    def build_generator(self):
        example = Input(shape=(self.apifeature_dims,))
        noise = Input(shape=(self.z_dims,))
        x = Concatenate(axis=1)([example, noise])
        for dim in self.generator_layers[1:]:
            x = Dense(dim)(x)
            x = Activation(activation='tanh')(x)
        # Element-wise maximum keeps every API feature already present in the
        # original malware, so functionality is preserved
        x = Maximum()([example, x])
        generator = Model([example, noise], x, name='generator')
        generator.summary()
        return generator

    def build_substitute_detector(self):
        input = Input(shape=(self.substitute_detector_layers[0],))
        x = input
        for dim in self.substitute_detector_layers[1:]:
            x = Dense(dim)(x)
            x = Activation(activation='sigmoid')(x)
        substitute_detector = Model(input, x, name='substitute_detector')
        substitute_detector.summary()
        return substitute_detector

    def load_data(self):
        x_ben, x_ran = self.X[:self.threshold], self.X[self.threshold:]
        y_ben, y_ran = self.Y[:self.threshold], self.Y[self.threshold:]
        return (x_ran, y_ran), (x_ben, y_ben)

    def train(self, epochs, batch_size=32):
        # Load and split the dataset
        (xmal, ymal), (xben, yben) = self.load_data()
        xtrain_mal, xtest_mal, ytrain_mal, ytest_mal = train_test_split(xmal, ymal, test_size=0.50)
        xtrain_ben, xtest_ben, ytrain_ben, ytest_ben = train_test_split(xben, yben, test_size=0.50)
        bl_xtrain_mal, bl_ytrain_mal, bl_xtrain_ben, bl_ytrain_ben = xtrain_mal, ytrain_mal, xtrain_ben, ytrain_ben

        # Train the blackbox detector on the full dataset
        self.blackbox_detector.fit(np.concatenate([xmal, xben]), np.concatenate([ymal, yben]))

        ytrain_ben_blackbox = self.blackbox_detector.predict(bl_xtrain_ben)
        Original_Train_TPR = self.blackbox_detector.score(bl_xtrain_mal, bl_ytrain_mal)
        Original_Test_TPR = self.blackbox_detector.score(xtest_mal, ytest_mal)
        Train_TPR, Test_TPR = [Original_Train_TPR], [Original_Test_TPR]

        for epoch in range(epochs):
            for step in range(xtrain_mal.shape[0] // batch_size):
                # ---------------------
                #  Train substitute_detector
                # ---------------------
                # Select a random batch of malware examples
                idx_mal = np.random.randint(0, xtrain_mal.shape[0], batch_size)
                xmal_batch = xtrain_mal[idx_mal]
                noise = np.random.normal(0, 1, (batch_size, self.z_dims))

                # Select a random batch of benign examples (sample indices from
                # the full benign training set, not from the malware batch)
                idx_ben = np.random.randint(0, xtrain_ben.shape[0], batch_size)
                xben_batch = xtrain_ben[idx_ben]
                yben_batch = ytrain_ben_blackbox[idx_ben]

                # Generate a batch of new malware examples and label them with the blackbox
                gen_examples = self.generator.predict([xmal_batch, noise])
                ymal_batch = self.blackbox_detector.predict(np.ones(gen_examples.shape)*(gen_examples > 0.5))

                # Train the substitute_detector
                d_loss_real = self.substitute_detector.train_on_batch(gen_examples, ymal_batch)
                d_loss_fake = self.substitute_detector.train_on_batch(xben_batch, yben_batch)
                d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

                # ---------------------
                #  Train Generator
                # ---------------------
                idx = np.random.randint(0, xtrain_mal.shape[0], batch_size)
                xmal_batch = xtrain_mal[idx]
                noise = np.random.uniform(0, 1, (batch_size, self.z_dims))

                # Train the generator to be classified as benign (label 0)
                g_loss = self.combined.train_on_batch([xmal_batch, noise], np.zeros((batch_size, 1)))

            # Compute Train TPR
            noise = np.random.uniform(0, 1, (xtrain_mal.shape[0], self.z_dims))
            gen_examples = self.generator.predict([xtrain_mal, noise])
            TPR = self.blackbox_detector.score(np.ones(gen_examples.shape) * (gen_examples > 0.5), ytrain_mal)
            Train_TPR.append(TPR)

            # Compute Test TPR
            noise = np.random.uniform(0, 1, (xtest_mal.shape[0], self.z_dims))
            gen_examples = self.generator.predict([xtest_mal, noise])
            TPR = self.blackbox_detector.score(np.ones(gen_examples.shape) * (gen_examples > 0.5), ytest_mal)
            Test_TPR.append(TPR)

            print("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100*d_loss[1], g_loss))
            if int(epoch) == int(epochs-1):
                return d_loss[0], 100*d_loss[1], g_loss

# dicts to save the D loss, accuracy and G loss for the different classifiers
D_loss_dict, Acc_dict, G_loss_dict = {}, {}, {}

# get the data from Feature-Selector
import pandas as pd
df = pd.read_csv('../dataset/matrix/CLaMP.csv')
df.dtypes.value_counts()
df.columns

# encode the categorical column
from sklearn.preprocessing import LabelEncoder
df['packer_type'] = LabelEncoder().fit_transform(df['packer_type'])
df['packer_type'].value_counts()

Y = df['class'].values
X = df.drop('class', axis=1).values
X.shape

from sklearn.preprocessing import MinMaxScaler
X = MinMaxScaler().fit_transform(X)
X

from collections import Counter
Counter(Y)

# train MalGAN against each blackbox classifier
for classifier in ['SVM', 'SGD', 'DT', 'GB', 'Ensem']:
    print('[+] \nTraining the model with {} classifier\n'.format(classifier))
    malgan = MalGAN(blackbox=classifier, X=X, Y=Y, threshold=2488)
    d_loss, acc, g_loss = malgan.train(epochs=50, batch_size=32)
    D_loss_dict[classifier] = d_loss
    Acc_dict[classifier] = acc
    G_loss_dict[classifier] = g_loss

print('=====================')
print(D_loss_dict)
print('=====================')
print(Acc_dict)
print('=====================')
print(G_loss_dict)

# collect the metrics into one table
matrix_dict = {}
for key, value in D_loss_dict.items():
    matrix_dict[key] = []
for key, value in D_loss_dict.items():
    matrix_dict[key].append(D_loss_dict[key])
    matrix_dict[key].append(Acc_dict[key])
    matrix_dict[key].append(G_loss_dict[key])

import pandas as pd
df = pd.DataFrame.from_dict(matrix_dict, orient='columns')
df.index = list(['D_Loss', 'Acc', 'G_Loss'])
df

import dataframe_image as dfi
dfi.export(df, '64_mal_matrix.png')
```
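The training loop repeatedly binarizes the generator's outputs with `np.ones(gen_examples.shape) * (gen_examples > 0.5)`. This is just a threshold cast to a 0/1 float vector, which keeps the adversarial examples valid binary API-feature vectors before handing them to the blackbox detector:

```python
import numpy as np

gen_examples = np.array([[0.10, 0.70, 0.50, 0.93],
                         [0.49, 0.51, 0.00, 1.00]])

# The notebook's idiom ...
binarized = np.ones(gen_examples.shape) * (gen_examples > 0.5)

# ... is equivalent to a plain threshold cast.
assert np.array_equal(binarized, (gen_examples > 0.5).astype(float))
print(binarized)
```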
<font color="red">For training, simply change the samples from word segmentation + POS tags to word segmentation + named-entity types</font>

## Contents

- [8. Named Entity Recognition](#8-named-entity-recognition)
- [8.1 Overview](#81-overview)
- [8.2 NER with Hidden Markov Model Sequence Labeling](#82-ner-with-hidden-markov-model-sequence-labeling)
- [8.3 NER with Perceptron Sequence Labeling](#83-ner-with-perceptron-sequence-labeling)
- [8.4 NER with Conditional Random Field Sequence Labeling](#84-ner-with-conditional-random-field-sequence-labeling)
- [8.5 Standardized Evaluation of NER](#85-standardized-evaluation-of-ner)
- [8.6 Custom-Domain NER](#86-custom-domain-ner)

## 8. Named Entity Recognition

### 8.1 Overview

1. **Named entities**

   Texts contain words that describe entities, such as person names, place names, organization names, stocks and funds, and medical terms. These are called **named entities**. They share the following properties:

   - Unbounded in number: names of stars in the universe and names for newborns keep producing new combinations.
   - Flexible formation: for example, 中国工商银行 (Industrial and Commercial Bank of China) can be called 工商银行 or simply 工行.
   - Fuzzy categories: some place names are themselves organization names, e.g. 国家博物馆 (the National Museum).

2. **Named entity recognition**

   The task of identifying the boundaries and categories of the named entities in a sentence is called **named entity recognition** (NER). Because of the difficulties above, NER too is a task driven mainly by statistics and supplemented by rules.

   For strongly regular entities such as URLs, e-mail addresses, ISBNs and product codes, regular expressions are entirely sufficient; the fragments they fail to match are handed over to a statistical model.

   NER can also be cast as a sequence-labeling problem. Concretely, named entities receive {B,M,E,S} tags: for instance, the words forming a place name are tagged "B/M/E/S-place", and so on. Words outside any entity boundary are uniformly tagged O (Outside). In practice HanLP makes a simplification: every single-word (non-compound) named entity is tagged S, with no category attached. This gives a leaner tag set and a smaller model.

   NER can in fact be seen as an integration of word segmentation and POS tagging: entity boundaries are determined by {B,M,E,S}, and categories by extended tags such as B-nt. HanLP performs the corpus-format conversion internally; users need not worry about it and only pass in the path of a PKU-format corpus.

### 8.2 NER with Hidden Markov Model Sequence Labeling

We introduced hidden Markov models earlier; for details see: [4. Hidden Markov models and sequence labeling](https://github.com/NLP-LOVE/Introduction-NLP/blob/master/chapter/4.%E9%9A%90%E9%A9%AC%E5%B0%94%E5%8F%AF%E5%A4%AB%E6%A8%A1%E5%9E%8B%E4%B8%8E%E5%BA%8F%E5%88%97%E6%A0%87%E6%B3%A8.md)

HMM NER code (the PKU corpus is downloaded automatically): hmm_ner.py

[https://github.com/NLP-LOVE/Introduction-NLP/tree/master/code/ch08/hmm_ner.py](https://github.com/NLP-LOVE/Introduction-NLP/tree/master/code/ch08/hmm_ner.py)

Running the code gives the following result:

```
华北电力公司/nt 董事长/n 谭旭光/nr 和/c 秘书/n 胡花蕊/nr 来到/v 美国纽约/ns 现代/ntc 艺术/n 博物馆/n 参观/v
```

The organization name "华北电力公司" and the person names "谭旭光" and "胡花蕊" are all recognized correctly, but the place name "美国纽约现代艺术博物馆" is not. There are two reasons:

- This sample never appears in the PKU corpus.
- A hidden Markov model cannot exploit POS features.

The first problem can only be addressed by annotating additional data; the second can be addressed by switching to a more powerful model.

### 8.3 NER with Perceptron Sequence Labeling

We introduced the perceptron model earlier; for details see: [5. Perceptron classification and sequence labeling](https://github.com/NLP-LOVE/Introduction-NLP/blob/master/chapter/5.%E6%84%9F%E7%9F%A5%E6%9C%BA%E5%88%86%E7%B1%BB%E4%B8%8E%E5%BA%8F%E5%88%97%E6%A0%87%E6%B3%A8.md)

Perceptron NER code (the PKU corpus is downloaded automatically): perceptron_ner.py
[https://github.com/NLP-LOVE/Introduction-NLP/tree/master/code/ch08/perceptron_ner.py](https://github.com/NLP-LOVE/Introduction-NLP/tree/master/code/ch08/perceptron_ner.py) 运行会有些慢,结果如下: ``` 华北电力公司/nt 董事长/n 谭旭光/nr 和/c 秘书/n 胡花蕊/nr 来到/v [美国纽约/ns 现代/ntc 艺术/n 博物馆/n]/ns 参观/v ``` 与隐马尔可夫模型相比,已经能够正确识别地名了。 ### 8.4 基于条件随机场序列标注的命名实体识别 之前我们就介绍过条件随机场模型,详细见: [6.条件随机场与序列标注](https://github.com/NLP-LOVE/Introduction-NLP/blob/master/chapter/6.%E6%9D%A1%E4%BB%B6%E9%9A%8F%E6%9C%BA%E5%9C%BA%E4%B8%8E%E5%BA%8F%E5%88%97%E6%A0%87%E6%B3%A8.md) 条件随机场模型词性标注代码见(**自动下载 PKU 语料库**): crf_ner.py [https://github.com/NLP-LOVE/Introduction-NLP/tree/master/code/ch08/crf_ner.py](https://github.com/NLP-LOVE/Introduction-NLP/tree/master/code/ch08/crf_ner.py) 运行时间会比较长,结果如下: ``` 华北电力公司/nt 董事长/n 谭旭光/nr 和/c 秘书/n 胡花蕊/nr 来到/v [美国纽约/ns 现代/ntc 艺术/n 博物馆/n]/ns 参观/v ``` 得到了结果是一样的。 ### 8.5 命名实体识别标准化评测 各个命名实体识别模块的准确率如何,并非只能通过几个句子主观感受。任何监督学习任务都有一套标准化评测方案,对于命名实体识别,按照惯例引入P、R 和 F1 评测指标。 在1998年1月《人民日报》语料库上的标准化评测结果如下: | 模型 | P | R | F1 | | -------------- | ----- | ----- | ----- | | 隐马尔可夫模型 | 79.01 | 30.14 | 43.64 | | 感知机 | 87.33 | 78.98 | 82.94 | | 条件随机场 | 87.93 | 73.75 | 80.22 | 值得一提的是,准确率与评测策略、特征模板、语料库规模息息相关。通常而言,当语料库较小时,应当使用简单的特征模板,以防止模型过拟合;当语料库较大时,则建议使用更多特征,以期更高的准确率。当特征模板固定时,往往是语料库越大,准确率越高。 ### 8.6 自定义领域命名实体识别 以上我们接触的都是通用领域上的语料库,所含的命名实体仅限于人名、地名、机构名等。假设我们想要识别专门领域中的命名实体,这时,我们就要自定义领域的语料库了。 1. **标注领域命名实体识别语料库** 首先我们需要收集一些文本, 作为标注语料库的原料,称为**生语料**。由于我们的目标是识别文本中的战斗机名称或型号,所以生语料的来源应当是些军事网站的报道。在实际工程中,求由客户提出,则应当由该客户提供生语料。语料的量级越大越好,一般最低不少于数千个句子。 生语料准备就绪后,就可以开始标注了。对于命名实体识别语料库,若以词语和词性为特征的话,还需要标注分词边界和词性。不过我们不必从零开始标注,而可以在HanLP的标注基础上进行校正,这样工作量更小。 样本标注了数千个之后,生语料就被标注成了**熟语料**。下面代码自动下载语料库。 2. 
**训练领域模型** 选择感知机作为训练算法(**自动下载 战斗机 语料库**): plane_ner.py [https://github.com/NLP-LOVE/Introduction-NLP/tree/master/code/ch08/plane_ner.py](https://github.com/NLP-LOVE/Introduction-NLP/tree/master/code/ch08/plane_ner.py) 运行结果如下: ``` 下载 http://file.hankcs.com/corpus/plane-re.zip 到 /usr/local/lib/python3.7/site-packages/pyhanlp/static/data/test/plane-re.zip 100.00%, 0 MB, 552 KB/s, 还有 0 分 0 秒 米高扬/nrf 设计/v [米格/nr -/w 17/m PF/nx]/np :/w [米格/nr -/w 17/m]/np PF/n 型/k 战斗机/n 比/p [米格/nr -/w 17/m P/nx]/np 性能/n 更好/l 。/w [米格/nr -/w 阿帕奇/nrf -/w 666/m S/q]/np 横空出世/l 。/w ``` 这句话已经在语料库中出现过,能被正常识别并不意外。我们可以伪造一款“米格-阿帕奇-666S”战斗机,试试模型的繁华能力,发现依然能够正确识别。
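The span-level P, R, and F1 used in the evaluation above can be sketched in a few lines — a simplified illustration with made-up entity spans, not HanLP's actual evaluator. A prediction counts as correct only when both the boundary and the type match the gold annotation exactly:

```python
# Entities are (start, end, type) tuples; exact boundary + type match counts.
def ner_prf(gold, pred):
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = [(0, 4, "nt"), (5, 8, "nr"), (10, 13, "ns")]
pred = [(0, 4, "nt"), (5, 8, "nr"), (10, 12, "ns")]  # last boundary is wrong
p, r, f1 = ner_prf(gold, pred)
print(p, r, f1)  # 2 of 3 predictions correct: P = R = F1 = 2/3
```

Note how a single wrong boundary costs both precision and recall, which is why HMM recall in the table above suffers so much on unseen entities.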
# Setting up vPython in Jupyterlab

## Version Compatibility

At this time (10/20) vPython appears to be compatible with versions of Jupyterlab through 1.2.6. You may need to remove your current install of Jupyterlab to do this. To install specific versions of Jupyterlab (and remove the application), go to the settings icon in the top right corner of the Jupyterlab frame on the Home tab in Anaconda Navigator.

![github menu](images/ChooseVersion.png)

![github menu](images/VersionMenu.png)

The most effective instructions I have found for installing vpython in a Jupyterlab environment are [here - thanks to Bruce Sherwood](https://www.vpython.org/presentation2018/install.html). Here are the principal steps in that process.

Preparation: Launch Jupyterlab from Anaconda Navigator, which should open a window in your browser.

i: Update python by going to the Environments tab in Anaconda Navigator. Choose to display the Installed environments and scroll down to find your python package. To the right is a column which indicates the version number. If there is a blue arrow there indicating an update to the package, you can click to install it. At the time of this document my python version was 3.8.3.

![github menu](images/pythonupdate.png)

ii: From the File drop down menu (in the Jupyterlab tab in your browser) select New Launcher and launch a Terminal window.

![github menu](images/Filedropdown.png)

![github menu](images/terminalwindow.png)

iii: In that terminal window you will execute the following commands (from the document linked above). Anaconda will sort out what needs to be installed and updated. For the two installs it will at some point ask whether you want to proceed, and you need to enter y (y/n are the options).

```conda install -c vpython vpython```

```conda install -c anaconda nodejs```

![github menu](images/condacommand.png)

iv: Assuming that you get no error messages from the previous installs, the next terminal command connects vpython to the Jupyterlab environment.
```jupyter labextension install vpython```

If this labextension install gives you conflicts, it may be that your installed version of Jupyterlab is too new.

v: From the File drop down menu select Shut Down to exit from Jupyterlab, and close the tab in your browser.

vi: Relaunch Jupyterlab, and the sample vPython code below should execute correctly.

## Downloading this Notebook

If you are reading this notebook you probably found the [github](https://github.com/smithrockmaker/PH213) where it is hosted. Generally one clones the entire github repository, but for physics classes we often wish to download only one of the notebooks hosted on the github. Here is the process that appears to work.

i: Select the notebook of interest, which will be rendered so that you can explore it.

ii: At the top right of the menu bar is a 'raw' button. 'Right click' on the raw button and select 'Save Link As..'

![github menu](images/rawmenu.png)

![github menu](images/rawsaveas.png)

iii: The downloaded file will have an .ipynb suffix. To correctly display this page on your computer you will need to go to the images folder on the github and download the relevant images to an appropriate folder.

```
from vpython import *

scene = canvas(title="Constant Velocity", x=0, y=0, width=800, height=600,
               autoscale=0, center=vector(0,0,0))

t = 0.0
dt = .1
s = vector(0.0, 0.0, 0.0)
v = vector(5.0, 0.0, 0.0)
a = vector(2.0, 0.0, 0.0)

cart = sphere(pos=s, radius=.3, color=color.blue)

while t < 1.5:
    rate(10)
    s = s + v*dt
    v = v + a*dt
    print('t =', t, 's =', s, 'v =', v)
    cart.pos = s
    ballghost = sphere(pos=s, radius=.1, color=color.yellow)
    t = t + dt
```
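The loop in the cell above is a plain Euler integration, which can be checked without any graphics. Below is a small vpython-free sketch using the same `dt`, initial speed, and acceleration; it compares the Euler result after 15 steps against the exact constant-acceleration formula $s = v_0 t + \tfrac{1}{2} a t^2$:

```python
# Euler integration of the same 1-D motion, no vpython required.
t, dt = 0.0, 0.1
s, v, a = 0.0, 5.0, 2.0
for _ in range(15):          # 15 steps of dt = 0.1 covers t in [0, 1.5)
    s += v * dt              # position update uses the velocity at step start
    v += a * dt
    t += dt

s_exact = 5.0 * t + 0.5 * a * t**2
print(s, s_exact)  # Euler undershoots slightly: 9.6 vs the exact 9.75
```

The gap shrinks as `dt` is made smaller, which is a good way to convince yourself the animation loop is behaving.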
<img src="https://upload.wikimedia.org/wikipedia/commons/4/47/Logo_UTFSM.png" width="200" alt="utfsm-logo" align="left"/>

# MAT281
### Applications of Mathematics in Engineering

## Module 04
## Lab Class 04: Metrics and Model Selection

### Instructions

* Fill in your personal details (name and USM roll number) in the next cell.
* The scale is 0 to 4, integer values only.
* You must _push_ your changes to your personal course repository.
* As a backup, send a .zip file named `mXX_cYY_lab_apellido_nombre.zip` to alonso.ogueda@gmail.com; it must contain everything needed for every cell to run correctly — data, images, scripts, etc.
* Grading will consider:
    - Solutions
    - Code
    - That Binder is configured correctly.
    - That pressing `Kernel -> Restart Kernel and Run All Cells` executes every cell without errors.

__Name__: Simón Masnú

__Roll__: 201503026-K

In this lab we will use the _Abalone_ dataset.

**Recall**

The dataset contains measurements of 4177 abalones, where the measured quantities are sex ($S$), whole weight $W_1$, shucked weight $W_2$, viscera weight $W_3$, shell weight $W_4$, length ($L$), diameter $D$, height $H$, and number of rings $A$.
```
import pandas as pd
import numpy as np

abalone = pd.read_csv(
    "data/abalone.data",
    header=None,
    names=["sex", "length", "diameter", "height", "whole_weight", "shucked_weight", "viscera_weight", "shell_weight", "rings"]
)

abalone_data = (
    abalone.assign(sex=lambda x: x["sex"].map({"M": 1, "I": 0, "F": -1}))
    .loc[lambda x: x.drop(columns="sex").gt(0).all(axis=1)]
    .astype(float)  # np.float was removed in NumPy 1.24; the builtin float does the same here
)
abalone_data.head()
```

#### Model A

We consider 9 parameters, called $\alpha_i$, for the following model:

$$ \log(A) = \alpha_0 + \alpha_1 W_1 + \alpha_2 W_2 + \alpha_3 W_3 + \alpha_4 W_4 + \alpha_5 S + \alpha_6 \log L + \alpha_7 \log D + \alpha_8 \log H$$

```
def train_model_A(data):
    y = np.log(data.loc[:, "rings"].values.ravel())
    X = (
        data.assign(
            intercept=1.,
            length=lambda x: x["length"].apply(np.log),
            diameter=lambda x: x["diameter"].apply(np.log),
            height=lambda x: x["height"].apply(np.log),
        )
        .loc[:, ["intercept", "whole_weight", "shucked_weight", "viscera_weight", "shell_weight", "sex", "length", "diameter", "height"]]
        .values
    )
    coeffs = np.linalg.lstsq(X, y, rcond=None)[0]
    return coeffs

def test_model_A(data, coeffs):
    X = (
        data.assign(
            intercept=1.,
            length=lambda x: x["length"].apply(np.log),
            diameter=lambda x: x["diameter"].apply(np.log),
            height=lambda x: x["height"].apply(np.log),
        )
        .loc[:, ["intercept", "whole_weight", "shucked_weight", "viscera_weight", "shell_weight", "sex", "length", "diameter", "height"]]
        .values
    )
    ln_anillos = np.dot(X, coeffs)
    return np.exp(ln_anillos)
```

#### Model B

We consider 6 parameters, called $\beta_i$, for the following model:

$$ \log(A) = \beta_0 + \beta_1 W_1 + \beta_2 W_2 + \beta_3 W_3 + \beta_4 W_4 + \beta_5 \log( L D H ) $$

```
def train_model_B(data):
    y = np.log(data.loc[:, "rings"].values.ravel())
    X = (
        data.assign(
            intercept=1.,
            ldh=lambda x: (x["length"] * x["diameter"] * x["height"]).apply(np.log),
        )
        .loc[:, ["intercept", "whole_weight", "shucked_weight", "viscera_weight", "shell_weight", "ldh"]]
        .values
    )
    coeffs = np.linalg.lstsq(X, y, rcond=None)[0]
    return coeffs

def test_model_B(data, coeffs):
    X = (
        data.assign(
            intercept=1.,
            ldh=lambda x: (x["length"] * x["diameter"] * x["height"]).apply(np.log),
        )
        .loc[:, ["intercept", "whole_weight", "shucked_weight", "viscera_weight", "shell_weight", "ldh"]]
        .values
    )
    ln_anillos = np.dot(X, coeffs)
    return np.exp(ln_anillos)
```

#### Model C

We consider 12 parameters, called $\theta_i^{k}$, with $k \in \{M, F, I\}$, for the following model:

If $S=male$:

$$ \log(A) = \theta_0^M + \theta_1^M W_2 + \theta_2^M W_4 + \theta_3^M \log( L D H ) $$

If $S=female$:

$$ \log(A) = \theta_0^F + \theta_1^F W_2 + \theta_2^F W_4 + \theta_3^F \log( L D H ) $$

If $S=undefined$:

$$ \log(A) = \theta_0^I + \theta_1^I W_2 + \theta_2^I W_4 + \theta_3^I \log( L D H ) $$

```
def train_model_C(data):
    df = (
        data.assign(
            intercept=1.,
            ldh=lambda x: (x["length"] * x["diameter"] * x["height"]).apply(np.log),
        )
        .loc[:, ["intercept", "shucked_weight", "shell_weight", "ldh", "sex", "rings"]]
    )
    coeffs_dict = {}
    for sex, df_sex in df.groupby("sex"):
        X = df_sex.drop(columns=["sex", "rings"])
        y = np.log(df_sex["rings"].values.ravel())
        coeffs_dict[sex] = np.linalg.lstsq(X, y, rcond=None)[0]
    return coeffs_dict

def test_model_C(data, coeffs_dict):
    df = (
        data.assign(
            intercept=1.,
            ldh=lambda x: (x["length"] * x["diameter"] * x["height"]).apply(np.log),
        )
        .loc[:, ["intercept", "shucked_weight", "shell_weight", "ldh", "sex", "rings"]]
    )
    pred_dict = {}
    for sex, df_sex in df.groupby("sex"):
        X = df_sex.drop(columns=["sex", "rings"])
        ln_anillos = np.dot(X, coeffs_dict[sex])
        pred_dict[sex] = np.exp(ln_anillos)
    return pred_dict
```

### 1. Split Data (1 pt)

Create two dataframes from `abalone_data`: one for training (80% of the data) and one for testing (the remaining 20%).

_Hint:_ `sklearn.model_selection.train_test_split` works with dataframes!
```
from sklearn.model_selection import train_test_split

abalone_train, abalone_test = train_test_split(abalone_data, test_size=0.20, random_state=42)  # same seed as in class
abalone_train.head()
```

### 2. Training (1 pt)

Use the training functions defined above to obtain the coefficients from the training data. Remember that model C returns a dictionary whose keys correspond to the `sex` column.

```
coeffs_A = train_model_A(abalone_train)
coeffs_B = train_model_B(abalone_train)
coeffs_C = train_model_C(abalone_train)
```

### 3. Prediction (1 pt)

Use the model coefficients to predict on the test set. The result must be an array of shape `(835, )`, so you need to concatenate the results of model C.

**Hint**: Use `np.concatenate`.

```
y_pred_A = test_model_A(abalone_test, coeffs_A)
y_pred_B = test_model_B(abalone_test, coeffs_B)
y_pred_C = np.concatenate([test_model_C(abalone_test, coeffs_C)[-1],
                           test_model_C(abalone_test, coeffs_C)[0],
                           test_model_C(abalone_test, coeffs_C)[1]])
```

### 4. Computing the error (1 pt)

We will use the Mean Squared Error (MSE), defined as

$$\textrm{MSE}(y,\hat{y}) =\dfrac{1}{n}\sum_{t=1}^{n}\left | y_{t}-\hat{y}_{t}\right |^2$$

Define the function `MSE` and the vectors `y_test_A`, `y_test_B`, and `y_test_C`, then compute the error of each model.

**Careful:** when computing the mean squared error, the subtraction is performed element by element, so the order of the vector matters — in particular for the model that splits by `sex`.
```
def MSE(y_real, y_pred):
    return sum(np.absolute(y_real - y_pred) ** 2) / len(y_real)

y_test_A = abalone_test.loc[:, 'rings']
y_test_B = abalone_test.loc[:, 'rings']
y_test_C = np.concatenate([abalone_test[abalone_test['sex'] == -1].loc[:, 'rings'],
                           abalone_test[abalone_test['sex'] == 0].loc[:, 'rings'],
                           abalone_test[abalone_test['sex'] == 1].loc[:, 'rings']],
                          axis=None)  # sorry for the hard-coding

error_A = MSE(y_test_A, y_pred_A)
error_B = MSE(y_test_B, y_pred_B)
error_C = MSE(y_test_C, y_pred_C)

print(f"Model A error: {error_A:.2f}")
print(f"Model B error: {error_B:.2f}")
print(f"Model C error: {error_C:.2f}")
```

**Which model is best under this metric?**

Under the `MSE` metric, the best model is model **B**.
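The ordering caveat above is easy to demonstrate: MSE compares entries element by element, so the same predicted values in the wrong order produce a spurious error. A small sketch with made-up values (not the abalone results):

```python
import numpy as np

def mse(y_real, y_pred):
    # vectorized equivalent of the MSE defined above
    return np.mean(np.abs(np.asarray(y_real) - np.asarray(y_pred)) ** 2)

y_true = np.array([10.0, 8.0, 12.0])
y_hat = np.array([10.0, 8.0, 12.0])

print(mse(y_true, y_hat))        # 0.0 - predictions aligned with targets
print(mse(y_true, y_hat[::-1]))  # same values, reversed order -> nonzero error
```

This is exactly why `y_test_C` had to be rebuilt group by group in the same `sex` order used for `y_pred_C`.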
# Naive Bayes

## Bayes' Theorem

$P(A|B) = \frac{P(B|A) P(A)}{P(B)}$

In our case, given features $X = (x_1, ..., x_n)$, the class probability $P(y|X)$ is:

$P(y|X) = \frac{P(X|y) P(y)}{P(X)}$

We make the (naive) assumption that all features are **mutually independent**:

$P(y|X) = \frac{P(x_1|y) \cdot P(x_2|y) \cdot P(x_3|y) \cdots P(x_n|y) \cdot P(y)} {P(X)}$

Note that $P(y|X)$ is called the posterior probability, $P(x_i|y)$ the class-conditional probability, and $P(y)$ the prior probability of $y$.

## Select the class with the highest probability

$y = argmax_y P(y|X) = argmax_y \frac{P(x_1|y) \cdot P(x_2|y) \cdot P(x_3|y) \cdots P(x_n|y) \cdot P(y)} {P(X)}$

Since $P(X)$ is the same for every class, it can be dropped:

$y = argmax_y P(x_1|y) \cdot P(x_2|y) \cdot P(x_3|y) \cdots P(x_n|y) \cdot P(y)$

To avoid numerical underflow when multiplying many small probabilities, we use a little trick and maximize the sum of logs instead:

$y = argmax_y (\log(P(x_1|y)) + \log(P(x_2|y)) + \log(P(x_3|y)) \cdots \log(P(x_n|y)) + \log(P(y)) )$

## Model the class-conditional probability $P(x_i|y)$ by a Gaussian

$P(x_i|y) = \frac{1}{\sqrt{2\pi \sigma_y^2}} \cdot e^{-\frac{(x_i - \mu_y)^2}{2 \sigma_y^2}}$

```
import numpy as np

class NaiveBayes:

    def fit(self, X, y):
        n_samples, n_features = X.shape
        self._classes = np.unique(y)
        n_classes = len(self._classes)

        self._mean = np.zeros((n_classes, n_features), dtype=np.float64)
        self._var = np.zeros((n_classes, n_features), dtype=np.float64)
        self._priors = np.zeros(n_classes, dtype=np.float64)

        for idx, c in enumerate(self._classes):
            X_c = X[y==c]
            self._mean[idx, :] = X_c.mean(axis=0)
            self._var[idx, :] = X_c.var(axis=0)
            # prior probability of y, or frequency: how often class c occurs
            self._priors[idx] = X_c.shape[0] / float(n_samples)
        print(self._classes)
        print(self._mean, self._var, self._priors)

    def predict(self, X):
        y_pred = [self._predict(x) for x in X]
        return y_pred

    def _predict(self, x):
        '''Make a prediction on a single instance.'''
        posteriors = []
        for idx, c in enumerate(self._classes):
            prior = np.log(self._priors[idx])
            class_conditional = np.sum(np.log(self._probability_dense_function(idx,
x)))
            _posterior = prior + class_conditional
            posteriors.append(_posterior)

        return self._classes[np.argmax(posteriors)]

    def _probability_dense_function(self, class_idx, x):
        '''Gaussian probability density of x under class class_idx.'''
        mean = self._mean[class_idx]
        var = self._var[class_idx]
        numerator = np.exp(-(x - mean) ** 2 / (2 * var))
        denominator = np.sqrt(2 * np.pi * var)
        return numerator / denominator


from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.naive_bayes import MultinomialNB

# data = datasets.load_breast_cancer()  # alternatively, try the breast cancer dataset
data = datasets.load_iris()
X, y = data.data, data.target
y[y == 0] = -1  # relabel class 0 as -1; the classifier handles arbitrary labels

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

nb = NaiveBayes()
# nb = MultinomialNB()  # sklearn baseline for comparison
nb.fit(X_train, y_train)
y_pred = nb.predict(X_val)
print("Accuracy score ", accuracy_score(y_val, y_pred))
```
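The log trick used in `_predict` above is not optional in practice. With many features, the raw product of class-conditional probabilities underflows to exactly `0.0` in float64, making every class look equally (im)possible, while the sum of logs stays finite. A quick sketch with illustrative probabilities:

```python
import numpy as np

probs = np.full(500, 1e-3)      # 500 features, each with P(x_i|y) = 0.001

product = np.prod(probs)        # 1e-1500 is far below float64's smallest positive value
log_sum = np.sum(np.log(probs)) # 500 * ln(0.001), perfectly representable

print(product, log_sum)  # 0.0 vs roughly -3453.88
```

Since `argmax` is unchanged by the monotone `log`, working in log space costs nothing and avoids this failure mode entirely.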
<font size="+5">#03. Data Manipulation & Visualization to Enhance the Discipline</font>

- Book + Private Lessons [Here ↗](https://sotastica.com/reservar)
- Subscribe to my [Blog ↗](https://blog.pythonassembly.com/)
- Let's keep in touch on [LinkedIn ↗](www.linkedin.com/in/jsulopz) 😄

# Load the Data

> - By executing the below lines of code,
> - You will see a list of possible datasets that we can load into python by just typing the name in the function

```
import seaborn as sns

sns.get_dataset_names()
```

> - For example, `mpg`:

**PS**: It will be more challenging & fun for your learning to try a dataset other than `mpg`

```
your_dataset = 'mpg'

df = sns.load_dataset(name=your_dataset)
df.head()
```

# Scatterplot with 2 Variables

> - Variable in X Axis
> - Variable in Y Axis

# Scatterplot with 3 Variables

> - Variable in X Axis
> - Variable in Y Axis
> - Color each point regarding a different value in a column

# Other Data Visualization Figures

> We'll go over the 3 main libraries used in Python to visualize data: `matplotlib`, `seaborn` and `plotly`.
>
> We'll reproduce at least one example from the links provided below. Therefore, we need to:
>
> 1. Click on the link
> 2. Pick an example
> 3. Copy-paste the lines of code
> 4.
Run the code

## Seaborn

### Seaborn Example Gallery

> - https://seaborn.pydata.org/examples/index.html

### Repeat the same Visualization Figures

> - Now with your `DataFrame` ↓

```
df = sns.load_dataset(name=your_dataset)
df.head()
```

## Matplotlib

### Matplotlib Example Gallery

> - https://towardsdatascience.com/matplotlib-tutorial-with-code-for-pythons-powerful-data-visualization-tool-8ec458423c5e

### Repeat the same Visualization Figures

> - Now with your `DataFrame` ↓

```
df = sns.load_dataset(name=your_dataset)
df.head()
```

## Plotly

### Plotly Example Gallery

> - https://plotly.com/python/

### Repeat the same Visualization Figures

> - Now with your `DataFrame` ↓

```
df = sns.load_dataset(name=your_dataset)
df.head()
```

# Achieved Goals

_Double click on **this cell** and place an `X` inside the square brackets (i.e., [X]) if you think you understand the goal:_

- [ ] All roads lead to Rome; we can achieve **the same result with different lines of code**.
    - We reproduced a scatterplot using 3 different lines of code.
- [ ] Different **`libraries`** may have `functions()` that do the **same thing**.
    - `matplotlib`, `seaborn` & `plotly` make outstanding Visualization Figures
    - But you probably wouldn't use `plotly` on a paper. Unless we are on Harry Potter 😜
- [ ] Understand that **coding is a matter of necessity**. Not a series of mechanical steps to achieve a goal.
    - You need to create art (code) by solving one problem at a time.
- [ ] Understand that **there isn't a unique solution**.
    - We can achieve the same result with different approaches.
    - For example, changing the colors of the points.
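One possible way to fill in the empty scatterplot sections above is sketched below. It uses a small hand-made `DataFrame` with made-up numbers in the spirit of `mpg`, so it runs without downloading anything; swap in your own `df` from `sns.load_dataset(name=your_dataset)` when you work through the exercise:

```python
import matplotlib
matplotlib.use("Agg")  # draw off-screen, so this also works without a display
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

df = pd.DataFrame({
    "horsepower": [130, 165, 150, 95, 88, 90],
    "mpg": [18.0, 15.0, 16.0, 24.0, 27.0, 28.0],
    "origin": ["usa", "usa", "usa", "japan", "japan", "europe"],
})

# Scatterplot with 2 variables: one column per axis
ax2 = sns.scatterplot(data=df, x="horsepower", y="mpg")

plt.figure()
# Scatterplot with 3 variables: a third column drives the point color
ax3 = sns.scatterplot(data=df, x="horsepower", y="mpg", hue="origin")
```

The same figure can be reproduced in `matplotlib` with `plt.scatter` or in `plotly` with `px.scatter` — which is the point of the "all roads lead to Rome" goal above.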
```
import requests
from IPython.display import Markdown
from tqdm import tqdm, tqdm_notebook
import pandas as pd
from matplotlib import pyplot as plt
import numpy as np
import altair as alt
from requests.utils import quote
import os
from datetime import timedelta

from mod import alt_theme

fmt = "{:%Y-%m-%d}"

# Can optionally use number of days to choose dates
n_days = 60
# pd.datetime was removed in pandas 1.0; pd.Timestamp.today() is the supported spelling
end_date = fmt.format(pd.Timestamp.today())
start_date = fmt.format(pd.Timestamp.today() - timedelta(days=n_days))

renderer = "kaggle"

github_orgs = ["jupyterhub", "jupyter", "jupyterlab", "jupyter-widgets", "ipython", "binder-examples", "nteract"]
bot_names = ["stale", "codecov", "jupyterlab-dev-mode", "henchbot"]

alt.renderers.enable(renderer);

alt.themes.register('my_theme', alt_theme)
alt.themes.enable("my_theme")

# Discourse API key
api = {'Api-Key': os.environ['DISCOURSE_API_KEY'],
       'Api-Username': os.environ['DISCOURSE_API_USERNAME']}

# Discourse
def topics_to_markdown(topics, n_list=10):
    body = []
    for _, topic in topics.iterrows():
        title = topic['fancy_title']
        slug = topic['slug']
        posts_count = topic['posts_count']
        url = f'https://discourse.jupyter.org/t/{slug}'
        body.append(f'* [{title}]({url}) ({posts_count} posts)')
    body = body[:n_list]
    return '\n'.join(body)

def counts_from_activity(activity):
    counts = activity.groupby('category_id').count()['bookmarked'].reset_index()
    counts['parent_category'] = None
    for ii, irow in counts.iterrows():
        if parent_categories[irow['category_id']] is not None:
            counts.loc[ii, 'parent_category'] = parent_categories[irow['category_id']]
    counts['category_id'] = counts['category_id'].map(lambda a: category_mapping[a])
    counts['parent_category'] = counts['parent_category'].map(lambda a: category_mapping[a] if a is not None else 'parent')
    is_parent = counts['parent_category'] == 'parent'
    counts.loc[is_parent, 'parent_category'] = counts.loc[is_parent, 'category_id']
    counts['parent/category'] = counts.apply(lambda a: a['parent_category']+'/'+a['category_id'], axis=1)
counts = counts.sort_values(['parent_category', 'bookmarked'], ascending=False) return counts ``` # Community forum activity The [Jupyter Community Forum](https://discourse.jupyter.org) is a place for Jovyans across the community to talk about Jupyter tools in interactive computing and how they fit into their workflows. It's also a place for developers to share ideas, tools, tips, and help one another. Below are a few updates from activity in the Discourse. For more detailed information about the activity on the Community Forum, check out these links: * [The users page](https://discourse.jupyter.org/u) has information about user activity * [The top posts page](https://discourse.jupyter.org/top) contains a list of top posts, sorted by various metrics. ``` # Get categories for IDs url = "https://discourse.jupyter.org/site.json" resp = requests.get(url, headers=api) category_mapping = {cat['id']: cat['name'] for cat in resp.json()['categories']} parent_categories = {cat['id']: cat.get("parent_category_id", None) for cat in resp.json()['categories']} # Base URL to use url = "https://discourse.jupyter.org/latest.json" ``` ## Topics with lots of likes "Likes" are a way for community members to say thanks for a helpful post, show their support for an idea, or generally to share a little positivity with somebody else. These are topics that have generated lots of likes in recent history. ``` params = {"order": "likes", "ascending": "False"} resp = requests.get(url, headers=api, params=params) # Topics with the most likes in recent history liked = pd.DataFrame(resp.json()['topic_list']['topics']) Markdown(topics_to_markdown(liked)) ``` ## Active topics on the Community Forum These are topics with lots of activity in recent history. 
``` params = {"order": "posts", "ascending": "False"} resp = requests.get(url, headers=api, params=params) # Topics with the most posts in recent history posts = pd.DataFrame(resp.json()['topic_list']['topics']) Markdown(topics_to_markdown(posts)) counts = counts_from_activity(posts) alt.Chart(data=counts, width=700, height=300, title="Activity by category").mark_bar().encode( x=alt.X("parent/category", sort=alt.Sort(counts['category_id'].values.tolist())), y="bookmarked", color="parent_category" ) ``` ## Recently-created topics These are topics that were recently created, sorted by the amount of activity in each one. ``` params = {"order": "created", "ascending": "False"} resp = requests.get(url, headers=api, params=params) # Sort created by the most posted for recently-created posts created = pd.DataFrame(resp.json()['topic_list']['topics']) created = created.sort_values('posts_count', ascending=False) Markdown(topics_to_markdown(created)) counts = counts_from_activity(created) alt.Chart(data=counts, width=700, height=300, title="Activity by category").mark_bar().encode( x=alt.X("parent/category", sort=alt.Sort(counts['category_id'].values.tolist())), y="bookmarked", color="parent_category" ) ``` ## User activity in the Community Forum **Top posters** These people have posted lots of comments, replies, answers, etc in the community forum. 
``` def plot_user_data(users, column, sort=False): plt_data = users.sort_values(column, ascending=False).head(50) x = alt.X("username", sort=plt_data['username'].tolist()) if sort is True else 'username' ch = alt.Chart(data=plt_data).mark_bar().encode( x=x, y=column ) return ch url = "https://discourse.jupyter.org/directory_items.json" params = {"period": "quarterly", "order": "post_count"} resp = requests.get(url, headers=api, params=params) # Topics with the most likes in recent history users = pd.DataFrame(resp.json()['directory_items']) users['username'] = users['user'].map(lambda a: a['username']) plot_user_data(users.head(50), 'post_count') ``` **Forum users, sorted by likes given** These are Community Forum members that "liked" other people's posts. We appreciate anybody taking the time to tell someone else they like what they're shared! ``` plot_user_data(users.head(50), 'likes_given') ``` **Forum users, sorted by likes received** These are folks that posted things other people in the Community Forum liked. ``` plot_user_data(users.head(50), 'likes_received') %%html <script src="https://cdn.rawgit.com/parente/4c3e6936d0d7a46fd071/raw/65b816fb9bdd3c28b4ddf3af602bfd6015486383/code_toggle.js"></script> ```
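The `parent/category` labels built by `counts_from_activity` above boil down to a small two-level lookup: each category id resolves to its own name, prefixed by its parent's name when it has one. A sketch of that logic with made-up ids (the real mappings come from `/site.json`):

```python
# Hypothetical ids and names, standing in for the /site.json payload.
category_mapping = {1: "JupyterHub", 2: "Binder", 3: "Zero to JupyterHub"}
parent_categories = {1: None, 2: 1, 3: 1}  # categories 2 and 3 live under 1

def parent_slash_category(cat_id):
    parent = parent_categories.get(cat_id)
    name = category_mapping[cat_id]
    return name if parent is None else f"{category_mapping[parent]}/{name}"

labels_out = [parent_slash_category(i) for i in (1, 2, 3)]
print(labels_out)
```

Top-level categories act as their own parent, which is why the notebook overwrites `'parent'` rows with the category's own name before building the chart axis.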
<a href="https://colab.research.google.com/github/sproboticworks/ml-course/blob/master/Cats%20and%20Dogs%20Classification%20with%20Augmentation%20and%20Dropout.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Import Packages ``` import tensorflow as tf import numpy as np import matplotlib.pyplot as plt ``` # Download Data ``` url = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip' zip_dir = tf.keras.utils.get_file('cats_and_dogs_filtered.zip', origin=url, extract=True) ``` The dataset we have downloaded has the following directory structure. <pre style="font-size: 10.0pt; font-family: Arial; line-height: 2; letter-spacing: 1.0pt;" > <b>cats_and_dogs_filtered</b> |__ <b>train</b> |______ <b>cats</b>: [cat.0.jpg, cat.1.jpg, cat.2.jpg ...] |______ <b>dogs</b>: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...] |__ <b>validation</b> |______ <b>cats</b>: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ...] |______ <b>dogs</b>: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...] 
</pre> ## List the directories with the following terminal command: ``` import os zip_dir_base = os.path.dirname(zip_dir) !find $zip_dir_base -type d -print ``` ## Assign Directory Variables ``` base_dir = os.path.join(os.path.dirname(zip_dir), 'cats_and_dogs_filtered') train_dir = os.path.join(base_dir, 'train') validation_dir = os.path.join(base_dir, 'validation') # Directory with our training cat/dog pictures train_cats_dir = os.path.join(train_dir, 'cats') train_dogs_dir = os.path.join(train_dir, 'dogs') # Directory with our validation cat/dog pictures validation_cats_dir = os.path.join(validation_dir, 'cats') validation_dogs_dir = os.path.join(validation_dir, 'dogs') ``` ## Print Filenames ``` train_cat_fnames = os.listdir( train_cats_dir ) train_dog_fnames = os.listdir( train_dogs_dir ) print(train_cat_fnames[:10]) print(train_dog_fnames[:10]) ``` ## Print number of Training and Validation images ``` num_cats_tr = len(os.listdir(train_cats_dir)) num_dogs_tr = len(os.listdir(train_dogs_dir)) num_cats_val = len(os.listdir(validation_cats_dir)) num_dogs_val = len(os.listdir(validation_dogs_dir)) total_train = num_cats_tr + num_dogs_tr total_val = num_cats_val + num_dogs_val print('total training cat images :', len(os.listdir( train_cats_dir ) )) print('total training dog images :', len(os.listdir( train_dogs_dir ) )) print('total validation cat images :', len(os.listdir( validation_cats_dir ) )) print('total validation dog images :', len(os.listdir( validation_dogs_dir ) )) ``` # Data Preparation ``` BATCH_SIZE = 20 IMG_SHAPE = 150 from tensorflow.keras.preprocessing.image import ImageDataGenerator # All images will be rescaled by 1./255. train_datagen = ImageDataGenerator( rescale = 1.0/255. ) validation_datagen = ImageDataGenerator( rescale = 1.0/255. 
) # -------------------- # Flow training images in batches of 20 using train_datagen generator # -------------------- train_generator = train_datagen.flow_from_directory(train_dir, batch_size=BATCH_SIZE, class_mode='binary', target_size=(IMG_SHAPE, IMG_SHAPE)) # -------------------- # Flow validation images in batches of 20 using test_datagen generator # -------------------- validation_generator = validation_datagen.flow_from_directory(validation_dir, batch_size=BATCH_SIZE, class_mode = 'binary', target_size = (IMG_SHAPE, IMG_SHAPE)) ``` ## Visualizing Training images ``` def plotImages(images_arr): fig, axes = plt.subplots(1, 5, figsize=(20,20)) axes = axes.flatten() for img, ax in zip(images_arr, axes): ax.imshow(img) plt.tight_layout() plt.show() sample_training_images, _ = next(train_generator) plotImages(sample_training_images[:5]) # Plot images 0-4 ``` # Image Augmentation ## Flipping the image horizontally ``` train_datagen = ImageDataGenerator(rescale=1./255, horizontal_flip=True) train_generator = train_datagen.flow_from_directory(batch_size=BATCH_SIZE, directory=train_dir, shuffle=True, target_size=(IMG_SHAPE,IMG_SHAPE)) augmented_images = [train_generator[0][0][0] for i in range(5)] plotImages(augmented_images) ``` ## Rotating the image ``` train_datagen = ImageDataGenerator(rescale=1./255, rotation_range=45) train_generator = train_datagen.flow_from_directory(batch_size=BATCH_SIZE, directory=train_dir, shuffle=True, target_size=(IMG_SHAPE, IMG_SHAPE)) augmented_images = [train_generator[0][0][0] for i in range(5)] plotImages(augmented_images) ``` ## Applying Zoom ``` train_datagen = ImageDataGenerator(rescale=1./255, zoom_range=0.5) train_generator = train_datagen.flow_from_directory(batch_size=BATCH_SIZE, directory=train_dir, shuffle=True, target_size=(IMG_SHAPE, IMG_SHAPE)) augmented_images = [train_generator[0][0][0] for i in range(5)] plotImages(augmented_images) ``` ## Putting it all together ``` train_datagen = ImageDataGenerator( rescale=1./255, 
rotation_range=40, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.2, horizontal_flip=True, fill_mode='nearest') train_generator = train_datagen.flow_from_directory(batch_size=BATCH_SIZE, directory=train_dir, shuffle=True, target_size=(IMG_SHAPE,IMG_SHAPE), class_mode='binary') augmented_images = [train_generator[0][0][0] for i in range(5)] plotImages(augmented_images) ``` # Build Model ``` model = tf.keras.models.Sequential([ # Note the input shape is the desired size of the image 150x150 with 3 bytes color tf.keras.layers.Conv2D(16, (3,3), padding = 'same', activation='relu', input_shape=(150, 150, 3)), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Conv2D(32, (3,3), padding = 'same', activation='relu'), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Conv2D(64, (3,3), padding = 'same', activation='relu'), tf.keras.layers.MaxPooling2D(2,2), # Dropout tf.keras.layers.Dropout(0.5), # Flatten the results to feed into a DNN tf.keras.layers.Flatten(), # 512 neuron hidden layer tf.keras.layers.Dense(512, activation='relu'), # Our last layer (our classifier) consists of a Dense layer with 2 output units and a softmax activation function # tf.keras.layers.Dense(2, activation='softmax') # Another popular approach when working with binary classification problems, is to use a classifier that consists of a Dense layer with 1 output unit and a sigmoid activation function # It will contain a value from 0-1 where 0 for 1 class ('cats') and 1 for the other ('dogs') tf.keras.layers.Dense(1, activation='sigmoid') ]) model.summary() from tensorflow.keras.optimizers import RMSprop model.compile(optimizer=RMSprop(lr=0.001), loss='binary_crossentropy', metrics = ['accuracy']) ``` # Training Model ``` EPOCHS = 100 history = model.fit(train_generator, validation_data=validation_generator, steps_per_epoch=int(np.ceil(total_train / float(BATCH_SIZE))), epochs=EPOCHS, validation_steps=int(np.ceil(total_val / float(BATCH_SIZE))), verbose=2) ``` # 
Visualizing results of the training ``` acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs_range = range(EPOCHS) plt.figure(figsize=(8, 8)) plt.subplot(1, 2, 1) plt.plot(epochs_range, acc, label='Training Accuracy') plt.plot(epochs_range, val_acc, label='Validation Accuracy') plt.legend(loc='lower right') plt.title('Training and Validation Accuracy') plt.subplot(1, 2, 2) plt.plot(epochs_range, loss, label='Training Loss') plt.plot(epochs_range, val_loss, label='Validation Loss') plt.legend(loc='upper right') plt.title('Training and Validation Loss') #plt.savefig('./foo.png') plt.show() ``` # Prediction using the Model Let's now take a look at actually running a prediction using the model. ``` test_images, test_labels = next(validation_generator) classes = model.predict(test_images, 10) classes = classes.flatten() print(classes) print(test_labels) fig, axes = plt.subplots(4, 5, figsize=(20,20)) axes = axes.flatten() i = 0 for img, ax in zip(test_images, axes): ax.imshow(img) ax.axis('off') color = 'blue' if round(classes[i]) != test_labels[i] : color = 'red' if classes[i]>0.5: ax.set_title("Dog",fontdict = {'size' : 20, 'color' : color}); else : ax.set_title("Cat",fontdict = {'size' : 20, 'color' : color}); i+=1 plt.tight_layout() plt.show() ```
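To run the trained model on a single photo rather than a validation batch, the image just needs the same preprocessing the generators applied (resize to the model's 150x150 input and rescale by 1/255) plus a batch dimension. A minimal sketch -- the random array below stands in for a real photo, and the `model.predict` call is commented out because it depends on the model trained above:

```python
import numpy as np

IMG_SIZE = 150  # must match the 150x150 input shape of the model above

def preprocess(img_uint8):
    """Apply the same 1/255 rescaling the training generator used."""
    return img_uint8.astype('float32') / 255.0

def decode_sigmoid(prob, threshold=0.5):
    """Map the single sigmoid output to a class label (0 -> cat, 1 -> dog)."""
    return 'dog' if prob > threshold else 'cat'

# Stand-in for a real photo; in practice load one with
# tf.keras.preprocessing.image.load_img(path, target_size=(IMG_SIZE, IMG_SIZE))
fake_img = np.random.randint(0, 256, size=(IMG_SIZE, IMG_SIZE, 3), dtype=np.uint8)
batch = preprocess(fake_img)[np.newaxis]  # the model expects a batch dimension
# prob = float(model.predict(batch)[0][0])  # requires the trained model above
print(decode_sigmoid(0.93))  # dog
```

The 0.5 threshold matches the `classes[i] > 0.5` test used when titling the plotted predictions.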
# StyleGAN2 *Please note that this is an optional notebook that is meant to introduce more advanced concepts, if you're up for a challenge. So, don't worry if you don't completely follow every step! We provide external resources for extra base knowledge required to grasp some components of the advanced material.* In this notebook, you're going to learn about StyleGAN2, from the paper [Analyzing and Improving the Image Quality of StyleGAN](https://arxiv.org/abs/1912.04958) (Karras et al., 2019), and how it builds on StyleGAN. This is the V2 of StyleGAN, so be prepared for even more extraordinary outputs. Here's the quick version: 1. **Demodulation.** The instance normalization of AdaIN in the original StyleGAN actually was producing “droplet artifacts” that made the output images clearly fake. AdaIN is modified a bit in StyleGAN2 to make this not happen. Below, *Figure 1* from the StyleGAN2 paper is reproduced, showing the droplet artifacts in StyleGAN. ![droplet artifacts example](droplet_artifact.png) 2. **Path length regularization.** “Perceptual path length” (or PPL, which you can explore in [another optional notebook](https://www.coursera.org/learn/build-better-generative-adversarial-networks-gans/ungradedLab/BQjUq/optional-ppl)) was introduced in the original StyleGAN paper, as a metric for measuring the disentanglement of the intermediate noise space W. PPL measures the change in the output image, when interpolating between intermediate noise vectors $w$. You'd expect a good model to have a smooth transition during interpolation, where the same step size in $w$ maps onto the same amount of perceived change in the resulting image. Using this intuition, you can make the mapping from $W$ space to images smoother, by encouraging a given change in $w$ to correspond to a constant amount of change in the image. This is known as path length regularization, and as you might expect, included as a term in the loss function. 
This smoothness also made the generator model "significantly easier to invert"! Recall that inversion means going from a real or fake image to finding its $w$, so you can easily adapt the image's styles by controlling $w$.

3. **No progressive growing.** While progressive growing was seemingly helpful for training the network more efficiently and with greater stability at lower resolutions before progressing to higher resolutions, there's actually a better way. Instead, you can replace it with 1) a better neural network architecture with skip and residual connections (which you also see in Course 3 models, Pix2Pix and CycleGAN), and 2) training with all of the resolutions at once, but gradually moving the generator's _attention_ from lower-resolution to higher-resolution dimensions. So in a way, it is still very careful about how the different resolutions are handled to make training easier, from lower to higher scales.

There are also a number of performance optimizations, like calculating the regularization less frequently. We won't focus on those in this notebook, but they are meaningful technical contributions.

But first, some useful imports:

```
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.utils import make_grid
import matplotlib.pyplot as plt

def show_tensor_images(image_tensor, num_images=16, size=(3, 64, 64), nrow=3):
    '''
    Function for visualizing images: Given a tensor of images, number of images,
    size per image, and images per row, plots and prints the images in a uniform grid.
    '''
    image_tensor = (image_tensor + 1) / 2
    image_unflat = image_tensor.detach().cpu().clamp_(0, 1)
    image_grid = make_grid(image_unflat[:num_images], nrow=nrow, padding=2)
    plt.imshow(image_grid.permute(1, 2, 0).squeeze())
    plt.axis('off')
    plt.show()
```

## Fixing Instance Norm

One issue with instance normalization is that it can lose important information that is typically communicated by relative magnitudes.
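To make that information loss concrete, here is a small NumPy demo (not part of the original notebook): two copies of the same feature map at very different magnitudes become indistinguishable once each is instance-normalized.

```python
import numpy as np

def instance_norm(x, eps=1e-8):
    """Normalize each channel of a (C, H, W) map to zero mean, unit variance."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    return (x - mean) / (std + eps)

pattern = np.random.default_rng(0).normal(size=(1, 4, 4))
weak, strong = 0.1 * pattern, 100.0 * pattern  # same pattern, magnitudes 1000x apart
# After normalization, the relative-magnitude information is gone:
print(np.allclose(instance_norm(weak), instance_norm(strong)))  # True
```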
In StyleGAN2, it was proposed that the droplet artifacts are a way for the network to "sneak" this magnitude information past normalization with a single large spike. This issue was also highlighted in the paper which introduced GauGAN, [Semantic Image Synthesis with Spatially-Adaptive Normalization](https://arxiv.org/abs/1903.07291) (Park et al.), earlier in 2019. In that more extreme case, instance normalization could sometimes eliminate all semantic information, as shown in their paper's *Figure 3*:

![information loss by gaugan](gaugan_in.png)

While removing normalization is technically possible, it reduces the controllability of the model, a major feature of StyleGAN. Here's one solution from the paper:

### Output Demodulation

The first solution notes that scaling the output of a convolutional layer by a style has a consistent and numerically reproducible impact on the standard deviation of its output. By scaling the standard deviation of the output down to 1, the droplet effect can be reduced. More specifically, applying the style $s$ as a multiplier to the convolutional weights $w$, giving $w'_{ijk}=s_i \cdot w_{ijk}$, results in an output with standard deviation $\sigma_j = \sqrt{\sum_{i,k} w'^2_{ijk}}$. One can simply divide the output of the convolution by this factor. However, the authors note that dividing by this factor can also be incorporated directly into the convolutional weights (with an added $\epsilon$ for numerical stability):

$$w''_{ijk}=\frac{w'_{ijk}}{\sqrt{\sum_{i,k} w'^2_{ijk} + \epsilon}}$$

This makes it so that the entire operation can be baked into a single convolutional layer, making it easier to work with, implement, and integrate into the existing architecture of the model.
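As a quick numeric sanity check of the demodulation formula (the shapes here are arbitrary, chosen just for illustration): after modulating by a style and demodulating, every output filter ends up with unit L2 norm, which is exactly what keeps the output's standard deviation at 1.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4, 3, 3))   # weights w_ijk: (out_channels, in_channels, k, k)
s = rng.normal(size=(4,))           # one style scale s_i per input channel
eps = 1e-8

w_prime = w * s[None, :, None, None]                       # w'_ijk = s_i * w_ijk
sigma = np.sqrt((w_prime ** 2).sum(axis=(1, 2, 3)) + eps)  # sigma_j, per output filter
w_prime_prime = w_prime / sigma[:, None, None, None]       # demodulated w''

# Every output filter now has (approximately) unit L2 norm:
norms = np.sqrt((w_prime_prime ** 2).sum(axis=(1, 2, 3)))
print(norms.round(6))
```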
```
class ModulatedConv2d(nn.Module):
    '''
    ModulatedConv2d Class, extends/subclass of nn.Module
    Values:
      w_dim: the dimension of the intermediate noise vector w, a scalar
      in_channels / out_channels: the number of input / output channels, scalars
      kernel_size: the side length of the square convolutional kernel, a scalar
    '''

    def __init__(self, w_dim, in_channels, out_channels, kernel_size, padding=1):
        super().__init__()
        self.conv_weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size)
        )
        self.style_scale_transform = nn.Linear(w_dim, in_channels)
        self.eps = 1e-6
        self.padding = padding

    def forward(self, image, w):
        # There is a more efficient (vectorized) way to do this using the group parameter of F.conv2d,
        # but for simplicity and readability you will go through one image at a time.
        images = []
        for i, w_cur in enumerate(w):
            # Calculate the style scale factor
            style_scale = self.style_scale_transform(w_cur)
            # Multiply it by the corresponding weight to get the new weights
            w_prime = self.conv_weight * style_scale[None, :, None, None]
            # Demodulate the new weights based on the above formula
            w_prime_prime = w_prime / torch.sqrt(
                (w_prime ** 2).sum([1, 2, 3])[:, None, None, None] + self.eps
            )
            images.append(F.conv2d(image[i][None], w_prime_prime, padding=self.padding))
        return torch.cat(images)

    def forward_efficient(self, image, w):
        # Here's the more efficient approach.
        # It starts off mostly the same
        style_scale = self.style_scale_transform(w)
        w_prime = self.conv_weight[None] * style_scale[:, None, :, None, None]
        w_prime_prime = w_prime / torch.sqrt(
            (w_prime ** 2).sum([2, 3, 4])[:, :, None, None, None] + self.eps
        )
        # Now, the trick is that we'll make the images into one image, and
        # all of the conv filters into one filter, and then use the "groups"
        # parameter of F.conv2d to apply them all at once
        batchsize, in_channels, height, width = image.shape
        out_channels = w_prime_prime.shape[1]  # w_prime_prime is (batch, out, in, k, k)
        # Create an "image" where all the channels of the images are in one sequence
        efficient_image = image.view(1, batchsize * in_channels, height, width)
        efficient_filter = w_prime_prime.view(batchsize * out_channels, in_channels, *w_prime_prime.shape[3:])
        efficient_out = F.conv2d(efficient_image, efficient_filter,
                                 padding=self.padding, groups=batchsize)
        return efficient_out.view(batchsize, out_channels, *image.shape[2:])

example_modulated_conv = ModulatedConv2d(w_dim=128, in_channels=3, out_channels=3, kernel_size=3)
num_ex = 2
image_size = 64
rand_image = torch.randn(num_ex, 3, image_size, image_size)  # A 64x64 image with 3 channels
rand_w = torch.randn(num_ex, 128)
new_image = example_modulated_conv(rand_image, rand_w)
second_modulated_conv = ModulatedConv2d(w_dim=128, in_channels=3, out_channels=3, kernel_size=3)
second_image = second_modulated_conv(new_image, rand_w)

print("Original noise (left), noise after modulated convolution (middle), noise after two modulated convolutions (right)")
plt.rcParams['figure.figsize'] = [8, 8]
show_tensor_images(torch.stack([rand_image, new_image, second_image], 1).view(-1, 3, image_size, image_size))
```

## Path Length Regularization

Path length regularization was introduced based on the usefulness of PPL, or perceptual path length, a metric for evaluating disentanglement proposed in the original StyleGAN paper -- feel free to check out the [optional
notebook](https://www.coursera.org/learn/build-better-generative-adversarial-networks-gans/ungradedLab/BQjUq/optional-ppl) for a detailed overview!

In essence, for a fixed-size step in any direction in $W$ space, the metric attempts to make the change in image space have a constant magnitude $a$. This is accomplished (in theory) by first taking the Jacobian of the generator with respect to $w$, which is $\mathop{\mathrm{J}_{\mathrm{w}}}={\partial g(\mathrm{w})} / {\partial \mathrm{w}}$.

Then, you take the L2 norm of the Jacobian matrix multiplied by a random image $y$ (sampled from a normal distribution, as you often do): $\Vert \mathrm{J}_{\mathrm{w}}^T \mathrm{y} \Vert_2$. This captures the expected magnitude of the change in pixel space. From this, you get a loss term which penalizes the distance between this magnitude and $a$. The paper notes that this has similarities to spectral normalization (discussed in [another optional notebook](https://www.coursera.org/learn/build-basic-generative-adversarial-networks-gans/ungradedLab/c2FPs/optional-sn-gan) in Course 1), because it constrains multiple norms.

An additional optimization is also possible and ultimately used in the StyleGAN2 model: instead of directly computing $\mathrm{J}_{\mathrm{w}}^T \mathrm{y}$, you can more efficiently calculate the gradient $\nabla_{\mathrm{w}} (g(\mathrm{w}) \cdot \mathrm{y})$.

Finally, a bit of talk on $a$: $a$ is not a fixed constant, but an exponentially decaying average of the magnitudes over various runs -- as with most times you see (decaying) averages being used, this is to smooth out the value of $a$ across multiple iterations, not just dependent on one. Notationally, with decay rate $\gamma$, $a$ at the next iteration is $a_{t+1} = a_t \cdot (1 - \gamma) + \Vert \mathrm{J}_{\mathrm{w}}^T \mathrm{y} \Vert_2 \cdot \gamma$. However, for your one example iteration you can treat $a$ as a constant for simplicity.
There is also an example of an update of $a$ after the calculation of the loss, so you can see what $a_{t+1}$ looks like with exponential decay.

```
# For convenience, we'll define a very simple generator here:
class SimpleGenerator(nn.Module):
    '''
    SimpleGenerator Class, for path length regularization demonstration purposes
    Values:
      w_dim: the dimension of the intermediate noise vector w, a scalar
      in_channels / hid_channels / out_channels: channel counts for the two
        modulated convolutions, scalars
    '''

    def __init__(self, w_dim, in_channels, hid_channels, out_channels, kernel_size, padding=1, init_size=64):
        super().__init__()
        self.w_dim = w_dim
        self.init_size = init_size
        self.in_channels = in_channels
        self.c1 = ModulatedConv2d(w_dim, in_channels, hid_channels, kernel_size)
        self.activation = nn.ReLU()
        self.c2 = ModulatedConv2d(w_dim, hid_channels, out_channels, kernel_size)

    def forward(self, w):
        image = torch.randn(len(w), self.in_channels, self.init_size, self.init_size).to(w.device)
        y = self.c1(image, w)
        y = self.activation(y)
        y = self.c2(y, w)
        return y

from torch.autograd import grad

def path_length_regularization_loss(generator, w, a):
    # Generate the images from w
    fake_images = generator(w)
    # Get the corresponding random images
    random_images = torch.randn_like(fake_images)
    # Output variation that we'd like to regularize
    output_var = (fake_images * random_images).sum()
    # Calculate the gradient with respect to the inputs
    cur_grad = grad(outputs=output_var, inputs=w)[0]
    # Calculate the distance from a
    penalty = (((cur_grad - a) ** 2).sum()).sqrt()
    return penalty, output_var

simple_gen = SimpleGenerator(w_dim=128, in_channels=3, hid_channels=64, out_channels=3, kernel_size=3)
samples = 10
test_w = torch.randn(samples, 128).requires_grad_()
a = 10
penalty, variation = path_length_regularization_loss(simple_gen, test_w, a=a)
decay = 0.001  # How quickly a should decay
new_a = a * (1 - decay) + variation * decay
print(f"Old a: {a}; new a: {new_a.item()}")
```

## No More Progressive Growing

While the concepts behind
progressive growing remain, you get to see how that is revamped and beefed up in StyleGAN2. This starts with generating all resolutions of images from the very start of training. You might be wondering why they didn't just do this in the first place: in the past, this has generally been unstable to do. However, by using residual or skip connections (there are two variants that both do better than without them), StyleGAN2 manages to replicate many of the dynamics of progressive growing in a less explicit way.

Three architectures were considered for StyleGAN2 to replace progressive growing. Note that in the following figure, *tRGB* and *fRGB* refer to the $1 \times 1$ convolutions which transform the noise with some number of channels at a given layer into a three-channel image for the generator, and vice versa for the discriminator.

![architectures considered](stylegan_architectures.png)

*The set of architectures considered for StyleGAN2 (from the paper). Ultimately, the skip generator and residual discriminator (highlighted in green) were chosen*.

### Option a: MSG-GAN

[MSG-GAN](https://arxiv.org/abs/1903.06048) (from Karnewar and Wang, 2019) proposed a somewhat natural approach: generate all resolutions of images, but also directly pass each corresponding resolution to a block of the discriminator responsible for dealing with that resolution.

### Option b: Skip Connections

In the skip-connection approach, each block takes the previous noise as input and generates the next resolution of noise. For the generator, each noise is converted to an image, upscaled to the maximum size, and then summed together. For the discriminator, the images are downsampled to each block's size and converted to noises.

### Option c: Residual Nets

In the residual network approach, each block adds residual detail to the noise, and the image conversion happens at the end for the generator and at the start for the discriminator.
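To make the skip-connection summation of Option b concrete, here is a toy NumPy sketch (not the notebook's code): ignoring the convolutional blocks themselves, each resolution's tRGB output is upsampled and added into a running RGB image.

```python
import numpy as np

def upsample2x(img):
    """Nearest-neighbour 2x upsample of a (C, H, W) array."""
    return img.repeat(2, axis=1).repeat(2, axis=2)

def skip_sum(trgb_images):
    """Sum per-resolution tRGB outputs (e.g. 4x4, 8x8, 16x16) into one image."""
    rgb = trgb_images[0]
    for img in trgb_images[1:]:
        rgb = upsample2x(rgb) + img  # carry the coarser image up, add this level's detail
    return rgb

rng = np.random.default_rng(0)
levels = [rng.normal(size=(3, 4, 4)),
          rng.normal(size=(3, 8, 8)),
          rng.normal(size=(3, 16, 16))]
print(skip_sum(levels).shape)  # (3, 16, 16)
```

Because the contributions are purely additive, you can measure each level's magnitude separately -- the property the paper uses in its Figure 8 analysis, shown below.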
### StyleGAN2: Skip Generator, Residual Discriminator By experiment, the skip generator and residual discriminator were chosen. One interesting effect is that, as the images for the skip generator are additive, you can explicitly see the contribution from each of them, and measure the magnitude of each block's contribution. If you're not 100% sure how to implement skip and residual models yet, don't worry - you'll get a lot of practice with that in Course 3! ![contribution by different resolutions over time](noise_contributions.png) *Figure 8 from StyleGAN2 paper, showing generator contributions by different resolution blocks of the generator over time. The y-axis is the standard deviation of the contributions, and the x-axis is the number of millions of images that the model has been trained on (training progress).* Now, you've seen the primary changes, and you understand the current state-of-the-art in image generation, StyleGAN2, congratulations! If you're the type of person who reads through the optional notebooks for fun, maybe you'll make the next state-of-the-art! Can't wait to cover your GAN in a new notebook :)
# Visual Analysis This notebook has been created to support two main purposes: * Based on an input image and a set of models, display the action-space probability distribution. * Based on an input image and a set of models, visualize which parts of the image the model looks at. ## Usage The workbook requires the following: * A set of raw images captured from the front camera of the car * One or more static model files (`model_*.pb`) * The `model_metadata.json` ## Contributions As usual, your ideas are very welcome and encouraged so if you have any suggestions either bring them to [the AWS DeepRacer Community](http://join.deepracing.io) or share as code contributions. ## Requirements Before you start using the notebook, you will need to install some dependencies. If you haven't yet done so, have a look at [The README.md file](/edit/README.md#running-the-notebooks) to find what you need to install. This workbook will require `tensorflow` and `cv2` to work. ## Imports Run the imports block below: ``` import json import os import glob import numpy as np import pandas as pd import matplotlib.pyplot as plt import cv2 import tensorflow as tf from tensorflow.gfile import GFile from deepracer.model import load_session, visualize_gradcam_discrete_ppo, rgb2gray ``` ## Configure and load files Provide the paths where the image and models are stored. Also define which iterations you would like to review. ``` img_selection = 'logs/sample-model/pictures/*.png' model_path = 'logs/sample-model/model' iterations = [15, 30, 48] ``` Load the model metadata in, and define which sensor is in use. ``` with open("{}/model_metadata.json".format(model_path),"r") as jsonin: model_metadata=json.load(jsonin) my_sensor = [sensor for sensor in model_metadata['sensor'] if sensor != "LIDAR"][0] display(model_metadata) ``` Load in the pictures from the pre-defined path. 
```
picture_files = sorted(glob.glob(img_selection))
display(picture_files)

action_names = []
degree_sign = u'\N{DEGREE SIGN}'
for action in model_metadata['action_space']:
    action_names.append(str(action['steering_angle']) + degree_sign + " " + "%.1f" % action["speed"])
display(action_names)
```

## Load the model files and process pictures

We will now load in the models and process the pictures. Output is a nested list with the `n` models as the outer list and the `m` pictures as the inner list. The inner list will contain a number of values equal to the number of actions in the action space.

```
model_inference = []
models_file_path = []

for n in iterations:
    models_file_path.append("{}/model_{}.pb".format(model_path, n))
display(models_file_path)

for model_file in models_file_path:
    model, obs, model_out = load_session(model_file, my_sensor)
    arr = []
    for f in picture_files[:]:
        img = cv2.imread(f)
        img = cv2.resize(img, dsize=(160, 120), interpolation=cv2.INTER_CUBIC)
        img_arr = np.array(img)
        img_arr = rgb2gray(img_arr)
        img_arr = np.expand_dims(img_arr, axis=2)
        current_state = {"observation": img_arr}  # (1, 120, 160, 1)
        y_output = model.run(model_out, feed_dict={obs: [img_arr]})[0]
        arr.append(y_output)
    model_inference.append(arr)
    model.close()
    tf.reset_default_graph()
```

## Simulation Image Analysis - Probability distribution on decisions (actions)

We will now show the probabilities per action for the selected picture and iterations. The higher the probability of a single action, the more mature the model. Comparing different models enables the developer to see how the model is becoming more certain over time.
```
PICTURE_INDEX = 1
display(picture_files[PICTURE_INDEX])

x = list(range(1, len(action_names) + 1))
num_plots = len(iterations)
fig, ax = plt.subplots(num_plots, 1, figsize=(20, 3 * num_plots), sharex=True, squeeze=False)
for p in range(0, num_plots):
    ax[p][0].bar(x, model_inference[p][PICTURE_INDEX][::-1])
    plt.setp(ax[p, 0], ylabel=os.path.basename(models_file_path[p]))
plt.xticks(x, action_names[::-1], rotation='vertical')
plt.show()
```

## What is the model looking at?

Grad-CAM: a visual heatmap of where the model is looking to make its decisions, based on https://arxiv.org/pdf/1610.02391.pdf

```
heatmaps = []
view_models = models_file_path[1:3]

for model_file in view_models:
    model, obs, model_out = load_session(model_file, my_sensor)
    for f in picture_files:
        img = cv2.imread(f)
        img = cv2.resize(img, dsize=(160, 120), interpolation=cv2.INTER_CUBIC)
        heatmap = visualize_gradcam_discrete_ppo(model, img, category_index=0, num_of_actions=len(action_names))
        heatmaps.append(heatmap)
    model.close()
    tf.reset_default_graph()

fig, ax = plt.subplots(len(view_models), len(picture_files),
                       figsize=(7 * len(view_models), 2.5 * len(picture_files)),
                       sharex=True, sharey=True, squeeze=False)
for i in list(range(len(view_models))):
    plt.setp(ax[i, 0], ylabel=os.path.basename(view_models[i]))
    for j in list(range(len(picture_files))):
        ax[i][j].imshow(heatmaps[i * len(picture_files) + j])
        plt.setp(ax[-1:, j], xlabel=os.path.basename(picture_files[j]))
plt.show()
```
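As a quick textual companion to the bar charts, the nested `model_inference` list can be reduced to each model's most likely action per picture -- a hedged sketch assuming, as above, that each entry is a vector of per-action probabilities:

```python
import numpy as np

def best_actions(model_inference, action_names):
    """For each model and each picture, return (action name, probability)
    of the highest-probability action."""
    return [[(action_names[int(np.argmax(p))], float(np.max(p))) for p in per_model]
            for per_model in model_inference]

# Toy example: 2 models, 2 pictures, 3 actions (names mimic the format built above)
probs = [[np.array([0.1, 0.7, 0.2]), np.array([0.5, 0.3, 0.2])],
         [np.array([0.05, 0.9, 0.05]), np.array([0.8, 0.1, 0.1])]]
names = ['30.0\N{DEGREE SIGN} 1.0', '0.0\N{DEGREE SIGN} 2.0', '-30.0\N{DEGREE SIGN} 1.0']
print(best_actions(probs, names))
```

A rising top probability between iterations is the same "growing certainty" signal the stacked bar charts show visually.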
```
import numpy as np
import pandas as pd
from copy import deepcopy
from matplotlib import pyplot as plt

import calculations
import agent
import environment

def train(env, agent, num_iterations, algo, learning_rate, lam):
    weights = []
    new_weight = np.zeros(agent.num_features)
    z = np.zeros(agent.num_features)
    cur_st = env.state
    cur_sib_st = cur_st

    for ctr in range(1, num_iterations + 1):
        # Perform action in environment using the agent's current weights
        act = agent.get_action(cur_st)
        next_st, reward = env.step(act)

        # Get next sibling state
        next_sib_st, _ = calculations.sibling(cur_st, act, next_st, env.transition_matrix)

        # Sample reward from current sibling state
        env_clone = deepcopy(env)
        env_clone.state = cur_sib_st
        _, reward_sib = env_clone.step(agent.get_action(cur_sib_st))

        # Update eligibility traces and weights based on chosen algorithm
        if algo == 'TD':
            featdiff = env.gamma * agent.features[next_st] - agent.features[cur_st]
            d = reward + sum(featdiff * new_weight)
            new_weight = new_weight + learning_rate * d * z / ctr
            z = env.gamma * lam * z + agent.features[next_st]
        elif algo == 'STD-99':
            featdiff = env.gamma * (agent.features[next_st] - agent.features[next_sib_st]) - (
                agent.features[cur_st] - agent.features[cur_sib_st])
            d = reward + sum(featdiff * new_weight)
            new_weight = new_weight + learning_rate * d * z / ctr
            z = env.gamma * lam * z + agent.features[next_st] - agent.features[next_sib_st]
        elif algo == 'STD-01':
            featdiff = env.gamma * (agent.features[next_st] - agent.features[next_sib_st]) - (
                agent.features[cur_st] - agent.features[cur_sib_st])
            d = reward - reward_sib + sum(featdiff * new_weight)
            new_weight = new_weight + learning_rate * d * z / ctr
            z = env.gamma * lam * z + (agent.features[next_st] - agent.features[next_sib_st]) / \
                env.transition_matrix[cur_st][act][next_st]

        # Update historical sibling states
        cur_st = next_st
        cur_sib_st = next_sib_st
        weights.append(new_weight)

    agent.weight = new_weight
    return weights

num_states = 2
num_actions = 2

# Randomly initialize
# transition_matrix, rewards, gamma, and features
transition_matrix = np.random.rand(num_states, num_actions, num_states)
for i in range(num_states):
    for j in range(num_actions):
        transition_matrix[i][j] = transition_matrix[i][j] / sum(transition_matrix[i][j])
rewards = np.random.rand(num_states, num_states)*2 - 1
gamma = np.random.rand(1)[0]
features = np.random.rand(num_states, num_states-1)*2

# Calculate value function for each policy
policy_values = calculations.policy_values(transition_matrix, rewards, gamma)
optimal_policy = max(list(policy_values.items()), key=lambda x: sum(x[1]))[0]

# Set up environment
env = environment.Environment(transition_matrix, rewards, gamma)

# Set up and run policy iteration on agent
td_agent = agent.Agent(features, optimal_policy)
td_weights = train(env, td_agent, num_iterations=10000, algo="TD", learning_rate=2.0, lam=1.0)
td_agent.policy = calculations.make_new_policy(td_agent, td_weights[-1], transition_matrix, rewards, gamma)

# Plot weight values at each iteration
plt.plot(td_weights[50:])
plt.xlabel("Number of Transitions")
plt.ylabel("w")
plt.title("TD Weights")
plt.show()
print()

# Plot value differences at each iteration
tdAB = calculations.value_difference([1, -1], features, td_weights, optimal_policy, policy_values)
plt.plot(tdAB[50:])
plt.xlabel("Number of Transitions")
plt.title("TD Difference in Approximation and Real V(A) - V(B)")
plt.show()

# Set up environment
env = environment.Environment(transition_matrix, rewards, gamma)

# Set up and run policy iteration on agent
std_agent = agent.Agent(features, optimal_policy)
std_weights = train(env, std_agent, num_iterations=10000, algo="STD-01", learning_rate=2.0, lam=1.0)
std_agent.policy = calculations.make_new_policy(std_agent, std_weights[-1], transition_matrix, rewards, gamma)

# Plot weight values at each iteration
plt.plot(std_weights[50:])
plt.xlabel("Number of Transitions")
plt.ylabel("w")
plt.title("STD Weights")
plt.show()
print()

# Plot value differences at each
# iteration
stdAB = calculations.value_difference([1, -1], features, std_weights, optimal_policy, policy_values)
plt.plot(stdAB[50:])
plt.xlabel("Number of Transitions")
plt.title("STD Difference in Approximation and Real V(A) - V(B)")
plt.show()

pd.DataFrame(np.array([[optimal_policy, td_agent.policy, std_agent.policy],
                       list(map(lambda x: sum(policy_values[x]),
                                [optimal_policy, td_agent.policy, std_agent.policy]))]),
             ["Policy", "Policy Value"], ["Optimal", "TD", "STD"])
```
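For reference, the core update inside `train` is the TD(λ) rule with linear function approximation. A standalone sketch of one step in textbook form (note the `train` code above differs slightly: it accumulates the trace with the next state's features and decays the step size by 1/t):

```python
import numpy as np

def td_lambda_step(w, z, phi, phi_next, reward, gamma, lam, lr):
    """One TD(lambda) update for a linear value estimate V(s) = phi(s) . w."""
    delta = reward + gamma * phi_next @ w - phi @ w  # TD error
    z = gamma * lam * z + phi                        # eligibility trace
    return w + lr * delta * z, z

w, z = np.zeros(2), np.zeros(2)
phi_a, phi_b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
w, z = td_lambda_step(w, z, phi_a, phi_b, reward=1.0, gamma=0.9, lam=0.5, lr=0.1)
print(w)  # [0.1 0. ]
```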
# Self-Driving Car Engineer Nanodegree

## Deep Learning

## Project: Build a Traffic Sign Recognition Classifier

In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary.

> **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.

In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/481/view) for this project.

The [rubric](https://review.udacity.com/#!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.

> **Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by double-clicking the cell to enter edit mode.
---
## Step 0: Load The Data

```
# Load pickled data
import pickle

# TODO: Fill this in based on where you saved the training and testing data
training_file = './traffic-signs-data/train.p'
validation_file = './traffic-signs-data/valid.p'
testing_file = './traffic-signs-data/test.p'

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(validation_file, mode='rb') as f:
    valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)

X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
```

---
## Step 1: Dataset Summary & Exploration

The pickled data is a dictionary with 4 key/value pairs:

- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.
- `'sizes'` is a list containing tuples, (width, height) representing the original width and height of the image.
- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**

Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results.

### Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas

```
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
import pandas as pd
import numpy as np
import tensorflow as tf

# TODO: Number of training examples
n_train = len(X_train)

# TODO: Number of validation examples
n_validation = len(X_valid)

# TODO: Number of testing examples.
n_test = len(X_test)

# TODO: What's the shape of a traffic sign image?
image_shape = np.shape(X_train[0])

# TODO: How many unique classes/labels are there in the dataset?
n_classes = np.shape(np.unique(y_train))[0]

print("Number of training examples =", n_train)
print("Number of validation examples =", n_validation)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
```

### Include an exploratory visualization of the dataset

Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc. The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.

**NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?

```
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
import random

# Visualizations will be shown in the notebook.
%matplotlib inline

# random.randint is inclusive on both ends, so subtract 1 to stay in range
index = random.randint(0, len(X_train) - 1)
image = X_train[index].squeeze()

plt.figure(figsize=(1,1))
plt.imshow(image)
print(y_train[index])

# use a separate name so the class-id array doesn't shadow n_classes above
class_ids, counts = np.unique(y_train, return_counts=True)
plt.figure(figsize=(5,5))
plt.bar(class_ids, counts)
plt.xlabel('Traffic Sign Class ID')
plt.title('Distribution of traffic sign classes in training input')
plt.ylabel('Number of images')
plt.show()
```

----

## Step 2: Design and Test a Model Architecture

Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).

The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.

There are various aspects to consider when thinking about this problem:

- Neural network architecture (is the network over or underfitting?)
- Play around with preprocessing techniques (normalization, rgb to grayscale, etc)
- Number of examples per label (some have more than others).
- Generate fake data.

Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper, but it's good practice to try to read papers like these.
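One of the suggestions above is to generate fake data for under-represented classes. This was not part of the original notebook, but a minimal numpy-only sketch of the idea might look like the following; the helper names `augment_image` and `balance_classes` are invented for illustration, and a real pipeline would add rotations, scaling and brightness jitter.

```python
import numpy as np

def augment_image(image, max_shift=2, rng=None):
    """Jitter a (H, W, C) image by a small random translation."""
    rng = np.random.default_rng(rng)
    dx, dy = rng.integers(-max_shift, max_shift + 1, size=2)
    # Shift by padding with edge pixels, then cropping back to the original size.
    padded = np.pad(image, ((max_shift, max_shift), (max_shift, max_shift), (0, 0)),
                    mode='edge')
    return padded[max_shift + dy: max_shift + dy + image.shape[0],
                  max_shift + dx: max_shift + dx + image.shape[1]]

def balance_classes(X, y, target=500, rng=0):
    """Oversample rare classes with jittered copies until each has >= target examples."""
    rng = np.random.default_rng(rng)
    X_out, y_out = [X], [y]
    for c, count in zip(*np.unique(y, return_counts=True)):
        idx = np.flatnonzero(y == c)
        for _ in range(max(0, target - count)):
            X_out.append(augment_image(X[rng.choice(idx)])[None])
            y_out.append([c])
    return np.concatenate(X_out), np.concatenate(y_out)
```

Applied to `X_train, y_train` before normalization, this would top up every class to a common minimum count.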
### Pre-process the Data Set (normalization, grayscale, etc.)

Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project.

Other pre-processing steps are optional. You can try different techniques to see if it improves performance.

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.

```
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
from sklearn.utils import shuffle

X_train, y_train = shuffle(X_train, y_train)

X_train_norm = (X_train - 128.0)/128.0
X_valid_norm = (X_valid - 128.0)/128.0
X_test_norm = (X_test - 128.0)/128.0
```

### Model Architecture

```
### Define your architecture here.
### Feel free to use as many code cells as needed.
from tensorflow.contrib.layers import flatten

def LeNet(x, keep_probability):
    # Hyperparameters
    mu = 0
    sigma = 0.1

    # Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x12.
    conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 3, 12), mean = mu, stddev = sigma))
    conv1_b = tf.Variable(tf.zeros(12))
    conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b

    # Activation.
    conv1 = tf.nn.relu(conv1)

    # Pooling. Input = 28x28x12. Output = 14x14x12.
    conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # Layer 2: Convolutional. Output = 10x10x28.
    conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 12, 28), mean = mu, stddev = sigma))
    conv2_b = tf.Variable(tf.zeros(28))
    conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b

    # Activation.
    conv2 = tf.nn.relu(conv2)

    # Layer 3: Convolutional. Output = 6x6x36.
    conv3_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 28, 36), mean = mu, stddev = sigma))
    conv3_b = tf.Variable(tf.zeros(36))
    conv3 = tf.nn.conv2d(conv2, conv3_W, strides=[1, 1, 1, 1], padding='VALID') + conv3_b

    # Activation.
    conv3 = tf.nn.relu(conv3)

    # Pooling. Input = 6x6x36. Output = 3x3x36.
    conv3 = tf.nn.max_pool(conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # Flatten. Input = 3x3x36. Output = 324.
    fc0 = flatten(conv3)

    # Layer 4: Fully Connected. Input = 324. Output = 240.
    fc1_W = tf.Variable(tf.truncated_normal(shape=(324, 240), mean = mu, stddev = sigma))
    fc1_b = tf.Variable(tf.zeros(240))
    fc1 = tf.matmul(fc0, fc1_W) + fc1_b

    # Activation.
    fc1 = tf.nn.relu(fc1)
    fc1 = tf.nn.dropout(fc1, keep_prob=keep_probability)

    # Layer 5: Fully Connected. Input = 240. Output = 98.
    fc2_W = tf.Variable(tf.truncated_normal(shape=(240, 98), mean = mu, stddev = sigma))
    fc2_b = tf.Variable(tf.zeros(98))
    fc2 = tf.matmul(fc1, fc2_W) + fc2_b

    # Activation.
    fc2 = tf.nn.relu(fc2)

    # Layer 6: Fully Connected. Input = 98. Output = 43.
    fc3_W = tf.Variable(tf.truncated_normal(shape=(98, 43), mean = mu, stddev = sigma))
    fc3_b = tf.Variable(tf.zeros(43))
    logits = tf.matmul(fc2, fc3_W) + fc3_b

    return logits
```

### Train, Validate and Test the Model

A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.

```
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
EPOCHS = 10
BATCH_SIZE = 128

x = tf.placeholder(tf.float32, (None, 32, 32, 3))
y = tf.placeholder(tf.int32, (None))
keep_probability = tf.placeholder(tf.float32)
one_hot_y = tf.one_hot(y, 43)

rate = 0.001

logits = LeNet(x, keep_probability)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)

correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()

def evaluate(X_data, y_data):
    num_examples = len(X_data)
    total_accuracy = 0
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_probability: 1.0})
        total_accuracy += (accuracy * len(batch_x))
    return total_accuracy / num_examples

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    num_examples = len(X_train)

    print("Training...")
    print()
    for i in range(EPOCHS):
        X_train_norm, y_train = shuffle(X_train_norm, y_train)
        for offset in range(0, num_examples, BATCH_SIZE):
            end = offset + BATCH_SIZE
            batch_x, batch_y = X_train_norm[offset:end], y_train[offset:end]
            sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_probability: 0.7})

        validation_accuracy = evaluate(X_valid_norm, y_valid)
        training_accuracy = evaluate(X_train_norm, y_train)
        print("EPOCH {} ...".format(i+1))
        print("Training Accuracy = {:.3f}".format(training_accuracy))
        print("Validation Accuracy = {:.3f}".format(validation_accuracy))
        print()

    saver.save(sess, './lenet')
    print("Model saved")

with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    test_accuracy = evaluate(X_test_norm, y_test)
    print("Test Accuracy = {:.3f}".format(test_accuracy))
```

---

## Step 3: Test a Model on New Images

To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type. You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name.

### Load and Output the Images

```
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
from skimage.transform import resize

PATH = "./test-traffic-signs/test{0:1d}.jpg"
num_images = 5
test_images = np.ndarray((num_images,32,32,3))
fig = plt.figure(figsize=(5, 5))
test_outputs = [33, 13, 36, 4, 11]
for i in range(1,num_images+1):
    p = PATH.format(i)
    image = plt.imread(p)
    image = (image-128.0)/128.0
    resized_image = resize(image, (32,32))
    test_images[i-1] = resized_image
    sub = fig.add_subplot(num_images, 1, i)
    sub.imshow(test_images[i-1,:,:,:].squeeze())
```

### Predict the Sign Type for Each Image

```
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    test_predictions = sess.run(tf.argmax(logits,1), feed_dict={x: test_images, y: test_outputs, keep_probability: 1.0})
    print(test_predictions)
```

### Analyze Performance

```
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    test_accuracy = evaluate(test_images, test_outputs)
    print("Test Accuracy = {:.3f}".format(test_accuracy))
```

### Output Top 5 Softmax Probabilities For Each Image Found on the Web

For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.html#top_k) could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.

`tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.

Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes.
`tf.nn.top_k` is used to choose the three classes with the highest probability: ``` # (5, 6) array a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202], [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337], [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 , 0.23892179], [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 , 0.16505091], [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137, 0.09155967]]) ``` Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces: ``` TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202], [ 0.28086119, 0.27569815, 0.18063401], [ 0.26076848, 0.23892179, 0.23664738], [ 0.29198961, 0.26234032, 0.16505091], [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5], [0, 1, 4], [0, 5, 1], [1, 3, 5], [1, 4, 3]], dtype=int32)) ``` Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`, you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices. ``` ### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web. ### Feel free to use as many code cells as needed. with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('.')) test_top_5 = sess.run(tf.nn.top_k(tf.nn.softmax(logits),k=5), feed_dict = {x:test_images, y:test_outputs, keep_probability: 1.0}) print(test_top_5) ``` ### Project Writeup Once you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file. > **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. 
You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.

---

## Step 4 (Optional): Visualize the Neural Network's State with Test Images

This section is not required to complete but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.

Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process. For instance, if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer, you could enter conv2 as the tf_activation variable.
For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image. <figure> <img src="visualize_cnn.png" width="380" alt="Combined Image" /> <figcaption> <p></p> <p style="text-align: center;"> Your output should look something like this (above)</p> </figcaption> </figure> <p></p> ``` ### Visualize your network's feature maps here. ### Feel free to use as many code cells as needed. 
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry

def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1, plt_num=1):
    # Here make sure to preprocess your image_input in a way your network expects
    # with size, normalization, etc. if needed
    # image_input =
    # Note: x should be the same name as your network's tensorflow data placeholder variable
    # If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
    activation = tf_activation.eval(session=sess, feed_dict={x: image_input})
    featuremaps = activation.shape[3]
    plt.figure(plt_num, figsize=(15,15))
    for featuremap in range(featuremaps):
        plt.subplot(6, 8, featuremap+1)  # sets the number of feature maps to show on each row and column
        plt.title('FeatureMap ' + str(featuremap))  # displays the feature map number
        # use boolean `and` here, not bitwise `&`, which binds tighter than `!=`
        if activation_min != -1 and activation_max != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, vmax=activation_max, cmap="gray")
        elif activation_max != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
        elif activation_min != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
        else:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
```
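As a side note, instead of one `plt.subplot` call per map, the `(1, H, W, C)` activation array returned by `tf_activation.eval` can be tiled into a single grid image first and shown with one `imshow` call. This numpy helper is an illustration, not part of the original notebook:

```python
import numpy as np

def tile_feature_maps(activation, cols=8):
    """Tile a (1, H, W, C) activation array into one (rows*H, cols*W) grid image."""
    _, h, w, c = activation.shape
    rows = int(np.ceil(c / cols))
    grid = np.zeros((rows * h, cols * w), dtype=activation.dtype)
    for i in range(c):
        r, col = divmod(i, cols)
        # place feature map i at grid cell (r, col)
        grid[r * h:(r + 1) * h, col * w:(col + 1) * w] = activation[0, :, :, i]
    return grid
```

The result can then be displayed with `plt.imshow(tile_feature_maps(activation), cmap="gray")`.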
```
from azure.common import AzureMissingResourceHttpError
from azure.storage.blob import BlockBlobService, PublicAccess
from azure.storage.file import FileService
from azure.storage.table import TableService, Entity


# Blob Service...
def get_block_blob_service(account_name, storage_key):
    return BlockBlobService(account_name=account_name, account_key=storage_key)


def blob_service_create_container(account_name, storage_key, container_name):
    containers = blob_service_list_containers(account_name, storage_key)
    if container_name not in containers:
        block_blob_service = get_block_blob_service(account_name, storage_key)
        block_blob_service.create_container(container_name)
        block_blob_service.set_container_acl(container_name, public_access=PublicAccess.Container)


def blob_service_create_blob_from_bytes(account_name, storage_key, container_name, blob_name, blob):
    block_blob_service = get_block_blob_service(account_name, storage_key)
    block_blob_service.create_blob_from_bytes(container_name, blob_name, blob)


def blob_service_get_blob_to_path(account_name, storage_key, container_name, blob_name, file_path):
    block_blob_service = get_block_blob_service(account_name, storage_key)
    block_blob_service.get_blob_to_path(container_name, blob_name, file_path)


def blob_service_insert(account_name, storage_key, container_name, blob_name, text):
    block_blob_service = get_block_blob_service(account_name, storage_key)
    block_blob_service.create_blob_from_text(container_name, blob_name, text)


def blob_service_list_blobs(account_name, storage_key, container_name):
    blobs = []
    block_blob_service = get_block_blob_service(account_name, storage_key)
    generator = block_blob_service.list_blobs(container_name)
    for blob in generator:
        blobs.append(blob.name)
    return blobs


def blob_service_list_containers(account_name, storage_key):
    containers = []
    block_blob_service = get_block_blob_service(account_name, storage_key)
    generator = block_blob_service.list_containers()
    for container in generator:
        containers.append(container.name)
    return containers


# File Service...
def get_file_service(account_name, storage_key):
    return FileService(account_name=account_name, account_key=storage_key)


def file_service_list_directories_and_files(account_name, storage_key, share_name, directory_name):
    file_or_dirs = []
    file_service = get_file_service(account_name, storage_key)
    generator = file_service.list_directories_and_files(share_name, directory_name)
    for file_or_dir in generator:
        file_or_dirs.append(file_or_dir.name)
    return file_or_dirs


# Table Service...
def get_table_service(account_name, storage_key):
    return TableService(account_name=account_name, account_key=storage_key)


def table_service_get_entity(account_name, storage_key, table, partition_key, row_key):
    table_service = get_table_service(account_name, storage_key)
    return table_service.get_entity(table, partition_key, row_key)


def table_service_insert(account_name, storage_key, table, entity):
    table_service = get_table_service(account_name, storage_key)
    table_service.insert_entity(table, entity)


def table_service_query_entities(account_name, storage_key, table, filter_string):
    # `filter_string` avoids shadowing the built-in `filter`
    table_service = get_table_service(account_name, storage_key)
    return table_service.query_entities(table, filter_string)
```
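Each helper above constructs a fresh service client on every call. If that ever becomes a bottleneck, a small caching decorator could reuse one client per credential pair; this is a sketch of the idea, not part of the original module:

```python
import functools

def memoize_client(factory):
    """Cache service clients keyed by (account_name, storage_key)."""
    cache = {}

    @functools.wraps(factory)
    def wrapper(account_name, storage_key):
        key = (account_name, storage_key)
        if key not in cache:
            # only build a new client the first time this credential pair is seen
            cache[key] = factory(account_name, storage_key)
        return cache[key]

    return wrapper

# Hypothetical usage, wrapping one of the factories above:
# get_block_blob_service = memoize_client(get_block_blob_service)
```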
# Project 1: Navigation

### Test 3 - DDQN model with Prioritized Experience Replay

<sub>Uirá Caiado. August 23, 2018</sub>

#### Abstract

_In this notebook, I will use the Unity ML-Agents environment to train a DDQN model with PER for the first project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893)._

## 1. What we are going to test

Quoting the seminal [Prioritized Experience Replay](https://arxiv.org/abs/1511.05952) paper, from the Deep Mind team, experience replay lets online reinforcement learning agents remember and reuse experiences from the past. Below, I am going to test my implementation of the PER buffer in conjunction with Double DQN. Thus, let's begin by checking the environment where I am going to run these tests.

```
%load_ext version_information
%version_information numpy, unityagents, torch, matplotlib, pandas, gym
```

Now, let's define some meta variables to use in this notebook

```
import os

fig_prefix = 'figures/2018-08-23-'
data_prefix = '../data/2018-08-23-'
s_currentpath = os.getcwd()
```

Also, let's import some of the necessary packages for this experiment.

```
from unityagents import UnityEnvironment
import sys
import os

sys.path.append("../")  # include the root directory as the main
import eda
import pandas as pd
import numpy as np
```

## 2. Training the agent

The environment used for this project is the Udacity version of the Banana Collector environment, from [Unity](https://youtu.be/heVMs3t9qSk). The goal of the agent is to collect as many yellow bananas as possible while avoiding blue bananas. Below, we are going to start this environment.

```
env = UnityEnvironment(file_name="../Banana_Linux_NoVis/Banana.x86_64")
```

Unity Environments contain brains which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
``` # get the default brain brain_name = env.brain_names[0] brain = env.brains[brain_name] ``` Now, we are going to collect some basic information about the environment. ``` # reset the environment env_info = env.reset(train_mode=True)[brain_name] # number of actions action_size = brain.vector_action_space_size # examine the state space state = env_info.vector_observations[0] state_size = len(state) ``` And finally, we are going to train the model. We will consider that this environment is solved if the agent is able to receive an average reward (over 100 episodes) of at least +13. ``` %%time import gym import pickle import random import torch import numpy as np from collections import deque from drlnd.dqn_agent import DQNAgent, DDQNAgent, DDQNPREAgent n_episodes = 2000 eps_start = 1. eps_end=0.01 eps_decay=0.995 max_t = 1000 s_model = 'ddqnpre' agent = DDQNPREAgent(state_size=state_size, action_size=action_size, seed=0) scores = [] # list containing scores from each episode scores_std = [] # List containing the std dev of the last 100 episodes scores_avg = [] # List containing the mean of the last 100 episodes scores_window = deque(maxlen=100) # last 100 scores eps = eps_start # initialize epsilon for i_episode in range(1, n_episodes+1): env_info = env.reset(train_mode=True)[brain_name] # reset the environment state = env_info.vector_observations[0] # get the current state score = 0 # initialize the score for t in range(max_t): # action = np.random.randint(action_size) # select an action action = agent.act(state, eps) env_info = env.step(action)[brain_name] # send the action to the environment next_state = env_info.vector_observations[0] # get the next state reward = env_info.rewards[0] # get the reward done = env_info.local_done[0] # see if episode has finished agent.step(state, action, reward, next_state, done) score += reward # update the score state = next_state # roll over the state to next time step if done: # exit loop if episode finished break 
scores_window.append(score) # save most recent score scores.append(score) # save most recent score scores_std.append(np.std(scores_window)) # save most recent std dev scores_avg.append(np.mean(scores_window)) # save most recent std dev eps = max(eps_end, eps_decay*eps) # decrease epsilon print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="") if i_episode % 100 == 0: print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window))) if np.mean(scores_window)>=13.0: s_msg = '\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}' print(s_msg.format(i_episode, np.mean(scores_window))) torch.save(agent.qnet.state_dict(), '%scheckpoint_%s.pth' % (data_prefix, s_model)) break # save data to use latter d_data = {'episodes': i_episode, 'scores': scores, 'scores_std': scores_std, 'scores_avg': scores_avg, 'scores_window': scores_window} pickle.dump(d_data, open('%ssim-data-%s.data' % (data_prefix, s_model), 'wb')) ``` ## 3. Results The agent using Double DQN with Prioritized Experience Replay was able to solve the Banana Collector environment in 562 episodes of 1000 steps, each. ``` import pickle d_data = pickle.load(open('../data/2018-08-23-sim-data-ddqnpre.data', 'rb')) s_msg = 'Environment solved in {:d} episodes!\tAverage Score: {:.2f} +- {:.2f}' print(s_msg.format(d_data['episodes'], np.mean(d_data['scores_window']), np.std(d_data['scores_window']))) ``` Now, let's plot the rewards per episode. In the right panel, we will plot the rolling average score over 100 episodes $\pm$ its standard deviation, as well as the goal of this project (13+ on average over the last 100 episodes). 
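The training loop maintains `scores_avg` and `scores_std` incrementally with a 100-element `deque`; the same rolling statistics can also be recomputed after the fact from the raw scores, which is handy for sanity-checking the pickled data. A numpy-only sketch, not part of the original notebook:

```python
import numpy as np

def rolling_stats(scores, window=100):
    """Mean and std over the trailing `window` scores, one pair per episode."""
    scores = np.asarray(scores, dtype=float)
    means, stds = [], []
    for i in range(len(scores)):
        # same semantics as a deque(maxlen=window): at most the last `window` items
        tail = scores[max(0, i - window + 1): i + 1]
        means.append(tail.mean())
        stds.append(tail.std())
    return np.array(means), np.array(stds)
```

For instance, `rolling_stats(d_data['scores'])` should reproduce `d_data['scores_avg']` and `d_data['scores_std']`.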
```
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
%matplotlib inline

# recover data
na_raw = np.array(d_data['scores'])
na_mu = np.array(d_data['scores_avg'])
na_sigma = np.array(d_data['scores_std'])

# plot the scores
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5), sharex=True, sharey=True)

# plot the scores by episode
ax1.plot(np.arange(len(na_raw)), na_raw)
ax1.set_xlim(0, len(na_raw)+1)
ax1.set_ylabel('Score')
ax1.set_xlabel('Episode #')
ax1.set_title('raw scores')

# plot the average of these scores
ax2.axhline(y=13., xmin=0.0, xmax=1.0, color='r', linestyle='--', linewidth=0.7, alpha=0.9)
ax2.plot(np.arange(len(na_mu)), na_mu)
ax2.fill_between(np.arange(len(na_mu)), na_mu+na_sigma, na_mu-na_sigma, facecolor='gray', alpha=0.1)
ax2.set_ylabel('Average Score')
ax2.set_xlabel('Episode #')
ax2.set_title('average scores')

f.tight_layout()
# f.savefig(fig_prefix + 'ddqnpre-learning-curve.eps', format='eps', dpi=1200)
f.savefig(fig_prefix + 'ddqnpre-learning-curve.jpg', format='jpg')

env.close()
```

## 4. Conclusion

The Double Deep Q-learning agent using Prioritized Experience Replay was able to solve the environment in 562 episodes, the worst performance among all implementations. It is worth noting, however, that this implementation produced the smoothest learning curve.
```
import pickle

d_ddqnper = pickle.load(open('../data/2018-08-23-sim-data-ddqnpre.data', 'rb'))
d_ddqn = pickle.load(open('../data/2018-08-24-sim-data-ddqn.data', 'rb'))
d_dqn = pickle.load(open('../data/2018-08-24-sim-data-dqn.data', 'rb'))

import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
%matplotlib inline

def recover_data(d_data):
    # recover data
    na_raw = np.array(d_data['scores'])
    na_mu = np.array(d_data['scores_avg'])
    na_sigma = np.array(d_data['scores_std'])
    return na_raw, na_mu, na_sigma

# plot the scores
f, ax2 = plt.subplots(1, 1, figsize=(8, 4), sharex=True, sharey=True)

# labels must be listed in the same order as the data structures they describe
for s_model, d_data in zip(['DDQN with PER', 'DDQN', 'DQN'], [d_ddqnper, d_ddqn, d_dqn]):
    na_raw, na_mu, na_sigma = recover_data(d_data)
    if s_model == 'DDQN with PER':
        ax2.set_xlim(0, 572)
    # plot the average of these scores
    ax2.axhline(y=13., xmin=0.0, xmax=1.0, color='r', linestyle='--', linewidth=0.7, alpha=0.9)
    ax2.plot(np.arange(len(na_mu)), na_mu, label=s_model)
    # ax2.fill_between(np.arange(len(na_mu)), na_mu+na_sigma, na_mu-na_sigma, alpha=0.15)

# format axis
ax2.legend()
ax2.set_title('Learning Curves')
ax2.set_ylabel('Average Score in 100 episodes')
ax2.set_xlabel('Episode #')

# Shrink current axis's height by 10% on the bottom
box = ax2.get_position()
ax2.set_position([box.x0, box.y0 + box.height * 0.1, box.width, box.height * 0.9])

# Put a legend below current axis
lgd = ax2.legend(loc='upper center', bbox_to_anchor=(0.5, -0.10), fancybox=False, shadow=False, ncol=3)

f.tight_layout()
f.savefig(fig_prefix + 'final-comparition-2.eps', format='eps', bbox_extra_artists=(lgd,), bbox_inches='tight', dpi=1200)
```

Finally, let's compare the score distributions generated by the agents. I am going to perform the one-sided Welch's unequal variances t-test for the null hypothesis that the DDQN model has the expected score higher than the other agents on the final 100 episodes of each experiment.
As the implementation of the t-test in [Scipy](https://goo.gl/gs222c) assumes a two-sided test, to perform the one-sided test we divide the returned p-value by 2 before comparing it to a critical value of 0.05, and additionally require that the t-value is greater than zero.

```
import pandas as pd

def extract_info(s, d_data):
    return {'model': s,
            'episodes': d_data['episodes'],
            'mean_score': np.mean(d_data['scores_window']),
            'std_score': np.std(d_data['scores_window'])}

l_data = [extract_info(s, d) for s, d in zip(['DDQN with PER', 'DDQN', 'DQN'], [d_ddqnper, d_ddqn, d_dqn])]
df = pd.DataFrame(l_data)
df.index = df.model
df.drop('model', axis=1, inplace=True)
print(df.sort_values(by='episodes'))

import scipy.stats

# perform the Welch's t-tests
tval, p_value = scipy.stats.ttest_ind(d_ddqn['scores'], d_dqn['scores'], equal_var=False)
print("DDQN vs. DQN: t-value = {:0.6f}, p-value = {:0.8f}".format(tval, p_value))
tval, p_value = scipy.stats.ttest_ind(d_ddqn['scores'], d_ddqnper['scores'], equal_var=False)
print("DDQN vs. DDQNPRE: t-value = {:0.6f}, p-value = {:0.8f}".format(tval, p_value))
```

There was no significant difference between the performances of the agents.
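The two-sided-to-one-sided conversion used above is easy to get wrong, so a tiny helper (an illustration, not from the original notebook) can make the rule explicit for the alternative hypothesis mean(a) > mean(b):

```python
def one_sided_p_value(t_value, two_sided_p):
    """Convert a two-sided p-value (as returned by scipy.stats.ttest_ind)
    into a one-sided p-value for H1: mean(a) > mean(b).

    The one-sided p is p/2 when the t-statistic is positive (the effect
    points in the hypothesized direction), and 1 - p/2 otherwise.
    """
    return two_sided_p / 2 if t_value > 0 else 1 - two_sided_p / 2
```

For example, a two-sided p-value of 0.04 with a positive t-statistic corresponds to a one-sided p-value of 0.02.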
# Convolutional Autoencoder

Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.

```
%matplotlib inline

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
#from tqdm import tqdm

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)

img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
```

## Network Architecture

The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.

![Convolutional Autoencoder](assets/convolutional_autoencoder.png)

Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.

### What's going on with the decoder

Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **deconvolutional** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer.
For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling. > **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). 
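Before looking at a TensorFlow solution, the upsampling operation itself is easy to sketch in plain NumPy (a toy illustration, not TensorFlow's implementation): nearest-neighbor interpolation at an integer scale factor simply repeats each pixel along both spatial axes.

```python
import numpy as np

def upsample_nn(x, factor=2):
    # Nearest-neighbor upsampling: repeat each pixel `factor` times
    # along both spatial axes, so every pixel becomes a factor x factor block.
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

patch = np.array([[1., 2.],
                  [3., 4.]])
print(upsample_nn(patch))
# [[1. 1. 2. 2.]
#  [1. 1. 2. 2.]
#  [3. 3. 4. 4.]
#  [3. 3. 4. 4.]]
```

A convolution applied after this repetition is what smooths the blocky result into a learned reconstruction.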
``` learning_rate = 0.001 inputs_ = tf.placeholder(tf.float32, shape=(None,28,28,1), name='inputs') targets_ = tf.placeholder(tf.float32, shape=(None,28,28,1), name='targets') ### Encoder conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu) # Now 28x28x16 maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2)) # Now 14x14x16 conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu) # Now 14x14x8 maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2)) # Now 7x7x8 conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu) # Now 7x7x8 encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same') # Now 4x4x8 ### Decoder starts from the encoded layer, not conv3 upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7)) # Now 7x7x8 conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu) # Now 7x7x8 upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14)) # Now 14x14x8 conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu) # Now 14x14x8 upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28)) # Now 28x28x8 conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu) # Now 28x28x16 logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None) #Now 28x28x1 # Pass logits through sigmoid to get reconstructed image decoded = tf.nn.sigmoid(logits, name='decoded') # Pass logits through sigmoid and calculate the cross-entropy loss loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits) # Get cost and define the optimizer cost = tf.reduce_mean(loss) opt = tf.train.AdamOptimizer(learning_rate).minimize(cost) ``` ## Training As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays. 
``` sess = tf.Session() epochs = 20 batch_size = 256 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) imgs = batch[0].reshape((-1, 28, 28, 1)) batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs, targets_: imgs}) if ii % 100 == 0: print("Epoch: {}/{}...".format(e+1, epochs), "Training loss: {:.4f}".format(batch_cost)) fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4)) in_imgs = mnist.test.images[:10] reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))}) for images, row in zip([in_imgs, reconstructed], axes): for img, ax in zip(images, row): ax.imshow(img.reshape((28, 28)), cmap='Greys_r') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) fig.tight_layout(pad=0.1) sess.close() ``` ## Denoising As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images. ![Denoising autoencoder](assets/denoising.png) Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before. > **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers. 
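The noise-generation step described above can be sketched on its own in plain NumPy (the random images here are stand-ins for MNIST batches): add scaled Gaussian noise, then clip back into the valid pixel range.

```python
import numpy as np

rng = np.random.default_rng(0)
imgs = rng.random((4, 28, 28, 1))   # stand-in batch of images in [0, 1]
noise_factor = 0.5

# Add Gaussian noise scaled by noise_factor, then clip so
# pixel values stay in the valid [0, 1] range.
noisy_imgs = imgs + noise_factor * rng.standard_normal(imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
```

During training, `noisy_imgs` would be fed as the input and the clean `imgs` as the target.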
``` learning_rate = 0.001 inputs_ = tf.placeholder(tf.float32, shape=(None,28,28,1), name='inputs') targets_ = tf.placeholder(tf.float32, shape=(None,28,28,1), name='targets') ### Encoder, with the deeper 32-32-16 depths suggested above conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu) # Now 28x28x32 maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2)) # Now 14x14x32 conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu) # Now 14x14x32 maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2)) # Now 7x7x32 conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu) # Now 7x7x16 encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same') # Now 4x4x16 ### Decoder starts from the encoded layer, not conv3 upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7)) # Now 7x7x16 conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu) # Now 7x7x16 upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14)) # Now 14x14x16 conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu) # Now 14x14x32 upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28)) # Now 28x28x32 conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu) # Now 28x28x32 logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None) #Now 28x28x1 # Pass logits through sigmoid to get reconstructed image decoded = tf.nn.sigmoid(logits, name='decoded') # Pass logits through sigmoid and calculate the cross-entropy loss loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits) # Get cost and define the optimizer cost = tf.reduce_mean(loss) opt = tf.train.AdamOptimizer(learning_rate).minimize(cost) sess = tf.Session() epochs = 100 batch_size = 256 # Sets how much noise we're adding to the MNIST images noise_factor = 0.5 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) # Get images from the batch imgs = batch[0].reshape((-1, 28, 28, 1)) # Add random noise to the input images noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape) # Clip the images to be between 0 and 1 noisy_imgs = np.clip(noisy_imgs, 0., 1.) # Noisy images as inputs, original images as targets batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs, targets_: imgs}) if ii % 100 == 0: print("Epoch: {}/{}...".format(e+1, epochs), "Training loss: {:.4f}".format(batch_cost)) ``` ## Checking out the performance Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is. ``` fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4)) in_imgs = mnist.test.images[50:60] noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape) noisy_imgs = np.clip(noisy_imgs, 0., 1.) reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))}) for images, row in zip([noisy_imgs, reconstructed], axes): for img, ax in zip(images, row): ax.imshow(img.reshape((28, 28)), cmap='Greys_r') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) fig.tight_layout(pad=0.1) ```
##### Copyright 2019 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Training and Evaluation with TensorFlow Keras <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/alpha/guide/keras/training_and_evaluation"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/guide/keras/training_and_evaluation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/guide/keras/training_and_evaluation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> This guide covers training, evaluation, and prediction (inference) of models in TensorFlow 2.0 in two broad situations: - When using built-in APIs for training & validation (such as `model.fit()`, `model.evaluate()`, `model.predict()`). This is covered in the section **"Using built-in training & evaluation loops"**. - When writing custom loops from scratch using eager execution and the `GradientTape` object. This is covered in the section **"Writing your own training & evaluation loops from scratch"**. 
In general, whether you are using built-in loops or writing your own, model training & evaluation works strictly in the same way across every kind of Keras model -- Sequential models, models built with the Functional API, and models written from scratch via model subclassing. This guide doesn't cover distributed training. ## Setup ``` !pip install pydot !apt-get install graphviz from __future__ import absolute_import, division, print_function !pip install tensorflow-gpu==2.0.0-alpha0 import tensorflow as tf tf.keras.backend.clear_session()  # For easy reset of notebook state. ``` ## Part I: Using built-in training & evaluation loops When passing data to the built-in training loops of a model, you should either use **Numpy arrays** (if your data is small and fits in memory) or **tf.data Dataset** objects. In the next few paragraphs, we'll use the MNIST dataset as Numpy arrays, in order to demonstrate how to use optimizers, losses, and metrics. ### API overview: a first end-to-end example Let's consider the following model (here, we build it with the Functional API, but it could be a Sequential model or a subclassed model as well): ``` from tensorflow import keras from tensorflow.keras import layers inputs = keras.Input(shape=(784,), name='digits') x = layers.Dense(64, activation='relu', name='dense_1')(inputs) x = layers.Dense(64, activation='relu', name='dense_2')(x) outputs = layers.Dense(10, activation='softmax', name='predictions')(x) model = keras.Model(inputs=inputs, outputs=outputs) ``` Here's what the typical end-to-end workflow looks like, consisting of training, validation on a holdout set generated from the original training data, and finally evaluation on the test data: ``` # Load a toy dataset for the sake of this example (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() # Preprocess the data (these are Numpy arrays) x_train = x_train.reshape(60000, 784).astype('float32') / 255 x_test = x_test.reshape(10000, 784).astype('float32') / 
255 # Reserve 10,000 samples for validation x_val = x_train[-10000:] y_val = y_train[-10000:] x_train = x_train[:-10000] y_train = y_train[:-10000] # Specify the training configuration (optimizer, loss, metrics) model.compile(optimizer=keras.optimizers.RMSprop(),  # Optimizer # Loss function to minimize loss=keras.losses.SparseCategoricalCrossentropy(), # List of metrics to monitor metrics=[keras.metrics.SparseCategoricalAccuracy()]) # Train the model by slicing the data into "batches" # of size "batch_size", and repeatedly iterating over # the entire dataset for a given number of "epochs" print('# Fit model on training data') history = model.fit(x_train, y_train, batch_size=64, epochs=3, # We pass some validation for # monitoring validation loss and metrics # at the end of each epoch validation_data=(x_val, y_val)) # The returned "history" object holds a record # of the loss values and metric values during training print('\nhistory dict:', history.history) # Evaluate the model on the test data using `evaluate` print('\n# Evaluate on test data') results = model.evaluate(x_test, y_test, batch_size=128) print('test loss, test acc:', results) # Generate predictions (probabilities -- the output of the last layer) # on new data using `predict` print('\n# Generate predictions for 3 samples') predictions = model.predict(x_test[:3]) print('predictions shape:', predictions.shape) ``` ### Specifying a loss, metrics, and an optimizer To train a model with `fit`, you need to specify a loss function, an optimizer, and optionally, some metrics to monitor. You pass these to the model as arguments to the `compile()` method: ``` model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3), loss=keras.losses.SparseCategoricalCrossentropy(), metrics=[keras.metrics.SparseCategoricalAccuracy()]) ``` The `metrics` argument should be a list -- your model can have any number of metrics. 
If your model has multiple outputs, you can specify different losses and metrics for each output, and you can modulate the contribution of each output to the total loss of the model. You will find more details about this in the section "**Passing data to multi-input, multi-output models**". Note that in many cases, the loss and metrics are specified via string identifiers, as a shortcut: ``` model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3), loss='sparse_categorical_crossentropy', metrics=['sparse_categorical_accuracy']) ``` For later reuse, let's put our model definition and compile step in functions; we will call them several times across different examples in this guide. ``` def get_uncompiled_model(): inputs = keras.Input(shape=(784,), name='digits') x = layers.Dense(64, activation='relu', name='dense_1')(inputs) x = layers.Dense(64, activation='relu', name='dense_2')(x) outputs = layers.Dense(10, activation='softmax', name='predictions')(x) model = keras.Model(inputs=inputs, outputs=outputs) return model def get_compiled_model(): model = get_uncompiled_model() model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3), loss='sparse_categorical_crossentropy', metrics=['sparse_categorical_accuracy']) return model ``` #### Many built-in optimizers, losses, and metrics are available In general, you won't have to create your own losses, metrics, or optimizers from scratch, because what you need is likely already part of the Keras API: Optimizers: - `SGD()` (with or without momentum) - `RMSprop()` - `Adam()` - etc. Losses: - `MeanSquaredError()` - `KLDivergence()` - `CosineSimilarity()` - etc. Metrics: - `AUC()` - `Precision()` - `Recall()` - etc. #### Writing custom losses and metrics If you need a metric that isn't part of the API, you can easily create custom metrics by subclassing the `Metric` class. You will need to implement 4 methods: - `__init__(self)`, in which you will create state variables for your metric. 
- `update_state(self, y_true, y_pred, sample_weight=None)`, which uses the targets `y_true` and the model predictions `y_pred` to update the state variables. - `result(self)`, which uses the state variables to compute the final results. - `reset_states(self)`, which reinitializes the state of the metric. State update and results computation are kept separate (in `update_state()` and `result()`, respectively) because in some cases, results computation might be very expensive, and would only be done periodically. Here's a simple example showing how to implement a `CategoricalTruePositives` metric, that counts how many samples were correctly classified as belonging to a given class: ``` class CategoricalTruePositives(keras.metrics.Metric): def __init__(self, name='categorical_true_positives', **kwargs): super(CategoricalTruePositives, self).__init__(name=name, **kwargs) self.true_positives = self.add_weight(name='tp', initializer='zeros') def update_state(self, y_true, y_pred, sample_weight=None): # Take the argmax over the class axis to get predicted class indices y_pred = tf.argmax(y_pred, axis=1) values = tf.equal(tf.cast(y_true, 'int32'), tf.cast(y_pred, 'int32')) values = tf.cast(values, 'float32') if sample_weight is not None: sample_weight = tf.cast(sample_weight, 'float32') values = tf.multiply(values, sample_weight) return self.true_positives.assign_add(tf.reduce_sum(values)) def result(self): return tf.identity(self.true_positives) def reset_states(self): # The state of the metric will be reset at the start of each epoch. self.true_positives.assign(0.) model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3), loss=keras.losses.SparseCategoricalCrossentropy(), metrics=[CategoricalTruePositives()]) model.fit(x_train, y_train, batch_size=64, epochs=3) ``` #### Handling losses and metrics that don't fit the standard signature The overwhelming majority of losses and metrics can be computed from `y_true` and `y_pred`, where `y_pred` is an output of your model. But not all of them. 
For instance, a regularization loss may only require the activation of a layer (there are no targets in this case), and this activation may not be a model output. In such cases, you can call `self.add_loss(loss_value)` from inside the `call` method of a custom layer. Here's a simple example that adds activity regularization (note that activity regularization is built into all Keras layers -- this layer is just for the sake of providing a concrete example): ``` class ActivityRegularizationLayer(layers.Layer): def call(self, inputs): self.add_loss(tf.reduce_sum(inputs) * 0.1) return inputs  # Pass-through layer. inputs = keras.Input(shape=(784,), name='digits') x = layers.Dense(64, activation='relu', name='dense_1')(inputs) # Insert activity regularization as a layer x = ActivityRegularizationLayer()(x) x = layers.Dense(64, activation='relu', name='dense_2')(x) outputs = layers.Dense(10, activation='softmax', name='predictions')(x) model = keras.Model(inputs=inputs, outputs=outputs) model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3), loss='sparse_categorical_crossentropy') # The displayed loss will be much higher than before # due to the regularization component. model.fit(x_train, y_train, batch_size=64, epochs=1) ``` You can do the same for logging metric values: ``` class MetricLoggingLayer(layers.Layer): def call(self, inputs): # The `aggregation` argument defines # how to aggregate the per-batch values # over each epoch: # in this case we simply average them. self.add_metric(keras.backend.std(inputs), name='std_of_activation', aggregation='mean') return inputs  # Pass-through layer. inputs = keras.Input(shape=(784,), name='digits') x = layers.Dense(64, activation='relu', name='dense_1')(inputs) # Insert std logging as a layer. 
x = MetricLoggingLayer()(x) x = layers.Dense(64, activation='relu', name='dense_2')(x) outputs = layers.Dense(10, activation='softmax', name='predictions')(x) model = keras.Model(inputs=inputs, outputs=outputs) model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3), loss='sparse_categorical_crossentropy') model.fit(x_train, y_train, batch_size=64, epochs=1) ``` In the [Functional API](functional.ipynb), you can also call `model.add_loss(loss_tensor)`, or `model.add_metric(metric_tensor, name, aggregation)`. Here's a simple example: ``` inputs = keras.Input(shape=(784,), name='digits') x1 = layers.Dense(64, activation='relu', name='dense_1')(inputs) x2 = layers.Dense(64, activation='relu', name='dense_2')(x1) outputs = layers.Dense(10, activation='softmax', name='predictions')(x2) model = keras.Model(inputs=inputs, outputs=outputs) model.add_loss(tf.reduce_sum(x1) * 0.1) model.add_metric(keras.backend.std(x1), name='std_of_activation', aggregation='mean') model.compile(optimizer=keras.optimizers.RMSprop(1e-3), loss='sparse_categorical_crossentropy') model.fit(x_train, y_train, batch_size=64, epochs=1) ``` #### Automatically setting apart a validation holdout set In the first end-to-end example you saw, we used the `validation_data` argument to pass a tuple of Numpy arrays `(x_val, y_val)` to the model for evaluating a validation loss and validation metrics at the end of each epoch. Here's another option: the argument `validation_split` allows you to automatically reserve part of your training data for validation. The argument value represents the fraction of the data to be reserved for validation, so it should be set to a number higher than 0 and lower than 1. For instance, `validation_split=0.2` means "use 20% of the data for validation", and `validation_split=0.6` means "use 60% of the data for validation". The way the validation is computed is by *taking the last x% samples of the arrays received by the `fit` call, before any shuffling*. 
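To make the split rule concrete, here is a tiny NumPy sketch (not Keras internals, just the same arithmetic) of how the last fraction of an unshuffled array would be reserved for validation:

```python
import numpy as np

x = np.arange(10)            # stand-in training data
validation_split = 0.2

# Keras-style rule: the *last* fraction of the (unshuffled) data
# becomes the validation set.
split_at = int(len(x) * (1 - validation_split))
x_train, x_val = x[:split_at], x[split_at:]
print(x_train)  # [0 1 2 3 4 5 6 7]
print(x_val)    # [8 9]
```

Because the split always takes the tail of the arrays, data that is ordered by class should be shuffled before calling `fit` with `validation_split`.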
You can only use `validation_split` when training with Numpy data. ``` model = get_compiled_model() model.fit(x_train, y_train, batch_size=64, validation_split=0.2, epochs=3) ``` ### Training & evaluation from tf.data Datasets In the past few paragraphs, you've seen how to handle losses, metrics, and optimizers, and you've seen how to use the `validation_data` and `validation_split` arguments in `fit`, when your data is passed as Numpy arrays. Let's now take a look at the case where your data comes in the form of a tf.data Dataset. The tf.data API is a set of utilities in TensorFlow 2.0 for loading and preprocessing data in a way that's fast and scalable. For a complete guide about creating Datasets, see [the tf.data documentation](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf). You can pass a Dataset instance directly to the methods `fit()`, `evaluate()`, and `predict()`: ``` model = get_compiled_model() # First, let's create a training Dataset instance. # For the sake of our example, we'll use the same MNIST data as before. train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) # Shuffle and slice the dataset. train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64) # Now we get a test dataset. test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)) test_dataset = test_dataset.batch(64) # Since the dataset already takes care of batching, # we don't pass a `batch_size` argument. model.fit(train_dataset, epochs=3) # You can also evaluate or predict on a dataset. print('\n# Evaluate') model.evaluate(test_dataset) ``` Note that the Dataset is reset at the end of each epoch, so it can be reused for the next epoch. If you want to run training only on a specific number of batches from this Dataset, you can pass the `steps_per_epoch` argument, which specifies how many training steps the model should run using this Dataset before moving on to the next epoch. 
If you do this, the dataset is not reset at the end of each epoch; instead, we just keep drawing the next batches. The dataset will eventually run out of data (unless it is an infinitely-looping dataset). ``` model = get_compiled_model() # Prepare the training dataset train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64) # Only use 100 batches per epoch (that's 64 * 100 samples) model.fit(train_dataset, epochs=3, steps_per_epoch=100) ``` #### Using a validation dataset You can pass a Dataset instance as the `validation_data` argument in `fit`: ``` model = get_compiled_model() # Prepare the training dataset train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64) # Prepare the validation dataset val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val)) val_dataset = val_dataset.batch(64) model.fit(train_dataset, epochs=3, validation_data=val_dataset) ``` At the end of each epoch, the model will iterate over the validation Dataset and compute the validation loss and validation metrics. 
If you want to run validation only on a specific number of batches from this Dataset, you can pass the `validation_steps` argument, which specifies how many validation steps the model should run with the validation Dataset before interrupting validation and moving on to the next epoch: ``` model = get_compiled_model() # Prepare the training dataset train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64) # Prepare the validation dataset val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val)) val_dataset = val_dataset.batch(64) model.fit(train_dataset, epochs=3, # Only run validation using the first 10 batches of the dataset # using the `validation_steps` argument validation_data=val_dataset, validation_steps=10) ``` Note that the validation Dataset will be reset after each use (so that you will always be evaluating on the same samples from epoch to epoch). The argument `validation_split` (generating a holdout set from the training data) is not supported when training from Dataset objects, since this feature requires the ability to index the samples of the datasets, which is not possible in general with the Dataset API. ### Other input formats supported Besides Numpy arrays and TensorFlow Datasets, it's possible to train a Keras model using Pandas dataframes, or from Python generators that yield batches. In general, we recommend that you use Numpy input data if your data is small and fits in memory, and Datasets otherwise. ### Using sample weighting and class weighting Besides input data and target data, it is possible to pass sample weights or class weights to a model when using `fit`: - When training from Numpy data: via the `sample_weight` and `class_weight` arguments. - When training from Datasets: by having the Dataset return a tuple `(input_batch, target_batch, sample_weight_batch)` . 
A "sample weights" array is an array of numbers that specify how much weight each sample in a batch should have in computing the total loss. It is commonly used in imbalanced classification problems (the idea being to give more weight to rarely-seen classes). When the weights used are ones and zeros, the array can be used as a *mask* for the loss function (entirely discarding the contribution of certain samples to the total loss). A "class weights" dict is a more specific instance of the same concept: it maps class indices to the sample weight that should be used for samples belonging to this class. For instance, if class "0" has half as many samples as class "1" in your data, you could use `class_weight={0: 1., 1: 0.5}`. Here's a Numpy example where we use class weights or sample weights to give more importance to the correct classification of class #5 (which is the digit "5" in the MNIST dataset). ``` import numpy as np class_weight = {0: 1., 1: 1., 2: 1., 3: 1., 4: 1., # Set weight "2" for class "5", # making this class 2x more important 5: 2., 6: 1., 7: 1., 8: 1., 9: 1.} model.fit(x_train, y_train, class_weight=class_weight, batch_size=64, epochs=4) # Here's the same example using `sample_weight` instead: sample_weight = np.ones(shape=(len(y_train),)) sample_weight[y_train == 5] = 2. model = get_compiled_model() model.fit(x_train, y_train, sample_weight=sample_weight, batch_size=64, epochs=4) ``` Here's a matching Dataset example: ``` sample_weight = np.ones(shape=(len(y_train),)) sample_weight[y_train == 5] = 2. # Create a Dataset that includes sample weights # (3rd element in the return tuple). train_dataset = tf.data.Dataset.from_tensor_slices( (x_train, y_train, sample_weight)) # Shuffle and slice the dataset. 
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64) model = get_compiled_model() model.fit(train_dataset, epochs=3) ``` ### Passing data to multi-input, multi-output models In the previous examples, we were considering a model with a single input (a tensor of shape `(784,)`) and a single output (a prediction tensor of shape `(10,)`). But what about models that have multiple inputs or outputs? Consider the following model, which has an image input of shape `(32, 32, 3)` (that's `(height, width, channels)`) and a timeseries input of shape `(None, 10)` (that's `(timesteps, features)`). Our model will have two outputs computed from the combination of these inputs: a "score" (of shape `(1,)`) and a probability distribution over 5 classes (of shape `(5,)`). ``` from tensorflow import keras from tensorflow.keras import layers image_input = keras.Input(shape=(32, 32, 3), name='img_input') timeseries_input = keras.Input(shape=(None, 10), name='ts_input') x1 = layers.Conv2D(3, 3)(image_input) x1 = layers.GlobalMaxPooling2D()(x1) x2 = layers.Conv1D(3, 3)(timeseries_input) x2 = layers.GlobalMaxPooling1D()(x2) x = layers.concatenate([x1, x2]) score_output = layers.Dense(1, name='score_output')(x) class_output = layers.Dense(5, activation='softmax', name='class_output')(x) model = keras.Model(inputs=[image_input, timeseries_input], outputs=[score_output, class_output]) ``` Let's plot this model, so you can clearly see what we're doing here (note that the shapes shown in the plot are batch shapes, rather than per-sample shapes). 
``` keras.utils.plot_model(model, 'multi_input_and_output_model.png', show_shapes=True) ``` At compilation time, we can specify different losses for different outputs, by passing the loss functions as a list: ``` model.compile( optimizer=keras.optimizers.RMSprop(1e-3), loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()]) ``` If we only passed a single loss function to the model, the same loss function would be applied to every output, which is not appropriate here. Likewise for metrics: ``` model.compile( optimizer=keras.optimizers.RMSprop(1e-3), loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()], metrics=[[keras.metrics.MeanAbsolutePercentageError(), keras.metrics.MeanAbsoluteError()], [keras.metrics.CategoricalAccuracy()]]) ``` Since we gave names to our output layers, we could also specify per-output losses and metrics via a dict: ``` model.compile( optimizer=keras.optimizers.RMSprop(1e-3), loss={'score_output': keras.losses.MeanSquaredError(), 'class_output': keras.losses.CategoricalCrossentropy()}, metrics={'score_output': [keras.metrics.MeanAbsolutePercentageError(), keras.metrics.MeanAbsoluteError()], 'class_output': [keras.metrics.CategoricalAccuracy()]}) ``` We recommend the use of explicit names and dicts if you have more than 2 outputs. 
It's possible to give different weights to different output-specific losses (for instance, one might wish to privilege the "score" loss in our example, by giving it 2x the importance of the class loss), using the `loss_weights` argument: ``` model.compile( optimizer=keras.optimizers.RMSprop(1e-3), loss={'score_output': keras.losses.MeanSquaredError(), 'class_output': keras.losses.CategoricalCrossentropy()}, metrics={'score_output': [keras.metrics.MeanAbsolutePercentageError(), keras.metrics.MeanAbsoluteError()], 'class_output': [keras.metrics.CategoricalAccuracy()]}, loss_weights={'score_output': 2., 'class_output': 1.}) ``` You could also choose not to compute a loss for certain outputs, if these outputs are meant for prediction but not for training: ``` # List loss version model.compile( optimizer=keras.optimizers.RMSprop(1e-3), loss=[None, keras.losses.CategoricalCrossentropy()]) # Or dict loss version model.compile( optimizer=keras.optimizers.RMSprop(1e-3), loss={'class_output': keras.losses.CategoricalCrossentropy()}) ``` Passing data to a multi-input or multi-output model in `fit` works in a similar way as specifying a loss function in `compile`: you can pass *lists of Numpy arrays (with 1:1 mapping to the outputs that received a loss function)* or *dicts mapping output names to Numpy arrays of training data*.
``` model.compile( optimizer=keras.optimizers.RMSprop(1e-3), loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()]) # Generate dummy Numpy data img_data = np.random.random_sample(size=(100, 32, 32, 3)) ts_data = np.random.random_sample(size=(100, 20, 10)) score_targets = np.random.random_sample(size=(100, 1)) class_targets = np.random.random_sample(size=(100, 5)) # Fit on lists model.fit([img_data, ts_data], [score_targets, class_targets], batch_size=32, epochs=3) # Alternatively, fit on dicts model.fit({'img_input': img_data, 'ts_input': ts_data}, {'score_output': score_targets, 'class_output': class_targets}, batch_size=32, epochs=3) ``` Here's the Dataset use case: similar to what we did for Numpy arrays, the Dataset should return a tuple of dicts. ``` train_dataset = tf.data.Dataset.from_tensor_slices( ({'img_input': img_data, 'ts_input': ts_data}, {'score_output': score_targets, 'class_output': class_targets})) train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64) model.fit(train_dataset, epochs=3) ``` ### Using callbacks Callbacks in Keras are objects that are called at different points during training (at the start of an epoch, at the end of a batch, at the end of an epoch, etc.) and which can be used to implement behaviors such as: - Doing validation at different points during training (beyond the built-in per-epoch validation) - Checkpointing the model at regular intervals or when it exceeds a certain accuracy threshold - Changing the learning rate of the model when training seems to be plateauing - Doing fine-tuning of the top layers when training seems to be plateauing - Sending email or instant message notifications when training ends or when a certain performance threshold is exceeded - Etc.
Callbacks can be passed as a list to your call to `fit`: ``` model = get_compiled_model() callbacks = [ keras.callbacks.EarlyStopping( # Stop training when `val_loss` is no longer improving monitor='val_loss', # "no longer improving" being defined as "no better than 1e-2 less" min_delta=1e-2, # "no longer improving" being further defined as "for at least 2 epochs" patience=2, verbose=1) ] model.fit(x_train, y_train, epochs=20, batch_size=64, callbacks=callbacks, validation_split=0.2) ``` #### Many built-in callbacks are available - `ModelCheckpoint`: Periodically save the model. - `EarlyStopping`: Stop training when the validation metrics are no longer improving. - `TensorBoard`: Periodically write model logs that can be visualized in TensorBoard (more details in the section "Visualization"). - `CSVLogger`: Streams loss and metrics data to a CSV file. - etc. #### Writing your own callback You can create a custom callback by extending the base class `keras.callbacks.Callback`. A callback has access to its associated model through the class property `self.model`. Here's a simple example saving a list of per-batch loss values during training: ```python class LossHistory(keras.callbacks.Callback): def on_train_begin(self, logs): self.losses = [] def on_batch_end(self, batch, logs): self.losses.append(logs.get('loss')) ``` ### Checkpointing models When you're training a model on relatively large datasets, it's crucial to save checkpoints of your model at frequent intervals. The easiest way to achieve this is with the `ModelCheckpoint` callback: ``` model = get_compiled_model() callbacks = [ keras.callbacks.ModelCheckpoint( filepath='mymodel_{epoch}.h5', # Path where to save the model # The two parameters below mean that we will overwrite # the current checkpoint if and only if # the `val_loss` score has improved.
save_best_only=True, monitor='val_loss', verbose=1) ] model.fit(x_train, y_train, epochs=3, batch_size=64, callbacks=callbacks, validation_split=0.2) ``` You can also write your own callback for saving and restoring models. For a complete guide on serialization and saving, see [Guide to Saving and Serializing Models](./saving_and_serializing.ipynb). ### Using learning rate schedules A common pattern when training deep learning models is to gradually reduce the learning rate as training progresses. This is generally known as "learning rate decay". The learning rate decay schedule could be static (fixed in advance, as a function of the current epoch or the current batch index), or dynamic (responding to the current behavior of the model, in particular the validation loss). #### Passing a schedule to an optimizer You can easily use a static learning rate decay schedule by passing a schedule object as the `learning_rate` argument in your optimizer: ``` initial_learning_rate = 0.1 lr_schedule = keras.optimizers.schedules.ExponentialDecay( initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True) optimizer = keras.optimizers.RMSprop(learning_rate=lr_schedule) ``` Several built-in schedules are available: `ExponentialDecay`, `PiecewiseConstantDecay`, `PolynomialDecay`, and `InverseTimeDecay`. #### Using callbacks to implement a dynamic learning rate schedule A dynamic learning rate schedule (for instance, decreasing the learning rate when the validation loss is no longer improving) cannot be achieved with these schedule objects since the optimizer does not have access to validation metrics. However, callbacks do have access to all metrics, including validation metrics! You can thus achieve this pattern by using a callback that modifies the current learning rate on the optimizer. In fact, this is even built-in as the `ReduceLROnPlateau` callback.
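To make the behavior of such a callback concrete, here is a minimal sketch of the plateau logic in plain Python (illustrative only; the real `ReduceLROnPlateau` has more options such as `cooldown` and `min_lr`, and slightly different bookkeeping):

```python
# Minimal sketch of plateau-based learning rate reduction (not the Keras code).
def reduce_lr_on_plateau(val_losses, lr=0.1, factor=0.5, patience=2, min_delta=1e-4):
    """Return the learning rate after replaying a history of validation losses."""
    best = float('inf')
    wait = 0
    for loss in val_losses:
        if loss < best - min_delta:   # improvement: remember it, reset the counter
            best = loss
            wait = 0
        else:                         # plateau: count epochs without improvement
            wait += 1
            if wait > patience:
                lr *= factor          # cut the learning rate
                wait = 0
    return lr

# The loss stalls after the second epoch, so the rate is halved once.
print(reduce_lr_on_plateau([1.0, 0.8, 0.8, 0.8, 0.8]))  # 0.05
```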
### Visualizing loss and metrics during training The best way to keep an eye on your model during training is to use [TensorBoard](https://www.tensorflow.org/tensorboard), a browser-based application that you can run locally that provides you with: - Live plots of the loss and metrics for training and evaluation - (optionally) Visualizations of the histograms of your layer activations - (optionally) 3D visualizations of the embedding spaces learned by your `Embedding` layers If you have installed TensorFlow with pip, you should be able to launch TensorBoard from the command line: ``` tensorboard --logdir=/full_path_to_your_logs ``` #### Using the TensorBoard callback The easiest way to use TensorBoard with a Keras model and the `fit` method is the `TensorBoard` callback. In the simplest case, just specify where you want the callback to write logs, and you're good to go: ```python tensorboard_cbk = keras.callbacks.TensorBoard(log_dir='/full_path_to_your_logs') model.fit(dataset, epochs=10, callbacks=[tensorboard_cbk]) ``` The `TensorBoard` callback has many useful options, including whether to log embeddings, histograms, and how often to write logs: ```python keras.callbacks.TensorBoard( log_dir='/full_path_to_your_logs', histogram_freq=0, # How often to log histogram visualizations embeddings_freq=0, # How often to log embedding visualizations update_freq='epoch') # How often to write logs (default: once per epoch) ``` ## Part II: Writing your own training & evaluation loops from scratch If you want lower-level control over your training & evaluation loops than what `fit()` and `evaluate()` provide, you should write your own. It's actually pretty simple! But you should be ready to do a lot more debugging on your own. ### Using the GradientTape: a first end-to-end example Calling a model inside a `GradientTape` scope enables you to retrieve the gradients of the trainable weights of the layer with respect to a loss value.
Using an optimizer instance, you can use these gradients to update these variables (which you can retrieve using `model.trainable_variables`). Let's reuse our initial MNIST model from Part I, and let's train it using mini-batch gradient descent with a custom training loop. ``` # Get the model. inputs = keras.Input(shape=(784,), name='digits') x = layers.Dense(64, activation='relu', name='dense_1')(inputs) x = layers.Dense(64, activation='relu', name='dense_2')(x) outputs = layers.Dense(10, activation='softmax', name='predictions')(x) model = keras.Model(inputs=inputs, outputs=outputs) # Instantiate an optimizer. optimizer = keras.optimizers.SGD(learning_rate=1e-3) # Instantiate a loss function. loss_fn = keras.losses.SparseCategoricalCrossentropy() # Prepare the training dataset. batch_size = 64 train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size) # Iterate over epochs. for epoch in range(3): print('Start of epoch %d' % (epoch,)) # Iterate over the batches of the dataset. for step, (x_batch_train, y_batch_train) in enumerate(train_dataset): # Open a GradientTape to record the operations run # during the forward pass, which enables autodifferentiation. with tf.GradientTape() as tape: # Run the forward pass of the layer. # The operations that the layer applies # to its inputs are going to be recorded # on the GradientTape. logits = model(x_batch_train) # Logits for this minibatch # Compute the loss value for this minibatch. loss_value = loss_fn(y_batch_train, logits) # Use the gradient tape to automatically retrieve # the gradients of the trainable variables with respect to the loss. grads = tape.gradient(loss_value, model.trainable_variables) # Run one step of gradient descent by updating # the value of the variables to minimize the loss. optimizer.apply_gradients(zip(grads, model.trainable_variables)) # Log every 200 batches.
if step % 200 == 0: print('Training loss (for one batch) at step %s: %s' % (step, float(loss_value))) print('Seen so far: %s samples' % ((step + 1) * 64)) ``` ### Low-level handling of metrics Let's add metrics to the mix. You can readily reuse the built-in metrics (or custom ones you wrote) in such training loops written from scratch. Here's the flow: - Instantiate the metric at the start of the loop - Call `metric.update_state()` after each batch - Call `metric.result()` when you need to display the current value of the metric - Call `metric.reset_states()` when you need to clear the state of the metric (typically at the end of an epoch) Let's use this knowledge to compute `SparseCategoricalAccuracy` on validation data at the end of each epoch: ``` # Get model inputs = keras.Input(shape=(784,), name='digits') x = layers.Dense(64, activation='relu', name='dense_1')(inputs) x = layers.Dense(64, activation='relu', name='dense_2')(x) outputs = layers.Dense(10, activation='softmax', name='predictions')(x) model = keras.Model(inputs=inputs, outputs=outputs) # Instantiate an optimizer to train the model. optimizer = keras.optimizers.SGD(learning_rate=1e-3) # Instantiate a loss function. loss_fn = keras.losses.SparseCategoricalCrossentropy() # Prepare the metrics. train_acc_metric = keras.metrics.SparseCategoricalAccuracy() val_acc_metric = keras.metrics.SparseCategoricalAccuracy() # Prepare the training dataset. batch_size = 64 train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size) # Prepare the validation dataset. val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val)) val_dataset = val_dataset.batch(64) # Iterate over epochs. for epoch in range(3): print('Start of epoch %d' % (epoch,)) # Iterate over the batches of the dataset. 
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset): with tf.GradientTape() as tape: logits = model(x_batch_train) loss_value = loss_fn(y_batch_train, logits) grads = tape.gradient(loss_value, model.trainable_variables) optimizer.apply_gradients(zip(grads, model.trainable_variables)) # Update training metric. train_acc_metric(y_batch_train, logits) # Log every 200 batches. if step % 200 == 0: print('Training loss (for one batch) at step %s: %s' % (step, float(loss_value))) print('Seen so far: %s samples' % ((step + 1) * 64)) # Display metrics at the end of each epoch. train_acc = train_acc_metric.result() print('Training acc over epoch: %s' % (float(train_acc),)) # Reset training metrics at the end of each epoch train_acc_metric.reset_states() # Run a validation loop at the end of each epoch. for x_batch_val, y_batch_val in val_dataset: val_logits = model(x_batch_val) # Update val metrics val_acc_metric(y_batch_val, val_logits) val_acc = val_acc_metric.result() val_acc_metric.reset_states() print('Validation acc: %s' % (float(val_acc),)) ``` ### Low-level handling of extra losses You saw in the previous section that it is possible for regularization losses to be added by a layer by calling `self.add_loss(value)` in the `call` method. In the general case, you will want to take these losses into account in your custom training loops (unless you've written the model yourself and you already know that it creates no such losses). 
Recall this example from the previous section, featuring a layer that creates a regularization loss: ``` class ActivityRegularizationLayer(layers.Layer): def call(self, inputs): self.add_loss(1e-2 * tf.reduce_sum(inputs)) return inputs inputs = keras.Input(shape=(784,), name='digits') x = layers.Dense(64, activation='relu', name='dense_1')(inputs) # Insert activity regularization as a layer x = ActivityRegularizationLayer()(x) x = layers.Dense(64, activation='relu', name='dense_2')(x) outputs = layers.Dense(10, activation='softmax', name='predictions')(x) model = keras.Model(inputs=inputs, outputs=outputs) ``` When you call a model, like this: ```python logits = model(x_train) ``` the losses it creates during the forward pass are added to the `model.losses` attribute: ``` logits = model(x_train[:64]) print(model.losses) ``` The tracked losses are first cleared at the start of the model `__call__`, so you will only see the losses created during this one forward pass. For instance, calling the model repeatedly and then querying `losses` only displays the latest losses, created during the last call: ``` logits = model(x_train[:64]) logits = model(x_train[64: 128]) logits = model(x_train[128: 192]) print(model.losses) ``` To take these losses into account during training, all you have to do is to modify your training loop to add `sum(model.losses)` to your total loss: ``` optimizer = keras.optimizers.SGD(learning_rate=1e-3) for epoch in range(3): print('Start of epoch %d' % (epoch,)) for step, (x_batch_train, y_batch_train) in enumerate(train_dataset): with tf.GradientTape() as tape: logits = model(x_batch_train) loss_value = loss_fn(y_batch_train, logits) # Add extra losses created during this forward pass: loss_value += sum(model.losses) grads = tape.gradient(loss_value, model.trainable_variables) optimizer.apply_gradients(zip(grads, model.trainable_variables)) # Log every 200 batches. 
if step % 200 == 0: print('Training loss (for one batch) at step %s: %s' % (step, float(loss_value))) print('Seen so far: %s samples' % ((step + 1) * 64)) ``` That was the last piece of the puzzle! You've reached the end of this guide. Now you know everything there is to know about using built-in training loops and writing your own from scratch.
# Inverted Pendulum: Reinforcement learning Meichen Lu (meichenlu91@gmail.com) 26th April 2018 Source: CS229: PS4Q6 Starting code: http://cs229.stanford.edu/ps/ps4/q6/ Reference: https://github.com/zyxue/stanford-cs229/blob/master/Problem-set-4/6-reinforcement-learning-the-inverted-pendulum/control.py ``` from cart_pole import CartPole, Physics import numpy as np from scipy.signal import lfilter import matplotlib.pyplot as plt %matplotlib inline # Simulation parameters pause_time = 0.0001 min_trial_length_to_start_display = 100 display_started = min_trial_length_to_start_display == 0 NUM_STATES = 163 NUM_ACTIONS = 2 GAMMA = 0.995 TOLERANCE = 0.01 NO_LEARNING_THRESHOLD = 20 # Time cycle of the simulation time = 0 # These variables perform bookkeeping (how many cycles was the pole # balanced for before it fell). Useful for plotting learning curves. time_steps_to_failure = [] num_failures = 0 time_at_start_of_current_trial = 0 # You should reach convergence well before this max_failures = 500 # Initialize a cart pole cart_pole = CartPole(Physics()) # Starting `state_tuple` is (0, 0, 0, 0) # x, x_dot, theta, theta_dot represents the actual continuous state vector x, x_dot, theta, theta_dot = 0.0, 0.0, 0.0, 0.0 state_tuple = (x, x_dot, theta, theta_dot) # `state` is the number given to this state, you only need to consider # this representation of the state state = cart_pole.get_state(state_tuple) # if min_trial_length_to_start_display == 0 or display_started == 1: # cart_pole.show_cart(state_tuple, pause_time) # Perform all your initializations here: # Assume no transitions or rewards have been observed. # Initialize the value function array to small random values (0 to 0.10, # say). # Initialize the transition probabilities uniformly (ie, probability of # transitioning for state x to state y using action a is exactly # 1/NUM_STATES). # Initialize all state rewards to zero. 
###### BEGIN YOUR CODE ###### V_s = np.random.rand(NUM_STATES) P_sa = np.ones((NUM_STATES,NUM_ACTIONS, NUM_STATES))/NUM_STATES R_s = np.zeros((NUM_STATES)) # Initialise intermediate variables state_transition_count = np.zeros((NUM_STATES,NUM_ACTIONS, NUM_STATES)) new_state_count = np.zeros(NUM_STATES) R_new_state = np.zeros(NUM_STATES) ###### END YOUR CODE ###### # This is the criterion to end the simulation. # You should change it to terminate when the previous # 'NO_LEARNING_THRESHOLD' consecutive value function computations all # converged within one value function iteration. Intuitively, it seems # like there will be little learning after this, so end the simulation # here, and say the overall algorithm has converged. consecutive_no_learning_trials = 0 while consecutive_no_learning_trials < NO_LEARNING_THRESHOLD: # Write code to choose action (0 or 1). # This action choice algorithm is just for illustration. It may # convince you that reinforcement learning is nice for control # problems! Replace it with your code to choose an action that is # optimal according to the current value function, and the current MDP # model. ###### BEGIN YOUR CODE ###### action = np.argmax(np.sum(P_sa[state]*V_s, axis = 1)) ###### END YOUR CODE ###### # Get the next state by simulating the dynamics state_tuple = cart_pole.simulate(action, state_tuple) # Increment simulation time time = time + 1 # Get the state number corresponding to new state vector new_state = cart_pole.get_state(state_tuple) # if display_started == 1: # cart_pole.show_cart(state_tuple, pause_time) # reward function to use - do not change this! if new_state == NUM_STATES - 1: R = -1 else: R = 0 # Perform model updates here. # A transition from `state` to `new_state` has just been made using # `action`. The reward observed in `new_state` (note) is `R`. # Write code to update your statistics about the MDP i.e. the # information you are storing on the transitions and on the rewards # observed.
Do not change the actual MDP parameters, except when the # pole falls (the next if block)! ###### BEGIN YOUR CODE ###### # record the number of times `state, action, new_state` occurs state_transition_count[state, action, new_state] += 1 # record the rewards for every `new_state` R_new_state[new_state] += R # record the number of time `new_state` was reached new_state_count[new_state] += 1 ###### END YOUR CODE ###### # Recompute MDP model whenever pole falls # Compute the value function V for the new model if new_state == NUM_STATES - 1: # Update MDP model using the current accumulated statistics about the # MDP - transitions and rewards. # Make sure you account for the case when a state-action pair has never # been tried before, or the state has never been visited before. In that # case, you must not change that component (and thus keep it at the # initialized uniform distribution). ###### BEGIN YOUR CODE ###### # TODO: sum_state = np.sum(state_transition_count, axis = 2) mask = sum_state > 0 P_sa[mask] = state_transition_count[mask]/sum_state[mask].reshape(-1, 1) # Update reward function mask = new_state_count>0 R_s[mask] = R_new_state[mask]/new_state_count[mask] ###### END YOUR CODE ###### # Perform value iteration using the new estimated model for the MDP. # The convergence criterion should be based on `TOLERANCE` as described # at the top of the file. # If it converges within one iteration, you may want to update your # variable that checks when the whole simulation must end. ###### BEGIN YOUR CODE ###### iter = 0 tol = 1 while tol > TOLERANCE: V_old = V_s V_s = R_s + GAMMA * np.max(np.sum(P_sa*V_s, axis = 2), axis = 1) tol = np.max(np.abs(V_s - V_old)) iter = iter + 1 if iter == 1: consecutive_no_learning_trials += 1 else: # Reset consecutive_no_learning_trials = 0 ###### END YOUR CODE ###### # Do NOT change this code: Controls the simulation, and handles the case # when the pole fell and the state must be reinitialized. 
if new_state == NUM_STATES - 1: num_failures += 1 if num_failures >= max_failures: break print('[INFO] Failure number {}'.format(num_failures)) time_steps_to_failure.append(time - time_at_start_of_current_trial) # time_steps_to_failure[num_failures] = time - time_at_start_of_current_trial time_at_start_of_current_trial = time if time_steps_to_failure[num_failures - 1] > min_trial_length_to_start_display: display_started = 1 # Reinitialize state # x = 0.0 x = -1.1 + np.random.uniform() * 2.2 x_dot, theta, theta_dot = 0.0, 0.0, 0.0 state_tuple = (x, x_dot, theta, theta_dot) state = cart_pole.get_state(state_tuple) else: state = new_state # plot the learning curve (time balanced vs. trial) log_tstf = np.log(np.array(time_steps_to_failure)) plt.plot(np.arange(len(time_steps_to_failure)), log_tstf, 'k') window = 30 w = np.array([1/window for _ in range(window)]) weights = lfilter(w, 1, log_tstf) x = np.arange(window//2, len(log_tstf) - window//2) plt.plot(x, weights[window:len(log_tstf)], 'r--') plt.xlabel('Num failures') plt.ylabel('Num steps to failure') plt.show() ```
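The value iteration step used above, $V(s) \leftarrow R(s) + \gamma \max_a \sum_{s'} P(s'|s,a) V(s')$, can be sanity-checked on a toy two-state MDP (the transition probabilities and rewards below are made up purely for illustration):

```python
import numpy as np

# Toy 2-state, 2-action MDP; all numbers invented purely for illustration.
NUM_STATES, GAMMA, TOLERANCE = 2, 0.9, 1e-6
P_sa = np.array([               # P_sa[s, a, s'] = P(s' | s, a)
    [[0.9, 0.1], [0.2, 0.8]],   # transitions out of state 0
    [[0.0, 1.0], [0.5, 0.5]],   # transitions out of state 1
])
R_s = np.array([0.0, 1.0])      # reward of each state

# Same update as above: V(s) <- R(s) + gamma * max_a sum_s' P(s'|s,a) V(s')
V_s = np.zeros(NUM_STATES)
tol = 1.0
while tol > TOLERANCE:
    V_old = V_s
    V_s = R_s + GAMMA * np.max(np.sum(P_sa * V_s, axis=2), axis=1)
    tol = np.max(np.abs(V_s - V_old))

print(V_s)  # converges to roughly [8.78, 10.0]
```

State 1 can keep itself with probability 1 under action 0, so its value solves $V_1 = 1 + 0.9 V_1 = 10$, which the iteration recovers.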
![image](resources/qgss-header.png) # Lab 5: Quantum error correction You can do actual insightful science with IBMQ devices and the knowledge you have about quantum error correction. All you need are a few tools from Qiskit. ``` !pip install -U -r grading_tools/requirements.txt from qiskit import * from IPython.display import clear_output clear_output() ``` ## Using a noise model In this lab we are going to deal with noisy quantum systems, or at least simulations of them. To deal with this in Qiskit, we need to import some things. ``` from qiskit.providers.aer.noise import NoiseModel from qiskit.providers.aer.noise.errors import pauli_error, depolarizing_error from qiskit.providers.aer.noise import thermal_relaxation_error ``` The following function is designed to create a noise model which will be good for what we are doing here. It has two types of noise: * Errors on `cx` gates in which an `x`, `y` or `z` is randomly applied to each qubit. * Errors in measurement which simulate a thermal process happening over time. ``` def make_noise(p_cx=0,T1T2Tm=(1,1,0)): ''' Returns a noise model specified by the inputs - p_cx: probability of depolarizing noise on each qubit during a cx - T1T2Tm: tuple with (T1,T2,Tm), the T1 and T2 times and the measurement time ''' noise_model = NoiseModel() # depolarizing error for cx error_cx = depolarizing_error(p_cx, 1) error_cx = error_cx.tensor(error_cx) noise_model.add_all_qubit_quantum_error(error_cx, ["cx"]) # thermal error for measurement (T1,T2,Tm) = T1T2Tm error_meas = thermal_relaxation_error(T1, T2, Tm) noise_model.add_all_qubit_quantum_error(error_meas, "measure") return noise_model ``` Let's check it out on a simple four qubit circuit. One qubit has an `x` applied. Two others have a `cx`. One has nothing. Then all are measured. ``` qc = QuantumCircuit(4) qc.x(0) qc.cx(1,2) qc.measure_all() qc.draw(output='mpl') ``` This is a simple circuit with a simple output, as we'll see when we run it.
``` execute( qc, Aer.get_backend('qasm_simulator'), shots=8192).result().get_counts() ``` Now let's run it with noise on the `cx` gates only. ``` noise_model = make_noise(p_cx=0.1) execute( qc, Aer.get_backend('qasm_simulator'), noise_model=noise_model, shots=8192).result().get_counts() ``` The measurement noise depends on three numbers: $T_1$, $T_2$ and $T_m$. The first two describe the timescale for certain noise processes. The last describes how long measurements take. For simplicity we'll set $T_1=T_2=1$ and vary $T_m$. For $T_m=0$, the measurement is too fast to see any noise. The longer it takes, the more noise we'll see. ``` for Tm in (0.01,0.1,1,10): noise_model = make_noise(p_cx=0, T1T2Tm=(1,1,Tm)) counts = execute( qc, Aer.get_backend('qasm_simulator'), noise_model=noise_model, shots=8192).result().get_counts() print('Tm =',Tm,', counts =',counts) ``` The most notable effect of this noise is that it causes `1` values to relax down to `0`. # Running repetition codes Qiskit has tools to make it easy to set up, run and analyze repetition codes. ``` from qiskit.ignis.verification.topological_codes import RepetitionCode from qiskit.ignis.verification.topological_codes import GraphDecoder from qiskit.ignis.verification.topological_codes import lookuptable_decoding, postselection_decoding ``` Here's one with four repetitions and a single measurement round. ``` d = 4 T = 1 code = RepetitionCode(d,T) ``` The repetition code object contains a couple of circuits: for encoded logical values of `0` and `1`. ``` code.circuit ``` Here's the one for `0`. ``` code.circuit['0'].draw(output='text') ``` And for `1`. ``` code.circuit['1'].draw(output='text') ``` We can run both circuits at once by first converting them into a list. 
``` circuits = code.get_circuit_list() job = execute( circuits, Aer.get_backend('qasm_simulator'), noise_model=noise_model, shots=8192) ``` Once they've run, we can extract the results and convert them into a form that allows us to more easily look at syndrome changes. ``` raw_results = {} for log in ['0','1']: raw_results[log] = job.result().get_counts(log) results = code.process_results( raw_results ) ``` It's easiest to just package this up into a function. ``` def get_results(code, noise_model, shots=8192): circuits = code.get_circuit_list() job = execute( circuits, Aer.get_backend('qasm_simulator'), noise_model=noise_model, shots=shots) raw_results = {} for log in ['0','1']: raw_results[log] = job.result().get_counts(log) results = code.process_results( raw_results ) return results ``` First let's look at an example without any noise, to keep things simple. ``` noise_model = make_noise() # noise model with no noise results = get_results(code, noise_model) results ``` Here's an example with some `cx` noise. ``` noise_model = make_noise(p_cx=0.01) results = get_results(code, noise_model) for log in results: print('\nMost common results for a stored',log) for output in results[log]: if results[log][output]>100: print(output,'occurred for',results[log][output],'samples.') ``` The main thing we need to know is the probability of a logical error. By setting up and using a decoder, we can find out! ``` decoder = GraphDecoder(code) decoder.get_logical_prob(results) ``` By calculating these values for different sizes of code and noise models, we can learn more about how the noise will affect large circuits. This is important for error correction, but also for the applications that we'll try to run before error correction is possible. Even more importantly, running these codes on real devices allows us to see the effects of real noise.
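To build intuition for that logical error probability, a classical caricature helps: if each of the $d$ code qubits flipped independently with probability $p$, a majority-vote decoder would fail only when more than half of them flip. This sketch (plain Python, not the `GraphDecoder`) shows the error suppression as $d$ grows:

```python
from math import comb

# Probability that a majority of the d copies flip, i.e. a logical error,
# when each copy flips independently with probability p (classical sketch only).
def logical_error_prob(d, p):
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range((d // 2) + 1, d + 1))

# Below threshold (p < 0.5) the logical error rate drops as d grows.
for d in (3, 5, 7):
    print(d, logical_error_prob(d, 0.1))
```

For $d=3$ and $p=0.1$ this gives $3 p^2 (1-p) + p^3 = 0.028$, already well below the physical rate of 0.1; the real quantum codes behave analogously, though with correlated errors and syndrome measurements in the mix.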
Small-scale quantum error correction experiments like these will allow us to study the devices we have access to, understand what they do and why they do it, and test their abilities. This is the most important exercise that you can try: doing real and insightful experiments on cutting-edge quantum hardware. It's the kind of thing that professional researchers do and write papers about. I know this because I'm one of those researchers. See the following examples: * ["A repetition code of 15 qubits", James R. Wootton and Daniel Loss, Phys. Rev. A 97, 052313 (2018)](https://arxiv.org/abs/1709.00990) * ["Benchmarking near-term devices with quantum error correction", James R. Wootton, Quantum Science and Technology (2020)](https://arxiv.org/abs/2004.11037) As well as the relevant chapter of the Qiskit textbook: [5.1 Introduction to Quantum Error Correction using Repetition Codes](https://qiskit.org/textbook/ch-quantum-hardware/error-correction-repetition-code.html). By running repetition codes on the IBM quantum devices available to you, looking at the results and figuring out why they look like they do, you could soon know things about them that no-one else does! ## Transpiling for real devices The first step toward using a real quantum device is to load your IBMQ account and set up the provider. ``` # IBMQ.save_account("MY_API_TOKEN") # replace with your own token; never share it IBMQ.load_account() provider = IBMQ.get_provider(hub='ibm-q') ``` Now you can set up a backend object for your device of choice. We'll go for the biggest device on offer: Melbourne. ``` backend = provider.get_backend('ibmq_16_melbourne') ``` Using the Jupyter tools, we can take a closer look. ``` import qiskit.tools.jupyter %matplotlib inline backend ``` This has enough qubits to run a $d=8$ repetition code. Let's set this up and get the circuits to run.
``` d = 8 code = RepetitionCode(d,1) raw_circuits = code.get_circuit_list() ``` Rather than show such a big circuit, let's just look at how many of each type of gate there are. For example, repetition codes should have $2(d-1)$ `cx` gates in, which means 14 in this case. ``` raw_circuits[1].count_ops() ``` Before running on a real device we need to transpile. This is the process of turning the circuits into ones that the device can actually run. It is usually done automatically before running, but we can also do it ourselves using the code below. ``` circuits = [] for qc in raw_circuits: circuits.append( transpile(qc, backend=backend) ) ``` Let's check what this process did to the gates in the circuit. ``` circuits[1].count_ops() ``` Note that this has `u3` gates (which the circuit previously didn't) and the `x` gates have disappeared. The solution to this is simple. The `x` gates have just been described as specific forms of `u3` gates, which is the way that the hardware understands single qubit operations. More concerning is what has happened to the `cx` gates. There are now 74! This is due to connectivity. If you ask for a combination of `cx` gates that cannot be directly implemented, the transpiler will do some fancy tricks to make a circuit which is effectively the same as the one you want. This comes at the cost of inserting `cx` gates. For more information, see [2.4 More Circuit-Identities](https://qiskit.org/textbook/ch-gates/more-circuit-identities.html). However, here our circuit *is* something that can be directly implemented. The transpiler just didn't realize (and figuring it out is a hard problem). We can solve the problem by telling the transpiler exactly which qubits on the device should be used as the qubits in our code. This is done by setting up an `initial_layout` as follows.
``` def get_initial_layout(code,line): initial_layout = {} for j in range(code.d): initial_layout[code.code_qubit[j]] = line[2*j] for j in range(code.d-1): initial_layout[code.link_qubit[j]] = line[2*j+1] return initial_layout line = [6,5,4,3,2,1,0,14,13,12,11,10,9,8,7] initial_layout = get_initial_layout(code,line) initial_layout ``` With this, let's try transpilation again. ``` circuits = [] for qc in raw_circuits: circuits.append( transpile(qc, backend=backend, initial_layout=initial_layout) ) circuits[1].count_ops() ``` Perfect! Now try for yourself on one of the devices that we've now retired: Tokyo. ``` from qiskit.test.mock import FakeTokyo backend = FakeTokyo() backend ``` The largest repetition code this can handle is one with $d=10$. ``` d = 10 code = RepetitionCode(d,1) raw_circuits = code.get_circuit_list() raw_circuits[1].count_ops() ``` For this we need to find a line of 19 qubits across the coupling map. ``` line = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19] initial_layout = get_initial_layout(code,line) circuits = [] for qc in raw_circuits: circuits.append(transpile(qc, backend=backend, initial_layout=initial_layout) ) circuits[1].count_ops() ``` Clearly, the line chosen in the cell above was not a good example. Find a line such that the transpiled circuit `circuits[1]` has exactly 18 `cx` gates. ``` line = None # define line variable so the transpiled circuit has exactly 18 CNOTs. ### WRITE YOUR CODE BETWEEN THESE LINES - START line = 0,1,2,3,9,4,8,7,6,5,10,15,16,17,11,12,13,18,14,19 ### WRITE YOUR CODE BETWEEN THESE LINES - END initial_layout = get_initial_layout(code,line) circuits = [] for qc in raw_circuits: circuits.append(transpile(qc, backend=backend, initial_layout=initial_layout) ) circuits[1].count_ops() ```
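Finding a line of connected qubits by eye gets tedious on bigger devices. As a rough sketch (the `find_line` helper and the toy coupling map below are my own illustration, not part of Qiskit), a brute-force depth-first search over the coupling map can look for one automatically:

```python
def find_line(coupling_map, length):
    """Depth-first search for a simple path visiting `length` distinct
    qubits in a coupling map given as a list of [a, b] pairs."""
    neighbors = {}
    for a, b in coupling_map:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)  # treat couplings as undirected

    def extend(path):
        if len(path) == length:
            return path
        for nxt in sorted(neighbors.get(path[-1], ())):
            if nxt not in path:
                found = extend(path + [nxt])
                if found:
                    return found
        return None

    for start in sorted(neighbors):
        found = extend([start])
        if found:
            return found
    return None

# Toy 2x3 grid of qubits: 0-1-2 on top, 3-4-5 below, with vertical links
toy_map = [[0, 1], [1, 2], [3, 4], [4, 5], [0, 3], [1, 4], [2, 5]]
path = find_line(toy_map, 5)
print(path)
```

Running the same search over a real backend's coupling map (for example `backend.configuration().coupling_map`) should give a candidate `line` to feed into `get_initial_layout`, though for large devices a smarter search than brute force may be needed.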
# Lab Three
---
For this lab we're going to be making and using a bunch of functions. Our goals are:
- Searching our documentation
- Using built-in functions
- Making our own functions
- Combining functions
- Structuring solutions

```
# The following built-in functions weren't touched on in class. I want you to look for them in the Python documentation and implement them.

# I want you to find a built-in function to SWAP CASE on a string. Print it.
# For example the string "HeY thERe HowS iT GoING" turns into "hEy THerE hOWs It gOing"
sample_string = "HeY thERe HowS iT GoING"
print(sample_string.swapcase())

# I want you to find a built-in function to CENTER a string and pad the sides with 4 dashes (-) a side. Print it.
# For example the string "Hey There" becomes "----Hey There----"
sample_string = "Hey There"
print(sample_string.center(17, "-"))

# I want you to find a built-in function to PARTITION a string. Print it.
# For example the string "abcdefg.hijklmnop" would come out to be ("abcdefg", ".", "hijklmnop")
sample_string = "abcdefg.hijklmnop"
print(sample_string.partition("."))

# I want you to write a function that will take in a number and raise it to the power given.
# For example if given the numbers 2 and 3, the math that the function should do is 2^3 and should print out or return 8. Print the output.
def power(number, exponent) -> int:
    return number ** exponent

example = power(2, 3)
print(example)

# I want you to write a function that will take in a list and see how many times a given number is in the list.
# For example if the array given is [2,3,5,2,3,6,7,8,2] and the number given is 2 the function should print out or return 3. Print the output.
array = [2, 3, 5, 2, 3, 6, 7, 8, 2]

def multiplicity(array, target):
    count = 0
    for number in array:
        if number == target:
            count += 1
    return count

example = multiplicity(array, 2)
print(example)

# Use the functions given to create a slope function. The function should be named slope and have 4 parameters.
# If you don't remember, the slope formula is (y2 - y1) / (x2 - x1). If this doesn't make sense look up `Slope Formula` on google.
def division(x, y):
    return x / y

def subtraction(x, y):
    return x - y

def slope(x1, x2, y1, y2):
    return division(subtraction(y2, y1), subtraction(x2, x1))

example = slope(1, 3, 2, 6)
print(example)

# Use the functions given to create a distance function. The function should be named distance and have 4 parameters.
# HINT: You'll need a built-in function here too. You'll also be able to use functions written earlier in the notebook as long as you've run those cells.
# If you don't remember, the distance formula is the square root of the following: ((x2 - x1)^2 + (y2 - y1)^2). If this doesn't make sense look up `Distance Formula` on google.
import math

def addition(x, y):
    return x + y

def distance(x1, x2, y1, y2):
    x_side = power(subtraction(x2, x1), 2)
    y_side = power(subtraction(y2, y1), 2)
    combined_sides = addition(x_side, y_side)
    return math.sqrt(combined_sides)

print(distance(1, 3, 2, 6))
```
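As a quick check that these helpers compose, here is a hedged sketch (the `perimeter` function is my own addition, not part of the lab) that reuses `power`, `subtraction`, `addition`, and `distance` to measure a triangle:

```python
import math

def power(number, exponent):
    return number ** exponent

def subtraction(x, y):
    return x - y

def addition(x, y):
    return x + y

def distance(x1, x2, y1, y2):
    x_side = power(subtraction(x2, x1), 2)
    y_side = power(subtraction(y2, y1), 2)
    return math.sqrt(addition(x_side, y_side))

def perimeter(points):
    """Sum the distances around a closed polygon given as (x, y) points."""
    total = 0
    # Pair each point with the next one, wrapping around to the start
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        total = addition(total, distance(x1, x2, y1, y2))
    return total

# A 3-4-5 right triangle has perimeter 3 + 4 + 5 = 12
p = perimeter([(0, 0), (3, 0), (3, 4)])
print(p)
```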
# Variable Relationship Tests (correlation)

- Pearson’s Correlation Coefficient
- Spearman’s Rank Correlation
- Kendall’s Rank Correlation
- Chi-Squared Test

## Correlation Test

Correlation measures whether greater values of one variable correspond to greater values in the other. It is scaled to always lie between +1 and −1.

- Correlation is positive when the values increase together.
- Correlation is negative when one value decreases as the other increases.
- Pearson's correlation assumes the relationship is linear.
- 1 is a perfect positive correlation
- 0 is no correlation (the values don’t seem linked at all)
- -1 is a perfect negative correlation

## Correlation Methods

- **Pearson's Correlation Test:** assumes the data is normally distributed and measures linear correlation.
- **Spearman's Correlation Test:** does not assume normality and measures monotonic (possibly non-linear) correlation.
- **Kendall's Correlation Test:** similarly does not assume normality and measures monotonic correlation, but is less commonly used.

## Difference Between Pearson's and Spearman's

Pearson's Test | Spearman's Test
---------------|----------------
Parametric correlation | Non-parametric
Linear relationship | Non-linear relationship
Continuous variables | Continuous or ordinal variables
Proportional change | Change not at a constant rate

```
import statsmodels.api as sm
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
sns.set(font_scale=2, palette="viridis")
from sklearn.preprocessing import scale
import researchpy as rp
from scipy import stats

data = pd.read_csv('../data/pulse_data.csv')
data.head()
```

## Pearson’s Correlation Coefficient

Tests whether two samples have a linear relationship.

### Assumptions

- Observations in each sample are independent and identically distributed (iid).
- Observations in each sample are normally distributed.
- Observations in each sample have the same variance.
### Interpretation

- H0: There is no relationship between the two variables
- Ha: There is a relationship between the two variables

__Question: Is there any relationship between height and weight?__

```
data.Height.corr(data.Weight)
data.Height.corr(data.Weight, method="pearson")
data.Height.corr(data.Weight, method="spearman")

plt.figure(figsize=(10,8))
sns.scatterplot(data=data, x='Height', y="Weight")
plt.show()

plt.figure(figsize=(10,8))
sns.regplot(data=data, x='Height', y="Weight")
plt.show()

stat, p_value = stats.shapiro(data['Height'])
print(f'statistic = {stat}, p-value = {p_value}')
alpha = 0.05
if p_value > alpha:
    print("The sample has a normal distribution (fail to reject the null hypothesis, the result is not significant)")
else:
    print("The sample does not have a normal distribution (reject the null hypothesis, the result is significant)")

stat, p_value = stats.shapiro(data['Weight'])
print(f'statistic = {stat}, p-value = {p_value}')
alpha = 0.05
if p_value > alpha:
    print("The sample has a normal distribution (fail to reject the null hypothesis, the result is not significant)")
else:
    print("The sample does not have a normal distribution (reject the null hypothesis, the result is significant)")

# Checking for normality with a Q-Q plot
plt.figure(figsize=(12, 8))
stats.probplot(data['Height'], plot=plt, dist='norm')
plt.show()

# Checking for normality with a Q-Q plot
plt.figure(figsize=(12, 8))
stats.probplot(data['Weight'], plot=plt, dist='norm')
plt.show()

stats.levene(data['Height'], data['Weight'])
stat, p = stats.levene(data['Height'], data['Weight'])
print(f'stat={stat}, p-value={p}')
alpha = 0.05
if p > alpha:
    print('The variances are equal between the two variables (fail to reject H0, not significant)')
else:
    print('The variances are not equal between the two variables (reject H0, significant)')

stats.pearsonr(data['Height'], data['Weight'])
stat, p = stats.pearsonr(data['Height'], data['Weight'])
print(f'stat={stat}, p-value={p}')
alpha = 0.05
if p > alpha:
    print('There is no relationship between the two variables (fail to reject H0, not significant)')
else:
    print('There is a relationship between the two variables (reject H0, significant)')
```

## Spearman’s Rank Correlation Test

Tests whether two samples have a monotonic relationship.

### Assumptions

- Observations in each sample are independent and identically distributed (iid).
- Observations in each sample can be ranked.

### Interpretation

- **H0 hypothesis:** There is no relationship between variable 1 and variable 2
- **H1 hypothesis:** There is a relationship between variable 1 and variable 2

```
stats.spearmanr(data['Height'], data['Weight'])
stat, p = stats.spearmanr(data['Height'], data['Weight'])
print(f'stat={stat}, p-value={p}')
alpha = 0.05
if p > alpha:
    print('There is no relationship between the two variables (fail to reject H0, not significant)')
else:
    print('There is a relationship between the two variables (reject H0, significant)')
```

## Kendall’s Rank Correlation Test

### Assumptions

- Observations in each sample are independent and identically distributed (iid).
- Observations in each sample can be ranked.

### Interpretation

- **H0 hypothesis:** There is no relationship between variable 1 and variable 2
- **H1 hypothesis:** There is a relationship between variable 1 and variable 2

```
stats.kendalltau(data['Height'], data['Weight'])
stat, p = stats.kendalltau(data['Height'], data['Weight'])
print(f'stat={stat}, p-value={p}')
alpha = 0.05
if p > alpha:
    print('Fail to reject the null hypothesis; there is no relationship between Height and Weight (not significant)')
else:
    print('Reject the null hypothesis; there is a relationship between Height and Weight (significant)')
```

## Chi-Squared Test

- The Chi-square test of independence tests if there is a significant relationship between two categorical variables.
- The test compares the observed frequencies to the expected frequencies.
- The data is usually displayed in a cross-tabulation format, with each row representing a category for one variable and each column representing a category for another variable.
- The Chi-square test of independence is an omnibus test, meaning it tests the data as a whole. One will not be able to tell which levels (categories) of the variables are responsible for the relationship if the Chi-square table is larger than 2×2.
- If the table is larger than 2×2, post hoc testing is required. If this doesn’t make much sense right now, don’t worry. Further explanation will be provided when we start working with the data.

### Assumptions

- Both variables should be categorical (e.g., Gender).
- Each variable should have at least two groups (e.g., Gender = Female or Male).
- There should be independence of observations (between and within subjects).
- Large sample size.
- The expected frequencies should be at least 1 for each cell.
- The expected frequencies for the majority (80%) of the cells should be at least 5.

If the sample size is small, we have to use **Fisher's Exact Test**. **Fisher's Exact Test** is similar to the Chi-squared test, but it is used for small-sized samples.

## Interpretation

- The H0 (Null Hypothesis): There is no relationship between variable 1 and variable 2 (they are independent).
- The Ha (Alternative Hypothesis): There is a relationship between variable 1 and variable 2.

### Contingency Table

A contingency table is a table with at least two rows and two columns (2x2), used to present categorical data in terms of frequency counts.
```
data = pd.read_csv('../data/KosteckiDillon.csv', usecols=['id', 'time', 'dos', 'hatype', 'age', 'airq', 'medication', 'headache', 'sex'])
data.head()

table = pd.crosstab(data['sex'], data['headache'])
table

stats.chi2_contingency(table)
stat, p, dof, expected = stats.chi2_contingency(table)
print(f'stat={stat}, p-value={p}')
alpha = 0.05
if p > alpha:
    print('There is no relationship between sex and headache (fail to reject H0, not significant)')
else:
    print('There is a relationship between sex and headache (reject H0, significant)')
```

## Fisher’s Test

```
stat, p = stats.fisher_exact(table)
print(f'stat={stat}, p-value={p}')
if p > 0.05:
    print('Probably independent')
else:
    print('Probably dependent')
```
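To demystify what `stats.chi2_contingency` returns as its first value, here is a hedged sketch that computes the statistic by hand for a 2×2 table; the counts below are invented for illustration, not taken from the headache data:

```python
def chi_squared(table):
    """Chi-squared statistic for a 2D contingency table (list of rows),
    using expected count = row_total * column_total / grand_total."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical sex-by-headache counts
table = [[30, 10], [20, 40]]
stat = chi_squared(table)
print(stat)
```

Note that `stats.chi2_contingency` additionally applies Yates' continuity correction by default for 2×2 tables, so its statistic can differ slightly from this uncorrected version.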
## VotingClassifier - [ BaggingClassifier, RandomForestClassifier, XGBClassifier ] ``` import pandas as pd from xgboost import XGBClassifier, XGBRegressor from sklearn.preprocessing import LabelEncoder from sklearn.model_selection import train_test_split from sklearn.metrics import f1_score from sklearn.ensemble import BaggingClassifier, RandomForestClassifier, VotingClassifier from sklearn.multiclass import OneVsOneClassifier,OneVsRestClassifier from sklearn.svm import LinearSVC, SVC from sklearn.linear_model import LinearRegression, LogisticRegression, RidgeClassifier from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor from sklearn.neighbors import KNeighborsClassifier from imblearn.ensemble import BalancedBaggingClassifier, BalancedRandomForestClassifier training_data = pd.read_csv('train.csv') print(training_data.shape) training_data.columns ``` ### Dropping columns -- [ 'ID','Team_Value','Playing_Style','Won_Championship','Previous_SB_Wins' ] ``` y = training_data.Won_Championship training_data = training_data.drop(columns=['Won_Championship','ID','Team_Value','Playing_Style','Previous_SB_Wins'],axis=1) le_Number_Of_Injured_Players = LabelEncoder() training_data['Number_Of_Injured_Players'] = le_Number_Of_Injured_Players.fit_transform(training_data['Number_Of_Injured_Players']) le_Coach_Experience_Level = LabelEncoder() training_data['Coach_Experience_Level'] = le_Coach_Experience_Level.fit_transform(training_data['Coach_Experience_Level']) training_data.head() x_train,x_test, y_train, y_test = train_test_split(training_data,y,test_size=0.2) bags = BalancedBaggingClassifier(n_estimators=100,oob_score=True,bootstrap_features=True,replacement=True) bags.fit(x_train,y_train) #bags.fit(training_data,y) prediction = bags.predict(x_test) acc = 100 * (f1_score(y_test,prediction,average='binary')) acc bal_rfc = BalancedRandomForestClassifier(class_weight='balanced_subsample',criterion='entropy') bal_rfc.fit(x_train,y_train) 
#bal_rfc.fit(training_data,y)
prediction = bal_rfc.predict(x_test)
acc = 100 * (f1_score(y_test,prediction,average='binary'))
acc

xgb = XGBClassifier(n_estimators=500,learning_rate=0.1,max_depth=10,reg_lambda=0.1,importance_type='total_gain')
xgb.fit(x_train,y_train)
#xgb.fit(training_data,y)
prediction = xgb.predict(x_test)
acc = 100 * (f1_score(y_test,prediction,average='binary'))
acc

bag = BalancedBaggingClassifier(n_estimators=100,oob_score=True,bootstrap_features=True,replacement=True)
xgb = XGBClassifier(n_estimators=500,learning_rate=0.1,max_depth=10,reg_lambda=0.1,importance_type='total_gain')
bal_rfc = BalancedRandomForestClassifier(class_weight='balanced_subsample',criterion='entropy')

voting = VotingClassifier(estimators=[
    ('bag', bag), ('rfc', bal_rfc), ('xgb', xgb)],
    voting='hard')
voting.fit(training_data, y)
prediction = voting.predict(x_test)
acc = 100 * (f1_score(y_test,prediction,average='binary'))
acc

cols = training_data.columns
test_data = pd.read_csv('test.csv')
event_id = test_data['ID']
print(test_data.shape)
test_data = test_data.drop(columns=['ID','Team_Value','Playing_Style','Previous_SB_Wins'],axis=1)
test_data['Number_Of_Injured_Players'] = le_Number_Of_Injured_Players.fit_transform(test_data['Number_Of_Injured_Players'])
test_data['Coach_Experience_Level'] = le_Coach_Experience_Level.fit_transform(test_data['Coach_Experience_Level'])
predictions = voting.predict(test_data)
result_df = pd.DataFrame({'ID':event_id,'Won_Championship':predictions})
result_df.to_csv('Prediction.csv',index=False)
```

#### Online accuracy: 76.4; local accuracy on the whole data: 78.9 (VotingClassifier)
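With `voting='hard'`, the ensemble simply takes a majority vote over the three classifiers' predictions for each sample. A minimal sketch of that rule, independent of scikit-learn (the toy prediction lists are illustrative):

```python
from collections import Counter

def hard_vote(*prediction_lists):
    """Majority vote per sample across several classifiers' predictions.
    With an odd number of binary classifiers there are no ties."""
    voted = []
    for votes in zip(*prediction_lists):
        # most_common(1) gives the label with the highest vote count
        voted.append(Counter(votes).most_common(1)[0][0])
    return voted

# Hypothetical per-sample predictions from three classifiers
bag_preds = [1, 0, 1, 1]
rfc_preds = [1, 1, 0, 1]
xgb_preds = [0, 0, 1, 0]
combined = hard_vote(bag_preds, rfc_preds, xgb_preds)
print(combined)
```

This is why adding a third, differently biased model can flip individual predictions even when two models agree with each other less often than they agree with the truth.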
# Views

- Views are widgets themselves, but with the added capability of holding other widgets.

```
from webdriver_kaifuku import BrowserManager
from widgetastic.widget import Browser

command_executor = "http://localhost:4444/wd/hub"

config = {
    "webdriver": "Remote",
    "webdriver_options": {"desired_capabilities": {"browserName": "firefox"},
                          "command_executor": command_executor,
                          }
}

mgr = BrowserManager.from_conf(config)
sel = mgr.ensure_open()

class MyBrowser(Browser):
    pass

browser = MyBrowser(selenium=sel)
browser.url = "http://0.0.0.0:8000/test_page.html"

from widgetastic.widget import View, Text, TextInput, Checkbox, ColourInput, Select

# Example-1
class BasicWidgetView(View):
    text_input = TextInput(id="text_input")
    checkbox = Checkbox(id="checkbox_input")
    button = Text(locator=".//button[@id='a_button']")
    color_input = ColourInput(id="color_input")

view = BasicWidgetView(browser)
```

### Nested Views

```
# Example-2
class MyNestedView(View):
    @View.nested
    class basic(View):  # noqa
        text_input = TextInput(id="text_input")
        checkbox = Checkbox(id="checkbox_input")

    @View.nested
    class conditional(View):
        select_input = Select(id="select_lang")

view = MyNestedView(browser)
view.fill({'basic': {'text_input': 'hi', 'checkbox': True},
           'conditional': {'select_input': 'Go'}})

# Example-3
class Basic(View):
    text_input = TextInput(id="text_input")
    checkbox = Checkbox(id="checkbox_input")

class Conditional(View):
    select_input = Select(id="select_lang")

class MyNestedView(View):
    basic = View.nested(Basic)
    conditional = View.nested(Conditional)

view = MyNestedView(browser)
view.read()
```

### Switchable Conditional Views

```
from widgetastic.widget import ConditionalSwitchableView

# Example-4: Switchable widgets
class MyConditionalWidgetView(View):
    select_input = Select(id="select_lang")

    lang_label = ConditionalSwitchableView(reference="select_input")
    lang_label.register("Python", default=True, widget=Text(locator=".//h3[@id='lang-1']"))
    lang_label.register("Go",
widget=Text(locator=".//h3[@id='lang-2']")) view = MyConditionalWidgetView(browser) # Example-5: Switchable Views class MyConditionalView(View): select_input = Select(id="select_lang") lang = ConditionalSwitchableView(reference="select_input") @lang.register("Python", default=True) class PythonView(View): # some more widgets lang_label = Text(locator=".//h3[@id='lang-1']") @lang.register("Go") class GoView(View): lang_label = Text(locator=".//h3[@id='lang-2']") view = MyConditionalView(browser) ``` ### Parametrized Views ``` from widgetastic.widget import ParametrizedView from widgetastic.utils import ParametrizedLocator # Example-6 class MyParametrizedView(ParametrizedView): PARAMETERS = ('name',) ROOT = ParametrizedLocator(".//div[contains(label, {name|quote})]") widget = Checkbox(locator=".//input") view = MyParametrizedView(browser, additional_context={'name': 'widget 1'}) # Example-7: Nested Parametrized View class MyNestedParametrizedView(View): @View.nested class widget_selector(ParametrizedView): PARAMETERS = ('name',) ROOT = ParametrizedLocator(".//div[contains(label, {name|quote})]") widget = Checkbox(locator=".//input") view = MyNestedParametrizedView(browser) ```
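The `ParametrizedLocator` used above substitutes the view's parameters into the XPath, quoting them for XPath syntax. A rough sketch of that substitution (my own simplification for illustration, not widgetastic's actual implementation):

```python
def xpath_quote(value):
    """Quote a string for embedding in XPath, which has no escape character."""
    if "'" not in value:
        return "'{}'".format(value)
    if '"' not in value:
        return '"{}"'.format(value)
    # Mixed quotes: stitch the pieces back together with concat()
    pieces = ", \"'\", ".join("'{}'".format(p) for p in value.split("'"))
    return "concat({})".format(pieces)

def parametrized_locator(template, **params):
    """Fill {name|quote} placeholders with XPath-quoted parameter values."""
    result = template
    for name, value in params.items():
        result = result.replace("{%s|quote}" % name, xpath_quote(value))
    return result

loc = parametrized_locator(".//div[contains(label, {name|quote})]", name="widget 1")
print(loc)
```

The real library also supports other filters and nested attribute lookups, but the core idea is the same: the locator is a template resolved against the view's parameters at lookup time.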
``` import tensorflow as tf import numpy as np import pandas as pd from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelEncoder from sklearn.feature_extraction.text import CountVectorizer from nltk.stem import PorterStemmer from autocorrect import spell import os from six.moves import cPickle import re MAX_LEN = 25 BATCH_SIZE = 64 stemmer = PorterStemmer() def process_str(string, bot_input=False, bot_output=False): string = string.strip().lower() string = re.sub(r"[^A-Za-z0-9(),!?\'\`:]", " ", string) string = re.sub(r"\'s", " \'s", string) string = re.sub(r"\'ve", " \'ve", string) string = re.sub(r"n\'t", " n\'t", string) string = re.sub(r"\'re", " \'re", string) string = re.sub(r"\'d", " \'d", string) string = re.sub(r"\'ll", " \'ll", string) string = re.sub(r",", " , ", string) string = re.sub(r"!", " ! ", string) string = re.sub(r"\s{2,}", " ", string) string = string.split(" ") string = [re.sub(r"[0-9]+", "NUM", token) for token in string] string = [stemmer.stem(re.sub(r'(.)\1+', r'\1\1', token)) for token in string] string = [spell(token).lower() for token in string] # Truncate string while True: try: string.remove("") except: break if(not bot_input and not bot_output): string = string[0:MAX_LEN] elif(bot_input): string = string[0:MAX_LEN-1] string.insert(0, "</start>") else: string = string[0:MAX_LEN-1] string.insert(len(string), "</end>") old_len = len(string) for i in range((MAX_LEN) - len(string)): string.append(" </pad> ") string = re.sub("\s+", " ", " ".join(string)).strip() return string, old_len imported_graph = tf.train.import_meta_graph('checkpoints/best_validation.meta') sess = tf.InteractiveSession() imported_graph.restore(sess, "checkpoints/best_validation") sess.run(tf.tables_initializer()) graph = tf.get_default_graph() def test(text): text, text_len = process_str(text) text = [text] + ["hi"] * (BATCH_SIZE-1) text_len = [text_len] + [1] * (BATCH_SIZE-1) return text, text_len test_init_op = 
graph.get_operation_by_name('data/dataset_init') user_ph = graph.get_tensor_by_name("user_placeholder:0") bot_inp_ph = graph.get_tensor_by_name("bot_inp_placeholder:0") bot_out_ph = graph.get_tensor_by_name("bot_out_placeholder:0") user_lens_ph = graph.get_tensor_by_name("user_len_placeholder:0") bot_inp_lens_ph = graph.get_tensor_by_name("bot_inp_lens_placeholder:0") bot_out_lens_ph = graph.get_tensor_by_name("bot_out_lens_placeholder:0") words = graph.get_tensor_by_name("inference/words:0") def chat(text): user, user_lens = test(text) sess.run(test_init_op, feed_dict={ user_ph: user, bot_inp_ph: ["hi"] * BATCH_SIZE, bot_out_ph: ["hi"] * BATCH_SIZE, user_lens_ph: user_lens, bot_inp_lens_ph: [1] * BATCH_SIZE, bot_out_lens_ph: [1] * BATCH_SIZE }) translations_text = sess.run(words) output = [item.decode() for item in translations_text[0]] if("</end>" in output): end_idx = output.index("</end>") output = output[0:end_idx] output = " ".join(output) print("BOT: " + output) while True: chat(input()) ```
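The `process_str` pipeline above mixes cleaning, truncation, and padding in one function. A stripped-down sketch of just the cleaning and pad-to-fixed-length steps (the stemming and spell-correction are omitted, and the helper name is my own):

```python
import re

MAX_LEN = 25

def pad_tokens(text, max_len=MAX_LEN, end_token="</end>", pad_token="</pad>"):
    """Lower-case, keep word characters, truncate to max_len - 1 tokens,
    append the end marker, then pad to exactly max_len tokens."""
    tokens = re.sub(r"[^a-z0-9' ]", " ", text.strip().lower()).split()
    tokens = tokens[: max_len - 1] + [end_token]
    tokens += [pad_token] * (max_len - len(tokens))
    return tokens

out = pad_tokens("Hello there, bot!")
print(out)
```

Fixing the sequence length this way is what lets the model batch inputs together: every sample becomes exactly `MAX_LEN` tokens regardless of the original sentence length.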
# Programming Assignment: Linear regression: predicting salary from a job posting

## Introduction

Linear methods work well with sparse data, and texts are an example of such data. This can be explained by their high training speed and small number of parameters, which helps avoid overfitting. Linear regression comes in several variants depending on which regularizer is used. We will work with ridge regression, which uses a quadratic, or `L2`, regularizer.

## Implementation in Scikit-Learn

To extract `TF-IDF` features from texts, use the class `sklearn.feature_extraction.text.TfidfVectorizer`. To predict the target variable we will use ridge regression, which is implemented in the class `sklearn.linear_model.Ridge`.

Note that the features `LocationNormalized` and `ContractTime` are strings, so they cannot be used directly. Such non-numeric features with unordered values are called categorical or nominal. The typical approach to handling them is to encode a categorical feature with m possible values using m binary features. Each binary feature corresponds to one of the possible values of the categorical feature and indicates that the feature takes that value on a given object. This approach is sometimes called `one-hot` encoding. Use it to re-encode the features `LocationNormalized` and `ContractTime`. It is already implemented in the class `sklearn.feature_extraction.DictVectorizer`. Usage example:

You will need to replace missing values with special string values (for example, `'nan'`). The following code is suitable for this:

## Instructions

### Step 1: Load the data on job descriptions and the corresponding annual salaries from the file `salary-train.csv` (or its zipped version `salary-train.zip`).
```
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import Ridge
from scipy.sparse import hstack

train = pd.read_csv('salary-train.csv')
test = pd.read_csv('salary-test-mini.csv')
train.head()
test.head()
```

### Step 2: Preprocess the data:

* Convert the texts to lower case (`text.lower()`).
* Replace everything except letters and digits with spaces; this will make it easier to split the text into words later. For such a replacement in a string `text`, the following call works: `re.sub('[^a-zA-Z0-9]', ' ', text)`. You can also use the `replace` method of the `DataFrame` to transform all the texts at once:
* Apply `TfidfVectorizer` to convert the texts into feature vectors. Keep only the words that occur in at least 5 objects (the `min_df` parameter of `TfidfVectorizer`).
* Replace the missing values in the `LocationNormalized` and `ContractTime` columns with the special string `'nan'`. The code for this was given above.
* Apply `DictVectorizer` to obtain a `one-hot` encoding of the `LocationNormalized` and `ContractTime` features.
* Combine all the resulting features into a single object-feature matrix. Note that the matrices for the texts and for the categorical features are sparse. To stack their columns, use the function `scipy.sparse.hstack`.
```
train['FullDescription'] = train['FullDescription'].replace('[^a-zA-Z0-9]', ' ', regex=True).str.lower()
test['FullDescription'] = test['FullDescription'].replace('[^a-zA-Z0-9]', ' ', regex=True).str.lower()
train.head()
test.head()

vectorizer = TfidfVectorizer(min_df=5)
X_train = vectorizer.fit_transform(train['FullDescription'])
X_test = vectorizer.transform(test['FullDescription'])

train['LocationNormalized'].fillna('nan', inplace=True)
train['ContractTime'].fillna('nan', inplace=True)

enc = DictVectorizer()
X_train_categ = enc.fit_transform(train[['LocationNormalized', 'ContractTime']].to_dict('records'))
X_test_categ = enc.transform(test[['LocationNormalized', 'ContractTime']].to_dict('records'))

X = hstack([X_train, X_train_categ])
y = train['SalaryNormalized']
```

### Step 3: Train a ridge regression with the parameters `alpha=1` and `random_state=241`. The target variable is in the `SalaryNormalized` column.

```
clf = Ridge(alpha=1, random_state=241)
clf.fit(X, y)
```

### Step 4: Build predictions for the two examples from the file `salary-test-mini.csv`. The predicted values are the answer to this assignment. Report them separated by a space.

```
X = hstack([X_test, X_test_categ])
ans = clf.predict(X)
ans

def write_answer(ans, n):
    with open("ans{}.txt".format(n), "w") as fout:
        fout.write(str(ans))

write_answer(str(ans)[1:-1], 1)
```

If the answer is not an integer, separate the integer and fractional parts with a period, for example 0.42. Round the fractional part to two digits if necessary.
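What `DictVectorizer` does for the two categorical columns can be sketched by hand: build a vocabulary of (feature, value) pairs, then emit one binary indicator per pair. The records below are invented for illustration, not drawn from the real dataset:

```python
def one_hot(records):
    """One-hot encode a list of dicts of categorical features,
    in the spirit of sklearn's DictVectorizer (dense, for clarity)."""
    vocabulary = sorted({(k, v) for rec in records for k, v in rec.items()})
    columns = ["{}={}".format(k, v) for k, v in vocabulary]
    matrix = [[1 if rec.get(k) == v else 0 for k, v in vocabulary]
              for rec in records]
    return columns, matrix

records = [
    {"LocationNormalized": "London", "ContractTime": "permanent"},
    {"LocationNormalized": "Leeds", "ContractTime": "nan"},
]
columns, matrix = one_hot(records)
print(columns)
print(matrix)
```

The real `DictVectorizer` returns a sparse matrix, which matters here: with thousands of locations, each row has only a couple of non-zero entries.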
# Your first neural network

In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but the implementation of the neural network is left up to you (for the most part). After you've submitted this project, feel free to explore the data and the model further.

```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```

## Load and prepare the data

A critical step in building a neural network is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've provided the code to load and prepare the data. You'll learn more about it soon!

```
data_path = 'Bike-Sharing-Dataset/hour.csv'

rides = pd.read_csv(data_path)
rides.head()
```

## Checking out the data

This dataset contains the number of riders for each hour of each day from January 1, 2011 to December 31, 2012. Riders are split into casual and registered users, and the `cnt` column sums them up. You can see the first few rows of the data above.

Below is a plot showing the number of bike riders over the first 10 or so days in the data set (some days don't have exactly 24 entries, so it's not precisely 10 days). You can see the hourly rentals here. This data is pretty complicated! The weekends have lower ridership, and there are spikes when people are commuting to and from work during the work week. We can also see information about temperature, humidity, and wind speed above; all of these affect the number of riders. You'll be trying to capture all this with your model.

```
rides[:24*10].plot(x='dteday', y='cnt')
```

### Dummy variables

Here we have some categorical variables like season, weather, and month. To include these in our model, we'll need to create binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.

```
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
    dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
    rides = pd.concat([rides, dummies], axis=1)

fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
                  'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
```

### Scaling target variables

To make training the network easier, we'll standardize each of the continuous variables; that is, we'll shift and scale them so that they have zero mean and a standard deviation of 1.

We save the scaling factors so we can convert back when we use the network for predictions.

```
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
    mean, std = data[each].mean(), data[each].std()
    scaled_features[each] = [mean, std]
    data.loc[:, each] = (data[each] - mean)/std
```

### Splitting the data into training, testing, and validation sets

We'll save the data of approximately the last 21 days as a test set, to be used after the network is trained. We'll use this set to make predictions and compare them with the actual number of riders.

```
# Save data for approximately the last 21 days
test_data = data[-21*24:]

# Now remove the test data from the data set
data = data[:-21*24]

# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = 
data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
```

We'll split the data into two sets: one for training and one for validating the network after it has been trained. Since this data is a time series, we'll train on historical data and then try to predict future data (the validation set).

```
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
```

## Time to build the network

Below you'll build your own network. We've built the structure and the backwards pass for you. You'll implement the forward pass of the network. You'll also need to set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.

<img src="assets/neural_network.png" width=300px>

The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function as its activation function. The output layer has only one node and is used for regression: the output of the node is the same as its input. That is, the activation function is $f(x)=x$. A function that takes an input signal and generates an output signal, taking a threshold into account, is called an activation function. We work through each layer of the network, computing the outputs of each neuron. All the outputs of one layer become the inputs of the neurons in the next layer. This process is called forward propagation.

We use weights to propagate signals from the input layer to the output layer in a neural network. We also use weights to propagate the error from the output layer back through the network, in order to update the weights. This is called backpropagation.

> **Hint:** You'll need to implement the derivative of the output activation function ($f(x) = x$) for the backpropagation. If calculus isn't familiar to you, this function is simply the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.

You need to complete the following tasks:

1. Implement the sigmoid activation function. Set `self.activation_function` in `__init__` to your sigmoid function.
2. Implement the forward pass in the `train` method.
3. Implement the backpropagation algorithm in the `train` method, including computing the output error.
4. Implement the forward pass in the `run` method.

```
class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initialize weights
        self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, (self.input_nodes, self.hidden_nodes))
        self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.output_nodes))
        self.lr = learning_rate

        #### TODO: Set self.activation_function to your implemented sigmoid function ####
        #
        # Note: in Python, you can define a function with a lambda expression,
        # as shown below.
        self.activation_function = lambda x : 1/(1 + np.exp(-x))  # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with, # You can uncomment out the following three lines and put your # implementation there instead. # #def sigmoid(x): # return 0 # Replace 0 with your sigmoid calculation here #self.activation_function = sigmoid def train(self, features, targets): ''' Train the network on batch of features and targets. Arguments --------- features: 2D array, each row is one data record, each column is a feature targets: 1D array of target values ''' n_records = features.shape[0] delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape) delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape) for X, y in zip(features, targets): #### Implement the forward pass here #### ### Forward pass ### # TODO: Hidden layer - Replace these values with your calculations. hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer # TODO: Output layer - Replace these values with your calculations. final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer final_outputs = final_inputs # signals from final output layer #### Implement the backward pass here #### ### Backward pass ### # TODO: Output error - Replace this value with your calculations. error = y - final_outputs # Output layer error is the difference between desired target and actual output. # TODO: Calculate the hidden layer's contribution to the error hidden_error = np.dot(error[0], self.weights_hidden_to_output.T[0]) # TODO: Backpropagated error terms - Replace these values with your calculations. 
output_error_term = error hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs) # Weight step (input to hidden) delta_weights_i_h += hidden_error_term * X[:, None] # Weight step (hidden to output) s = output_error_term * hidden_outputs delta_weights_h_o += s[:,None] # TODO: Update the weights - Replace these values with your calculations. self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step def run(self, features): ''' Run a forward pass through the network with input features Arguments --------- features: 1D array of feature values ''' #### Implement the forward pass here #### # TODO: Hidden layer - replace these values with the appropriate calculations. hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer # TODO: Output layer - Replace these values with the appropriate calculations. 
        final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)  # signals into final output layer
        final_outputs = final_inputs  # signals from final output layer

        return final_outputs


def MSE(y, Y):
    return np.mean((y-Y)**2)
```

## Unit tests

Run these unit tests to check whether your network implementation is correct. This helps you make sure the network is implemented correctly before you start training it. These tests must pass in order to pass this project.

```
import unittest

inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
                       [0.4, 0.5],
                       [-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
                       [-0.1]])

class TestMethods(unittest.TestCase):

    ##########
    # Unit tests for data loading
    ##########

    def test_data_path(self):
        # Test that file path to dataset has been unaltered
        self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')

    def test_data_loaded(self):
        # Test that data frame loaded
        self.assertTrue(isinstance(rides, pd.DataFrame))

    ##########
    # Unit tests for network functionality
    ##########

    def test_activation(self):
        network = NeuralNetwork(3, 2, 1, 0.5)
        # Test that the activation function is a sigmoid
        self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))

    def test_train(self):
        # Test that weights are updated correctly on training
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()

        network.train(inputs, targets)
        self.assertTrue(np.allclose(network.weights_hidden_to_output,
                                    np.array([[ 0.37275328],
                                              [-0.03172939]])))
        self.assertTrue(np.allclose(network.weights_input_to_hidden,
                                    np.array([[ 0.10562014, -0.20185996],
                                              [0.39775194, 0.50074398],
                                              [-0.29887597, 0.19962801]])))

    def test_run(self):
        # Test correctness of run method
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()

        self.assertTrue(np.allclose(network.run(inputs), 0.09998924))

suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
```

## Training the network
Now you will set the network's hyperparameters. The strategy is to choose hyperparameters that make the error on the training set small without overfitting the data. If you train the network too long, or use too many hidden nodes, it can become too specialized to the particular training set and fail to generalize to the validation set; that is, while the loss on the training set keeps decreasing, the loss on the validation set will start to increase.

You will also train the network with stochastic gradient descent (SGD): for each training pass you take a random sample of the data instead of the whole dataset. Compared with ordinary gradient descent you need more passes, but each one is faster, so overall the network trains more efficiently. You will learn more about SGD later.

### Choosing the number of iterations

This is the number of batches sampled from the training data while training the network. The more iterations, the closer the model fits the data. However, with too many iterations the model fails to generalize well to other data; this is called overfitting. Choose a number that keeps the training loss low while the validation loss stays moderate. Once you start overfitting, you will see the training loss keep falling while the validation loss starts to rise.

### Choosing the learning rate

The learning rate scales the size of the weight updates. If it is too large, the weights grow too big and the network fails to fit the data. A good starting point is 0.1. If the network has trouble fitting the data, try lowering the learning rate. Note that the lower the learning rate, the smaller the weight-update steps and the longer it takes the network to converge.

### Choosing the number of hidden nodes

The more hidden nodes, the more accurate the model's predictions can be. Try different numbers of hidden nodes and see how they affect performance. You can look at the losses dictionary as a measure of network performance. With too few hidden units the model lacks the capacity to learn; with too many, there are too many directions the learning can take. The trick is to find a good balance.

```
import sys

### TODO: Set the hyperparameters here; you need to change the defaults to get a better solution ###
iterations = 5000
learning_rate = 0.8
hidden_nodes = 15
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train':[], 'validation':[]}
for ii in range(iterations):
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(train_features.index, size=128)
    # .loc replaces the long-deprecated .ix indexer
    X, y = train_features.loc[batch].values, train_targets.loc[batch, 'cnt']

    network.train(X, y)

    # Printing out the training progress
    train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
    val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
    sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
                     + "% ... Training loss: " + str(train_loss)[:5] \
                     + " ... Validation loss: " + str(val_loss)[:5])
    sys.stdout.flush()

    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)

plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
```

## Checking out your predictions

Use the test data to see how well your network models the data. If the predictions are completely wrong, make sure every step of the network is implemented correctly.

```
fig, ax = plt.subplots(figsize=(15,4))

mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()

dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])  # .loc replaces the deprecated .ix
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
```

## Optional: Thinking about your results (we will not grade this answer)

Answer the following questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?

> **Note**: You can edit this cell's text by double-clicking on it. To preview the text, press Control + Enter.

#### Please write your answer below

The predictions broadly match the actual values, but around Christmas (December 22–26) they deviate noticeably from reality. Since Christmas happens only once a year and the second year's Christmas falls in the test set, there is too little training data for those dates, so the predictions there are inaccurate.
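The backward pass in `NeuralNetwork.train` relies on the sigmoid derivative identity $\sigma'(x) = \sigma(x)\,(1 - \sigma(x))$ when computing `hidden_error_term`. As a standalone sanity check (independent of the class above; the names here are illustrative), the identity can be compared against a central finite-difference estimate:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = np.linspace(-4, 4, 9)
analytic = sigmoid(x) * (1 - sigmoid(x))               # identity used in the backward pass
h = 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)  # central finite difference

print(np.max(np.abs(analytic - numeric)))
```

The two agree to within numerical precision, which is a quick way to catch sign errors in a hand-derived gradient.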
<a href="https://colab.research.google.com/github/Amro-source/Deep-Learning/blob/main/Copy_of_keras_wide_deep.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

To run this model directly in the browser with zero setup, open it in [Colab here](https://colab.research.google.com/github/sararob/keras-wine-model/blob/master/keras-wide-deep.ipynb).

```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

# Install TensorFlow 1.7, the version this notebook was tested with
!pip install -q -U tensorflow==1.7.0

import itertools
import os
import math
import numpy as np
import pandas as pd
import tensorflow as tf

from sklearn.preprocessing import LabelEncoder

from tensorflow import keras
layers = keras.layers

# This code was tested with TensorFlow v1.7
print("You have TensorFlow version", tf.__version__)

# Get the data: original source is here: https://www.kaggle.com/zynicide/wine-reviews/data
URL = "https://storage.googleapis.com/sara-cloud-ml/wine_data.csv"
path = tf.keras.utils.get_file(URL.split('/')[-1], URL)

# Convert the data to a Pandas data frame
data = pd.read_csv(path)

# Shuffle the data
data = data.sample(frac=1)

# Print the first 5 rows
data.head()

# Do some preprocessing to limit the # of wine varieties in the dataset
data = data[pd.notnull(data['country'])]
data = data[pd.notnull(data['price'])]
data = data.drop(data.columns[0], axis=1)

variety_threshold = 500 # Anything that occurs less than this will be removed.
value_counts = data['variety'].value_counts() to_remove = value_counts[value_counts <= variety_threshold].index data.replace(to_remove, np.nan, inplace=True) data = data[pd.notnull(data['variety'])] # Split data into train and test train_size = int(len(data) * .8) print ("Train size: %d" % train_size) print ("Test size: %d" % (len(data) - train_size)) # Train features description_train = data['description'][:train_size] variety_train = data['variety'][:train_size] # Train labels labels_train = data['price'][:train_size] # Test features description_test = data['description'][train_size:] variety_test = data['variety'][train_size:] # Test labels labels_test = data['price'][train_size:] # Create a tokenizer to preprocess our text descriptions vocab_size = 12000 # This is a hyperparameter, experiment with different values for your dataset tokenize = keras.preprocessing.text.Tokenizer(num_words=vocab_size, char_level=False) tokenize.fit_on_texts(description_train) # only fit on train # Wide feature 1: sparse bag of words (bow) vocab_size vector description_bow_train = tokenize.texts_to_matrix(description_train) description_bow_test = tokenize.texts_to_matrix(description_test) # Wide feature 2: one-hot vector of variety categories # Use sklearn utility to convert label strings to numbered index encoder = LabelEncoder() encoder.fit(variety_train) variety_train = encoder.transform(variety_train) variety_test = encoder.transform(variety_test) num_classes = np.max(variety_train) + 1 # Convert labels to one hot variety_train = keras.utils.to_categorical(variety_train, num_classes) variety_test = keras.utils.to_categorical(variety_test, num_classes) # Define our wide model with the functional API bow_inputs = layers.Input(shape=(vocab_size,)) variety_inputs = layers.Input(shape=(num_classes,)) merged_layer = layers.concatenate([bow_inputs, variety_inputs]) merged_layer = layers.Dense(256, activation='relu')(merged_layer) predictions = layers.Dense(1)(merged_layer) wide_model = 
keras.Model(inputs=[bow_inputs, variety_inputs], outputs=predictions) wide_model.compile(loss='mse', optimizer='adam', metrics=['accuracy']) print(wide_model.summary()) # Deep model feature: word embeddings of wine descriptions train_embed = tokenize.texts_to_sequences(description_train) test_embed = tokenize.texts_to_sequences(description_test) max_seq_length = 170 train_embed = keras.preprocessing.sequence.pad_sequences( train_embed, maxlen=max_seq_length, padding="post") test_embed = keras.preprocessing.sequence.pad_sequences( test_embed, maxlen=max_seq_length, padding="post") # Define our deep model with the Functional API deep_inputs = layers.Input(shape=(max_seq_length,)) embedding = layers.Embedding(vocab_size, 8, input_length=max_seq_length)(deep_inputs) embedding = layers.Flatten()(embedding) embed_out = layers.Dense(1)(embedding) deep_model = keras.Model(inputs=deep_inputs, outputs=embed_out) print(deep_model.summary()) deep_model.compile(loss='mse', optimizer='adam', metrics=['accuracy']) # Combine wide and deep into one model merged_out = layers.concatenate([wide_model.output, deep_model.output]) merged_out = layers.Dense(1)(merged_out) combined_model = keras.Model(wide_model.input + [deep_model.input], merged_out) print(combined_model.summary()) combined_model.compile(loss='mse', optimizer='adam', metrics=['accuracy']) # Run training combined_model.fit([description_bow_train, variety_train] + [train_embed], labels_train, epochs=10, batch_size=128) combined_model.evaluate([description_bow_test, variety_test] + [test_embed], labels_test, batch_size=128) # Generate predictions predictions = combined_model.predict([description_bow_test, variety_test] + [test_embed]) # Compare predictions with actual values for the first few items in our test dataset num_predictions = 40 diff = 0 for i in range(num_predictions): val = predictions[i] print(description_test.iloc[i]) print('Predicted: ', val[0], 'Actual: ', labels_test.iloc[i], '\n') diff += abs(val[0] - 
labels_test.iloc[i]) # Compare the average difference between actual price and the model's predicted price print('Average prediction difference: ', diff / num_predictions) ```
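The comparison loop above accumulates absolute differences and divides by `num_predictions`, which is exactly the mean absolute error. A minimal sketch of the same computation with NumPy, using toy numbers rather than the wine data:

```python
import numpy as np

predicted = np.array([22.0, 35.0, 14.0, 60.0])  # hypothetical model outputs
actual = np.array([20.0, 40.0, 15.0, 55.0])     # hypothetical true prices

average_difference = np.abs(predicted - actual).mean()
print('Average prediction difference: ', average_difference)  # 3.25
```

Vectorizing the comparison this way also makes it trivial to evaluate over the full test set instead of the first 40 items.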
```
import requests

# 585247235
# 245719505
# 观视频工作室 (Guan Video studio), housing-price video: 543149108
# https://api.bilibili.com/x/v2/reply/reply?callback=jQuery17206010832908607249_1608646550339&jsonp=jsonp&pn=1&type=1&oid=585247235&

header = {"User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:22.0) Gecko/20100101 Firefox/22.0",
          "Cookie": ""}
comments = []
original_url = "https://api.bilibili.com/x/v2/reply?jsonp=jsonp&type=1&oid=543149108&sort=2&pn="
for page in range(1, 60):
    # paging is handled naively here
    url = original_url + str(page)
    print(url)
    try:
        html = requests.get(url, headers=header)
        data = html.json()
        if data['data']['replies']:
            for i in data['data']['replies']:
                comments.append(i['content']['message'])
    except Exception as err:
        print(url)
        print(err)

url = 'https://m.weibo.cn/comments/hotflow?id=4595898757681897&mid=4595898757681897&max_id_type=0'
# url = 'https://m.weibo.cn/profile/info?uid=5393135816'
header = {"User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:22.0) Gecko/20100101 Firefox/22.0",
          "Cookie": ""}
html = requests.get(url, headers=header)
html.json()

for i in range(100):
    print(data['data']['replies'][i]['member']['vip']['vipType'])

comments[:5]

type(comments)

len(comments)

from gensim.models import word2vec
import jieba
from gensim.models.keyedvectors import KeyedVectors

name = 'fangjia'
with open(f'comments_{name}.txt', 'w') as f:
    for item in comments:
        f.write("%s\n" % item)

stopwords = stopwordslist('hit.txt')
# stopwords

# open the file, read its contents, and tokenize
final = []
stopwords = stopwordslist('hit.txt')
with open(f'comments_{name}.txt') as f:
    for line in f.readlines():
        word = jieba.cut(line)
        for i in word:
            if i not in stopwords:
                # final = final + i +" "
                final.append(i)
# final

with open(f'fenci_comments_{name}.txt', 'w') as f:
    for item in final:
        f.write("%s\n" % item)

with open(f'fenci_comments_{name}.txt', 'r') as f:
    sentences = f.readlines()
sentences = [s.split() for s in sentences]

# size is the word-vector dimension; iter is the number of training iterations
model = word2vec.Word2Vec(sentences, window=5,size=300, min_count=2, iter=500, workers=3)
#model = Word2Vec.load("word2vec.model")

#model.wv.save('vectors_300d_word2vec') # save the training state
#model.save('vectors_300d_word2vec') # save the training state
model.wv.save_word2vec_format('vectors_chenping_word2vec.txt') # keep only the word vectors

sentences=word2vec.Text8Corpus('comments_chenping.txt')

word_vectors = KeyedVectors.load_word2vec_format('', binary=False)

word_vectors.most_similar('房价')

model.wv.similar_by_word('内卷', topn =100)

model.wv.similar_by_word('战争', topn =10)

model.wv.similar_by_word('战争', topn =10)

model.wv.similar_by_word('中国', topn =10)

model.wv.similar_by_word('中国', topn =10)

model.wv.similar_by_word('美国', topn =10)

model.wv.similar_by_word('美国', topn =10)

model.wv.similar_by_word('欧洲', topn =10)

model.wv.similar_by_word('联合国', topn =10)

model.wv.similar_by_word('历史', topn =10)

model.wv.similar_by_word('朝鲜', topn =10)

model.wv.similar_by_word('高晓松', topn =100)

model.wv.similar_by_word('公知', topn =10)

from collections import Counter
import jieba

jieba.load_userdict('userdict.txt')

# build the stopword list
def stopwordslist(filepath):
    stopwords = [line.strip() for line in open(filepath, 'r').readlines()]
    return stopwords

# tokenize a sentence
def seg_sentence(sentence):
    sentence_seged = jieba.cut(sentence.strip())
    stopwords = stopwordslist('G:\\哈工大停用词表.txt')  # path to the stopword list (HIT stopword table)
    outstr = ''
    for word in sentence_seged:
        if word not in stopwords:
            if word != '\t':
                outstr += word
                outstr += " "
    return outstr

inputs = open('hebing_wenben\\wenben.txt', 'r') # path of the file to process
outputs = open('output.txt', 'w') # path for the processed output file
for line in inputs:
    line_seg = seg_sentence(line)  # the return value here is a string
    outputs.write(line_seg)
outputs.close()
inputs.close()

# WordCount
with open('output.txt', 'r') as fr:  # read the file with stopwords already removed
    data = jieba.cut(fr.read())
data = dict(Counter(data))

with open('cipin.txt', 'w') as fw:  # path where the word counts are stored
    for k, v in data.items():
        fw.write('%s,%d\n' % (k, v))
```
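The WordCount cell above boils down to one pattern: filter out stopwords, then tally tokens with `collections.Counter`. A self-contained check of that pattern on a toy token list (these tokens and stopwords are made up for illustration, not taken from the scraped comments):

```python
from collections import Counter

tokens = ["房价", "的", "房价", "上涨", "了", "房价", "上涨"]  # toy tokenizer output
stopwords = {"的", "了"}                                       # toy stopword list

counts = Counter(t for t in tokens if t not in stopwords)
print(counts.most_common(2))  # [('房价', 3), ('上涨', 2)]
```

`most_common(n)` returns the n highest counts in descending order, which is the same ranking later written out to `cipin.txt`.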
```
#input
a,b=map(int, input().split())
c=list(map(int, input().split()))
print(a,b,c)

#initialization
a=[0]*5
b=a
b2=a[:]
a[1]=3
print('b:{}, b2:{}'.format(b,b2))

import copy
a= [[0]*3 for i in range(5)] #build 2-D lists this way, not [[0]*5]*5 (which shares rows)
b=copy.deepcopy(a)
a[1][0]=5
print(b)

#comprehension: odd numbers only
odd=[i for i in range(20) if i%2==1]
print(a)

#sorting
w=[[1, 2], [2, 6] , [3, 6], [4, 5], [5, 7]]
w.sort()
print(w)
w.sort(key=lambda x:x[1],reverse=True)
print(w)

#nested loop with early exit
n,y=1000, 1234000
for i in range(n+1):
    for j in range(n-i+1):
        if y-10000*i-5000*j==1000*(n-i-j):
            print(i, j, n-i-j)
            break
    else:
        continue
    break
else:
    print('-1 -1 -1')

#binary search
import bisect
a = [1, 2, 3, 5, 6, 7, 8, 9]
b=bisect.bisect_left(a, 8)
bisect.insort_left(a, 4)
print(a,b)

%%time
#primes (sieve)
n = 10**6
primes = set(range(2, n+1))
for i in range(2, int(n**0.5+1)):
    primes.difference_update(range(i*2, n+1, i))
primes=list(primes)
#print(primes)

#combinations and permutations
from itertools import permutations, combinations,combinations_with_replacement,product
a=['a','b','C']
print(list(permutations(a)))
print(list(combinations(a,2)))
print(list(combinations_with_replacement(a,3)))
print(list(product(['a','b','C'],repeat=2)))

# zero padding
a=100
b=0.987654321
print('{0:06d}-{1:6f}'.format(a,b))

#GCD, LCM, factorial
import math
a,b=map(int, input().split())
f=math.gcd(a,b)  # fractions.gcd was removed in Python 3.9; use math.gcd
f2=a*b//f
print(f,f2)
print(math.factorial(5))

#evaluate an expression given as a string
a=eval('1*2*3')
print(a)

from collections import Counter
a=[2,2,2,3,4,3,1,2,1,3,1,2,1,2,2,1,2,1]
a=Counter(a)
for i in a.most_common(3):print(i)

import numpy as np
s=[list(input()) for i in range(4)]
s=np.array(s)
s=s[::-1,:].T
i=0
j=1
np.sum(s[:3,:2]=='5')

#shortest path
ys,xs=2,2
yg,xg=4,5
a=['########', '#......#', '#.######', '#..#...#', '#..##..#', '##.....#', '########']
n=[(ys-1,xs-1)]
route={n[0]:0}
p=[[1,0],[0,1],[-1,0],[0,-1]]
count=1
while route.get((yg-1,xg-1),0)==0 and count != 10000:
    n2=[]
    for i in n:
        for j in p:
            np=(i[0]+j[0],i[1]+j[1])
            if a[np[0]][np[1]]=='.' and route.get(np,-1)==-1:
                n2.append(np)
                route[np]=count
    n=n2
    count+=1
print(n,route)
a

#shortest path (grid BFS, level by level)
W,H=4,5
step={(2,2):0}
route=[[(2,2)]]
#map=[[0 for i in range(W)] for j in range(H)]
map=[[0, 0, 0, 0], [1, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 1], [0, 0, 0, 0]]
print(map)
for i in range(15):
    next_list=[]
    for j in range(len(route[i])):
        next_list.append((route[i][j][0]+1,route[i][j][1]))
        next_list.append((route[i][j][0] ,route[i][j][1]+1))
        next_list.append((route[i][j][0]-1,route[i][j][1]))
        next_list.append((route[i][j][0] ,route[i][j][1]-1))
    s=set(next_list)
    #print(s)
    n_list=[]
    for l, (j,k) in enumerate(next_list):
        if W>j>=0 and H>k>=0 :
            #print(map[j][k])
            if map[k][j]==0:
                n_list.append(next_list[l])
    n_list=sorted(set(n_list),key=n_list.index)
    remove=[]
    for l in n_list:
        if step.setdefault(l, i+1) <i+1:
            remove.append(l)
    for l in remove:
        n_list.remove(l)
    route.append(n_list)
    print(i)
    if (3,4) in step:
        break
print(step)
print(route)
print(len(step.keys()))
print(step.get((3,4),-1))

#depth-first search (merging overlapping sets)
#q=list()
#for i in range(5):
#    q.append(set(map(int, input().split())))
q=[{1, 3}, {1, 4}, {9, 5}, {5, 2}, {6, 5},{3,5},{8,9},{7,9}]
count=0
while count!=10000:
    a=q.pop()
    for j in q:
        if len(j&a) != 0:
            j |=a
            count=0
            break
    else:q=[a]+q
    if count>len(q):
        break
    count+=1
print(count,q)

#depth-first search 2
#n=int(input())
#pt=[[] for i in range(n)]
#for i in range(n-1):
#    a,b=map(int,input().split())
#    pt[a-1].append(b-1)
#    pt[b-1].append(a-1)
n=7
pt=[[1, 2, 3], [0], [5, 0], [6, 0], [6], [2], [3, 4]]
def dfs(v):
    d=[-1 for i in range(n)]
    q=[]
    d[v]=0
    q.append(v)
    while q:
        v=q.pop()
        for i in pt[v]:
            if d[i]==-1:
                d[i]=d[v]+1
                q.append(i)
        print(d,q)
    return d
print(dfs(0))

#breadth-first search
n=7
pt=[[1, 2, 3], [0], [5, 0], [6, 0], [6], [2], [3, 4]]
def bfs(v):
    d=[-1]*n
    d[v]=0
    q=[v]
    c=1
    while q:
        q1=[]
        for i in q:
            for j in pt[i]:
                if d[j]==-1:
                    d[j]=c
                    q1.append(j)
        q=q1
        c+=1
        print(d,q)
    return d
print(bfs(0))

#pad the grid border
h,w=map(int, input().split())
s = ["."*(w+2)]+["."+input()+"." for i in range(h)]+["."*(w+2)]
print(s)

h,w=map(int, input().split())
s = ["."*(w+2)]+["."+input()+"." for i in range(h)]+["."*(w+2)]
s2 = [[0 for i in range(w+2)] for i in range(h+2)]
print(s)
for i in range(h+2):
    for j in range(w+2):
        if s[i][j]=='#':
            s2[i][j]=1
print(s2)

import random
print([random.randint(0,1) for i in range(10)])

import random
count=0
for i in range(1000000):
    a=1
    while a==1:
        a=random.randint(0,1)
        #print(a,end=(' '))
        if a==1:
            count+=1
print(count)

#two-pointer (shakutori) method
n=int(input())
a=list(map(int, input().split()))
count=0
right=0
for left in range(n):
    if right==left:
        right+=1
    while right<n and a[right-1]<a[right]:
        right+=1
    count+=right-left
print(count)

#two pointers, tracking elements already seen
n=int(input())
a=list(map(int, input().split()))
count=0
right=0
m=dict()
for left in range(n):
    while right<n and m.get(a[right],0)==0:
        m[a[right]]=m.get(a[right],0)+1
        right+=1
    count=max(count,right-left)
    m[a[left]]=m.get(a[left],1)-1
print(count)

# cumulative sum
b=list(range(1,30))
import numpy
b2=numpy.cumsum([0]+b)
a2=[0]
for i in b:a2.append(a2[-1]+i)
print(b2)
print(a2)

#DP1 (0/1 knapsack)
n=6
w=8
weight=[2,1,3,2,1,5]
value=[3,2,6,1,3,85]
dp=[[0 for i in range(w+1)] for j in range(n+1)]
for i in range(n):
    for j in range(w+1):
        if j>=weight[i] :
            dp[i+1][j]=max(dp[i][j-weight[i]]+value[i],dp[i][j])
        else:
            dp[i+1][j]=dp[i][j]
    print(dp[:i+2])
print(dp[n][w])

#DP2 (minimum number of items)
n=5
a=[7,5,3,1,8]
A=12
dp=[[30 for i in range(A+1)] for j in range(n+1)]
dp[0][0]=0
for i in range(n):
    for j in range(A+1):
        if j>=a[i] :
            dp[i+1][j]=min(dp[i][j-a[i]]+1,dp[i][j])
        else:
            dp[i+1][j]=dp[i][j]
    print(dp[:i+2])
print(dp[n][A])

#enumerate operators and evaluate the resulting expression
from itertools import permutations, combinations,combinations_with_replacement,product
a=list(product(['+','-'],repeat=3))
s=['5', '5', '3', '4']
for i in a:
    ans=s[0]+i[0]+s[1]+i[1]+s[2]+i[2]+s[3]
    if eval(ans)==7:
        print(ans+'=7')
        break

#Floyd-Warshall
import random
n=int(input())
#c=[list(map(int, input().split())) for i in range(n)]
c=[[random.randint(1, 10) for i in range(n)] for i in range(n)]
c[0][4]=0
c[0][3]=0
c[0][2]=0
print(c)
for k in range(n):
    for i in range(n):
        for j in range(n):
            c[i][j]=min(c[i][j],c[i][k]+c[k][j])
for i in c:
    print(i)

from scipy.sparse.csgraph import floyd_warshall
cost=floyd_warshall(c)
cost

#Bellman-Ford
def BF(p,n,s):
    inf=float("inf")
    d=[inf for i in range(n)]
    d[s-1]=0
    for i in range(n+1):
        for e in p:
            if e[0]!=inf and d[e[1]-1]>d[e[0]-1]+e[2]:
                d[e[1]-1] = d[e[0]-1] + e[2]
        if i==n-1:t=d[-1]
        if i==n and t!=d[-1]:
            return [0,'inf']
    return list(map(lambda x:-x, d))
n,m=map(int, input().split())
a=[list(map(int, input().split())) for i in range(m)]
a=[[x,y,-z] for x,y,z in a]
print(BF(a, n, 1)[-1])

# Dijkstra
mp2=[[2, 4, 2], [3, 4, 5], [3, 2, 1], [1, 3, 2], [2, 0, 8], [0, 2, 8], [1, 2, 4], [0, 1, 3]]
from heapq import heappop, heappush
inf=float('inf')
d=[inf for i in range(5)]
d[0]=0
prev=[None]*5
p=dict()
for i,j,k in mp2:
    p[i]=p.get(i,[])+[(j,k)]
print(p)
q=[]
heappush(q,(d[0],0))
while q:
    print(q,d,prev)
    du, u = heappop(q)
    if d[u]<du:
        continue
    for v,weight in p.get(u,[]):
        alt=du+weight
        if d[v]>alt:
            d[v]=alt
            prev[v]=u
            heappush(q, (alt,v))
print('p',p)
p=[4]
while prev[p[-1]]!=None:
    p.append(prev[p[-1]])
print('shortest path',*p[::-1])
print('shortest distances',d)

#Dijkstra with scipy
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra
import numpy as np
mp2=[[2, 4, 2], [3, 4, 5], [3, 2, 1], [1, 3, 2], [2, 0, 8], [0, 2, 8], [1, 2, 4], [0, 1, 3]]
in_,out,weight=zip(*mp2)
graph = csr_matrix((weight, (in_, out)), shape=(5, 5), dtype=np.int64)
dists = dijkstra(graph, indices=0)
print(dists)
p

#translate several characters at once
S='54IZSB'
S = S.translate(str.maketrans("ODIZSB","001258"))
print(S)

#prime factorization
pf={}
m=341555136
for i in range(2,int(m**0.5)+1):
    while m%i==0:
        pf[i]=pf.get(i,0)+1
        m//=i
if m>1:pf[m]=1
print(pf)

#combinations under a modulus
def framod(n, mod, a=1):
    for i in range(1,n+1):
        a=a * i % mod
    return a
def power(n, r, mod):
    if r == 0: return 1
    if r%2 == 0: return power(n*n % mod, r//2, mod) % mod
    if r%2 == 1: return n * power(n, r-1, mod) % mod
def comb(n, k, mod):
    a=framod(n, mod)
    b=framod(k, mod)
    c=framod(n-k, mod)
    return (a * power(b, mod-2, mod) * power(c, mod-2, mod)) % mod
print(comb(10**5,5000,10**9+7))
print(comb(100000, 50000,10**9+7))

#heap: repeatedly pop the minimum
from heapq import heappop,heappush
an=[[1,3],[2,1],[3,4]]
plus,h=[],[]
for i,(a,b) in enumerate(an):
    plus.append(b)
    heappush(h,(a,i))
ans,k=0,7
for i in range(k):
    x,i=heappop(h)
    ans+=x
    heappush(h,(x+plus[i],i))
print(ans,x)

#alphabet
al=[chr(ord('a') + i) for i in range(26)]
print(''.join(al))

#how many times is n divisible by 2
n=8896
print(bin(n),len(bin(n)),bin(n).rfind("1"))
print(len(bin(n)) - bin(n).rfind("1") - 1)
while not n%2:
    n/=2
print(n)

#union find
class UnionFind(object):
    def __init__(self, n=1):
        self.par = [i for i in range(n)]
        self.rank = [0 for _ in range(n)]

    def find(self, x):
        if self.par[x] == x:
            return x
        else:
            self.par[x] = self.find(self.par[x])
            return self.par[x]

    def union(self, x, y):
        x = self.find(x)
        y = self.find(y)
        if x != y:
            if self.rank[x] < self.rank[y]:
                x, y = y, x
            if self.rank[x] == self.rank[y]:
                self.rank[x] += 1
            self.par[y] = x

    def is_same(self, x, y):
        return self.find(x) == self.find(y)

n,m=map(int, input().split())
p=list(map(int, input().split()))
uf1=UnionFind(n)
for i in range(m):
    a,b=map(int, input().split())
    uf1.union(a-1,b-1)
count=0
for i in range(n):
    if uf1.is_same(i,p[i]-1):count+=1
print(count)
print(uf1.par)

#number of combinations
from scipy.special import comb
comb(10**5, 100, exact=True)%(10**9+7)

#marathon match: loop until the time limit
from time import time
st=time()
now=0
while time() - st<2:
    now=(now+1)%10**10
print(now)

#base-n representation (including negative base)
n=64
k=-3
bi=''
while n!=0:
    bi+=str(n%abs(k))
    if k<0:n=-(-n//k)
    else:n=n//k
print(bi[::-1])

#Manhattan-distance centroid & cost
a=[5,2,7,2,12,5]
import numpy as np
b=np.int64([0]+a).cumsum().cumsum()[:-1]
c=np.int64([0]+a[::-1]).cumsum().cumsum()[:-1]
print(b+c[::-1], (b+c[::-1]).min(), (b+c[::-1]).argmin())
b=sum(a)
c=0
for j,i in enumerate(a):
    c+=i
    if c>b//2:break
print(sum([a[i]*abs(i-j) for i in range(len(a))]),j)

#read all input at once
N, x, *A = map(int, open(0).read().split())

#memoized recursion
from functools import lru_cache
import sys
sys.setrecursionlimit(10000)
@lru_cache(maxsize=10000)
def fibm(n):
    if n<2:return n
    else:return (fibm(n-1) + fibm(n-2))%(10**9+7)
print(fibm(1200)%(10**9+7))

#4-directional breadth-first search
h,w=5,5
s=[['#', '.', '.', '.', '#'], ['#', '.', '.', '#', '#'], ['.', '.', '.', '.', '.'], ['#', '#', '#', '.', '#'], ['.', '.', '.', '.', '#']]
sta=(2,2)
p=[[-1]*w for i in range(h)]
np=[(1,0),(-1,0),(0,1),(0,-1)]
q={sta}
p[sta[0]][sta[1]]=0
step=0
k=5  # number of BFS steps; originally problem input, set here so the snippet runs
while step<k and q:
    step+=1
    nq=set()
    while q:
        now=q.pop()
        for i,j in np:
            nx,ny=now[0]+i,now[1]+j
            if nx<0 or nx==h or ny<0 or ny==w:continue
            if s[nx][ny]=='.' and p[nx][ny]==-1:
                p[nx][ny]=step
                nq.add((nx,ny))
    q=nq.copy()
    print(q)
for i in range(h):
    for j in range(w):
        print('{:2d}'.format(p[i][j]),end=' ')
    print()
```
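The first shakutori (two-pointer) snippet above counts the strictly increasing contiguous subarrays of an array in O(n). Wrapped in a function with a hard-coded input instead of `input()`, it can be cross-checked against a brute force that tests every subarray on a small example:

```python
def count_increasing_runs(a):
    # two-pointer count of non-empty strictly increasing contiguous subarrays
    n = len(a)
    count = 0
    right = 0
    for left in range(n):
        if right == left:
            right += 1
        while right < n and a[right-1] < a[right]:
            right += 1
        count += right - left
    return count

def brute_force(a):
    # check every contiguous subarray directly
    n = len(a)
    total = 0
    for i in range(n):
        for j in range(i, n):
            sub = a[i:j+1]
            if all(sub[k] < sub[k+1] for k in range(len(sub)-1)):
                total += 1
    return total

a = [1, 2, 3, 1, 2]
print(count_increasing_runs(a), brute_force(a))  # 9 9
```

Both counts agree (9 for `[1, 2, 3, 1, 2]`), including the case where `right` has to catch up to `left` after a run ends.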
<a href="https://colab.research.google.com/github/domsjcsn/Linear-ALgebra---Python/blob/main/LinALgPython_JOCSON.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# **WELCOME TO PYTHON FUNDAMENTALS**

In this module, we are going to establish our skills in Python programming. In this notebook we are going to cover:

* Variable and Data Types
* Operations
* Input and Output Operations
* Logic Control
* Iterables
* Functions

## **Variables and Data Types**

![image](https://static.javatpoint.com/python/images/python-data-types.png)

<blockquote> Variables and data types let the user define, declare, and evaluate mathematical terms and functions. A variable is a named memory location in which a value can be stored, and different values can be assigned to it. The data type of a variable depends on the value the user assigns to it. [1]

```
x = 1
a,b = 3, -2
type(x)

y = 3.0
type(y)

x = float(x)
type(x)
x

s, t, u = "1", '3', 'three'
type(s)
```

## **Operations**

![image](https://us.123rf.com/450wm/sabuhinovruzov/sabuhinovruzov2006/sabuhinovruzov200603084/150600486-add-subtract-divide-multiply-symbols-thin-line-education-concept-math-calculate-sign-on-white-backgr.jpg?ver=6_)

<blockquote> Mathematical operations are applied in order to solve and calculate mathematical problems. There are numerous operations one can use, including the basics: addition, subtraction, multiplication, and division. [2]

## **Arithmetic**
=========================================
Arithmetic operators are symbols that indicate that a mathematical operation is required.
```
w, x, y,z = 4.0, -3.0, 2, -32

### Addition
S = w + x
S

### Subtraction
D = y - z
D

### Multiplication
P = w*z
P

### Division
Q = y/x
Q
```

![image](https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSEu9oWAinwh6gdScpIdxtjdP5AUX3USjFV_g&usqp=CAU)

<blockquote> Floor division returns the largest integer less than or equal to the quotient. In mathematics, the floor operation is indicated by the "⌊ ⌋" symbol. [3]

```
### Floor Division
Qf = w//x
Qf

### Exponentiation
E = w**w
E
```

![image](https://www.computerhope.com/jargon/m/modulo_animation.gif)

<blockquote> The modulo operation is another mathematical operation, like floor division. The modulo operator is used to find the remainder after dividing one number by another. It is indicated by the symbol "%". [4]

```
### Modulo
mod = z%x
mod
```

## ***Assignment***

The assignment operator is used to assign a new value to a variable, property, event, or indexer element.

```
A, B, C, D, E = 0, 100, 2, 1, 2

A += w
A

B -= x
B

C *= w
C

D /= x
D

E **= y
E
```

## ***Comparators***

Comparators are used to compare values or objects; each comparison evaluates to a Boolean result. [4]

![image](https://cdn.educba.com/academy/wp-content/uploads/2019/09/Python-Comparison-Operators-1.png)

```
size_1, size_2, size_3 = 1,2.0, "1"
true_size = 1.0

## Equality
size_1 == true_size

## Non-Equality
size_2 != true_size

## Inequality
s1 = size_1 > size_2
s2 = size_1 < size_2/2
s3 = true_size >= size_1
s4 = size_2 <= true_size
```

## ***Logical***

Logical operators (`and`, `or`, `not`) combine Boolean values and return a Boolean result.

```
size_1 == true_size
size_1
size_1 is true_size
size_1 is not true_size

P, Q = True, False

conj = P and Q
conj

disj = P or Q
disj

nand = not(P and Q)
nand

xor = (not P and Q) or (P and not Q)
xor
```

## **Input & Output**

Input refers to what the user supplies to the program, and output refers to what the program gives back to the user.

```
print("Hello World!")

cnt = 14000
string= "Hello World!"
print(string, ", Current COVID count is:", cnt)

cnt += 10000
print(f"{string}, current count is: {cnt}")

sem_grade = 86.25
name = "Franz"
print("Hello {}, your semestral grade is: {}".format(name, sem_grade))

pg, mg, fg = 0.3, 0.3, 0.4
print("The weights of your semestral grades are:\
\n\t {:.2%} for Prelim\
\n\t {:.2%} for Midterms, and\
\n\t {:.2%} for Finals.".format(pg, mg, fg))

e = input("Enter a number: ")
e

name = input("Enter your name: ")
prelimsgrade =int(input("Enter your prelim grade: "))
midtermsgrade =int(input("Enter your midterm grade: "))
finalsgrade =int(input("Enter your finals grade: "))
sum = int(prelimsgrade + midtermsgrade + finalsgrade)
semester_grade = sum/3
print("Hello {}, your semestral grade is: {}".format(name, semester_grade))
```

## ***Looping Statements***

<blockquote> Looping statements perform a number of repetitions until specific conditions are met.

![image](https://javatutoring.com/wp-content/uploads/2016/12/loops-in-java.jpg)

## ***While Statement***

A while statement keeps repeating its body until its condition becomes false.

```
## while loops
i, j = 0, 10
while(i<=j):
    print(f"{i}\t|\t{j}")
    i += 1
```

### ***For Statement***

A for statement loops over the items of a sequence, repeating its body once for each item.

```
# for(int i = 0; i<10; i++){
#     printf(i)
# }
i = 0
for i in range(11):
    print(i)

playlist = ["Beside You", "Meet me at our spot", "Sandali"]
print('Now playing:\n')
for song in playlist:
    print(song)
```

## **FLOW CONTROL**

## ***Conditional Statement***

The conditional statement decides whether certain statements need to be executed or not, based on a given condition.

```
num_1, num_2 = 12, 12
if(num_1 == num_2):
    print("HAHA")
elif(num_1>num_2):
    print("HOHO")
else:
    print("HUHU")
```

## ***FUNCTIONS***

A function is a block of code that is used to produce a specific result or to perform a single action.
``` # void DeleteUser (int userid){ # delete(userid); # } def delete_user (userid): print("Successfully delete user: {}".format(userid)) user = 202014736 delete_user(202014736) def add(addend1, addend2): sum = addend1 + addend2 return sum add(3, 4) ``` ## **REFERENCES** [1] Priya Pedamkar (2020). [Variable Types](https://www.educba.com/python-variable-types) [2] w3Schools (2021). [Operations](https://www.w3schools.com/python/python_operators.asp) [3] Python Tutorial (2021). [Floor Division](https://www.pythontutorial.net/advanced-python/python-floor-division/) [4] Tutorials Point (2021). [Comparators](https://www.tutorialspoint.com/python/python_basic_operators.htm) ![image](https://i.pinimg.com/originals/f0/57/45/f05745097ea6273709bfe2e727989488.jpg)
<a href="https://colab.research.google.com/github/ppiont/tensor-flow-state/blob/master/onestop_data_clean.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

```
from google.colab import drive
drive.mount("/gdrive", force_remount = True)
%cd "/gdrive/My Drive/tensor-flow-state/tensor-flow-state"

import pandas as pd
import numpy as np

data_dir = "data/"

# Define sensors to process
sensor_name_list = ["RWS01_MONIBAS_0021hrl0414ra",
                    "RWS01_MONIBAS_0021hrl0403ra",
                    "RWS01_MONIBAS_0021hrl0409ra",
                    "RWS01_MONIBAS_0021hrl0420ra",
                    "RWS01_MONIBAS_0021hrl0426ra"]
```

### Clean sensor data

```
import datetime

def dateparse(time_in_secs):
    # Unix/epoch time to "YYYY-MM-DD HH:MM:SS"
    return datetime.datetime.fromtimestamp(float(time_in_secs))

def repair_datetime_index(df, freq = "T"):
    df = df.loc[~df.index.duplicated(keep = "first")]  # remove duplicate datetime indexes
    df = df.reindex(pd.date_range(start = df.index.min(), end = df.index.max(), freq = freq))  # add missing datetime indexes
    df.index = df.index.tz_localize("UTC").tz_convert("Europe/Amsterdam")
    return df

def fix_values(df):
    # The order of these operations is currently important! Pay attention when making changes
    df["speed_limit"] = np.where((df.index.hour < 19) & (df.index.hour >= 6), 100, 130)
    df.loc[df.flow < 0, "flow"] = np.nan  # flow is either -2 (missing data), 0 or positive; -2 to nan
    df.loc[df.speed < -1, "speed"] = np.nan  # -2 (missing data) as well as oddities (-1.33, an average over -2 and -1 lanes?) to nan
    df.speed.mask(df.speed == -1, df.speed_limit, inplace = True)  # -1 means no cars, setting it to speed limit
    df.loc[(df.speed < 0) & (df.speed > -1), "speed"] = 0  # anything else below zero is between 0 and -1, occurring when some lanes have non-moving cars while others have no cars
    df.speed.mask(df.speed > df.speed_limit, df.speed_limit, inplace = True)  # cap speed at speed_limit, since higher speed doesn't add to representation
    return df

import os

def reduce_cols(sensors, path_in = "data/ndw_raw/", path_out = "data/"):
    sensor_df_list = list()
    for sensor in sensors:
        df = pd.read_csv(os.path.join(path_in, sensor + ".csv"), header = None,
                         usecols = [0, 86, 87], names = ["timestamp", "speed", "flow"],
                         index_col = "timestamp", parse_dates = True, date_parser = dateparse)
        df.flow /= 60  # change flow unit to min^-1
        df = repair_datetime_index(df)
        df = fix_values(df)
        #df.to_csv(path_out + sensor)
        sensor_df_list.append(df)
    return sensor_df_list

sensor_df_list = reduce_cols(sensor_name_list)
```

### Join Sensors

```
def join_sensors(sensor_df_list, sensor_name_list):
    combined_df = pd.DataFrame({"timestamp": pd.date_range(start = "2011-01-01", end = "2019-12-31", freq = "T")})
    combined_df.set_index("timestamp", drop = True, inplace = True)
    combined_df.index = combined_df.index.tz_localize("UTC").tz_convert("Europe/Amsterdam")
    d = {}
    for i, sensor in enumerate(sensor_df_list):
        # only add speed limit on the final sensor
        if i == len(sensor_df_list) - 1:
            d[sensor_name_list[i]] = sensor_df_list[i]
            combined_df = combined_df.join(d[sensor_name_list[i]], how = "outer", rsuffix = '_' + sensor_name_list[i])
        else:
            d[sensor_name_list[i]] = sensor_df_list[i].iloc[:, :2]
            combined_df = combined_df.join(d[sensor_name_list[i]], how = "outer", rsuffix = "_" + sensor_name_list[i])
    combined_df.dropna(how = "all", axis = 0, inplace = True)  # this works in all cases because speed_limit is never NA on a sensor df
    return combined_df

# Join sensors to one table
df = join_sensors(sensor_df_list, sensor_name_list)

# Rename and reorder columns
df.rename({"speed_RWS01_MONIBAS_0021hrl0403ra": "speed_-2", "speed_RWS01_MONIBAS_0021hrl0409ra": "speed_-1",
           "speed_RWS01_MONIBAS_0021hrl0420ra": "speed_+1", "speed_RWS01_MONIBAS_0021hrl0426ra": "speed_+2",
           "flow_RWS01_MONIBAS_0021hrl0403ra": "flow_-2", "flow_RWS01_MONIBAS_0021hrl0409ra": "flow_-1",
           "flow_RWS01_MONIBAS_0021hrl0420ra": "flow_+1", "flow_RWS01_MONIBAS_0021hrl0426ra": "flow_+2"
           }, axis = 1, inplace = True)
col_order = ["speed", "flow", "speed_-2", "speed_-1", "speed_+1", "speed_+2", "flow_-2", "flow_-1", "flow_+1", "flow_+2", "speed_limit"]
df = df[col_order]

# Save table to csv
#df.to_csv(data_dir + "combined_df.csv")
df.head()
```

### Impute data

```
cols = col_order
speed_cols = ["speed", "speed_-2", "speed_-1", "speed_+1", "speed_+2"]
flow_cols = ["flow", "flow_-2", "flow_-1", "flow_+1", "flow_+2"]

# Where values are missing in one or more sensors, but are present in others, impute with mean of others
def fill_na_row_mean(df):
    row_avgs = df.mean(axis = 1).values.reshape(-1, 1)
    df = df.fillna(0) + df.isna().values * row_avgs
    return df

speed_df = fill_na_row_mean(df[speed_cols])
flow_df = fill_na_row_mean(df[flow_cols])
df = speed_df.join(flow_df, how = "inner").join(df[["speed_limit"]], how = "inner")

# Interpolate null vals for the first week of data of speed and flow cols
def interpolate_week(df, cols):
    week = 7 * 24 * 60
    for col in cols:
        df.iloc[:week, df.columns.get_loc(col)] = df[col][:week].interpolate(method = "time")
    return df

# Replace remaining nulls with value from 1 week previous
def shift_week(df, cols):
    # Use RangeIndex for this operation
    df["timestamp"] = df.index
    df.reset_index(drop = True, inplace = True)
    week = 7 * 24 * 60
    for col in cols:
        col_index = df.columns.get_loc(col)
        for row in df.itertuples():
            if np.isnan(row[col_index + 1]):
                df.iat[row[0], col_index] = df.iat[(row[0] - week), col_index]
    # Return to DateTimeIndex again
    df.set_index(pd.to_datetime(df.timestamp.values), inplace = True)
    df.drop("timestamp", axis = 1, inplace = True)
    return df

df = interpolate_week(df, cols)
df = shift_week(df, cols)
#df.to_csv("data/df_imputed_week_shift.csv")

import holidays

df["density"] = (df.flow * 60) / df.speed
df["weekend"] = np.where(df.index.weekday > 4, 1, 0).astype(np.int16)
df["holiday"] = np.array([int(x in holidays.NL()) for x in df.index]).astype(np.int16)
df["speed_limit"] = np.where(df.speed_limit > 115, 1, 0)

df.to_csv("data/df_imputed_week_shift_added_holiday_weekends_speed_limit_130.csv")
```
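As a quick illustration of the row-mean imputation trick used in `fill_na_row_mean` above: `df.fillna(0)` zeroes out the gaps, and `df.isna().values * row_avgs` adds the row average back only in the positions that were missing. A minimal sketch with made-up numbers (not the sensor data):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({"a": [1.0, np.nan], "b": [3.0, 4.0], "c": [5.0, 8.0]})
row_avgs = df.mean(axis = 1).values.reshape(-1, 1)  # mean skips NaN: [[3.0], [6.0]]
filled = df.fillna(0) + df.isna().values * row_avgs
print(filled.loc[1, "a"])  # 6.0, the mean of the remaining row values 4.0 and 8.0
```

Values that were already present are left untouched, since the boolean mask `df.isna()` is zero there.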
##### Copyright 2019 The TensorFlow Authors.

```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

!wget --no-check-certificate \
    https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip \
    -O /tmp/horse-or-human.zip
!wget --no-check-certificate \
    https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip \
    -O /tmp/validation-horse-or-human.zip
```

The following Python code uses the os library to access the file system, and the zipfile library to unzip the data.

```
import os
import zipfile

local_zip = '/tmp/horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/horse-or-human')
local_zip = '/tmp/validation-horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/validation-horse-or-human')
zip_ref.close()
```

The contents of the .zip files are extracted to the base directories `/tmp/horse-or-human` and `/tmp/validation-horse-or-human`, each of which contains `horses` and `humans` subdirectories.

In short: the training set is the data that is used to tell the neural network model that 'this is what a horse looks like', 'this is what a human looks like', etc.

One thing to pay attention to in this sample: we do not explicitly label the images as horses or humans. If you remember the handwriting example earlier, we labelled 'this is a 1', 'this is a 7', etc.
Later you'll see something called an ImageGenerator being used -- and this is coded to read images from subdirectories, and automatically label them from the name of that subdirectory. So, for example, you will have a 'training' directory containing a 'horses' directory and a 'humans' one. ImageGenerator will label the images appropriately for you, reducing a coding step.

Let's define each of these directories:

```
# Directory with our training horse pictures
train_horse_dir = os.path.join('/tmp/horse-or-human/horses')

# Directory with our training human pictures
train_human_dir = os.path.join('/tmp/horse-or-human/humans')

# Directory with our validation horse pictures
validation_horse_dir = os.path.join('/tmp/validation-horse-or-human/horses')

# Directory with our validation human pictures
validation_human_dir = os.path.join('/tmp/validation-horse-or-human/humans')
```

Now, let's see what the filenames look like in the `horses` and `humans` training directories:

```
train_horse_names = os.listdir(train_horse_dir)
print(train_horse_names[:10])

train_human_names = os.listdir(train_human_dir)
print(train_human_names[:10])

validation_horse_names = os.listdir(validation_horse_dir)
print(validation_horse_names[:10])

validation_human_names = os.listdir(validation_human_dir)
print(validation_human_names[:10])
```

Let's find out the total number of horse and human images in the directories:

```
print('total training horse images:', len(os.listdir(train_horse_dir)))
print('total training human images:', len(os.listdir(train_human_dir)))
print('total validation horse images:', len(os.listdir(validation_horse_dir)))
print('total validation human images:', len(os.listdir(validation_human_dir)))
```

Now let's take a look at a few pictures to get a better sense of what they look like.
First, configure the matplotlib parameters:

```
%matplotlib inline

import matplotlib.pyplot as plt
import matplotlib.image as mpimg

# Parameters for our graph; we'll output images in a 4x4 configuration
nrows = 4
ncols = 4

# Index for iterating over images
pic_index = 0
```

Now, display a batch of 8 horse and 8 human pictures. You can rerun the cell to see a fresh batch each time:

```
# Set up matplotlib fig, and size it to fit 4x4 pics
fig = plt.gcf()
fig.set_size_inches(ncols * 4, nrows * 4)

pic_index += 8
next_horse_pix = [os.path.join(train_horse_dir, fname)
                  for fname in train_horse_names[pic_index-8:pic_index]]
next_human_pix = [os.path.join(train_human_dir, fname)
                  for fname in train_human_names[pic_index-8:pic_index]]

for i, img_path in enumerate(next_horse_pix+next_human_pix):
    # Set up subplot; subplot indices start at 1
    sp = plt.subplot(nrows, ncols, i + 1)
    sp.axis('Off')  # Don't show axes (or gridlines)

    img = mpimg.imread(img_path)
    plt.imshow(img)

plt.show()
```

## Building a Small Model from Scratch

But before we continue, let's start defining the model:

Step 1 will be to import tensorflow.

```
import tensorflow as tf
```

We then add convolutional layers as in the previous example, and flatten the final result to feed into the densely connected layers. Finally we add the densely connected layers.

Note that because we are facing a two-class classification problem, i.e. a *binary classification problem*, we will end our network with a [*sigmoid* activation](https://wikipedia.org/wiki/Sigmoid_function), so that the output of our network will be a single scalar between 0 and 1, encoding the probability that the current image is class 1 (as opposed to class 0).
```
model = tf.keras.models.Sequential([
    # Note the input shape is the desired size of the image 300x300 with 3 bytes color
    # This is the first convolution
    tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    # The second convolution
    tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    # The third convolution
    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    # The fourth convolution
    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    # The fifth convolution
    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    # Flatten the results to feed into a DNN
    tf.keras.layers.Flatten(),
    # 512 neuron hidden layer
    tf.keras.layers.Dense(512, activation='relu'),
    # Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('horses') and 1 for the other ('humans')
    tf.keras.layers.Dense(1, activation='sigmoid')
])
```

The model.summary() method call prints a summary of the NN.

```
model.summary()
```

The "output shape" column shows how the size of your feature map evolves in each successive layer. The convolution layers reduce the size of the feature maps by a bit due to padding, and each pooling layer halves the dimensions.

Next, we'll configure the specifications for model training. We will train our model with the `binary_crossentropy` loss, because it's a binary classification problem and our final activation is a sigmoid. (For a refresher on loss metrics, see the [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/descending-into-ml/video-lecture).) We will use the `rmsprop` optimizer with a learning rate of `0.001`. During training, we will want to monitor classification accuracy.
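As a quick sanity check on the "output shape" discussion above, the size arithmetic can be sketched by hand, assuming valid-padding 3x3 convolutions and 2x2 max pooling as in the five conv/pool pairs of the model:

```python
def conv_out(size, kernel = 3):
    # 'valid' padding: the kernel must fit entirely inside the image
    return size - kernel + 1

def pool_out(size, pool = 2):
    # 2x2 max pooling halves the dimension (rounding down)
    return size // pool

size = 300
for _ in range(5):  # five Conv2D + MaxPooling2D pairs
    size = pool_out(conv_out(size))
print(size)              # 7  -> final feature maps are 7x7
print(size * size * 64)  # 3136 inputs to the Flatten layer
```

The final 7x7 maps with 64 channels flatten to 3136 inputs for the 512-neuron dense layer, which should match what `model.summary()` reports.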
**NOTE**: In this case, using the [RMSprop optimization algorithm](https://wikipedia.org/wiki/Stochastic_gradient_descent#RMSProp) is preferable to [stochastic gradient descent](https://developers.google.com/machine-learning/glossary/#SGD) (SGD), because RMSprop automates learning-rate tuning for us. (Other optimizers, such as [Adam](https://wikipedia.org/wiki/Stochastic_gradient_descent#Adam) and [Adagrad](https://developers.google.com/machine-learning/glossary/#AdaGrad), also automatically adapt the learning rate during training, and would work equally well here.)

```
from tensorflow.keras.optimizers import RMSprop

model.compile(loss='binary_crossentropy',
              optimizer=RMSprop(lr=0.001),
              metrics=['accuracy'])
```

### Data Preprocessing

Let's set up data generators that will read pictures in our source folders, convert them to `float32` tensors, and feed them (with their labels) to our network. We'll have one generator for the training images and one for the validation images. Our generators will yield batches of images of size 300x300 and their labels (binary).

As you may already know, data that goes into neural networks should usually be normalized in some way to make it more amenable to processing by the network. (It is uncommon to feed raw pixels into a convnet.) In our case, we will preprocess our images by normalizing the pixel values to be in the `[0, 1]` range (originally all values are in the `[0, 255]` range).

In Keras this can be done via the `keras.preprocessing.image.ImageDataGenerator` class using the `rescale` parameter. This `ImageDataGenerator` class allows you to instantiate generators of augmented image batches (and their labels) via `.flow(data, labels)` or `.flow_from_directory(directory)`. These generators can then be used with the Keras model methods that accept data generators as inputs: `fit`, `evaluate_generator`, and `predict_generator`.
```
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1/255)
validation_datagen = ImageDataGenerator(rescale=1/255)

# Flow training images in batches of 128 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
        '/tmp/horse-or-human/',  # This is the source directory for training images
        target_size=(300, 300),  # All images will be resized to 300x300
        batch_size=128,
        # Since we use binary_crossentropy loss, we need binary labels
        class_mode='binary')

# Flow validation images in batches of 32 using validation_datagen generator
validation_generator = validation_datagen.flow_from_directory(
        '/tmp/validation-horse-or-human/',  # This is the source directory for validation images
        target_size=(300, 300),  # All images will be resized to 300x300
        batch_size=32,
        # Since we use binary_crossentropy loss, we need binary labels
        class_mode='binary')
```

### Training

Let's train for 15 epochs -- this may take a few minutes to run.

Do note the values per epoch. The Loss and Accuracy are a great indication of progress of training. The model makes a guess as to the classification of the training data, then measures it against the known label. Accuracy is the portion of correct guesses.

```
history = model.fit(
      train_generator,
      steps_per_epoch=8,
      epochs=15,
      verbose=1,
      validation_data = validation_generator,
      validation_steps=8)
```

### Running the Model

Let's now take a look at actually running a prediction using the model. This code will allow you to choose 1 or more files from your file system; it will then upload them and run them through the model, giving an indication of whether the object is a horse or a human.
```
import numpy as np
from google.colab import files
from keras.preprocessing import image

uploaded = files.upload()

for fn in uploaded.keys():
    # predicting images
    path = '/content/' + fn
    img = image.load_img(path, target_size=(300, 300))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)

    images = np.vstack([x])
    classes = model.predict(images, batch_size=10)
    print(classes[0])
    if classes[0] > 0.5:
        print(fn + " is a human")
    else:
        print(fn + " is a horse")
```

### Visualizing Intermediate Representations

To get a feel for what kind of features our convnet has learned, one fun thing to do is to visualize how an input gets transformed as it goes through the convnet.

Let's pick a random image from the training set, and then generate a figure where each row is the output of a layer, and each image in the row is a specific filter in that output feature map. Rerun this cell to generate intermediate representations for a variety of training images.

```
import numpy as np
import random
from tensorflow.keras.preprocessing.image import img_to_array, load_img

# Let's define a new Model that will take an image as input, and will output
# intermediate representations for all layers in the previous model after
# the first.
successive_outputs = [layer.output for layer in model.layers[1:]]
#visualization_model = Model(img_input, successive_outputs)
visualization_model = tf.keras.models.Model(inputs = model.input, outputs = successive_outputs)

# Let's prepare a random input image from the training set.
horse_img_files = [os.path.join(train_horse_dir, f) for f in train_horse_names]
human_img_files = [os.path.join(train_human_dir, f) for f in train_human_names]
img_path = random.choice(horse_img_files + human_img_files)

img = load_img(img_path, target_size=(300, 300))  # this is a PIL image
x = img_to_array(img)  # Numpy array with shape (300, 300, 3)
x = x.reshape((1,) + x.shape)  # Numpy array with shape (1, 300, 300, 3)

# Rescale by 1/255
x /= 255

# Let's run our image through our network, thus obtaining all
# intermediate representations for this image.
successive_feature_maps = visualization_model.predict(x)

# These are the names of the layers, so we can have them as part of our plot
layer_names = [layer.name for layer in model.layers[1:]]

# Now let's display our representations
for layer_name, feature_map in zip(layer_names, successive_feature_maps):
    if len(feature_map.shape) == 4:
        # Just do this for the conv / maxpool layers, not the fully-connected layers
        n_features = feature_map.shape[-1]  # number of features in feature map
        # The feature map has shape (1, size, size, n_features)
        size = feature_map.shape[1]
        # We will tile our images in this matrix
        display_grid = np.zeros((size, size * n_features))
        for i in range(n_features):
            # Postprocess the feature to make it visually palatable
            x = feature_map[0, :, :, i]
            x -= x.mean()
            x /= x.std()
            x *= 64
            x += 128
            x = np.clip(x, 0, 255).astype('uint8')
            # We'll tile each filter into this big horizontal grid
            display_grid[:, i * size : (i + 1) * size] = x
        # Display the grid
        scale = 20. / n_features
        plt.figure(figsize=(scale * n_features, scale))
        plt.title(layer_name)
        plt.grid(False)
        plt.imshow(display_grid, aspect='auto', cmap='viridis')
```

As you can see we go from the raw pixels of the images to increasingly abstract and compact representations. The representations downstream start highlighting what the network pays attention to, and they show fewer and fewer features being "activated"; most are set to zero.
This is called "sparsity." Representation sparsity is a key feature of deep learning.

These representations carry increasingly less information about the original pixels of the image, but increasingly refined information about the class of the image. You can think of a convnet (or a deep network in general) as an information distillation pipeline.

## Clean Up

Before running the next exercise, run the following cell to terminate the kernel and free memory resources:

```
import os, signal
os.kill(os.getpid(), signal.SIGKILL)
```
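To make the sparsity observation above concrete, the fraction of zeroed units after a ReLU can be measured directly. A small sketch with random pre-activations standing in for a real conv output (the numbers here are illustrative, not taken from the trained model):

```python
import numpy as np

rng = np.random.default_rng(0)
pre_activation = rng.normal(size = (1, 8, 8, 16))  # stand-in for a conv layer's pre-activation output
activated = np.maximum(pre_activation, 0.0)        # ReLU zeroes out all negative values
sparsity = np.mean(activated == 0.0)               # fraction of units that are exactly zero
print(f"{sparsity:.0%} of activations are zero")   # roughly half, since the inputs are zero-mean
```

In a trained network the sparsity typically grows in deeper layers, which is exactly the pattern the visualization above shows.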
# Test submission of a non-personal tax return (upersonlig skattemelding) with business specification (næringsspesifikasjon)

This demo is meant to show how the flow for an end-user system can fetch a draft, make changes, validate/check it against the Skatteetaten (Norwegian Tax Administration) APIs, and then submit it via Altinn3.

```
try:
    from altinn3 import *
    from skatteetaten_api import main_relay, base64_decode_response, decode_dokument
    import requests
    import base64
    import xmltodict
    import xml.dom.minidom
    from pathlib import Path
except ImportError as e:
    print("Missing a dependency, install them via pip")
    !pip install python-jose
    !pip install xmltodict
    !pip install pathlib
    import xmltodict
    from skatteetaten_api import main_relay, base64_decode_response, decode_dokument

# helper method if you want to see a request printed as curl
def print_request_as_curl(r):
    command = "curl -X {method} -H {headers} -d '{data}' '{uri}'"
    method = r.request.method
    uri = r.request.url
    data = r.request.body
    headers = ['"{0}: {1}"'.format(k, v) for k, v in r.request.headers.items()]
    headers = " -H ".join(headers)
    print(command.format(method=method, headers=headers, data=data, uri=uri))

idporten_header = main_relay()
```

# Fetch draft and current return

Here we enter the national identity number (fødselsnummer) we logged in with. If you choose a different number, the user you logged in as must have access to the tax return you want to fetch.

#### The party below is used for demonstration; make sure to use your own test parties when testing. 01014701377 is the general manager of 811422762

```
s = requests.Session()
s.headers = dict(idporten_header)

fnr = "01014701377"  # update with the test national identity numbers you have been assigned
orgnr_as = "811423262"
```

### Draft (utkast)

```
url_utkast = f'https://mp-test.sits.no/api/skattemelding/v2/utkast/2021/{orgnr_as}'
r = s.get(url_utkast)
r
```

### Current (gjeldende)

```
url_gjeldende = f'https://mp-test.sits.no/api/skattemelding/v2/2021/{orgnr_as}'
r_gjeldende = s.get(url_gjeldende)
r_gjeldende
```

## Assessed (fastsatt)

This returns an _http 404_ if the party has no assessment. Rerun it after you have submitted and received confirmation in Altinn that the submission has been processed; you should then have an assessed tax return, provided it was submitted as Komplett.

```
url_fastsatt = f'https://mp-test.sits.no/api/skattemelding/v2/fastsatt/2021/{orgnr_as}'
r_fastsatt = s.get(url_fastsatt)
r_fastsatt

r_fastsatt.headers
```

## Response from the fetch-current call

### Current document reference:

The response from every API call, whether draft/assessed or current, includes a document reference. To call the validation service, you must use the correct reference to the current tax return. The cell below extracts the current document reference and prints the response from the fetch-current call.

```
sjekk_svar = r_gjeldende

sme_og_naering_respons = xmltodict.parse(sjekk_svar.text)
skattemelding_base64 = sme_og_naering_respons["skattemeldingOgNaeringsspesifikasjonforespoerselResponse"]["dokumenter"]["skattemeldingdokument"]
sme_base64 = skattemelding_base64["content"]
dokref = sme_og_naering_respons["skattemeldingOgNaeringsspesifikasjonforespoerselResponse"]["dokumenter"]['skattemeldingdokument']['id']
decoded_sme_xml = decode_dokument(skattemelding_base64)
sme_utkast = xml.dom.minidom.parseString(decoded_sme_xml["content"]).toprettyxml()

print(f"The response from fetch current looks like this; the current document reference is {dokref}")
print(sjekk_svar.request.method, sjekk_svar.request.url)
print(xml.dom.minidom.parseString(sjekk_svar.text).toprettyxml())

# Note that dokumenter.dokument.type = skattemeldingUpersonlig
with open("../../../src/resources/eksempler/v2/Naeringspesifikasjon-enk-v2.xml", 'r') as f:
    naering_as_xml = f.read()

innsendingstype = "komplett"
naeringsspesifikasjoner_as_b64 = base64.b64encode(naering_as_xml.encode("utf-8"))
naeringsspesifikasjoner_as_b64 = str(naeringsspesifikasjoner_as_b64.decode("utf-8"))
naeringsspesifikasjoner_base64 = naeringsspesifikasjoner_as_b64
dok_ref = dokref

valider_konvlutt_v2 = """
<?xml version="1.0" encoding="utf-8" ?>
<skattemeldingOgNaeringsspesifikasjonRequest xmlns="no:skatteetaten:fastsetting:formueinntekt:skattemeldingognaeringsspesifikasjon:request:v2">
    <dokumenter>
        <dokument>
            <type>skattemeldingUpersonlig</type>
            <encoding>utf-8</encoding>
            <content>{sme_base64}</content>
        </dokument>
        <dokument>
            <type>naeringsspesifikasjon</type>
            <encoding>utf-8</encoding>
            <content>{naeringsspeifikasjon_base64}</content>
        </dokument>
    </dokumenter>
    <dokumentreferanseTilGjeldendeDokument>
        <dokumenttype>skattemeldingPersonlig</dokumenttype>
        <dokumentidentifikator>{dok_ref}</dokumentidentifikator>
    </dokumentreferanseTilGjeldendeDokument>
    <inntektsaar>2021</inntektsaar>
    <innsendingsinformasjon>
        <innsendingstype>{innsendingstype}</innsendingstype>
        <opprettetAv>TurboSkatt</opprettetAv>
    </innsendingsinformasjon>
</skattemeldingOgNaeringsspesifikasjonRequest>
""".replace("\n", "")

naering_enk = valider_konvlutt_v2.format(sme_base64=sme_base64,
                                         naeringsspeifikasjon_base64=naeringsspesifikasjoner_base64,
                                         dok_ref=dok_ref,
                                         innsendingstype=innsendingstype)
```

# Validate the draft tax return with business information

```
def valider_sme(payload):
    url_valider = f'https://mp-test.sits.no/api/skattemelding/v2/valider/2021/{orgnr_as}'
    header = dict(idporten_header)
    header["Content-Type"] = "application/xml"
    return s.post(url_valider, headers=header, data=payload)

valider_respons = valider_sme(naering_enk)
resultatAvValidering = xmltodict.parse(valider_respons.text)["skattemeldingerOgNaeringsspesifikasjonResponse"]["resultatAvValidering"]

if valider_respons:
    print(resultatAvValidering)
    print()
    print(xml.dom.minidom.parseString(valider_respons.text).toprettyxml())
else:
    print(valider_respons.status_code, valider_respons.headers, valider_respons.text)
```

# Altinn 3

1. Fetch an Altinn token
2. Create a new instance of the form
3. Upload metadata to the form
4. Upload attachments to the tax return
5. Update the tax return XML with references to the attachment ids (vedlegg_id) from Altinn3
6. Upload the tax return and business information as an attachment

```
#1
altinn3_applikasjon = "skd/formueinntekt-skattemelding-v2"
altinn_header = hent_altinn_token(idporten_header)

#2
instans_data = opprett_ny_instans(altinn_header, fnr, appnavn=altinn3_applikasjon)
```

### 3 Upload metadata (skattemelding_V1)

```
print(f"The submission type (innsendingstype) is set to: {innsendingstype}")
req_metadata = last_opp_metadata_json(instans_data, altinn_header, inntektsaar=2021, appnavn=altinn3_applikasjon)
req_metadata
```

## Upload the tax return

```
# Upload the tax return
req_send_inn = last_opp_skattedata(instans_data, altinn_header,
                                   xml=naering_enk,
                                   data_type="skattemeldingOgNaeringspesifikasjon",
                                   appnavn=altinn3_applikasjon)
req_send_inn
```

### Set the status to ready for retrieval by the Tax Administration.

```
req_bekreftelse = endre_prosess_status(instans_data, altinn_header, "next", appnavn=altinn3_applikasjon)
req_bekreftelse = endre_prosess_status(instans_data, altinn_header, "next", appnavn=altinn3_applikasjon)
req_bekreftelse
```

### Future work: check the status of the Altinn3 instance to see whether the Tax Administration has retrieved it.

### View the submission in Altinn

Take a sip of coffee and pat yourself on the back, you have now submitted. Let the bureaucracy do its thing... it takes a little while. At present, the Tax Administration checks Altinn3 every 5 minutes for new submissions.
# Keras tutorial - the Happy House

Welcome to the first assignment of week 2. In this assignment, you will:
1. Learn to use Keras, a high-level neural networks API (programming framework), written in Python and capable of running on top of several lower-level frameworks including TensorFlow and CNTK.
2. See how you can in a couple of hours build a deep learning algorithm.

Why are we using Keras? Keras was developed to enable deep learning engineers to build and experiment with different models very quickly. Just as TensorFlow is a higher-level framework than Python, Keras is an even higher-level framework and provides additional abstractions. Being able to go from idea to result with the least possible delay is key to finding good models. However, Keras is more restrictive than the lower-level frameworks, so there are some very complex models that you can implement in TensorFlow but not (without more difficulty) in Keras. That being said, Keras will work fine for many common models.

In this exercise, you'll work on the "Happy House" problem, which we'll explain below. Let's load the required packages and solve the problem of the Happy House!
```
import numpy as np
from keras import layers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from kt_utils import *

import keras.backend as K
K.set_image_data_format('channels_last')
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow

%matplotlib inline
```

**Note**: As you can see, we've imported a lot of functions from Keras. You can use them easily just by calling them directly in the notebook. Ex: `X = Input(...)` or `X = ZeroPadding2D(...)`.

## 1 - The Happy House

For your next vacation, you decided to spend a week with five of your friends from school. It is a very convenient house with many things to do nearby. But the most important benefit is that everybody has committed to be happy when they are in the house. So anyone wanting to enter the house must prove their current state of happiness.

<img src="images/happy-house.jpg" style="width:350px;height:270px;">
<caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **the Happy House**</center></caption>

As a deep learning expert, to make sure the "Happy" rule is strictly applied, you are going to build an algorithm that uses pictures from the front door camera to check if the person is happy or not. The door should open only if the person is happy.

You have gathered pictures of your friends and yourself, taken by the front-door camera. The dataset is labeled.
<img src="images/house-members.png" style="width:550px;height:250px;">

Run the following code to normalize the dataset and learn about its shapes.

```
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.

# Reshape
Y_train = Y_train_orig.T
Y_test = Y_test_orig.T

print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
```

**Details of the "Happy" dataset**:
- Images are of shape (64,64,3)
- Training: 600 pictures
- Test: 150 pictures

It is now time to solve the "Happy" Challenge.

## 2 - Building a model in Keras

Keras is very good for rapid prototyping. In just a short time you will be able to build a model that achieves outstanding results.

Here is an example of a model in Keras:

```python
def model(input_shape):
    # Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!
    X_input = Input(input_shape)

    # Zero-Padding: pads the border of X_input with zeroes
    X = ZeroPadding2D((3, 3))(X_input)

    # CONV -> BN -> RELU Block applied to X
    X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X)
    X = BatchNormalization(axis = 3, name = 'bn0')(X)
    X = Activation('relu')(X)

    # MAXPOOL
    X = MaxPooling2D((2, 2), name='max_pool')(X)

    # FLATTEN X (means convert it to a vector) + FULLYCONNECTED
    X = Flatten()(X)
    X = Dense(1, activation='sigmoid', name='fc')(X)

    # Create model. This creates your Keras model instance, you'll use this instance to train/test the model.
    model = Model(inputs = X_input, outputs = X, name='HappyModel')

    return model
```

Note that Keras uses a different convention with variable names than we've previously used with numpy and TensorFlow.
In particular, rather than creating and assigning a new variable on each step of forward propagation such as `X`, `Z1`, `A1`, `Z2`, `A2`, etc. for the computations for the different layers, in Keras code each line above just reassigns `X` to a new value using `X = ...`. In other words, during each step of forward propagation, we are just writing the latest value in the computation into the same variable `X`. The only exception was `X_input`, which we kept separate and did not overwrite, since we needed it at the end to create the Keras model instance (`model = Model(inputs = X_input, ...)` above).

**Exercise**: Implement a `HappyModel()`. This assignment is more open-ended than most. We suggest that you start by implementing a model using the architecture we suggest, and run through the rest of this assignment using that as your initial model. But after that, come back and take initiative to try out other model architectures. For example, you might take inspiration from the model above, but then vary the network architecture and hyperparameters however you wish. You can also use other functions such as `AveragePooling2D()`, `GlobalMaxPooling2D()`, `Dropout()`.

**Note**: You have to be careful with your data's shapes. Use what you've learned in the videos to make sure your convolutional, pooling and fully-connected layers are adapted to the volumes you're applying them to.

```
# GRADED FUNCTION: HappyModel

def HappyModel(input_shape):
    """
    Implementation of the HappyModel.

    Arguments:
    input_shape -- shape of the images of the dataset

    Returns:
    model -- a Model() instance in Keras
    """

    ### START CODE HERE ###
    # Feel free to use the suggested outline in the text above to get started, and run through the whole
    # exercise (including the later portions of this notebook) once. Then come back and try out other
    # network architectures as well.
    X_input = Input(input_shape)
    X = ZeroPadding2D((3, 3))(X_input)
    X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X)
    X = BatchNormalization(axis = 3, name = 'bn0')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((2, 2), name='max_pool')(X)
    X = Flatten()(X)
    X = Dense(1, activation='sigmoid', name='fc')(X)
    model = Model(inputs = X_input, outputs = X, name='HappyModel')
    ### END CODE HERE ###

    return model
```

You have now built a function to describe your model. To train and test this model, there are four steps in Keras:
1. Create the model by calling the function above
2. Compile the model by calling `model.compile(optimizer = "...", loss = "...", metrics = ["accuracy"])`
3. Train the model on train data by calling `model.fit(x = ..., y = ..., epochs = ..., batch_size = ...)`
4. Test the model on test data by calling `model.evaluate(x = ..., y = ...)`

If you want to know more about `model.compile()`, `model.fit()`, `model.evaluate()` and their arguments, refer to the official [Keras documentation](https://keras.io/models/model/).

**Exercise**: Implement step 1, i.e. create the model.

```
### START CODE HERE ### (1 line)
happyModel = HappyModel((64, 64, 3))
### END CODE HERE ###
```

**Exercise**: Implement step 2, i.e. compile the model to configure the learning process. Choose the 3 arguments of `compile()` wisely. Hint: the Happy Challenge is a binary classification problem.

```
### START CODE HERE ### (1 line)
happyModel.compile(optimizer='adam', loss='binary_crossentropy', metrics = ["accuracy"])
### END CODE HERE ###
```

**Exercise**: Implement step 3, i.e. train the model. Choose the number of epochs and the batch size.

```
### START CODE HERE ### (1 line)
happyModel.fit(X_train, Y_train, epochs=45, batch_size=16)
### END CODE HERE ###
```

Note that if you run `fit()` again, the `model` will continue to train with the parameters it has already learnt instead of reinitializing them.

**Exercise**: Implement step 4, i.e. test/evaluate the model.
```
### START CODE HERE ### (1 line)
preds = happyModel.evaluate(X_test, Y_test)
### END CODE HERE ###
print()
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
```

If your `HappyModel()` function worked, you should have observed much better than random-guessing (50%) accuracy on the train and test sets. To give you a point of comparison, our model gets around **95% test accuracy in 40 epochs** (and 99% train accuracy) with a mini-batch size of 16 and the "adam" optimizer. But our model gets decent accuracy after just 2-5 epochs, so if you're comparing different models you can also train a variety of models on just a few epochs and see how they compare.

If you have not yet achieved a very good accuracy (let's say more than 80%), here are some things you can play around with to try to achieve it:

- Try using blocks of CONV->BATCHNORM->RELU such as:
```python
X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)
X = BatchNormalization(axis = 3, name = 'bn0')(X)
X = Activation('relu')(X)
```
until your height and width dimensions are quite low and your number of channels quite large (≈32 for example). You are encoding useful information in a volume with a lot of channels. You can then flatten the volume and use a fully-connected layer.
- You can use MAXPOOL after such blocks. It will help you lower the dimension in height and width.
- Change your optimizer. We find Adam works well.
- If the model is struggling to run and you get memory issues, lower your batch_size (12 is usually a good compromise)
- Run on more epochs, until you see the train accuracy plateauing.

Even if you have achieved a good accuracy, please feel free to keep playing with your model to try to get even better results.

**Note**: If you perform hyperparameter tuning on your model, the test set actually becomes a dev set, and your model might end up overfitting to the test (dev) set. But just for the purpose of this assignment, we won't worry about that here.
## 3 - Conclusion Congratulations, you have solved the Happy House challenge! Now, you just need to link this model to the front-door camera of your house. We unfortunately won't go into the details of how to do that here. <font color='blue'> **What we would like you to remember from this assignment:** - Keras is a tool we recommend for rapid prototyping. It allows you to quickly try out different model architectures. Are there any applications of deep learning to your daily life that you'd like to implement using Keras? - Remember how to code a model in Keras and the four steps leading to the evaluation of your model on the test set. Create->Compile->Fit/Train->Evaluate/Test. ## 4 - Test with your own image (Optional) Congratulations on finishing this assignment. You can now take a picture of your face and see if you could enter the Happy House. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Write your image's name in the following code 4. Run the code and check if the algorithm is right (0 is unhappy, 1 is happy)! The training/test sets were quite similar; for example, all the pictures were taken against the same background (since a front door camera is always mounted in the same position). This makes the problem easier, but a model trained on this data may or may not work on your own data. But feel free to give it a try! 
``` ### START CODE HERE ### img_path = 'images/my_image.jpg' ### END CODE HERE ### img = image.load_img(img_path, target_size=(64, 64)) imshow(img) x = image.img_to_array(img) x = np.expand_dims(x, axis=0) x = preprocess_input(x) print(happyModel.predict(x)) ``` ## 5 - Other useful functions in Keras (Optional) Two other basic features of Keras that you'll find useful are: - `model.summary()`: prints the details of your layers in a table with the sizes of its inputs/outputs - `plot_model()`: plots your graph in a nice layout. You can even save it as ".png" using SVG() if you'd like to share it on social media ;). It is saved in "File" then "Open..." in the upper bar of the notebook. Run the following code. ``` happyModel.summary() plot_model(happyModel, to_file='HappyModel.png') SVG(model_to_dot(happyModel).create(prog='dot', format='svg')) ```
## Face detection using OpenCV

One older (from around 2001) but still popular scheme for face detection is the Haar cascade classifier; these classifiers are included in the OpenCV library and use feature-based classification cascades that learn to isolate and detect faces in an image. You can read [the original paper proposing this approach here](https://www.cs.cmu.edu/~efros/courses/LBMV07/Papers/viola-cvpr-01.pdf).

Let's see how face detection works on an example in this notebook.

```
# import required libraries for this section
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import cv2

# load in color image for face detection
image = cv2.imread('images/multi_faces.jpg')

# convert to RGB
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

plt.figure(figsize=(20,10))
plt.imshow(image)
```

To use a face detector, we'll first convert the image from color to grayscale. For face detection this is perfectly fine to do, as there is plenty of non-color-specific structure in the human face for our detector to learn on.

```
# convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

plt.figure(figsize=(20,10))
plt.imshow(gray, cmap='gray')
```

Next we load in the fully trained architecture of the face detector, found in the file `detector_architectures/haarcascade_frontalface_default.xml`, and use it on our image to find faces!

**A note on parameters**

How many faces are detected is determined by the function `detectMultiScale`, which aims to detect faces of varying sizes. The inputs to this function are `(image, scaleFactor, minNeighbors)`; you will often detect more faces with a smaller scaleFactor and a lower value for minNeighbors, but raising these values often produces better matches. Modify these values depending on your input image.
```
# load in cascade classifier
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

# run the detector on the grayscale image
faces = face_cascade.detectMultiScale(gray, 4, 6)
```

The output of the classifier is an array of detections: coordinates that define the dimensions of a bounding box around each face. Note that this always outputs a bounding box that is square in dimension.

```
# print out the detections found
print ('We found ' + str(len(faces)) + ' faces in this image')
print ("Their coordinates and lengths/widths are as follows")
print ('=============================')
print (faces)
```

Let's plot the corresponding detection boxes on our original image to see how well we've done.
- Each detection is of the form (x,y,w,h)
- The four corner coordinates of the bounding box are (x, y, x+w, y+h)

```
# make a copy of the original image to plot rectangle detections on top of
img_with_detections = np.copy(image)

# loop over our detections and draw their corresponding boxes on top of our original image
for (x,y,w,h) in faces:
    # draw next detection as a red rectangle on top of the original image.
    # Note: the fourth argument (255,0,0) determines the color of the rectangle,
    # and the final argument (here set to 5) determines the width of the lines that draw the rectangle
    cv2.rectangle(img_with_detections,(x,y),(x+w,y+h),(255,0,0),5)

# display the result
plt.figure(figsize=(20,10))
plt.imshow(img_with_detections)
```

- Article about a GAN model that detects its own bias (racial/gender) and corrects its predictions (https://godatadriven.com/blog/fairness-in-machine-learning-with-pytorch/)
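The (x, y, w, h) to corner-point conversion used when drawing the rectangles can be isolated as a tiny helper (a sketch; `box_corners` is our own name, not part of OpenCV):

```python
def box_corners(x, y, w, h):
    # Convert an (x, y, w, h) detection into the top-left and
    # bottom-right corner points expected by cv2.rectangle.
    return (x, y), (x + w, y + h)

top_left, bottom_right = box_corners(10, 20, 64, 64)
print(top_left, bottom_right)  # (10, 20) (74, 84)
```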
# How do ratings behave after users have seen many captions?

This notebook looks at the "vote decay" of users. The New Yorker caption contest organizer, Bob Mankoff, has received many emails like the one below (name/personal details left out for anonymity)

> Here's my issue.
>
> First time I encounter something, I might say it's funny.
>
> Then it comes back in many forms over and over and it's no longer funny and I wish I could go back to the first one and say it's not funny.
>
> But it's funny, and then I can't decide whether to credit everyone with funny or keep hitting unfunny. What I really like to find out is who submitted it first, but often it's slightly different and there may be a best version. Auggh!
>
> How should we do this???

We can investigate this: we have all the data at hand. We record the timestamp, participant ID and rating for a given caption. So let's see how votes go after a user has seen $n$ captions!

```
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn')

import caption_contest_data as ccd
```

## Reading in data

Let's read in the data. As the last column can contain a non-escaped comma, we have to fix that before doing any analysis.

Note that two versions of this notebook exist (the previous notebook can be found in [43bc5d]). This highlights some of the differences required to read in the earlier datasets.

[43bc5d]:https://github.com/nextml/caption-contest-data/commit/43bc5d23ee287b8b34cc4eb0181484bd21bbd341

```
contest = 540
responses = ccd.responses(contest)
print(len(responses))
responses.head()
```

## Seeing how many captions a user has seen

This is the workhorse of the notebook: it counts how many captions each participant has seen. I sorted by timestamp (an actual timestamp, not a str) to collect the ratings in the order each user saw them. I do not assume that only one user answers at a time.
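The counting logic described above (sort by timestamp, then keep a running per-user tally) can be sketched in plain Python before doing it with pandas; the records here are hypothetical stand-ins for the responses table:

```python
# Hypothetical records: (timestamp, participant_uid) pairs, initially unsorted.
records = [(3, "a"), (1, "b"), (2, "a"), (4, "b"), (5, "b")]

seen_by = {}        # participant_uid -> captions seen so far
captions_seen = []  # running counts, aligned with the sorted records

for _, uid in sorted(records):  # sort by timestamp first
    seen_by[uid] = seen_by.get(uid, 0) + 1
    captions_seen.append(seen_by[uid])

print(captions_seen)  # [1, 1, 2, 2, 3]
```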
```
responses = responses.sort_values(by='timestamp_query_generated')
# responses = responses[0:1000]  # debug

captions_seen_by = {}  # participant_uid -> captions seen so far
captions_seen = []

for _, response in responses.iterrows():
    id_ = response['participant_uid']
    if id_ not in captions_seen_by:
        captions_seen_by[id_] = 0
    captions_seen_by[id_] += 1
    captions_seen += [captions_seen_by[id_]]

responses['number of captions seen'] = captions_seen
responses.head()
```

## Viewing the data

Now let's format the data to view it. We can view the data in two ways: since we only have three rating values, we can view the probability of a person rating 1, 2 or 3, and we can also view the mean.

For this we rely on `pd.pivot_table`. It takes a DataFrame that looks like a list of dictionaries and computes `aggfunc` (by default `np.mean`) over all items that share common keys (indicated by `index` and `columns`). It's similar to Excel's pivot table functionality.
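As a rough illustration of what `pd.pivot_table` does under the hood, here is a plain-Python sketch of grouping by a key and averaging (the rows are made-up stand-ins for our responses):

```python
# Minimal stand-in for pd.pivot_table with aggfunc=np.mean:
# group the values by their (index, column) key, then average each group.
rows = [
    {"seen": 1, "alg": "RandomSampling", "reward": 1},
    {"seen": 1, "alg": "RandomSampling", "reward": 3},
    {"seen": 2, "alg": "RandomSampling", "reward": 2},
]

groups = {}
for r in rows:
    groups.setdefault((r["seen"], r["alg"]), []).append(r["reward"])

table = {key: sum(vals) / len(vals) for key, vals in groups.items()}
print(table)  # {(1, 'RandomSampling'): 2.0, (2, 'RandomSampling'): 2.0}
```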
### Probability of rating {1, 2, 3} ``` def prob(x): n = len(x) ret = {'n': n} ret.update({name: np.sum(x == i) for name, i in [('unfunny', 1), ('somewhat funny', 2), ('funny', 3)]}) return ret probs = responses.pivot_table(index='number of captions seen', columns='alg_label', values='target_reward', aggfunc=prob) probs.head() d = {label: dict(probs[label]) for label in ['RandomSampling']} for label in d.keys(): for n in d[label].keys(): if d[label][n] is None: continue for rating in ['unfunny', 'somewhat funny', 'funny']: d[label][n][rating] = d[label][n][rating] / d[label][n]['n'] df = pd.DataFrame(d['RandomSampling']).T df = pd.concat({'RandomSampling': df}, axis=1) df.head() plt.style.use("default") fig, axs = plt.subplots(figsize=(8, 4), ncols=2) alg = "RandomSampling" show = df[alg].copy() show["captions seen"] = show.index for y in ["funny", "somewhat funny", "unfunny"]: show.plot(x="captions seen", y=y, ax=axs[0]) show.plot(x="captions seen", y="n", ax=axs[1]) for ax in axs: ax.set_xlim(0, 100) ax.grid(linestyle='--', alpha=0.5) plt.style.use("default") def plot(alg): fig = plt.figure(figsize=(10, 5)) ax = plt.subplot(1, 2, 1) df[alg][['unfunny', 'somewhat funny', 'funny']].plot(ax=ax) plt.xlim(0, 100) plt.title('{} ratings\nfor contest {}'.format(alg, contest)) plt.ylabel('Probability of rating') plt.xlabel('Number of captions seen') plt.grid(linestyle="--", alpha=0.6) ax = plt.subplot(1, 2, 2) df[alg]['n'].plot(ax=ax, logy=False) plt.ylabel('Number of users') plt.xlabel('Number of captions seen, $n$') plt.title('Number of users that have\nseen $n$ captions') plt.xlim(0, 100) plt.grid(linestyle="--", alpha=0.6) for alg in ['RandomSampling']: fig = plot(alg) plt.show() ```
## MNIST Handwritten Digits Classification Experiment This demo shows how you can use SageMaker Experiment Management Python SDK to organize, track, compare, and evaluate your machine learning (ML) model training experiments. You can track artifacts for experiments, including data sets, algorithms, hyper-parameters, and metrics. Experiments executed on SageMaker such as SageMaker Autopilot jobs and training jobs will be automatically tracked. You can also track artifacts for additional steps within an ML workflow that come before/after model training e.g. data pre-processing or post-training model evaluation. The APIs also let you search and browse your current and past experiments, compare experiments, and identify best performing models. Now we will demonstrate these capabilities through an MNIST handwritten digits classification example. The experiment will be organized as follow: 1. Download and prepare the MNIST dataset. 2. Train a Convolutional Neural Network (CNN) Model. Tune the hyper parameter that configures the number of hidden channels in the model. Track the parameter configurations and resulting model accuracy using SageMaker Experiments Python SDK. 3. Finally use the search and analytics capabilities of Python SDK to search, compare and evaluate the performance of all model versions generated from model tuning in Step 2. 4. We will also see an example of tracing the complete linage of a model version i.e. the collection of all the data pre-processing and training configurations and inputs that went into creating that model version. Make sure you selected `Python 3 (Data Science)` kernel. 
### Install Python SDKs

```
import sys

!{sys.executable} -m pip install sagemaker-experiments==0.1.24
```

### Install PyTorch

```
# pytorch version needs to be the same in both the notebook instance and the training job container
# https://github.com/pytorch/pytorch/issues/25214
!{sys.executable} -m pip install torch==1.1.0
!{sys.executable} -m pip install torchvision==0.3.0
!{sys.executable} -m pip install pillow==6.2.2
!{sys.executable} -m pip install --upgrade sagemaker
```

### Setup

```
import time
import boto3
import numpy as np
import pandas as pd
from IPython.display import set_matplotlib_formats
from matplotlib import pyplot as plt
from torchvision import datasets, transforms

import sagemaker
from sagemaker import get_execution_role
from sagemaker.session import Session
from sagemaker.analytics import ExperimentAnalytics

from smexperiments.experiment import Experiment
from smexperiments.trial import Trial
from smexperiments.trial_component import TrialComponent
from smexperiments.tracker import Tracker

set_matplotlib_formats('retina')

sess = boto3.Session()
sm = sess.client('sagemaker')
role = get_execution_role()
```

### Create an S3 bucket to hold data

```
# create an s3 bucket to hold data; note that your account might have already created a bucket with the same name
account_id = sess.client('sts').get_caller_identity()["Account"]
bucket = 'sagemaker-experiments-{}-{}'.format(sess.region_name, account_id)
prefix = 'mnist'

try:
    if sess.region_name == "us-east-1":
        sess.client('s3').create_bucket(Bucket=bucket)
    else:
        sess.client('s3').create_bucket(Bucket=bucket,
                                        CreateBucketConfiguration={'LocationConstraint': sess.region_name})
except Exception as e:
    print(e)
```

### Dataset

We download the MNIST handwritten digits dataset, and then apply a transformation to each image.
```
# TODO: can be removed after upgrade to torchvision==0.9.1
# see github.com/pytorch/vision/issues/1938 and github.com/pytorch/vision/issues/3549
datasets.MNIST.urls = [
    'https://ossci-datasets.s3.amazonaws.com/mnist/train-images-idx3-ubyte.gz',
    'https://ossci-datasets.s3.amazonaws.com/mnist/train-labels-idx1-ubyte.gz',
    'https://ossci-datasets.s3.amazonaws.com/mnist/t10k-images-idx3-ubyte.gz',
    'https://ossci-datasets.s3.amazonaws.com/mnist/t10k-labels-idx1-ubyte.gz'
]

# download the dataset
# this will not only download data to the ./mnist folder, but also load and transform (normalize) it
train_set = datasets.MNIST('mnist', train=True, transform=transforms.Compose([
    transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]), download=True)
test_set = datasets.MNIST('mnist', train=False, transform=transforms.Compose([
    transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]), download=False)

plt.imshow(train_set.data[2].numpy())
```

After transforming the images in the dataset, we upload it to s3.

```
inputs = sagemaker.Session().upload_data(path='mnist', bucket=bucket, key_prefix=prefix)
print('input spec: {}'.format(inputs))
```

Now let's track the parameters from the data pre-processing step.

```
with Tracker.create(display_name="Preprocessing", sagemaker_boto_client=sm) as tracker:
    tracker.log_parameters({
        "normalization_mean": 0.1307,
        "normalization_std": 0.3081,
    })
    # we can log the s3 uri to the dataset we just uploaded
    tracker.log_input(name="mnist-dataset", media_type="s3/uri", value=inputs)
```

### Step 1 - Set up the Experiment

Create an experiment to track all the model training iterations. Experiments are a great way to organize your data science work. You can create experiments to organize all your model development work for: [1] a business use case you are addressing (e.g. create experiment named “customer churn prediction”), or [2] a data science team that owns the experiment (e.g.
create experiment named “marketing analytics experiment”), or [3] a specific data science and ML project. Think of it as a “folder” for organizing your “files”.

### Create an Experiment

```
mnist_experiment = Experiment.create(
    experiment_name=f"mnist-hand-written-digits-classification-{int(time.time())}",
    description="Classification of mnist hand-written digits",
    sagemaker_boto_client=sm)
print(mnist_experiment)
```

### Step 2 - Track Experiment

Now create a Trial for each training run to track its inputs, parameters, and metrics.

While training the CNN model on SageMaker, we will experiment with several values for the number of hidden channels in the model. We will create a Trial to track each training job run. We will also create a TrialComponent from the tracker we created before, and add it to the Trial. This will enrich the Trial with the parameters we captured from the data pre-processing stage.

Note that the execution of the following code takes a while.

```
from sagemaker.pytorch import PyTorch, PyTorchModel

hidden_channel_trial_name_map = {}
```

If you want to run the following training jobs asynchronously, you may need to increase your resource limit. Otherwise, you can run them sequentially.
```
preprocessing_trial_component = tracker.trial_component

for i, num_hidden_channel in enumerate([2, 5, 10, 20, 32]):
    # create trial
    trial_name = f"cnn-training-job-{num_hidden_channel}-hidden-channels-{int(time.time())}"
    cnn_trial = Trial.create(
        trial_name=trial_name,
        experiment_name=mnist_experiment.experiment_name,
        sagemaker_boto_client=sm,
    )
    hidden_channel_trial_name_map[num_hidden_channel] = trial_name

    # associate the preprocessing trial component with the current trial
    cnn_trial.add_trial_component(preprocessing_trial_component)

    # all input configurations, parameters, and metrics specified in the estimator
    # definition are automatically tracked
    estimator = PyTorch(
        py_version='py3',
        entry_point='./mnist.py',
        role=role,
        sagemaker_session=sagemaker.Session(sagemaker_client=sm),
        framework_version='1.1.0',
        instance_count=1,
        instance_type='ml.c4.xlarge',
        hyperparameters={
            'epochs': 2,
            'backend': 'gloo',
            'hidden_channels': num_hidden_channel,
            'dropout': 0.2,
            'kernel_size': 5,
            'optimizer': 'sgd'
        },
        metric_definitions=[
            {'Name':'train:loss', 'Regex':'Train Loss: (.*?);'},
            {'Name':'test:loss', 'Regex':'Test Average loss: (.*?),'},
            {'Name':'test:accuracy', 'Regex':'Test Accuracy: (.*?)%;'}
        ],
        enable_sagemaker_metrics=True
    )

    cnn_training_job_name = "cnn-training-job-{}".format(int(time.time()))

    # Now associate the estimator with the Experiment and Trial
    estimator.fit(
        inputs={'training': inputs},
        job_name=cnn_training_job_name,
        experiment_config={
            "TrialName": cnn_trial.trial_name,
            "TrialComponentDisplayName": "Training",
        },
        wait=True,
    )

    # give it a while before dispatching the next training job
    time.sleep(2)
```

### Compare the model training runs for an experiment

Now we will use the analytics capabilities of the Python SDK to query and compare the training runs, identifying the best model produced by our experiment. You can retrieve trial components by using a search expression.
### Some Simple Analyses

```
search_expression = {
    "Filters":[
        {
            "Name": "DisplayName",
            "Operator": "Equals",
            "Value": "Training",
        }
    ],
}

trial_component_analytics = ExperimentAnalytics(
    sagemaker_session=Session(sess, sm),
    experiment_name=mnist_experiment.experiment_name,
    search_expression=search_expression,
    sort_by="metrics.test:accuracy.max",
    sort_order="Descending",
    metric_names=['test:accuracy'],
    parameter_names=['hidden_channels', 'epochs', 'dropout', 'optimizer']
)

trial_component_analytics.dataframe()
```

To isolate and measure the impact of the number of hidden channels on model accuracy, we vary the number of hidden channels and fix the values of the other hyperparameters.

Next let's look at an example of tracing the lineage of a model by accessing the data tracked by SageMaker Experiments for the `cnn-training-job-2-hidden-channels` trial.

```
lineage_table = ExperimentAnalytics(
    sagemaker_session=Session(sess, sm),
    search_expression={
        "Filters":[{
            "Name": "Parents.TrialName",
            "Operator": "Equals",
            "Value": hidden_channel_trial_name_map[2]
        }]
    },
    sort_by="CreationTime",
    sort_order="Ascending",
)
lineage_table.dataframe()
```

## Deploy endpoint for the best training-job / trial component

Now we'll take the best (as sorted) and create an endpoint for it.

```
# Pulling best based on sort in the analytics/dataframe so first is best....
best_trial_component_name = trial_component_analytics.dataframe().iloc[0]['TrialComponentName']
best_trial_component = TrialComponent.load(best_trial_component_name)

model_data = best_trial_component.output_artifacts['SageMaker.ModelArtifact'].value
env = {'hidden_channels': str(int(best_trial_component.parameters['hidden_channels'])),
       'dropout': str(best_trial_component.parameters['dropout']),
       'kernel_size': str(int(best_trial_component.parameters['kernel_size']))}

model = PyTorchModel(
    model_data,
    role,
    './mnist.py',
    py_version='py3',
    env=env,
    sagemaker_session=sagemaker.Session(sagemaker_client=sm),
    framework_version='1.1.0',
    name=best_trial_component.trial_component_name)

predictor = model.deploy(
    instance_type='ml.m5.xlarge',
    initial_instance_count=1)
```

## Cleanup

Once we're done, don't forget to clean up the endpoint to prevent unnecessary billing.

> Trial components can exist independently of trials and experiments. You might want to keep them if you plan on further exploration. If so, comment out tc.delete()

```
predictor.delete_endpoint()
mnist_experiment.delete_all(action='--force')
```

## Contact

Submit any questions or issues to https://github.com/aws/sagemaker-experiments/issues or mention @aws/sagemakerexperimentsadmin
# Encoding real numbers

It relies on the **scientific notation** of real numbers:

$${\Large \pm\ \text{Mantissa}\times\text{Base}^{\text{Exponent}} }$$ $$\text{where Mantissa}\in[1;\text{Base}[\\ \text{ and Exponent is a signed integer}$$

Examples in base 10:
- $-0.000000138$ is written $-1.38\times 10^{-7}$,
- $299\,790\,000$ is written $+2.9979\times 10^{8}$,
- $-5.29$ is written $-5.29\times 10^{0}$.

**Note**: strictly speaking, 0 cannot be represented in this notation.

## Encoding purely fractional numbers (strictly less than 1)

The negative powers of two are **one-half**, **one-quarter**, **one-eighth**, **one-sixteenth**, **one-thirty-second**, etc.

| **negative powers of two** | $2^{-1}$ | $2^{-2}$ | $2^{-3}$ | $2^{-4}$ | $2^{-5}$ |
|------------------------|:-----:|:-----:|:-----:|:------:|:---------:|
| **decimal** | $0.5$ | $0.25$ | $0.125$ | $0.0625$ | $0.03125$ |
| **binary** | $0.1$ | $0.01$ | $0.001$ | $0.0001$ | $0.00001$ |

**Method 1**: To find the binary representation of a number in the interval $[0;1[$ (purely fractional), we can proceed analogously to the "method of differences" used for integers.

**Example**: let's encode the base-10 number $0.696$ on **4 bits** (we only care about the bits after the point).

$$\begin{array}{r|c|l} \text{powers of 2} & \text{differences}&\text{bits}\cr \hline & 0.696 & \cr 0.5 & 0.196& 1\cr 0.25 & & 0\cr 0.125 & 0.071 & 1\cr 0.0625 & 0.0085& 1\cr \hline \end{array} \\\text{so, on 4 bits: }0.696 \text{ corresponds approximately to }0.1011\text{ in base } 2$$

**Notes**:
- Looking at the negative powers of two, observe that on 5 bits the last digit of the binary pattern would be 0 and the result would still be an **approximation**. In fact, even with a very large number of bits, the last difference would probably not be zero for this example, although it would of course keep shrinking.
- It is not hard to see that with 4 bits the approximation error cannot exceed one-sixteenth ($2^{-4}$); with 10 bits it would be at worst $1/2^{10}<$ one-thousandth, with 20 bits at worst one-millionth, etc.

**Method 2**: We can also use successive multiplications by 2:

**While** the *pure fractional part is not zero*:
- multiply it **by 2**
- put the integer part (0 or 1) and the pure fractional part of the result in the corresponding column.
- stop the loop if the number of iterations exceeds a "certain value"...

The binary encoding of the initial pure fractional number is given by the integer-part column read from top to bottom.

$$\begin{array}{r|l} \text{integer part}&\text{pure fractional part}\cr \hline & 0.696 \cr 1 & 0.392 \cr 0 & 0.784\cr 1 & 0.568\cr 1 & 0.136\cr \dots & \dots\cr \end{array} \\\text{so: }0.696 \text{ corresponds approximately to }0.1011\dots\text{ in base 2}$$

**Conversely**, what is the base-10 value of $0.0110$ (in base 2)? The first digit after the point corresponds to one-half, the second to one-quarter, etc., so we have one-quarter + one-eighth, i.e. $0.25+0.125={\bf 0.375}$

## The IEEE 754 standard

The previous example should give you a sense of how tricky encoding reals is; it was therefore decided to standardize this representation. Among others, there are:
- **single precision**, with 32 bits,
- **double precision**, with 64 bits.

In all cases, the encoding uses scientific notation where the "Base" is 2.
$${\Large \pm\ \text{Mantissa}\times\text{2}^{\text{Exponent}} }$$ $$\text{where Mantissa}\in[1;2[\\ \text{ and Exponent is a signed integer}$$

In **single precision**, the corresponding bit pattern is organized as follows:

$$\begin{array}{ccc} 1\text{ bit}& 8 \text{ bits}& 23 \text{ bits}\cr \hline \text{sign}& \text{exponent} & \text{mantissa}\cr \hline \end{array}$$

**Sign**: 0 means + and 1 means -

**Mantissa**: Since it is always of the form $$1.b_1b_2b_3\dots\\ \text{where the } b_i \text{ are bits,}$$ the 23 mantissa bits are the bits after the point; the leading bit, which is necessarily 1, is omitted: this is called the hidden bit.

*Example*: if the stored mantissa is $0110\dots 0$, it corresponds to $$\underbrace{1}_{\text{hidden bit}}+1/4+1/8=1.375$$

**Exponent**: it can be positive or negative, and "biased (offset) value" encoding is used to represent it.

*Example*: if the exponent field is $0111\,1010$, which corresponds to sixty-four + thirty-two + sixteen + eight + two, i.e. $122$, we subtract the **bias**, namely $127$ (for one byte), which gives $-5$.

*Recap example*: the following 32-bit word, interpreted as a single-precision float,

$$\large\overbrace{\color{red}{1}}^{\text{sign}}\ \overbrace{\color{green}{0111\,1010}}^{\text{exponent}}\ \overbrace{\color{blue}{0110\,0000\,0000\,0000\,0000\,000}}^{\text{mantissa}}$$

corresponds to the number:

$$\Large\overbrace{\color{red}{-}}^{\text{sign}}\overbrace{\color{blue}{1.375}}^{\text{mantissa}} \times 2^{\overbrace{\color{green}{-5}}^{\text{exponent}}}\\ =-4.296875\times 10^{-2}$$

**Notes**:
- in **single precision**, we can represent approximately the positive decimal numbers in the interval $[10^{-38}; 10^{38}]$, as well as the corresponding negative numbers.
Here is how to see it: $$2^{128}=2^8\times (2^{10})^{12}\approx 100 \times(10^3)^{12}=10^2\times 10^{36}=10^{38}$$

- in **double precision** (64 bits), the method is the same, and:
  - the **exponent** is encoded on **11 bits** (with an offset, or bias, of 1023),
  - the **mantissa** on **52 bits**.

## Special values

The standard specifies that the exponent values $0000\,0000$ and $1111\,1111$ are **reserved**:

- the number $0$ is conventionally represented with an arbitrary sign bit and all other bits at $0$: we therefore distinguish $+0$ and $-0$,
- the **infinities** are represented by the exponent $1111\,1111$ and a zero mantissa: they signal an overflow,
- a special value `NaN` (for *Not a Number*) is represented by the exponent $1111\,1111$ and a nonzero mantissa (the sign bit may take either value): it stands for the result of invalid operations such as $0/0$, $\sqrt{-1}$, $0\times +\infty$, etc.
- finally, when the exponent is zero and the mantissa is not, the number represented is by convention: $${\large \pm~ {\bf 0}.m \times 2^{-126}}\qquad \text{denormalized number on 32 bits}$$ where $m$ stands for the nonzero mantissa bits.

| Sign | Exponent | Mantissa | Special value |
|:------:|:--------:|:--------:|:--------------------------------:|
| 0 | 0...0 | 0 | $+0$ |
| 1 | 0...0 | 0 | $-0$ |
| 0 or 1 | 0...0 | $\neq 0$ | $\pm {\bf 0}.m\times 2^{-126}$ |
| 0 | 1...1 | 0 | $+\infty$ |
| 1 | 1...1 | 0 | $-\infty$ |
| 0 or 1 | 1...1 | $\neq 0$ | `NaN` |

## With Python

Python's floating-point numbers follow the IEEE 754 standard in **double precision** (64 bits).
Decimal or scientific notation can be used to define them:

```
x = 1.6
y = 1.2e-4  # i.e. 1.2x10^{-4}
print(f"x={x}, y={y}")
```

The `float` function converts an integer to a float, while `int` does the opposite:

```
x = float(-4)
y = int(5.9)  # plain truncation
print(f"x={x}, y={y}")
```

The division operator `/` always produces a float, whatever the types of its operands.

Note: `isinstance(value, type)` returns `True` or `False` depending on whether `value` has type `type` or not.

```
x = 4 / 2
print(f"is x an int? {isinstance(x, int)}, is x a float? {isinstance(x, float)}")
```

Some expressions can generate special values:

```
x = 1e200
y = x * x
z = y * 0
print(f"x={x}, y={y}, z={z}")
x = 10 ** 400
# implicit conversion to float to perform the addition ...
x + 0.5  # error...
```

A simple computation can give an unexpected result ...

```
1.2 * 3
```

One should therefore avoid any equality test `==` between floats. Instead, check that the absolute value `abs` of their difference is small (very small); for example:

```
x = 0.1 + 0.2
y = 0.3
print(f"are x and y identical? {x==y}")
print(f"are x and y very close? {abs(x-y) < 1e-10}")
```

### Supplement

To do ...
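The field decoding walked through in the recap example can be checked directly in Python. Here is a minimal sketch using only the standard `struct` module; the hex constant `0xBD300000` is simply the 32-bit pattern from the recap example (sign 1, exponent 0111 1010, mantissa 0110 0...0) written in hexadecimal.

```python
import struct

# Bit pattern from the recap example
bits = 0xBD300000

# 1) Let struct reinterpret the 4 bytes as a big-endian float32
value = struct.unpack('>f', bits.to_bytes(4, 'big'))[0]
print(value)  # -0.04296875, i.e. -4.296875e-2

# 2) Decode the three fields by hand, following the text
sign = (bits >> 31) & 0x1                  # 1 -> negative
exponent = ((bits >> 23) & 0xFF) - 127     # biased field 122, minus bias 127 = -5
mantissa = 1 + (bits & 0x7FFFFF) / 2**23   # hidden bit + fractional bits = 1.375

manual = (-1) ** sign * mantissa * 2 ** exponent
assert manual == value
```

Both decodings agree exactly, because $-1.375 \times 2^{-5}$ is representable without rounding.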
# Text classification with pretrained word vectors

**Author**: [fiyen](https://github.com/fiyen)<br> **Date**: 2021.10<br> **Abstract**: This tutorial shows how to use the Imdb dataset built into PaddlePaddle and classify text with pretrained word vectors.

## 1. Environment setup

This tutorial is written against Paddle 2.2.0-rc0. If your environment has a different version, please first follow the official [installation guide](https://www.paddlepaddle.org.cn/install/quick) for Paddle 2.2.0-rc0.

```
import paddle
from paddle.io import Dataset
import numpy as np
import paddle.text as text
import random
print(paddle.__version__)
```

## 2. Data loading

This example uses Paddle 2.2.0-rc0 to train and test a classifier on the Imdb dataset (a movie-review dataset for binary sentiment classification). Imdb is loaded directly from Paddle 2.2.0-rc0, and pretrained word vectors ([GloVe embeddings](http://nlp.stanford.edu/projects/glove/)) are used for the task.

```
print('Text-related datasets:', paddle.text.__all__)
```

Because Paddle 2.2.0-rc0 ships a preprocessed Imdb dataset, the required data instances can be obtained directly, with no preprocessing hassle. The high-quality datasets currently built into Paddle 2.2.0-rc0 include Conll05st, Imdb, Imikolov, Movielens, UCIHousing, WMT14 and WMT16, and interfaces for more common datasets will be added in the future.

The following defines how to load the Imdb training and test sets. `cutoff` sets the cutoff size for building the dictionary: words whose frequency in the dataset falls below `cutoff` are ignored. `mode` selects what the returned data is used for (`test`: test set, `train`: training set).

### 2.1 Define the datasets

```
imdb_train = text.Imdb(mode='train', cutoff=150)
imdb_test = text.Imdb(mode='test', cutoff=150)
```

Calling Imdb yields an encoded dataset in which every term corresponds to a unique id; the mapping can be inspected through imdb_train.word_idx. Each sample, i.e. one movie review, is represented as a sequence of ids. Let's check the generated data:

```
print("Training samples: %d; test samples: %d" % (len(imdb_train), len(imdb_test)))
print(f"Sample labels: {set(imdb_train.labels)}")
print(f"Sample dictionary: {list(imdb_train.word_idx.items())[:10]}")
print(f"One sample: {imdb_train.docs[0]}")
print(f"Shortest sample: {min([len(x) for x in imdb_train.docs])}; longest sample: {max([len(x) for x in imdb_train.docs])}")
```

For the training set, shuffle the data order to improve the training of the classification model.

```
shuffle_index = list(range(len(imdb_train)))
random.shuffle(shuffle_index)
train_x = [imdb_train.docs[i] for i in shuffle_index]
train_y = [imdb_train.labels[i] for i in shuffle_index]
test_x = imdb_test.docs
test_y = imdb_test.labels
```

The sample lengths show that the samples differ in length. During training, however, every sample must have the same length so that matrices can be built for batched computation. All samples are therefore padded or truncated to a common length first.

```
def vectorizer(input, label=None, length=2000):
    if label is not None:
        for x, y in zip(input, label):
            yield np.array((x +
[0]*length)[:length]).astype('int64'), np.array([y]).astype('int64')
    else:
        for x in input:
            yield np.array((x + [0]*length)[:length]).astype('int64')
```

### 2.2 Load the pretrained vectors

The file used below is small and can be fully loaded into memory. Large pretrained vectors that do not fit into memory at once can be loaded in batches and matched in parallel. In addition, AIStudio provides the glove.6B dataset as a mount; AIStudio users can load and unzip it there directly.

```
# download and unzip the pretrained vectors
!wget http://nlp.stanford.edu/data/glove.6B.zip
!unzip -q glove.6B.zip
glove_path = "./glove.6B.100d.txt"
embeddings = {}
```

Look at one line of data from the GloVe pretrained-vector file:

```
# decode using utf-8
with open(glove_path, encoding='utf-8') as gf:
    line = gf.readline()
    print("One line of GloVe data: '%s'" % line)
```

Each line starts with a word, followed by the values of that word's vector, separated by spaces. Based on this, a dictionary of all word vectors can be built as follows:

```
with open(glove_path, encoding='utf-8') as gf:
    for glove in gf:
        word, embedding = glove.split(maxsplit=1)
        embedding = [float(s) for s in embedding.split(' ')]
        embeddings[word] = embedding
print("Total pretrained word vectors: %d" % len(embeddings))
print(f"The vector of the word 'the' is: {embeddings['the']}")
```

### 2.3 Match word vectors to the dataset's vocabulary

Next, extract the dataset's vocabulary. Note that the word ids are assigned by word frequency: the more frequent a word, the smaller its id.

```
word_idx = imdb_train.word_idx
vocab = [w for w in word_idx.keys()]
print(f"First 5 words of the vocabulary: {vocab[:5]}")
print(f"Last 5 words of the vocabulary: {vocab[-5:]}")
```

Looking at the last 5 words of the vocabulary, the final entry is "\<unk\>", a symbol standing for every out-of-vocabulary word. Also, a form such as b'the' is the binary encoding of the string 'the'; remember to convert it with b'the'.decode() ('\<unk\>' is not binary-encoded; mind the difference).

Next, match each word in the vocabulary with its word vector. The pretrained vectors may not cover every word in the vocabulary; words without a pretrained vector get the zero vector.

```
# word-vector dimension; must stay consistent with the pretrained vectors
dim = 100
vocab_embeddings = np.zeros((len(vocab), dim))
for ind, word in enumerate(vocab):
    if word != '<unk>':
        word = word.decode()
        embedding = embeddings.get(word, np.zeros((dim,)))
        vocab_embeddings[ind, :] = embedding
```

## 4. Building the network

### 4.1 Build the Embedding from the pretrained vectors

For an Embedding based on pretrained vectors, the parameters are usually expected not to change any more, so set trainable=False. To keep training them on top of the pretrained values, set trainable=True.

```
pretrained_attr = paddle.ParamAttr(name='embedding',
                                   initializer=paddle.nn.initializer.Assign(vocab_embeddings),
                                   trainable=False)
embedding_layer = paddle.nn.Embedding(num_embeddings=len(vocab),
                                      embedding_dim=dim,
                                      padding_idx=word_idx['<unk>'],
                                      weight_attr=pretrained_attr)
```

### 4.2 
Build the classifier

Here we build a simple classification model based on 1-D convolution, with the structure Embedding->Conv1D->Pool1D->Linear. Defining the Linear layer requires the dimension of its input, which can be computed with the formula in the [official documentation](https://www.paddlepaddle.org.cn/documentation/docs/zh/2.0-beta/api/paddle/nn/layer/conv/Conv2d_cn.html). The helper function is:

```
def cal_output_shape(input_shape, out_channels, kernel_size, stride, padding=0, dilation=1):
    return out_channels, int((input_shape + 2*padding - (dilation*(kernel_size - 1) + 1)) / stride) + 1

# length of each sample
length = 2000
# convolution-layer parameters
kernel_size = 5
out_channels = 10
stride = 2
padding = 0

output_shape = cal_output_shape(length, out_channels, kernel_size, stride, padding)
output_shape = cal_output_shape(output_shape[1], output_shape[0], 2, 2, 0)
sim_model = paddle.nn.Sequential(embedding_layer,
                                 paddle.nn.Conv1D(in_channels=dim, out_channels=out_channels, kernel_size=kernel_size, stride=stride, padding=padding, data_format='NLC', bias_attr=True),
                                 paddle.nn.ReLU(),
                                 paddle.nn.MaxPool1D(kernel_size=2, stride=2),
                                 paddle.nn.Flatten(),
                                 paddle.nn.Linear(in_features=np.prod(output_shape), out_features=2, bias_attr=True),
                                 paddle.nn.Softmax())
paddle.summary(sim_model, input_size=(-1, length), dtypes='int64')
```

### 4.3 Read the data and train

Paddle 2.0's io.Dataset module can be used to build a data reader that conveniently feeds the data in batches for training.

```
class DataReader(Dataset):
    def __init__(self, input, label, length):
        self.data = list(vectorizer(input, label, length=length))

    def __getitem__(self, idx):
        return self.data[idx]

    def __len__(self):
        return len(self.data)

# define the input formats
input_form = paddle.static.InputSpec(shape=[None, length], dtype='int64', name='input')
label_form = paddle.static.InputSpec(shape=[None, 1], dtype='int64', name='label')
model = paddle.Model(sim_model, input_form, label_form)
model.prepare(optimizer=paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters()),
              loss=paddle.nn.loss.CrossEntropyLoss(),
              metrics=paddle.metric.Accuracy())

# split the training and validation sets
eval_length = int(len(train_x) * 1/4)
model.fit(train_data=DataReader(train_x[:-eval_length], train_y[:-eval_length], length),
          eval_data=DataReader(train_x[-eval_length:], train_y[-eval_length:], length),
          batch_size=32,
          epochs=10,
          verbose=1)
```

## 5. Evaluate and predict with the model

```
# evaluation
model.evaluate(eval_data=DataReader(test_x, test_y, length), batch_size=32, verbose=1)

# prediction
true_y = test_y[100:105] + test_y[-110:-105]
pred_y = model.predict(DataReader(test_x[100:105] + test_x[-110:-105], None, length), batch_size=1)
test_x_doc = test_x[100:105] + test_x[-110:-105]

# map label ids back to text
label_id2text = {0: 'positive', 1: 'negative'}
for index, y in enumerate(pred_y[0]):
    print("Original text: %s" % ' '.join([vocab[i].decode() for i in test_x_doc[index] if i < len(vocab) - 1]))
    print("Predicted label: %s, true label: %s" % (label_id2text[np.argmax(y)], label_id2text[true_y[index]]))
```
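The vocabulary-to-embedding matching step above is easy to check on toy data. The sketch below reproduces the same loop with a made-up two-dimensional "GloVe" dictionary and a four-word vocabulary (all names and values here are illustrative, not the real dataset): bytes entries are decoded, known words get their pretrained vector, and out-of-vocabulary words stay at the zero vector.

```python
import numpy as np

# Toy stand-ins for the real GloVe dictionary and Imdb vocabulary
embeddings = {'the': [0.1, 0.2], 'movie': [0.3, 0.4]}
vocab = [b'the', b'movie', b'great', '<unk>']
dim = 2

vocab_embeddings = np.zeros((len(vocab), dim))
for ind, word in enumerate(vocab):
    if word != '<unk>':
        word = word.decode()  # vocabulary entries are bytes, dict keys are str
        vocab_embeddings[ind, :] = embeddings.get(word, np.zeros(dim))

print(vocab_embeddings)  # row for 'great' (no pretrained vector) stays all zeros
```

The `<unk>` row is also left at zero, matching how the tutorial treats the out-of-vocabulary symbol.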
```
# modules required in this chapter
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pylab import *
import matplotlib.cm as cm
import warnings
warnings.filterwarnings(action = 'ignore')
%matplotlib inline
plt.rcParams['font.sans-serif']=['SimHei']  # fix garbled Chinese characters in plots
plt.rcParams['axes.unicode_minus']=False
from sklearn import svm
import sklearn.linear_model as LM
import scipy.stats as st
from scipy.optimize import root,fsolve
from sklearn.feature_selection import VarianceThreshold,SelectKBest,f_classif,chi2
from sklearn.feature_selection import RFE,RFECV,SelectFromModel
from sklearn.linear_model import Lasso,LassoCV,lasso_path,Ridge,RidgeCV
from sklearn.linear_model import enet_path,ElasticNetCV,ElasticNet

data=pd.read_table('邮政编码数据.txt',sep=' ',header=None)  # zip-code digit data
tmp=data.loc[(data[0]==1) | (data[0]==3)]
X=tmp.iloc[:,1:-1]
Y=tmp.iloc[:,0]
fig,axes=plt.subplots(nrows=1,ncols=2,figsize=(12,5))
alphas=list(np.linspace(0,1,20))
alphas.extend([2,3])
coef=np.zeros((len(alphas),X.shape[1]))
err=[]
for i,alpha in enumerate(alphas):
    modelLasso = Lasso(alpha=alpha)
    modelLasso.fit(X,Y)
    if i==0:
        coef[i]=modelLasso.coef_
    else:
        coef[i]=(modelLasso.coef_/coef[0])
    err.append(1-modelLasso.score(X,Y))
print('Coefficients of the first 5 variables (alpha=0): %s'%coef[0,][0:5])
for i in np.arange(0,X.shape[1]):
    axes[0].plot(coef[1:-1,i])
axes[0].set_title("Shrinkage parameter alpha vs. coefficients in Lasso regression")
axes[0].set_xlabel("Shrinkage parameter alpha")
axes[0].set_xticks(np.arange(len(alphas)))
axes[0].set_ylabel("Beta(alpha)/Beta(alpha=0)")
axes[1].plot(err)
axes[1].set_title("Shrinkage parameter alpha vs. training error in Lasso regression")
axes[1].set_xlabel("Shrinkage parameter alpha")
axes[1].set_xticks(np.arange(len(alphas)))
axes[1].set_ylabel("Misclassification rate")

alphas_lasso, coefs_lasso, _ = lasso_path(X, Y)
l1 = plt.plot(-np.log10(alphas_lasso), coefs_lasso.T)
plt.xlabel('-Log(alpha)')
plt.ylabel('Coefficients')
plt.title('Shrinkage parameter alpha vs. coefficients in Lasso regression')
plt.show()

model = LassoCV()  # by default, alpha is chosen by 3-fold cross-validation
model.fit(X,Y)
print('Variables dropped by Lasso: %d'%sum(model.coef_==0))
print('Best alpha for Lasso:',model.alpha_)
# only valid when using LassoCV
lassoAlpha=model.alpha_
estimator = Lasso(alpha=lassoAlpha)
selector=SelectFromModel(estimator=estimator)
selector.fit(X,Y)
print("Threshold: %s"%selector.threshold_)
print("Number of features kept: %d"%len(selector.get_support(indices=True)))
Xtmp=selector.inverse_transform(selector.transform(X))
plt.figure(figsize=(8,8))
np.random.seed(1)
ids=np.random.choice(len(Y),25)
for i,item in enumerate(ids):
    img=np.array(Xtmp[item,]).reshape((16,16))
    plt.subplot(5,5,i+1)
    plt.imshow(img,cmap=cm.gray)
plt.show()

modelLasso = Lasso(alpha=lassoAlpha)
modelLasso.fit(X,Y)
print("Lasso training error: %.2f"%(1-modelLasso.score(X,Y)))
modelRidge = RidgeCV()  # RidgeCV tunes alpha automatically to pick the best value
modelRidge.fit(X,Y)
print('Variables dropped by ridge regression: %d'%sum(modelRidge.coef_==0))
print('Best alpha for ridge regression:',modelRidge.alpha_)
print("Ridge training error: %.2f"%(1-modelRidge.score(X,Y)))
```
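The shrink-to-zero behaviour that the alpha sweep above visualizes comes from the soft-thresholding operator at the heart of Lasso's coordinate-descent updates. Here is a NumPy-only sketch of that operator (an illustrative helper, not scikit-learn's internals): values within `alpha` of zero are set exactly to zero, which is why Lasso drops variables as alpha grows, while ridge regression only shrinks them.

```python
import numpy as np

def soft_threshold(z, alpha):
    """Soft-thresholding: shrink z toward 0 by alpha, zeroing it when |z| <= alpha."""
    return np.sign(z) * np.maximum(np.abs(z) - alpha, 0.0)

# 3.0 shrinks to 2.0, -0.5 is zeroed out, 1.2 shrinks to 0.2
print(soft_threshold(np.array([3.0, -0.5, 1.2]), 1.0))
```

This is the one-dimensional Lasso solution; coordinate descent applies it repeatedly, one coefficient at a time.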
Basic math operations as well as a few warmup problems for testing out your functional programming chops in Python. (Please ignore the @jit decorator for now. It will come back in later assignments.)

```
try:
    from .util import jit
except:
    from util import jit
import math


@jit
def mul(x, y):
    ":math:`f(x, y) = x * y`"
    return x * y


@jit
def id(x):
    ":math:`f(x) = x`"
    return x


@jit
def add(x, y):
    ":math:`f(x, y) = x + y`"
    return float(x + y)


@jit
def neg(x):
    ":math:`f(x) = -x`"
    return -float(x)


@jit
def lt(x, y):
    ":math:`f(x) =` 1.0 if x is greater than y else 0.0"
    return 1.0 if x > y else 0.


EPS = 1e-6


@jit
def log(x):
    ":math:`f(x) = log(x)`"
    return math.log(x + EPS)


@jit
def exp(x):
    ":math:`f(x) = e^{x}`"
    return math.exp(x)


@jit
def log_back(a, b):
    ":math:`f(a, b) = b / (a + \\epsilon)`, the backward pass for :func:`log`"
    return b / (a + EPS)


@jit
def sigmoid(x):
    r"""
    :math:`f(x) = \frac{1.0}{(1.0 + e^{-x})}`

    (See https://en.wikipedia.org/wiki/Sigmoid_function .)
    """
    return 1.0 / add(1.0, exp(-x))


@jit
def relu(x):
    """
    :math:`f(x) =` x if x is greater than 0 else 0

    (See https://en.wikipedia.org/wiki/Rectifier_(neural_networks).)
    """
    return x if x > 0. else 0.


@jit
def relu_back(x, y):
    ":math:`f(x, y) =` y if x is greater than 0 else 0"
    return y if x > 0. else 0.


def inv(x):
    ":math:`f(x) = 1/x`"
    return 1.0 / x


def inv_back(a, b):
    ":math:`f(a, b) = -b / a^2`, the backward pass for :func:`inv`"
    return -(1.0 / a ** 2) * b


def eq(x, y):
    ":math:`f(x) =` 1.0 if x is equal to y else 0.0"
    return 1. if x == y else 0.
```

### Higher-order functions

```
def map(fn):
    """
    Higher-order map.

    .. image:: figs/Ops/maplist.png

    See https://en.wikipedia.org/wiki/Map_(higher-order_function)

    Args:
        fn (one-arg function): process one value

    Returns:
        function : a function that takes a list and applies `fn` to each element
    """
    def _fn(ls):
        return [fn(e) for e in ls]
    return _fn


def negList(ls):
    "Use :func:`map` and :func:`neg` to negate each element in `ls`"
    return map(neg)(ls)


def zipWith(fn):
    """
    Higher-order zipwith (or map2).

    ..
image:: figs/Ops/ziplist.png

    See https://en.wikipedia.org/wiki/Map_(higher-order_function)

    Args:
        fn (two-arg function): combine two values

    Returns:
        function : takes two equally sized lists `ls1` and `ls2`, and produces a new list by applying fn(x, y) on each pair of elements
    """
    def _fn(ls1, ls2):
        return [fn(e1, e2) for e1, e2 in zip(ls1, ls2)]
    return _fn


def addLists(ls1, ls2):
    "Add the elements of `ls1` and `ls2` using :func:`zipWith` and :func:`add`"
    return zipWith(add)(ls1, ls2)


def reduce(fn, start):
    r"""
    Higher-order reduce.

    .. image:: figs/Ops/reducelist.png

    Args:
        fn (two-arg function): combine two values
        start (float): start value :math:`x_0`

    Returns:
        function : function that takes a list `ls` of elements :math:`x_1 \ldots x_n` and computes the reduction :math:`fn(x_n, \dots fn(x_2, fn(x_1, x_0)))`
    """
    def _fn(ls):
        r = start
        for e in ls:
            r = fn(r, e)
        return r
    return _fn


def sum(ls):
    """
    Sum up a list using :func:`reduce` and :func:`add`.
    """
    return reduce(add, 0)(ls)


def prod(ls):
    """
    Product of a list using :func:`reduce` and :func:`mul`.
    """
    return reduce(mul, 1)(ls)
```
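As a quick standalone check of how these combinators compose, the snippet below repeats the minimal definitions (with trailing underscores on `map_`, `reduce_`, `sum_`, and `prod_` here so the snippet avoids shadowing Python builtins and runs on its own) and exercises them the way `negList`, `sum`, and `prod` do above.

```python
def neg(x):
    return -float(x)

def add(x, y):
    return float(x + y)

def mul(x, y):
    return x * y

def map_(fn):
    # returns a function that applies fn to each element of a list
    return lambda ls: [fn(e) for e in ls]

def reduce_(fn, start):
    # returns a function that folds fn over a list, starting from start
    def _fn(ls):
        r = start
        for e in ls:
            r = fn(r, e)
        return r
    return _fn

negList = map_(neg)
sum_ = reduce_(add, 0)
prod_ = reduce_(mul, 1)

print(negList([1, -2, 3]))   # [-1.0, 2.0, -3.0]
print(sum_([1, 2, 3]))       # 6.0
print(prod_([1, 2, 3, 4]))   # 24
```

Note how `sum_` returns a float because `add` coerces with `float(...)`, while `prod_` stays an int for int inputs.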
################################################################################
#Licensed Materials - Property of IBM
#(C) Copyright IBM Corp. 2019
#US Government Users Restricted Rights - Use, duplication disclosure restricted
#by GSA ADP Schedule Contract with IBM Corp.
################################################################################

The auto-generated notebooks are subject to the International License Agreement for Non-Warranted Programs (or equivalent) and License Information document for Watson Studio Auto-generated Notebook ("License Terms"), such agreements located in the link below. Specifically, the Source Components and Sample Materials clause included in the License Information document for Watson Studio Auto-generated Notebook applies to the auto-generated notebooks. By downloading, copying, accessing, or otherwise using the materials, you agree to the License Terms.

http://www14.software.ibm.com/cgi-bin/weblap/lap.pl?li_formnum=L-AMCU-BHU2B7&title=IBM%20Watson%20Studio%20Auto-generated%20Notebook%20V2.1

## IBM AutoAI Auto-Generated Notebook v1.11.7

### Representing Pipeline: P8 from run 25043980-e0c9-476a-8385-755ecd49aa48

**Note**: Notebook code generated using AutoAI will execute successfully. If code is modified or reordered, there is no guarantee it will successfully execute. This pipeline is optimized for the original dataset. The pipeline may fail or produce sub-optimal results if used with different data. For different data, please consider returning to AutoAI Experiments to generate a new pipeline.

Please read our documentation for more information: (IBM Cloud Platform) https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html . (IBM Cloud Pak For Data) https://www.ibm.com/support/knowledgecenter/SSQNUZ_3.0.0/wsj/analyze-data/autoai-notebook.html .
Before modifying the pipeline or trying to re-fit the pipeline, consider: The notebook converts dataframes to numpy arrays before fitting the pipeline (a current restriction of the preprocessor pipeline). The known_values_list is passed by reference and populated with categorical values during fit of the preprocessing pipeline. Delete its members before re-fitting. ### 1. Set Up ``` #attempt import of autoai_libs and install if missing try: import autoai_libs except Exception as e: print('attempting to install missing autoai_libs from pypi, this may take tens of seconds to complete.') import subprocess try: # attempt to install missing autoai-libs from pypi out = subprocess.check_output('pip install autoai-libs', shell=True) for line in out.splitlines(): print(line) except Exception as e: print(str(e)) try: import autoai_libs except Exception as e: print('attempting to install missing autoai_libs from local filesystem, this may take tens of seconds to complete.') import subprocess # attempt to install missing autoai-libs from local filesystem try: out = subprocess.check_output('pip install .', shell=True, cwd='software/autoai_libs') for line in out.splitlines(): print(line) import autoai_libs except Exception as e: print(str(e)) import sklearn try: import xgboost except: print('xgboost, if needed, will be installed and imported later') try: import lightgbm except: print('lightgbm, if needed, will be installed and imported later') from sklearn.cluster import FeatureAgglomeration import numpy from numpy import inf, nan, dtype, mean from autoai_libs.sklearn.custom_scorers import CustomScorers from autoai_libs.cognito.transforms.transform_utils import TExtras, FC from autoai_libs.transformers.exportable import * from autoai_libs.utils.exportable_utils import * from sklearn.pipeline import Pipeline known_values_list=[] # compose a decorator to assist pipeline instantiation via import of modules and installation of packages def decorator_retries(func): def 
install_import_retry(*args, **kwargs): retries = 0 successful = False failed_retries = 0 while retries < 100 and failed_retries < 10 and not successful: retries += 1 failed_retries += 1 try: result = func(*args, **kwargs) successful = True except Exception as e: estr = str(e) if estr.startswith('name ') and estr.endswith(' is not defined'): try: import importlib module_name = estr.split("'")[1] module = importlib.import_module(module_name) globals().update({module_name: module}) print('import successful for ' + module_name) failed_retries -= 1 except Exception as import_failure: print('import of ' + module_name + ' failed with: ' + str(import_failure)) import subprocess print('attempting pip install of ' + module_name) process = subprocess.Popen('pip install ' + module_name, shell=True) process.wait() try: print('re-attempting import of ' + module_name) module = importlib.import_module(module_name) globals().update({module_name: module}) print('import successful for ' + module_name) failed_retries -= 1 except Exception as import_or_installation_failure: print('failure installing and/or importing ' + module_name + ' error was: ' + str( import_or_installation_failure)) raise (ModuleNotFoundError('Missing package in environment for ' + module_name + '? Try import and/or pip install manually?')) elif type(e) is AttributeError: if 'module ' in estr and ' has no attribute ' in estr: pieces = estr.split("'") if len(pieces) == 5: try: import importlib print('re-attempting import of ' + pieces[3] + ' from ' + pieces[1]) module = importlib.import_module('.' + pieces[3], pieces[1]) failed_retries -= 1 except: print('failed attempt to import ' + pieces[3]) raise (e) else: raise (e) else: raise (e) if successful: print('Pipeline successfully instantiated') else: raise (ModuleNotFoundError( 'Remaining missing imports/packages in environment? Retry cell and/or try pip install manually?')) return result return install_import_retry ``` ### 2. 
Compose Pipeline ``` # metadata necessary to replicate AutoAI scores with the pipeline _input_metadata = {'run_uid': '25043980-e0c9-476a-8385-755ecd49aa48', 'pn': 'P8', 'data_source': '', 'target_label_name': 'charges', 'learning_type': 'regression', 'optimization_metric': 'neg_root_mean_squared_error', 'random_state': 33, 'cv_num_folds': 3, 'holdout_fraction': 0.1, 'pos_label': None} # define a function to compose the pipeline, and invoke it @decorator_retries def compose_pipeline(): import numpy from numpy import nan, dtype, mean # # composing steps for toplevel Pipeline # _input_metadata = {'run_uid': '25043980-e0c9-476a-8385-755ecd49aa48', 'pn': 'P8', 'data_source': '', 'target_label_name': 'charges', 'learning_type': 'regression', 'optimization_metric': 'neg_root_mean_squared_error', 'random_state': 33, 'cv_num_folds': 3, 'holdout_fraction': 0.1, 'pos_label': None} steps = [] # # composing steps for preprocessor Pipeline # preprocessor__input_metadata = None preprocessor_steps = [] # # composing steps for preprocessor_features FeatureUnion # preprocessor_features_transformer_list = [] # # composing steps for preprocessor_features_categorical Pipeline # preprocessor_features_categorical__input_metadata = None preprocessor_features_categorical_steps = [] preprocessor_features_categorical_steps.append(('cat_column_selector', autoai_libs.transformers.exportable.NumpyColumnSelector(columns=[0, 1, 3, 4, 5]))) preprocessor_features_categorical_steps.append(('cat_compress_strings', autoai_libs.transformers.exportable.CompressStrings(activate_flag=True, compress_type='hash', dtypes_list=['int_num', 'char_str', 'int_num', 'char_str', 'char_str'], missing_values_reference_list=['', '-', '?', nan], misslist_list=[[], [], [], [], []]))) preprocessor_features_categorical_steps.append(('cat_missing_replacer', autoai_libs.transformers.exportable.NumpyReplaceMissingValues(filling_values=nan, missing_values=[]))) 
preprocessor_features_categorical_steps.append(('cat_unknown_replacer', autoai_libs.transformers.exportable.NumpyReplaceUnknownValues(filling_values=nan, filling_values_list=[nan, nan, nan, nan, nan], known_values_list=known_values_list, missing_values_reference_list=['', '-', '?', nan]))) preprocessor_features_categorical_steps.append(('boolean2float_transformer', autoai_libs.transformers.exportable.boolean2float(activate_flag=True))) preprocessor_features_categorical_steps.append(('cat_imputer', autoai_libs.transformers.exportable.CatImputer(activate_flag=True, missing_values=nan, sklearn_version_family='20', strategy='most_frequent'))) preprocessor_features_categorical_steps.append(('cat_encoder', autoai_libs.transformers.exportable.CatEncoder(activate_flag=True, categories='auto', dtype=numpy.float64, encoding='ordinal', handle_unknown='error', sklearn_version_family='20'))) preprocessor_features_categorical_steps.append(('float32_transformer', autoai_libs.transformers.exportable.float32_transform(activate_flag=True))) # assembling preprocessor_features_categorical_ Pipeline preprocessor_features_categorical_pipeline = sklearn.pipeline.Pipeline(steps=preprocessor_features_categorical_steps) preprocessor_features_transformer_list.append(('categorical', preprocessor_features_categorical_pipeline)) # # composing steps for preprocessor_features_numeric Pipeline # preprocessor_features_numeric__input_metadata = None preprocessor_features_numeric_steps = [] preprocessor_features_numeric_steps.append(('num_column_selector', autoai_libs.transformers.exportable.NumpyColumnSelector(columns=[2]))) preprocessor_features_numeric_steps.append(('num_floatstr2float_transformer', autoai_libs.transformers.exportable.FloatStr2Float(activate_flag=True, dtypes_list=['float_num'], missing_values_reference_list=[]))) preprocessor_features_numeric_steps.append(('num_missing_replacer', autoai_libs.transformers.exportable.NumpyReplaceMissingValues(filling_values=nan, 
missing_values=[]))) preprocessor_features_numeric_steps.append(('num_imputer', autoai_libs.transformers.exportable.NumImputer(activate_flag=True, missing_values=nan, strategy='median'))) preprocessor_features_numeric_steps.append(('num_scaler', autoai_libs.transformers.exportable.OptStandardScaler(num_scaler_copy=None, num_scaler_with_mean=None, num_scaler_with_std=None, use_scaler_flag=False))) preprocessor_features_numeric_steps.append(('float32_transformer', autoai_libs.transformers.exportable.float32_transform(activate_flag=True))) # assembling preprocessor_features_numeric_ Pipeline preprocessor_features_numeric_pipeline = sklearn.pipeline.Pipeline(steps=preprocessor_features_numeric_steps) preprocessor_features_transformer_list.append(('numeric', preprocessor_features_numeric_pipeline)) # assembling preprocessor_features_ FeatureUnion preprocessor_features_pipeline = sklearn.pipeline.FeatureUnion(transformer_list=preprocessor_features_transformer_list) preprocessor_steps.append(('features', preprocessor_features_pipeline)) preprocessor_steps.append(('permuter', autoai_libs.transformers.exportable.NumpyPermuteArray(axis=0, permutation_indices=[0, 1, 3, 4, 5, 2]))) # assembling preprocessor_ Pipeline preprocessor_pipeline = sklearn.pipeline.Pipeline(steps=preprocessor_steps) steps.append(('preprocessor', preprocessor_pipeline)) # # composing steps for cognito Pipeline # cognito__input_metadata = None cognito_steps = [] cognito_steps.append(('0', autoai_libs.cognito.transforms.transform_utils.TA2(fun=numpy.add, name='sum', datatypes1=['intc', 'intp', 'int_', 'uint8', 'uint16', 'uint32', 'uint64', 'int8', 'int16', 'int32', 'int64', 'short', 'long', 'longlong', 'float16', 'float32', 'float64'], feat_constraints1=[autoai_libs.utils.fc_methods.is_not_categorical], datatypes2=['intc', 'intp', 'int_', 'uint8', 'uint16', 'uint32', 'uint64', 'int8', 'int16', 'int32', 'int64', 'short', 'long', 'longlong', 'float16', 'float32', 'float64'], 
feat_constraints2=[autoai_libs.utils.fc_methods.is_not_categorical], tgraph=None, apply_all=True, col_names=['age', 'sex', 'bmi', 'children', 'smoker', 'region'], col_dtypes=[dtype('float32'), dtype('float32'), dtype('float32'), dtype('float32'), dtype('float32'), dtype('float32')], col_as_json_objects=None))) cognito_steps.append(('1', autoai_libs.cognito.transforms.transform_utils.FS1(cols_ids_must_keep=range(0, 6), additional_col_count_to_keep=8, ptype='regression'))) # assembling cognito_ Pipeline cognito_pipeline = sklearn.pipeline.Pipeline(steps=cognito_steps) steps.append(('cognito', cognito_pipeline)) steps.append(('estimator', sklearn.ensemble.forest.RandomForestRegressor(bootstrap=True, criterion='friedman_mse', max_depth=4, max_features=0.9832410473940374, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=3, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=29, n_jobs=4, oob_score=False, random_state=33, verbose=0, warm_start=False))) # assembling Pipeline pipeline = sklearn.pipeline.Pipeline(steps=steps) return pipeline pipeline = compose_pipeline() ``` ### 3. Extract needed parameter values from AutoAI run metadata ``` # Metadata used in retrieving data and computing metrics. Customize as necessary for your environment. #data_source='replace_with_path_and_csv_filename' target_label_name = _input_metadata['target_label_name'] learning_type = _input_metadata['learning_type'] optimization_metric = _input_metadata['optimization_metric'] random_state = _input_metadata['random_state'] cv_num_folds = _input_metadata['cv_num_folds'] holdout_fraction = _input_metadata['holdout_fraction'] if 'data_provenance' in _input_metadata: data_provenance = _input_metadata['data_provenance'] else: data_provenance = None if 'pos_label' in _input_metadata and learning_type == 'classification': pos_label = _input_metadata['pos_label'] else: pos_label = None ``` ### 4. 
Create dataframe from dataset in IBM Cloud Object Storage or IBM Cloud Pak For Data ``` # @hidden_cell # The following code contains the credentials for a file in your IBM Cloud Object Storage. # You might want to remove those credentials before you share your notebook. credentials_0 = { } # Read the data as a dataframe import pandas as pd csv_encodings=['UTF-8','Latin-1'] # supplement list of encodings as necessary for your data df = None readable = None # if automatic detection fails, you can supply a filename here # First, obtain a readable object # IBM Cloud Object Storage data access # Assumes COS credentials are in a dictionary named 'credentials_0' cos_credentials = df = globals().get('credentials_0') if readable is None and cos_credentials is not None: print('accessing data via IBM Cloud Object Storage') try: import types from botocore.client import Config import ibm_boto3 def __iter__(self): return 0 if 'SERVICE_NAME' not in cos_credentials: # in case of Studio-supplied credentials for a different dataset cos_credentials['SERVICE_NAME'] = 's3' client = ibm_boto3.client(service_name=cos_credentials['SERVICE_NAME'], ibm_api_key_id=cos_credentials['IBM_API_KEY_ID'], ibm_auth_endpoint=cos_credentials['IBM_AUTH_ENDPOINT'], config=Config(signature_version='oauth'), endpoint_url=cos_credentials['ENDPOINT']) try: readable = client.get_object(Bucket=cos_credentials['BUCKET'],Key=cos_credentials['FILE'])['Body'] # add missing __iter__ method, so pandas accepts readable as file-like object if not hasattr(readable, "__iter__"): readable.__iter__ = types.MethodType( __iter__, readable ) except Exception as cos_access_exception: print('unable to access data object in cloud object storage with credentials supplied') except Exception as cos_exception: print('unable to create client for cloud object storage') # IBM Cloud Pak for Data data access project_filename = globals().get('project_filename') if readable is None and 'credentials_0' in globals() and 'ASSET_ID' in 
credentials_0: project_filename = credentials_0['ASSET_ID'] if project_filename is not None: print('attempting project_lib access to ' + str(project_filename)) try: from project_lib import Project project = Project.access() storage_credentials = project.get_storage_metadata() readable = project.get_file(project_filename) except Exception as project_exception: print('unable to access data using the project_lib interface and filename supplied') # Use data_provenance as filename if other access mechanisms are unsuccessful if readable is None and type(data_provenance) is str: print('attempting to access local file using path and name ' + data_provenance) readable = data_provenance # Second, use pd.read_csv to read object, iterating over list of csv_encodings until successful if readable is not None: for encoding in csv_encodings: try: df = pd.read_csv(readable, encoding=encoding) print('successfully loaded dataframe using encoding = ' + str(encoding)) break except Exception as exception_csv: print('unable to read csv using encoding ' + str(encoding)) print('handled error was ' + str(exception_csv)) if df is None: print('unable to read file/object as a dataframe using supplied csv_encodings ' + str(csv_encodings)) print("Please use 'insert to code' on data panel to load dataframe.") raise(ValueError('unable to read file/object as a dataframe using supplied csv_encodings ' + str(csv_encodings))) if df is None: print('Unable to access bucket/file in IBM Cloud Object Storage or asset in IBM Cloud Pak for Data with the parameters supplied.') print('This is abnormal, but proceeding assuming the notebook user will supply a dataframe by other means.') print("Please use 'insert to code' on data panel to load dataframe.") ``` ### 5. 
Preprocess Data ``` # Drop rows whose target is not defined target = target_label_name # your target name here if learning_type == 'regression': df[target] = pd.to_numeric(df[target], errors='coerce') df.dropna(axis=0, how='any', subset=[target], inplace=True) # extract X and y df_X = df.drop(columns=[target]) df_y = df[target] # Detach preprocessing pipeline (which needs to see all training data) preprocessor_index = -1 preprocessing_steps = [] for i, step in enumerate(pipeline.steps): preprocessing_steps.append(step) if step[0]=='preprocessor': preprocessor_index = i break if len(pipeline.steps) > preprocessor_index+1 and pipeline.steps[preprocessor_index + 1][0] == 'cognito': preprocessor_index += 1 preprocessing_steps.append(pipeline.steps[preprocessor_index]) if preprocessor_index >= 0: preprocessing_pipeline = Pipeline(memory=pipeline.memory, steps=preprocessing_steps) pipeline = Pipeline(steps=pipeline.steps[preprocessor_index+1:]) # Preprocess X # preprocessor should see all data for cross_validate on the remaining steps to match autoai scores known_values_list.clear() # known_values_list is filled in by the preprocessing_pipeline if needed preprocessing_pipeline.fit(df_X.values, df_y.values) X_prep = preprocessing_pipeline.transform(df_X.values) ``` ### 6.
Split data into Training and Holdout sets ``` # determine learning_type and perform holdout split (stratify conditionally) if learning_type is None: # When the problem type is not available in the metadata, use the sklearn type_of_target to determine whether to stratify the holdout split # Caution: This can mis-classify regression targets that can be expressed as integers as multiclass, in which case manually override the learning_type from sklearn.utils.multiclass import type_of_target if type_of_target(df_y.values) in ['multiclass', 'binary']: learning_type = 'classification' else: learning_type = 'regression' print('learning_type determined by type_of_target as:',learning_type) else: print('learning_type specified as:',learning_type) from sklearn.model_selection import train_test_split if learning_type == 'classification': X, X_holdout, y, y_holdout = train_test_split(X_prep, df_y.values, test_size=holdout_fraction, random_state=random_state, stratify=df_y.values) else: X, X_holdout, y, y_holdout = train_test_split(X_prep, df_y.values, test_size=holdout_fraction, random_state=random_state) ``` ### 7. Additional setup: Define a function that returns a scorer for the target's positive label ``` # create a function to produce a scorer for a given positive label def make_pos_label_scorer(scorer, pos_label): kwargs = {'pos_label':pos_label} for prop in ['needs_proba', 'needs_threshold']: if prop+'=True' in scorer._factory_args(): kwargs[prop] = True if scorer._sign == -1: kwargs['greater_is_better'] = False from sklearn.metrics import make_scorer scorer=make_scorer(scorer._score_func, **kwargs) return scorer ``` ### 8. 
Fit pipeline, predict on Holdout set, calculate score, perform cross-validation ``` # fit the remainder of the pipeline on the training data pipeline.fit(X,y) # predict on the holdout data y_pred = pipeline.predict(X_holdout) # compute score for the optimization metric # scorer may need pos_label, but not all scorers take pos_label parameter from sklearn.metrics import get_scorer scorer = get_scorer(optimization_metric) score = None #score = scorer(pipeline, X_holdout, y_holdout) # this would suffice for simple cases pos_label = None # if you want to supply the pos_label, specify it here if pos_label is None and 'pos_label' in _input_metadata: pos_label=_input_metadata['pos_label'] try: score = scorer(pipeline, X_holdout, y_holdout) except Exception as e1: if pos_label is None or str(pos_label)=='': print('You may have to provide a value for pos_label in order for a score to be calculated.') raise(e1) else: exception_string=str(e1) if 'pos_label' in exception_string: try: scorer = make_pos_label_scorer(scorer, pos_label=pos_label) score = scorer(pipeline, X_holdout, y_holdout) print('Retry was successful with pos_label supplied to scorer') except Exception as e2: print('Initial attempt to use scorer failed. Exception was:') print(e1) print('') print('Retry with pos_label failed. Exception was:') print(e2) else: raise(e1) if score is not None: print(score) # cross_validate pipeline using training data from sklearn.model_selection import cross_validate from sklearn.model_selection import StratifiedKFold, KFold # shuffle=True is required when a random_state is supplied in recent scikit-learn versions if learning_type == 'classification': fold_generator = StratifiedKFold(n_splits=cv_num_folds, shuffle=True, random_state=random_state) else: fold_generator = KFold(n_splits=cv_num_folds, shuffle=True, random_state=random_state) cv_results = cross_validate(pipeline, X, y, cv=fold_generator, scoring={optimization_metric:scorer}, return_train_score=True) import numpy as np np.mean(cv_results['test_' + optimization_metric]) cv_results ```
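The `type_of_target` heuristic used for the holdout split earlier can mis-classify integer-valued regression targets, as the caution note says. A quick standalone sketch of that behaviour (not part of the generated pipeline; the example arrays are made up):

```python
import numpy as np
from sklearn.utils.multiclass import type_of_target

# Floating-point targets are detected as a regression ('continuous') target
float_kind = type_of_target(np.array([1.5, 2.3, 0.7]))

# Integer-valued targets are reported as 'multiclass', even when they are
# really regression targets (e.g. prices rounded to whole dollars)
int_kind = type_of_target(np.array([100, 250, 175, 300]))

print(float_kind, int_kind)
```

This is why the metadata-supplied `learning_type` should be preferred whenever it is available, with the heuristic only as a fallback.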
## Requirements Before using this tutorial, ensure that the following are on your system: - <b>SteganoGAN is installed</b>. Install via pip or source code. - <b>Training and Validation Dataset are available </b>. Download via data/download.sh or retrieve your own. It is also suggested that you have the following: - <b>CUDA-enabled machine</b>. SteganoGAN takes very long to train without a GPU. Use AWS to have access to CUDA machines. Now, we retrieve each of the imports required by steganoGAN ## Imports ``` import numpy as np #numpy is used for a parameter input ``` This imports the SteganoGAN class which has the wrapper functions for SteganoGAN usage: - <b>Create a SteganoGAN architecture</b> - <b>Train a SteganoGAN architecture</b> - <b>Load a SteganoGAN model</b> - <b>Encode and decode operations for SteganoGAN models</b> We retrieve each of these functions later in the tutorial. ``` from steganogan import SteganoGAN ``` The DataLoader is used to do the following: - <b>Load images</b> from a selected database - <b>Specify hyperparameters</b> for database loading ``` from steganogan.loader import DataLoader ``` The encoders are the architectural models that are used to encode the messages inside the image. There are two types of encoders that can be imported: - <b>Basic Encoder</b>: This is memory-efficient but not as robust as the other model - <b>Dense Encoder</b>: This is a more robust model with a denser architecture Please review the SteganoGAN paper for images of the two architectures. A steganoGAN model can only use one of these encoders. You may select which one to use in your model. ``` from steganogan.encoders import BasicEncoder, DenseEncoder ``` The decoders are the architectural models that are used to decode the messages inside the image. 
There are two types of decoders that can be imported: - <b>Basic Decoder</b>: This is memory-efficient but not as robust as the other model - <b>Dense Decoder</b>: This is a more robust model with a denser architecture Please review the SteganoGAN paper for images of the two architectures. A SteganoGAN model can only use one of these decoders. You may select which one to use in your model. ``` from steganogan.decoders import BasicDecoder, DenseDecoder ``` The Critic checks if an image is steganographic or not. At the current moment, we have the following Critic: - <b>Basic Critic</b>: This is a GAN discriminator that ensures images are well hidden. SteganoGAN currently only uses a BasicCritic, so this parameter never changes. ``` from steganogan.critics import BasicCritic ``` ## Loading Data In the next cell, we load in the data for our training and validation dataset. The training dataset is used to train the model while the validation dataset is used to check that the model is working correctly. There are several parameters that you can choose to tune. - <b>path:str</b> - This can be a relative path or an absolute path from the notebook file. - <b>limit:int</b> - The number of images you wish to use. If limit is set as np.inf, all the images in the directory will be used. - <b>shuffle:bool</b> - If true, your images will be randomly shuffled before being used for training. - <b>batch_size:int</b> - The number of images to use in a batch. A batch represents the number of images that are trained in a single training cycle (i.e. batch_size=10 means 10 images are sent through the network at once during training) ``` # Load the data train = DataLoader('D:/dataset/train/', limit=np.inf, shuffle=True, batch_size=4) validation = DataLoader('D:/dataset/val/', limit=np.inf, shuffle=True, batch_size=4) ``` ## Selecting an Architecture Below we are deciding on the architecture that we want to use for SteganoGAN.
There are several parameters that you can tune here to decide on the architecture. Let us go over them: - <b>data_depth:int</b> - Represents how many layers we want to represent the data with. Currently, data is represented as an N x data_depth x H x W tensor. Usually, we set this to 1 since that suffices for our needs. For more robustness, set the data depth to a higher number. - <b>encoder:EncoderInstance</b> - You can choose either a BasicEncoder or DenseEncoder. - <b>decoder:DecoderInstance</b> - You can choose either a BasicDecoder or DenseDecoder. - <b>critic:CriticInstance</b> - The only option is the BasicCritic. - <b>hidden_size:int</b> - The number of channels we wish to use in the hidden layers of our architecture. You can tune this parameter. We chose 32 as we find relatively good models with this number of channels. - <b>cuda:bool</b> - If true and the machine is CUDA-enabled, CUDA will be used for training/execution - <b>verbose:bool</b> - If true, the system will print more output to console ``` # Create the SteganoGAN instance steganogan = SteganoGAN(1, BasicEncoder, BasicDecoder, BasicCritic, hidden_size=32, cuda=True, verbose=True) ``` ## Training and Saving the Model Once the architecture has been decided and the training and validation data are loaded, we can begin training. To train, call the fit function with the following parameter options: - <b>train:DataLoaderInstance</b> - This is the training set that you loaded earlier. - <b>validation:DataLoaderInstance</b> - This is the validation set that you loaded earlier. - <b>epochs:int</b> - This is the number of epochs you wish to train for. A larger number of epochs generally improves the fit, although training for too long can overfit. ``` # Fit on the given data steganogan.fit(train, validation, epochs=5) ``` Once the model is trained, we save the model to a .steg file. In this file, we save all the model weights and the architectures that these weights compose. Both the encoder and decoder are saved in the same file.
The arguments taken are: - <b>path:str</b> - This is the path to save the model. Make sure that the directory exists. ``` # Save the fitted model steganogan.save('demo.steg') ``` ## Loading and Executing a Model The next command loads a previously generated model. It takes a couple of different parameters. - <b>architecture:str</b> - You can select either 'basic' or 'dense' architectures. - <b>path:str</b> - The path to a model that you have previously generated. - <b>cuda:bool</b> - If true and the machine is CUDA-enabled, CUDA will be used for training/execution - <b>verbose:bool</b> - If true, the system will print more output to console Note: <b>supply either architecture or path, but not both; the other must be None</b> ``` # Load the model we saved above steganogan = SteganoGAN.load(architecture=None, path='demo.steg', cuda=True, verbose=True) ``` This function encodes an input image with a message and outputs a steganographic image. Note that since SteganoGAN only works in the spatial domain, the output image must be a PNG image. The function takes the following arguments: - <b>input_image:str</b>: The path to the input image - <b>output_image:str</b>: The path to the output image - <b>secret_message:str</b>: The secret message you wish to embed. ``` # Encode a message in input.png steganogan.encode('input.png', 'output.png', 'This is a super secret message!') ``` This function decodes a steganographic image and outputs the hidden message. If no message is found, an error will be thrown by the function. Since SteganoGAN encoders and decoders come in pairs, you <b>must</b> use the decoder that was trained with its corresponding encoder. The function takes the following arguments: - <b>stego_image:str</b>: The path to the steganographic image ``` # Decode the message from output.png steganogan.decode('output.png') ```
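As a rough sanity check on what `data_depth` buys: the data tensor is N x data_depth x H x W, so a cover image carries on the order of `data_depth` bits per pixel. A back-of-the-envelope sketch (the 360x360 cover size and 8 bits per character are assumptions for illustration, and any error correction the library applies reduces the usable payload further):

```python
# Hypothetical payload estimate for one cover image; real capacity is
# lower once error-correction overhead is accounted for.
width, height, data_depth = 360, 360, 1   # assumed cover-image size
capacity_bits = width * height * data_depth
capacity_chars = capacity_bits // 8       # assuming 8 bits per character
print(capacity_bits, capacity_chars)
```

Raising `data_depth` scales this estimate linearly, which is the robustness/capacity trade-off mentioned in the architecture section.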
``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline df = pd.read_csv(r'C:\Users\prasa\Desktop\advertising.csv') df.head() df.columns df.info() df.describe() sns.set_style('darkgrid') df['Age'].hist(bins=35) plt.xlabel('Age') pd.crosstab(df['Country'], df['Clicked on Ad']).sort_values( 1,ascending = False).tail(15) df[df['Clicked on Ad']==1]['Country'].value_counts().head(15) df['Country'].value_counts().head(15) pd.crosstab(index=df['Country'],columns='count').sort_values(['count'], ascending=False).head(15) df.isnull().sum() type(df['Timestamp'][1]) df['Timestamp'] = pd.to_datetime(df['Timestamp']) df['Month'] = df['Timestamp'].dt.month df['Day'] = df['Timestamp'].dt.day df['Hour'] = df['Timestamp'].dt.hour df["Weekday"] = df['Timestamp'].dt.dayofweek df = df.drop(['Timestamp'], axis=1) df.head() sns.countplot(x = 'Clicked on Ad', data = df) sns.set_style('darkgrid') sns.jointplot(x = "Age", y= "Daily Time Spent on Site", data = df) sns.scatterplot(x = "Age", y= "Daily Time Spent on Site",hue='Clicked on Ad', data = df) sns.lmplot(x = "Age", y= "Daily Time Spent on Site",hue='Clicked on Ad', data = df) sns.pairplot(df, hue = 'Clicked on Ad', vars = ['Daily Time Spent on Site', 'Age', 'Area Income', 'Daily Internet Usage'],palette = 'rocket') plots = ['Daily Time Spent on Site', 'Age', 'Area Income','Daily Internet Usage'] for i in plots: plt.figure(figsize = (12, 6)) plt.subplot(2,3,1) sns.boxplot(data= df, y=df[i],x='Clicked on Ad') plt.subplot(2,3,2) sns.boxplot(data= df, y=df[i]) plt.subplot(2,3,3) sns.distplot(df[i],bins= 20,) plt.tight_layout() plt.title(i) plt.show() print('Oldest person in the dataset was:', df['Age'].max(), 'Years') print('Oldest person who clicked on the ad was:', df[df['Clicked on Ad']==1]['Age'].max(), 'Years') print('Youngest person in the dataset was:', df['Age'].min(), 'Years') print('Youngest person who clicked on the ad was:', df[df['Clicked on Ad']==1]['Age'].min(), 'Years') print('Average age was:', df['Age'].mean(), 'Years') fig = plt.figure(figsize = (12,10)) sns.heatmap(df.corr(), cmap='viridis', annot = True) f,ax=plt.subplots(1,2,figsize=(14,5)) df['Month'][df['Clicked on Ad']==1].value_counts().sort_index().plot(ax=ax[0]) ax[0].set_ylabel('Count of Clicks') pd.crosstab(df["Clicked on Ad"], df["Month"]).T.plot(kind = 'bar', ax=ax[1]) plt.show() from sklearn.model_selection import train_test_split X = df[['Daily Time Spent on Site', 'Age', 'Area Income','Daily Internet Usage', 'Male']] y = df['Clicked on Ad'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=101) print(X_train.shape, y_train.shape) print(X_test.shape, y_test.shape) from sklearn.linear_model import LogisticRegression logmodel = LogisticRegression(solver='lbfgs') logmodel.fit(X_train,y_train) predictions = logmodel.predict(X_test) from sklearn.metrics import classification_report print(classification_report(y_test,predictions)) # Importing the confusion matrix from the sklearn.metrics family from sklearn.metrics import confusion_matrix # Printing the confusion_matrix print(confusion_matrix(y_test, predictions)) logmodel.coef_ ```
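The precision and recall printed by `classification_report` can be recomputed by hand from the confusion matrix. A sketch with a hypothetical matrix (the counts below are made up for illustration, not this model's actual output):

```python
import numpy as np

# Hypothetical confusion matrix in sklearn's binary layout: [[TN, FP], [FN, TP]]
cm = np.array([[150, 12],
               [9, 159]])
tn, fp, fn, tp = cm.ravel()

precision = tp / (tp + fp)      # of predicted clicks, how many were real
recall = tp / (tp + fn)         # of real clicks, how many were caught
accuracy = (tp + tn) / cm.sum()
print(round(precision, 3), round(recall, 3), round(accuracy, 3))
```

Checking the off-diagonal cells this way makes it easy to see whether the model's errors are mostly false positives or false negatives.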
# Self-Driving Car Engineer Nanodegree ## Project: **Finding Lane Lines on the Road** *** In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below. Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right. In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [writeup template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the IPython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project. --- Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image. **Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".** --- **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection.
You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.** --- <figure> <img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" /> <figcaption> <p></p> <p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p> </figcaption> </figure> <p></p> <figure> <img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" /> <figcaption> <p></p> <p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p> </figcaption> </figure> ## Import Packages **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt.
Also, consult the forums for more troubleshooting tips.** ``` #importing some useful packages import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import cv2 %matplotlib inline ``` # Read in an Image ``` #reading in an image image = mpimg.imread('test_images/solidWhiteRight.jpg') #printing out some stats and plotting print('This image is:', type(image), 'with dimensions:', image.shape) plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray') ``` ## Ideas for Lane Detection Pipeline **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:** `cv2.inRange()` for color selection `cv2.fillPoly()` for regions selection `cv2.line()` to draw lines on an image given endpoints `cv2.addWeighted()` to coadd / overlay two images `cv2.cvtColor()` to grayscale or change color `cv2.imwrite()` to output images to file `cv2.bitwise_and()` to apply a mask to an image **Check out the OpenCV documentation to learn about these and discover even more awesome functionality!** ## Helper Functions Below are some helper functions to help get you started. They should look familiar from the lesson! 
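`cv2.addWeighted()` computes `initial_img*α + img*β + γ` and then saturates the result to the `uint8` range, which is why an opaque line overlay (β = 1) dominates the blend. The underlying arithmetic can be checked with plain NumPy (the 2x2 single-channel "images" below are toy values, not real frames):

```python
import numpy as np

# Toy single-channel "images"; real frames are H x W x 3 uint8 arrays
base = np.full((2, 2), 100.0)     # stand-in for the road image
overlay = np.full((2, 2), 200.0)  # stand-in for the drawn lane lines
alpha, beta, gamma = 0.8, 1.0, 0.0

blended = base * alpha + overlay * beta + gamma
print(blended[0, 0])  # cv2.addWeighted would clip values above 255
```

With these weights the bright overlay pixels push past 255 and get clipped, so the drawn lines stay fully visible on top of the dimmed original frame.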
## Test Images Build your pipeline to work on the images in the directory "test_images" **You should make sure your pipeline works well on these images before you try the videos.** ``` import math def grayscale(img): """Applies the Grayscale transform This will return an image with only one color channel but NOTE: to see the returned image as grayscale (assuming your grayscaled image is called 'gray') you should call plt.imshow(gray, cmap='gray')""" return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Or use BGR2GRAY if you read an image with cv2.imread() # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) def canny(img, low_threshold, high_threshold): """Applies the Canny transform""" return cv2.Canny(img, low_threshold, high_threshold) def gaussian_blur(img, kernel_size): """Applies a Gaussian Noise kernel""" return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0) def region_of_interest(img, vertices): """ Applies an image mask. Only keeps the region of the image defined by the polygon formed from `vertices`. The rest of the image is set to black. `vertices` should be a numpy array of integer points. """ #defining a blank mask to start with mask = np.zeros_like(img) #defining a 3 channel or 1 channel color to fill the mask with depending on the input image if len(img.shape) > 2: channel_count = img.shape[2] # i.e. 
3 or 4 depending on your image ignore_mask_color = (255,) * channel_count else: ignore_mask_color = 255 #filling pixels inside the polygon defined by "vertices" with the fill color cv2.fillPoly(mask, vertices, ignore_mask_color) #returning the image only where mask pixels are nonzero masked_image = cv2.bitwise_and(img, mask) return masked_image def draw_lines(img, lines, color=[255, 0, 0], thickness=2): """ NOTE: this is the function you might want to use as a starting point once you want to average/extrapolate the line segments you detect to map out the full extent of the lane (going from the result shown in raw-lines-example.mp4 to that shown in P1_example.mp4). Think about things like separating line segments by their slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left line vs. the right line. Then, you can average the position of each of the lines and extrapolate to the top and bottom of the lane. This function draws `lines` with `color` and `thickness`. Lines are drawn on the image inplace (mutates the image). If you want to make the lines semi-transparent, think about combining this function with the weighted_img() function below """ #for line in lines: # for x1,y1,x2,y2 in line: # cv2.line(img, (x1, y1), (x2, y2), color, thickness) def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap): """ `img` should be the output of a Canny transform. Returns an image with hough lines drawn. """ lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap) line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8) draw_lines(line_img, lines) return line_img # Python 3 has support for cool math symbols. def weighted_img(img, initial_img, α=0.8, β=1., γ=0.): """ `img` is the output of the hough_lines(), An image with lines drawn on it. Should be a blank image (all black) with lines drawn on it. `initial_img` should be the image before any processing. 
The result image is computed as follows: initial_img * α + img * β + γ NOTE: initial_img and img must be the same shape! """ return cv2.addWeighted(initial_img, α, img, β, γ) import os os.listdir("test_images/") ``` ## Build a Lane Finding Pipeline Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report. Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters. SOLUTION FOR IMAGES ``` ######## SOLUTION FOR IMAGES ######## import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import cv2 %matplotlib inline # Read in the image choose_picture_name = "whiteCarLaneSwitch" image_name = 'test_images/'+choose_picture_name+'.jpg' image = mpimg.imread(image_name) original = image # Grab the x and y size and make a copy of the image ysize = image.shape[0] xsize = image.shape[1] color_select = np.copy(image) line_image = np.copy(color_select) ######## COLOR SELECTION ######## red_threshold = 200 green_threshold = 100 blue_threshold = 100 rgb_threshold = [red_threshold, green_threshold, blue_threshold] color_thresholds = (image[:,:,0] < rgb_threshold[0]) \ | (image[:,:,1] < rgb_threshold[1]) \ | (image[:,:,2] < rgb_threshold[2]) ######## MASKING ############# left_bottom = [50, 539] right_bottom = [900, 539] apex = [475, 280] # Perform a linear fit (y=Ax+B) to each of the three sides of the triangle # np.polyfit returns the coefficients [A, B] of the fit fit_left = np.polyfit((left_bottom[0], apex[0]), (left_bottom[1], apex[1]), 1) fit_right = np.polyfit((right_bottom[0], apex[0]), (right_bottom[1], apex[1]), 1) fit_bottom = np.polyfit((left_bottom[0], right_bottom[0]), (left_bottom[1], right_bottom[1]), 1) # Find the region inside the lines XX, YY = np.meshgrid(np.arange(0, xsize), np.arange(0, ysize)) region_thresholds = (YY > (XX*fit_left[0] + fit_left[1])) & \ (YY > 
(XX*fit_right[0] + fit_right[1])) & \ (YY < (XX*fit_bottom[0] + fit_bottom[1])) # Mask color and region selection color_select[color_thresholds | ~region_thresholds] = [0, 0, 0] # Color pixels red where both color and region selections met line_image[~color_thresholds & region_thresholds] = [255, 0, 0] ######## CANNY EDGES ######## gray = cv2.cvtColor(color_select,cv2.COLOR_RGB2GRAY) # Define a kernel size for Gaussian smoothing / blurring kernel_size = 5 # Must be an odd number (3, 5, 7...) blur_gray = cv2.GaussianBlur(gray,(kernel_size, kernel_size),0) # Define our parameters for Canny and run it low_threshold = 50 high_threshold = 150 edges = cv2.Canny(blur_gray, low_threshold, high_threshold) ######## REGION OF INTEREST ######## # Next we'll create a masked edges image using cv2.fillPoly() mask = np.zeros_like(edges) ignore_mask_color = 255 imshape = image.shape vertices = np.array([[(0,imshape[0]),(0, 0), (imshape[1], 0), (imshape[1],imshape[0])]], dtype=np.int32) cv2.fillPoly(mask, vertices, ignore_mask_color) masked_edges = cv2.bitwise_and(edges, mask) ######## HOUGH TRANSFORM ######## # Hough transform parameters rho = 1 # distance resolution in pixels of the Hough grid theta = np.pi/180 # angular resolution in radians of the Hough grid threshold = 35 # minimum number of votes (intersections in Hough grid cell) min_line_length = 5 #minimum number of pixels making up a line max_line_gap = 2 # maximum gap in pixels between connectable line segments line_image = np.copy(image)*0 # creating a blank to draw lines on # Hough on edge detected image lines = cv2.HoughLinesP(masked_edges, rho, theta, threshold, np.array([]), min_line_length, max_line_gap) ######## IMPROVED DRAWN LINES #### # IMPROVED DRAWN LINES A) # In case of error, don't draw the line draw_right = True draw_left = True # IMPROVED DRAWN LINES B) # Find slopes of all lines # But only care about lines where abs(slope) > slope_threshold slope_threshold = 0.5 considered_lane_height = 0.4 slopes = [] 
new_lines = [] for line in lines: x1, y1, x2, y2 = line[0] # line = [[x1, y1, x2, y2]] # Calculate slope if x2 - x1 == 0.: # corner case, avoiding division by 0 slope = 999. # practically infinite slope else: slope = (y2 - y1) / (x2 - x1) # Filter lines based on slope if abs(slope) > slope_threshold: slopes.append(slope) new_lines.append(line) lines = new_lines # IMPROVED DRAWN LINES C) # Split lines into right_lines and left_lines, representing the right and left lane lines # Right/left lane lines must have positive/negative slope, and be on the right/left half of the image right_lines = [] left_lines = [] for i, line in enumerate(lines): x1, y1, x2, y2 = line[0] img_x_center = image.shape[1] / 2 # x coordinate of center of image if slopes[i] > 0 and x1 > img_x_center and x2 > img_x_center: right_lines.append(line) elif slopes[i] < 0 and x1 < img_x_center and x2 < img_x_center: left_lines.append(line) # IMPROVED DRAWN LINES D) # Run linear regression to find best fit line for right and left lane lines # Right lane lines right_lines_x = [] right_lines_y = [] for line in right_lines: x1, y1, x2, y2 = line[0] right_lines_x.append(x1) right_lines_x.append(x2) right_lines_y.append(y1) right_lines_y.append(y2) if len(right_lines_x) > 0: right_m, right_b = np.polyfit(right_lines_x, right_lines_y, 1) # y = m*x + b else: right_m, right_b = 1, 1 draw_right = False # Left lane lines left_lines_x = [] left_lines_y = [] for line in left_lines: x1, y1, x2, y2 = line[0] left_lines_x.append(x1) left_lines_x.append(x2) left_lines_y.append(y1) left_lines_y.append(y2) if len(left_lines_x) > 0: left_m, left_b = np.polyfit(left_lines_x, left_lines_y, 1) # y = m*x + b else: left_m, left_b = 1, 1 draw_left = False # IMPROVED DRAWN LINES E) # Find 2 end points for right and left lines, used for drawing the line # y = m*x + b --> x = (y - b)/m y1 = image.shape[0] y2 = image.shape[0] * (1 - considered_lane_height) right_x1 = (y1 - right_b) / right_m right_x2 = (y2 - right_b) / right_m 
left_x1 = (y1 - left_b) / left_m left_x2 = (y2 - left_b) / left_m # IMPROVED DRAWN LINES F) # Convert calculated end points from float to int y1 = int(y1) y2 = int(y2) right_x1 = int(right_x1) right_x2 = int(right_x2) left_x1 = int(left_x1) left_x2 = int(left_x2) # IMPROVED DRAWN LINES G) # Draw the right and left lines on image if draw_right: cv2.line(line_image, (right_x1, y1), (right_x2, y2), (255,0,0), 10) if draw_left: cv2.line(line_image, (left_x1, y1), (left_x2, y2), (255,0,0), 10) # Combine the lines with the original image result = cv2.addWeighted(image, 0.8, line_image, 1, 0) plt.imshow(image) # Save result image with lines mpimg.imsave("test_images_output/"+ choose_picture_name +"_original.png", original) mpimg.imsave("test_images_output/"+ choose_picture_name +"_result.png", result) mpimg.imsave("test_images_output/"+ choose_picture_name +"_color_select.png", color_select) mpimg.imsave("test_images_output/"+ choose_picture_name +"_edges.png", edges) mpimg.imsave("test_images_output/"+ choose_picture_name +"_line_image.png", line_image) ``` ## Test on Videos You know what's cooler than drawing lanes over images? Drawing lanes over video! We can test our solution on two provided videos: `solidWhiteRight.mp4` `solidYellowLeft.mp4` **Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.** **If you get an error that looks like this:** ``` NeedDownloadError: Need ffmpeg exe. 
You can download it by calling: imageio.plugins.ffmpeg.download() ``` **Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.** SOLUTION FOR VIDEO ``` # Import everything needed to edit/save/watch video clips from moviepy.editor import VideoFileClip from IPython.display import HTML def process_image(image): # Grab the x and y size and make a copy of the image ysize = image.shape[0] xsize = image.shape[1] color_select = np.copy(image) line_image = np.copy(color_select) ######## COLOR SELECTION ######## red_threshold = 200 green_threshold = 100 blue_threshold = 100 rgb_threshold = [red_threshold, green_threshold, blue_threshold] color_thresholds = (image[:,:,0] < rgb_threshold[0]) \ | (image[:,:,1] < rgb_threshold[1]) \ | (image[:,:,2] < rgb_threshold[2]) ######## MASKING ############# left_bottom = [50, 539] right_bottom = [900, 539] apex = [475, 280] # Perform a linear fit (y=Ax+B) to each of the three sides of the triangle # np.polyfit returns the coefficients [A, B] of the fit fit_left = np.polyfit((left_bottom[0], apex[0]), (left_bottom[1], apex[1]), 1) fit_right = np.polyfit((right_bottom[0], apex[0]), (right_bottom[1], apex[1]), 1) fit_bottom = np.polyfit((left_bottom[0], right_bottom[0]), (left_bottom[1], right_bottom[1]), 1) # Find the region inside the lines XX, YY = np.meshgrid(np.arange(0, xsize), np.arange(0, ysize)) region_thresholds = (YY > (XX*fit_left[0] + fit_left[1])) & \ (YY > (XX*fit_right[0] + fit_right[1])) & \ (YY < (XX*fit_bottom[0] + fit_bottom[1])) # Mask color and region selection color_select[color_thresholds | ~region_thresholds] = [0, 0, 0] # Color pixels red where both color and region selections met line_image[~color_thresholds & region_thresholds] = [255, 0, 0] ######## CANNY EDGES ######## gray = cv2.cvtColor(color_select,cv2.COLOR_RGB2GRAY) # Define a kernel size for Gaussian 
smoothing / blurring kernel_size = 5 # Must be an odd number (3, 5, 7...) blur_gray = cv2.GaussianBlur(gray,(kernel_size, kernel_size),0) # Define our parameters for Canny and run it low_threshold = 50 high_threshold = 150 edges = cv2.Canny(blur_gray, low_threshold, high_threshold) ######## REGION OF INTEREST ######## # Next we'll create a masked edges image using cv2.fillPoly() mask = np.zeros_like(edges) ignore_mask_color = 255 imshape = image.shape vertices = np.array([[(0,imshape[0]),(0, 0), (imshape[1], 0), (imshape[1],imshape[0])]], dtype=np.int32) cv2.fillPoly(mask, vertices, ignore_mask_color) masked_edges = cv2.bitwise_and(edges, mask) ######## HOUGH TRANSFORM ######## # Hough transform parameters rho = 1 # distance resolution in pixels of the Hough grid theta = np.pi/180 # angular resolution in radians of the Hough grid threshold = 35 # minimum number of votes (intersections in Hough grid cell) min_line_length = 5 #minimum number of pixels making up a line max_line_gap = 2 # maximum gap in pixels between connectable line segments line_image = np.copy(image)*0 # creating a blank to draw lines on # Hough on edge detected image lines = cv2.HoughLinesP(masked_edges, rho, theta, threshold, np.array([]), min_line_length, max_line_gap) ######## IMPROVED DRAWN LINES #### # IMPROVED DRAWN LINES A) # In case of error, don't draw the line draw_right = True draw_left = True # IMPROVED DRAWN LINES B) # Find slopes of all lines # But only care about lines where abs(slope) > slope_threshold slope_threshold = 0.5 considered_lane_height = 0.4 slopes = [] new_lines = [] for line in lines: x1, y1, x2, y2 = line[0] # line = [[x1, y1, x2, y2]] # Calculate slope if x2 - x1 == 0.: # corner case, avoiding division by 0 slope = 999. 
# practically infinite slope else: slope = (y2 - y1) / (x2 - x1) # Filter lines based on slope if abs(slope) > slope_threshold: slopes.append(slope) new_lines.append(line) lines = new_lines # IMPROVED DRAWN LINES C) # Split lines into right_lines and left_lines, representing the right and left lane lines # Right/left lane lines must have positive/negative slope, and be on the right/left half of the image right_lines = [] left_lines = [] for i, line in enumerate(lines): x1, y1, x2, y2 = line[0] img_x_center = image.shape[1] / 2 # x coordinate of center of image if slopes[i] > 0 and x1 > img_x_center and x2 > img_x_center: right_lines.append(line) elif slopes[i] < 0 and x1 < img_x_center and x2 < img_x_center: left_lines.append(line) # IMPROVED DRAWN LINES D) # Run linear regression to find best fit line for right and left lane lines # Right lane lines right_lines_x = [] right_lines_y = [] for line in right_lines: x1, y1, x2, y2 = line[0] right_lines_x.append(x1) right_lines_x.append(x2) right_lines_y.append(y1) right_lines_y.append(y2) if len(right_lines_x) > 0: right_m, right_b = np.polyfit(right_lines_x, right_lines_y, 1) # y = m*x + b else: right_m, right_b = 1, 1 draw_right = False # Left lane lines left_lines_x = [] left_lines_y = [] for line in left_lines: x1, y1, x2, y2 = line[0] left_lines_x.append(x1) left_lines_x.append(x2) left_lines_y.append(y1) left_lines_y.append(y2) if len(left_lines_x) > 0: left_m, left_b = np.polyfit(left_lines_x, left_lines_y, 1) # y = m*x + b else: left_m, left_b = 1, 1 draw_left = False # IMPROVED DRAWN LINES E) # Find 2 end points for right and left lines, used for drawing the line # y = m*x + b --> x = (y - b)/m y1 = image.shape[0] y2 = image.shape[0] * (1 - considered_lane_height) right_x1 = (y1 - right_b) / right_m right_x2 = (y2 - right_b) / right_m left_x1 = (y1 - left_b) / left_m left_x2 = (y2 - left_b) / left_m # IMPROVED DRAWN LINES F) # Convert calculated end points from float to int y1 = int(y1) y2 = int(y2) right_x1 = 
int(right_x1) right_x2 = int(right_x2) left_x1 = int(left_x1) left_x2 = int(left_x2) # IMPROVED DRAWN LINES G) # Draw the right and left lines on image if draw_right: cv2.line(line_image, (right_x1, y1), (right_x2, y2), (255,0,0), 10) if draw_left: cv2.line(line_image, (left_x1, y1), (left_x2, y2), (255,0,0), 10) result = cv2.addWeighted(image, 0.8, line_image, 1, 0) return result ``` Let's try the one with the solid white lane on the right first ... ``` white_output = 'test_videos_output/solidWhiteRight.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds #clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,3) clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4") white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!! %time white_clip.write_videofile(white_output, audio=False) ``` Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice. ``` HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(white_output)) ``` ## Improve the draw_lines() function **At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. 
As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".** **Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.** Now for the one with the solid yellow lane on the left. This one's more tricky! ``` yellow_output = 'test_videos_output/solidYellowLeft.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5) clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4') yellow_clip = clip2.fl_image(process_image) %time yellow_clip.write_videofile(yellow_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(yellow_output)) ``` ## Writeup and Submission If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file. ## Optional Challenge Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? 
If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project! ``` challenge_output = 'test_videos_output/challenge.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5) #clip3 = VideoFileClip('test_videos/challenge.mp4') challenge_clip = clip3.fl_image(process_image) %time challenge_clip.write_videofile(challenge_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(challenge_output)) ```
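One idea for making the pipeline more robust on the challenge clip — a sketch of an approach, not part of the project solution above — is to smooth the fitted lane parameters over recent frames, so a single noisy frame can't make the drawn line jump. `LaneSmoother` is a name introduced here for illustration:

```python
from collections import deque

class LaneSmoother:
    """Running average of (slope, intercept) lane fits over the last few frames."""

    def __init__(self, window=5):
        self.history = deque(maxlen=window)  # keeps only the most recent `window` fits

    def update(self, m, b):
        """Record the latest fit and return the averaged (slope, intercept)."""
        self.history.append((m, b))
        n = len(self.history)
        avg_m = sum(f[0] for f in self.history) / n
        avg_b = sum(f[1] for f in self.history) / n
        return avg_m, avg_b

# One smoother per lane line; feed it each frame's fit before drawing
right_smoother = LaneSmoother(window=5)
m, b = right_smoother.update(0.6, -40.0)  # first frame: average is just this fit
```

Inside `process_image` this would replace the raw `right_m, right_b` (and likewise for the left line) with the smoothed values before computing the end points.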
<a href="https://colab.research.google.com/github/LorenzoTinfena/BestSpiderWeb/blob/master/BestSpiderWeb.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# BestSpiderWeb

# Problem

A city without roads has a wheat producer, an egg producer and a hotel. The mayor also wants to build a pasta producer and a restaurant in the future. He also wants to build roads like in the picture, so that the pasta producer can easily collect the wheat and eggs to make pasta, and the restaurant can easily buy pasta, welcome the hotel's guests, and buy eggs for other preparations.

<img src="https://github.com/LorenzoTinfena/BestSpiderWeb/blob/master/assets/city0.png?raw=1" width="300"/>

**Goal:** building roads costs money, so they should be as short as possible.

<img src="https://github.com/LorenzoTinfena/BestSpiderWeb/blob/master/assets/city1.png?raw=1" width="300"/>

---

**In other words:** in a Euclidean space there is a graph with fixed edges and two types of nodes: one type with constant coordinates, the other with variable coordinates.

**Goal:** find the positions of the variable nodes that minimize the total length of the edges.

# Solution

$$ N_0[c] = \sum_{i \in N}\sum_{v \in P_{N_0 \longleftrightarrow i}}\frac{\sum O_i[c]}{v} $$

where

* $N_0$ is the variable node whose coordinates are being computed
* $c$ is a coordinate index
* $N$ is the set of variable-coordinate nodes reachable from $N_0$ passing only through nodes belonging to $N$
* $O$ is the set of nodes with constant coordinates
* $O_i$ is the set of nodes belonging to $O$ adjacent to $i$
* $P_{N_0 \rightarrow i}$ is the set of all possible paths (infinite when the size of $N$ is greater than 1) between node $N_0$ and node $i$, passing only through nodes belonging to $N$
* $v$, the weight of a path, is the product of the number of adjacent edges of every node the path crosses, $N_0$ and $i$ included (e.g.
if it starts from a node that has 7 adjacent edges, then goes through one that has 2, and ends with one that has 3, the path weight is 7 * 2 * 3 = 42).

# Implementation

```
import numpy as np


class Node:
    NoCoordinates = None

    def __init__(self, coordinates: np.ndarray = None):
        self.AdjacentNodes = []
        if coordinates is None:
            self.Constant = False
        else:
            if len(coordinates) != Node.NoCoordinates:
                raise Exception('wrong number of coordinates')
            self.Coordinates = coordinates
            self.Constant = True

    def AddAdjacentNode(self, item: 'Node'):
        self.AdjacentNodes.append(item)

    class _VirtualNode:
        def __init__(self, nodeBase: 'Node' = None):
            if nodeBase is not None:
                self.ActualNode = nodeBase
                self.SumConstantNodes = np.zeros(Node.NoCoordinates)
                for item in nodeBase.AdjacentNodes:
                    if item.Constant:
                        self.SumConstantNodes += item.Coordinates
                self.NumTmpPath = len(nodeBase.AdjacentNodes)

        def Copy(self, actualNode: 'Node') -> '_VirtualNode':
            item = Node._VirtualNode()
            item.ActualNode = actualNode
            item.SumConstantNodes = self.SumConstantNodes
            item.NumTmpPath = self.NumTmpPath * len(actualNode.AdjacentNodes)
            return item


def ComputeBestSpiderWeb(variablesNodes: list):
    # initialize coordinates of variable nodes
    for item in variablesNodes:
        item.Coordinates = np.zeros(Node.NoCoordinates)
    # initialize virtual nodes
    _VirtualNodes = []
    for item in variablesNodes:
        _VirtualNodes.append(Node._VirtualNode(item))
    # ALGORITHM
    # more iterations means more accuracy (exponential)
    for i in range(40):
        next_VirtualNodes = []
        # iterate through all variable virtual nodes
        for item in _VirtualNodes:
            # update the coordinates of the actual node
            item.ActualNode.Coordinates += item.SumConstantNodes / item.NumTmpPath
            # iterate through adjacent nodes of the actual node
            for AdjacentItem in item.ActualNode.AdjacentNodes:
                # if the adjacent node is variable, add it as a new virtual node (like a tree)
                if not AdjacentItem.Constant:
                    next_VirtualNodes.append(item.Copy(AdjacentItem))
        _VirtualNodes = next_VirtualNodes


def main():
    Node.NoCoordinates = 2
    # constant nodes
    Wheat = Node(np.array([0, 0]))
    eggs = Node(np.array([5, 40]))
    hotel = Node(np.array([50, 10]))
    # variable nodes
    pastaProducer = Node()
    restaurant = Node()
    # define edges
    pastaProducer.AddAdjacentNode(Wheat)
    pastaProducer.AddAdjacentNode(eggs)
    pastaProducer.AddAdjacentNode(restaurant)
    restaurant.AddAdjacentNode(pastaProducer)
    restaurant.AddAdjacentNode(eggs)
    restaurant.AddAdjacentNode(hotel)
    ComputeBestSpiderWeb([pastaProducer, restaurant])
    print('pastaProducer: ' + str(pastaProducer.Coordinates))
    print('restaurant: ' + str(restaurant.Coordinates))


if __name__ == '__main__':
    main()
```

<img src="https://github.com/LorenzoTinfena/BestSpiderWeb/blob/master/assets/example.png?raw=1" width="500"/>
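Since the stated goal is the smallest total edge length, a small helper can score any candidate layout. This is not part of the original code — `total_edge_length` is a name introduced here for illustration:

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def total_edge_length(edges):
    """Sum of Euclidean lengths of edges, each edge a (point_a, point_b) pair of coordinate tuples."""
    return sum(dist(a, b) for a, b in edges)

# Score a trial layout for the city example; the positions for the pasta producer
# and restaurant below are made up, not the algorithm's output
edges = [
    ((10, 15), (0, 0)),    # pasta producer - wheat
    ((10, 15), (5, 40)),   # pasta producer - eggs
    ((10, 15), (30, 15)),  # pasta producer - restaurant
    ((30, 15), (5, 40)),   # restaurant - eggs
    ((30, 15), (50, 10)),  # restaurant - hotel
]
print(total_edge_length(edges))
```

Comparing this score before and after running `ComputeBestSpiderWeb` gives a quick sanity check that the returned positions actually shorten the roads.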
# Day 2 - Conditionals

```
x = 5
if x > 2:
    print('Bigger than 2')

for i in range(5):
    print(i)
    if i > 2:
        print('Bigger than 2')

x is None
```

# Labs

## Lab 1 Exercise 1

Write a program to prompt the user for hours and rate per hour and compute gross pay (i.e. gross pay = hrs x rate).

    Enter Hours: 35 (example input)
    Enter Rate ($ per hr): 2.75 (example input)
    Pay: $96.25 (output)

Verify your program output for the above example input values. Save this into a file named W1-D1-lab1-1.

```
hrs = int(input('Hours? '))
rte = float(input('Pay rate? '))
print(hrs * rte)
```

## Lab 1 Exercise 2

Define the following variables and assign some appropriate values: car_name (e.g. "Tesla", "Toyota", ...) and price1, price2, price3 (any appropriate $ values). Define another variable called average and assign the computed average of the three prices. Output the car name and the average price in the following format:

    On an average, <car_name> costs around $<average>.

BONUS: Use the following line of code to print it rounded to two decimal places only: avg = "{:.2f}".format(avg)

```
Tesla = 90
Toyota = 45
avg = (Tesla + Toyota) / 2
print("On average, cars cost about", "{:.2f}".format(avg))
```

## Lab 2 Exercise 1

Write a program that does the following:

1. Prompt the user for a new file name. Enter: "W1-D1-lab2-1.txt"
2. Open that file with write permission.
3. Write the following text into this file: "The rain in Spain stays mainly on the plain! Did Eliza get drenched in the rainy plains of Spain?"
4. Close the file.

Save (this program file) as W1-D1-lab2-1. Verify that there is a file "W1-D1-lab2-1.ipynb" (Home view). Execute the program multiple times and look at the file size. Does it make sense?

```
filename = input('Filename to save: ')
fh = open(filename, 'w')
txt = '''The rain in Spain stays mainly on the plain!
Did Eliza get drenched in the rainy plains of Spain?
'''
fh.write(txt)
fh.close()
```

## Lab 2 Exercise 2

Write a program that does the following:

1. Prompt the user for a new file name. Enter: "W1-D1-lab2-2.txt"
2. Open that file with append permission: fh = open(fname, 'a')
3. Write the following text into this file: "Happy 2020!"
4. Close the file.
5. Re-open the file for read.

Save (this program file) as W1-D1-lab2-2. Verify that there is a file "W1-D1-lab2-2.ipynb" (Home view). Execute the program multiple times and look at the file size. Does it make sense?

```
filename = input("File to append: ")
fh = open(filename, 'a')
fh.write("Happy 2020!")
fh.close()

x = "Mary"
y = 'Mary'
print(id(x))
print(id(y))

x = 3.14
y = 3.14
z = 3.14
print(id(x))
print(id(y))
print(id(z))
```

# Lists

```
x = [1, 2, 3]
y = x
y[2] = 4
x
```

# Quiz

```
myDict = dict()
myDict['stuff']
myDict.get('stuff', -1)
myDict['a'] = 1
myDict['b'] = 2
for i in myDict:
    print(i)

fruit = 'Banana'
fruit[0] = 'b'

a = [1, 2, 3]
b = [4, 5, 6]
a + b

bdfl = ["Rossum", "Guido", "van"]
bdfl.sort()
bdfl[0]
```
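The quiz cells above hinge on mutability: strings cannot be changed in place, while lists can — and plain assignment creates an alias, not a copy. A minimal illustration:

```python
fruit = 'Banana'
try:
    fruit[0] = 'b'            # strings are immutable: this raises TypeError
except TypeError:
    fruit = 'b' + fruit[1:]   # build a new string instead
assert fruit == 'banana'

x = [1, 2, 3]
y = x          # y is another name for the same list object, not a copy
y[2] = 4
assert x == [1, 2, 4]   # the change made through y is visible through x
```

This is exactly why the `y[2] = 4` cell in the Lists section changes `x` as well.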
```
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from bs4 import BeautifulSoup
import pandas as pd
import time

dataFrame = pd.DataFrame(columns=['Name', 'Values'])
for i in range(1, 20 + 1):
    url = 'https://www.transfermarkt.com/spieler-statistik/wertvollstespieler/marktwertetop?ajax=yw1&page=' + str(i)
    options = webdriver.ChromeOptions()
    options.add_argument('headless')
    chrome_driver = r'C:\Users\Gk\Desktop\DSS\install\chromedriver_win32\chromedriver.exe'
    driver = webdriver.Chrome(chrome_driver, options=options)
    driver.implicitly_wait(3)
    driver.get(url)
    src = driver.page_source
    driver.close()
    resp = BeautifulSoup(src, "html.parser")
    values_data = resp.select('table')
    table_html = str(values_data)
    num = 0
    name = ' '
    value = ' '
    for index, row in pd.read_html(table_html)[1].iterrows():
        if index % 3 == 0:
            num = row['#']
            value = row['Market value']
        elif index % 3 == 1:
            name = row['Player']
        else:
            dataFrame.loc[num] = [name, value]

dataFrame

ul = dataFrame['Name'].tolist()
dataFrame.to_csv('userlist.csv', encoding='utf-8-sig')
ul[9]

#userList = ul[0:50]
#userList = ul[49:78]
#userList = ul[79:122]
#userList = ul[123:178]
#userList = ul[179:188]
#userList = ul[189:200]
userList = ul[200:300]

from selenium import webdriver
from bs4 import BeautifulSoup
import requests
import re
from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import StaleElementReferenceException
from selenium.common.exceptions import ElementNotInteractableException

#userList = ['chris eriksen', 'cristiano ronaldo', 'lionel messi', 'dreandes']
driver = webdriver.Chrome(r'C:\Users\Gk\Desktop\DSS\install\chromedriver_win32\chromedriver.exe')
driver.get('https://www.instagram.com/')
delay = 3
driver.implicitly_wait(delay)
id = ''  # Instagram ID
pw = ''  # Instagram PW
driver.find_element_by_xpath('//*[@id="react-root"]/section/main/article/div[2]/div[1]/div/form/div[2]/div/label/input').send_keys(id) driver.find_element_by_xpath('//*[@id="react-root"]/section/main/article/div[2]/div[1]/div/form/div[3]/div/label/input').send_keys(pw) driver.find_element_by_xpath('//*[@id="react-root"]/section/main/article/div[2]/div[1]/div/form/div[4]/button').click() driver.implicitly_wait(delay) listUser = [] listFollower = [] def checkInstaFollowers(user): driver.find_element_by_xpath('//*[@id="react-root"]/section/nav/div[2]/div/div/div[2]/input').send_keys(user) time.sleep(5) driver.find_element_by_xpath('//*[@id="react-root"]/section/nav/div[2]/div/div/div[2]/div[2]/div[2]/div/a[1]/div').click() r = requests.get(driver.current_url).text followers = re.search('"edge_followed_by":{"count":([0-9]+)}',r).group(1) if (r.find('"is_verified":true')!=-1): # print('{} : {}'.format(user, followers)) listUser.append(user) listFollower.append(followers) else: # print('{} : user not verified'.format(user)) listUser.append(user) listFollower.append('not verified') for a in userList: try: checkInstaFollowers(a) except AttributeError: print("{}'s top search is returned as hashtag. 
Continue to next item.".format(a)) listUser.append(a) listFollower.append('Hashtag') except StaleElementReferenceException: print("{} called StaleElementReferenceException".format(a)) try: checkInstaFollowers(a) except AttributeError: listUser.append(a) listFollower.append('SERE/Hashtag') except NoSuchElementException: print("{} called NoSuchElementException".format(a)) try: checkInstaFollowers(a) except AttributeError: listUser.append(a) listFollower.append('NSEE/Hashtag') except ElementNotInteractableException: print("{} called ElementNotInteractableException".format(a)) try: checkInstaFollowers(a) except AttributeError: listUser.append(a) listFollower.append('ENIE/Hashtag') driver.quit() ul[208] df_follower = pd.DataFrame(list(zip(listUser, listFollower)), columns=['name', 'follower']) df_follower[df_follower['name']=='Rúben Dias'] resDf = resDf.append(df_follower, ignore_index = True) resDf #resDf.to_csv('mktval_inst_data.csv', encoding='utf-8') readdf = pd.read_csv('mktval_inst_data.csv', encoding='utf-8') col = ['name', 'follower'] readdf = readdf[col] readdf readdf = readdf.append(df_follower, ignore_index = True) readdf readdf.to_csv('test.csv', encoding='utf-8-sig') readdf.truncate(before=0, after=199) from selenium import webdriver from bs4 import BeautifulSoup import requests import re from selenium.common.exceptions import NoSuchElementException from selenium.common.exceptions import StaleElementReferenceException from selenium.common.exceptions import ElementNotInteractableException #userList = ['chris eriksen', 'cristiano ronaldo', 'lionel messi', 'dreandes'] driver = webdriver.Chrome(r'C:\Users\Gk\Desktop\DSS\install\chromedriver_win32\chromedriver.exe') driver.get('https://www.instagram.com/') delay = 3 driver.implicitly_wait(delay) id = '' #Instagram ID pw = '' #Instagram PW driver.find_element_by_xpath('//*[@id="react-root"]/section/main/article/div[2]/div[1]/div/form/div[2]/div/label/input').send_keys(id) 
driver.find_element_by_xpath('//*[@id="react-root"]/section/main/article/div[2]/div[1]/div/form/div[3]/div/label/input').send_keys(pw) driver.find_element_by_xpath('//*[@id="react-root"]/section/main/article/div[2]/div[1]/div/form/div[4]/button').click() driver.implicitly_wait(delay) listUser = [] listFollower = [] def checkInstaFollowers(user): driver.find_element_by_xpath('//*[@id="react-root"]/section/nav/div[2]/div/div/div[2]/input').send_keys(user) time.sleep(5) driver.find_element_by_xpath('//*[@id="react-root"]/section/nav/div[2]/div/div/div[2]/div[2]/div[2]/div/a[1]/div').click() r = requests.get(driver.current_url).text followers = re.search('"edge_followed_by":{"count":([0-9]+)}',r).group(1) if (r.find('"is_verified":true')!=-1): # print('{} : {}'.format(user, followers)) listUser.append(user) listFollower.append(followers) else: # print('{} : user not verified'.format(user)) listUser.append(user) listFollower.append('not verified') n = 1 for a in userList: try: checkInstaFollowers(a) except AttributeError: print("{}'s top search is returned as hashtag. 
Continue to next item.".format(a)) listUser.append(a) listFollower.append('Hashtag') except StaleElementReferenceException: print("{} called StaleElementReferenceException".format(a)) try: checkInstaFollowers(a) except AttributeError: listUser.append(a) listFollower.append('SERE/Hashtag') except NoSuchElementException: print("{} called NoSuchElementException".format(a)) try: checkInstaFollowers(a) except AttributeError: listUser.append(a) listFollower.append('NSEE/Hashtag') except ElementNotInteractableException: print("{} called ElementNotInteractableException".format(a)) try: checkInstaFollowers(a) except AttributeError: listUser.append(a) listFollower.append('ENIE/Hashtag') driver.quit() from selenium import webdriver from bs4 import BeautifulSoup import requests import re from selenium.common.exceptions import NoSuchElementException from selenium.common.exceptions import StaleElementReferenceException from selenium.common.exceptions import ElementNotInteractableException listUser = [] listFollower = [] def checkInstaFollowers(user): try: driver.find_element_by_xpath('//*[@id="react-root"]/section/nav/div[2]/div/div/div[2]/input').send_keys(user) time.sleep(5) driver.find_element_by_xpath('//*[@id="react-root"]/section/nav/div[2]/div/div/div[2]/div[2]/div[2]/div/a[1]/div').click() r = requests.get(driver.current_url).text followers = re.search('"edge_followed_by":{"count":([0-9]+)}',r).group(1) except AttributeError: print("{}'s top search is returned as hashtag. 
Continue to next item.".format(a)) listUser.append(a) listFollower.append('Hashtag') except StaleElementReferenceException: print("{} called StaleElementReferenceException".format(a)) try: checkInstaFollowers(a) except AttributeError: listUser.append(a) listFollower.append('SERE/Hashtag') except NoSuchElementException: print("{} called NoSuchElementException".format(a)) try: checkInstaFollowers(a) except AttributeError: listUser.append(a) listFollower.append('NSEE/Hashtag') except ElementNotInteractableException: print("{} called ElementNotInteractableException".format(a)) try: checkInstaFollowers(a) except AttributeError: listUser.append(a) listFollower.append('ENIE/Hashtag') else: if (r.find('"is_verified":true')!=-1): # print('{} : {}'.format(user, followers)) listUser.append(user) listFollower.append(followers) else: # print('{} : user not verified'.format(user)) listUser.append(user) listFollower.append('not verified') # finally: # driver.quit() for a in range(1): driver = webdriver.Chrome(r'C:\Users\Gk\Desktop\DSS\install\chromedriver_win32\chromedriver.exe') driver.get('https://www.instagram.com/') delay = 3 driver.implicitly_wait(delay) id = '' #Instagram ID pw = '' #Instagram PW driver.find_element_by_xpath('//*[@id="react-root"]/section/main/article/div[2]/div[1]/div/form/div[2]/div/label/input').send_keys(id) driver.find_element_by_xpath('//*[@id="react-root"]/section/main/article/div[2]/div[1]/div/form/div[3]/div/label/input').send_keys(pw) driver.find_element_by_xpath('//*[@id="react-root"]/section/main/article/div[2]/div[1]/div/form/div[4]/button').click() driver.implicitly_wait(delay) for b in range(10): # print('(a*10)+b = {}, a={}, b={}'.format(((a*10) + b), a, b)) num = (a*10) + b userName = ul[num] checkInstaFollowers(userName) # print('==============================================') driver.quit() df_follower = pd.DataFrame(list(zip(listUser, listFollower)), columns=['name', 'follower']) df_follower ```
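The follower count in `checkInstaFollowers` is pulled out of the raw page source with a regex rather than a parser. On a synthetic snippet (the HTML fragment below is fabricated for illustration, mimicking the JSON embedded in a profile page) the extraction works like this:

```python
import re

# Fabricated fragment resembling the embedded profile JSON
html = '... "edge_followed_by":{"count":12345} ... "is_verified":true ...'

match = re.search(r'"edge_followed_by":{"count":([0-9]+)}', html)
followers = match.group(1) if match else None
verified = '"is_verified":true' in html
print(followers, verified)  # → 12345 True
```

Note that `followers` comes back as a string; it would need `int(...)` before any numeric comparison or sorting against the market values.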
# Pandas Data Series [40 exercises]

```
import pandas as pd
import numpy as np

# 1. Write a Pandas program to create and display a one-dimensional array-like
# object containing an array of data using the Pandas module.
data = pd.Series([9, 8, 7, 6, 5, 4, 3, 2, 1])
print(data)

# 2. Write a Pandas program to convert a Pandas Series to a Python list and check its type.
data = pd.Series([9, 8, 7, 6, 5, 4, 3, 2, 1])
print(type(data))
lis = data.tolist()
print(type(lis))

# 3. Write a Pandas program to add, subtract, multiply and divide two Pandas Series.
data1 = pd.Series([2, 4, 6, 8, 10])
data2 = pd.Series([1, 3, 5, 7, 9])
print(data1 + data2, data1 - data2, data1 * data2, data1 / data2)

# 4. Write a Pandas program to compare the elements of two Pandas Series.
data1 = pd.Series([2, 4, 6, 8, 10])
data2 = pd.Series([1, 3, 5, 7, 10])
print("Equal:")
print(data1 == data2)
print("Greater:")
print(data1 > data2)
print("Lesser:")
print(data1 < data2)

# 5. Write a Pandas program to convert a dictionary to a Pandas Series.
dic = {'a': 100, 'b': 200, 'c': 300, 'd': 400, 'e': 800}
ser = pd.Series(dic)
print(ser)

# 6. Write a Pandas program to convert a NumPy array to a Pandas Series.
np_arr = np.array([10, 20, 30, 40, 50])
ser = pd.Series(np_arr)
print(ser)

# 7. Write a Pandas program to change the data type of a given column or Series.
data = pd.Series([100, 200, 'python', 300.12, 400])
data = pd.to_numeric(data, errors='coerce')
print(data)

# 8. Write a Pandas program to convert the first column of a DataFrame to a Series.
d = {'col1': [1, 2, 3, 4, 7, 11], 'col2': [4, 5, 6, 9, 5, 0], 'col3': [7, 5, 8, 12, 1, 11]}
df = pd.DataFrame(data=d)
print(pd.Series(df['col1']))

# 9. Write a Pandas program to convert a given Series to an array.
data = pd.Series([100, 200, 'python', 300.12, 400])
print(np.array(data.tolist()))

# 10. Write a Pandas program to convert a Series of lists to one Series.
s = pd.Series([['Red', 'Green', 'White'], ['Red', 'Black'], ['Yellow']])
print(s.apply(pd.Series).stack().reset_index(drop=True))

# 11. Write a Pandas program to sort a given Series.
data = pd.Series(['100', '200', 'python', '300.12', '400'])
data.sort_values()

# 12. Write a Pandas program to add some data to an existing Series.
data = pd.Series(['100', '200', 'python', '300.12', '400'])
data = data.append(pd.Series(['500', 'php']))
print(data)

# 13. Write a Pandas program to create a subset of a given Series based on value and condition.
data = pd.Series([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
data = data[data < 6]
print(data)
```
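Exercise 7 above relies on `errors='coerce'`, which turns unparseable entries into `NaN` instead of raising a `ValueError`. A quick check of that behavior:

```python
import pandas as pd

s = pd.Series([100, 200, 'python', 300.12, 400])
coerced = pd.to_numeric(s, errors='coerce')
print(coerced)         # 'python' becomes NaN; the rest become floats
print(coerced.isna().sum())
```

Because of the introduced `NaN`, the whole result is upcast to `float64` — something to keep in mind before doing integer comparisons on the coerced Series.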
# Customer Churn Prediction with XGBoost _**Using Gradient Boosted Trees to Predict Mobile Customer Departure**_ --- --- ## Runtime This notebook takes approximately 8 minutes to run. ## Contents 1. [Background](#Background) 1. [Setup](#Setup) 1. [Data](#Data) 1. [Train](#Train) 1. [Host](#Host) 1. [Evaluate](#Evaluate) 1. [Relative cost of errors](#Relative-cost-of-errors) 1. [Extensions](#Extensions) --- ## Background _This notebook has been adapted from an [AWS blog post](https://aws.amazon.com/blogs/ai/predicting-customer-churn-with-amazon-machine-learning/)_ Losing customers is costly for any business. Identifying unhappy customers early on gives you a chance to offer them incentives to stay. This notebook describes using machine learning (ML) for the automated identification of unhappy customers, also known as customer churn prediction. ML models rarely give perfect predictions though, so this notebook is also about how to incorporate the relative costs of prediction mistakes when determining the financial outcome of using ML. We use a familiar example of churn: leaving a mobile phone operator. Seems like one can always find fault with their provider du jour! And if the provider knows that a customer is thinking of leaving, it can offer timely incentives - such as a phone upgrade or perhaps having a new feature activated – and the customer may stick around. Incentives are often much more cost-effective than losing and reacquiring a customer. --- ## Setup _This notebook was created and tested on a `ml.m4.xlarge` notebook instance._ Let's start by updating the required packages i.e. SageMaker Python SDK, `pandas` and `numpy`, and specifying: - The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance or Studio, training, and hosting. - The IAM role ARN used to give training and hosting access to your data. See the documentation for how to create these. 
Note: if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role ARN string(s). ``` import sys !{sys.executable} -m pip install sagemaker pandas numpy --upgrade import sagemaker sess = sagemaker.Session() bucket = sess.default_bucket() prefix = "sagemaker/DEMO-xgboost-churn" # Define IAM role import boto3 import re from sagemaker import get_execution_role role = get_execution_role() ``` Next, we'll import the Python libraries we'll need for the remainder of the example. ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import io import os import sys import time import json from IPython.display import display from time import strftime, gmtime from sagemaker.inputs import TrainingInput from sagemaker.serializers import CSVSerializer ``` --- ## Data Mobile operators have historical records on which customers ultimately ended up churning and which continued using the service. We can use this historical information to construct an ML model of one mobile operator’s churn using a process called training. After training the model, we can pass the profile information of an arbitrary customer (the same profile information that we used to train the model) to the model, and have the model predict whether this customer is going to churn. Of course, we expect the model to make mistakes. After all, predicting the future is tricky business! But we'll learn how to deal with prediction errors. The dataset we use is publicly available and was mentioned in the book [Discovering Knowledge in Data](https://www.amazon.com/dp/0470908742/) by Daniel T. Larose. It is attributed by the author to the University of California Irvine Repository of Machine Learning Datasets. 
Let's download and read that dataset in now: ``` s3 = boto3.client("s3") s3.download_file(f"sagemaker-sample-files", "datasets/tabular/synthetic/churn.txt", "churn.txt") churn = pd.read_csv("./churn.txt") pd.set_option("display.max_columns", 500) churn len(churn.columns) ``` By modern standards, it’s a relatively small dataset, with only 5,000 records, where each record uses 21 attributes to describe the profile of a customer of an unknown US mobile operator. The attributes are: - `State`: the US state in which the customer resides, indicated by a two-letter abbreviation; for example, OH or NJ - `Account Length`: the number of days that this account has been active - `Area Code`: the three-digit area code of the corresponding customer’s phone number - `Phone`: the remaining seven-digit phone number - `Int’l Plan`: whether the customer has an international calling plan: yes/no - `VMail Plan`: whether the customer has a voice mail feature: yes/no - `VMail Message`: the average number of voice mail messages per month - `Day Mins`: the total number of calling minutes used during the day - `Day Calls`: the total number of calls placed during the day - `Day Charge`: the billed cost of daytime calls - `Eve Mins, Eve Calls, Eve Charge`: the billed cost for calls placed during the evening - `Night Mins`, `Night Calls`, `Night Charge`: the billed cost for calls placed during nighttime - `Intl Mins`, `Intl Calls`, `Intl Charge`: the billed cost for international calls - `CustServ Calls`: the number of calls placed to Customer Service - `Churn?`: whether the customer left the service: true/false The last attribute, `Churn?`, is known as the target attribute: the attribute that we want the ML model to predict. Because the target attribute is binary, our model will be performing binary prediction, also known as binary classification. 
Let's begin exploring the data: ``` # Frequency tables for each categorical feature for column in churn.select_dtypes(include=["object"]).columns: display(pd.crosstab(index=churn[column], columns="% observations", normalize="columns")) # Histograms for each numeric features display(churn.describe()) %matplotlib inline hist = churn.hist(bins=30, sharey=True, figsize=(10, 10)) ``` We can see immediately that: - `State` appears to be quite evenly distributed. - `Phone` takes on too many unique values to be of any practical use. It's possible that parsing out the prefix could have some value, but without more context on how these are allocated, we should avoid using it. - Most of the numeric features are surprisingly nicely distributed, with many showing bell-like `gaussianity`. `VMail Message` is a notable exception (and `Area Code` showing up as a feature we should convert to non-numeric). ``` churn = churn.drop("Phone", axis=1) churn["Area Code"] = churn["Area Code"].astype(object) ``` Next let's look at the relationship between each of the features and our target variable. ``` for column in churn.select_dtypes(include=["object"]).columns: if column != "Churn?": display(pd.crosstab(index=churn[column], columns=churn["Churn?"], normalize="columns")) for column in churn.select_dtypes(exclude=["object"]).columns: print(column) hist = churn[[column, "Churn?"]].hist(by="Churn?", bins=30) plt.show() display(churn.corr()) pd.plotting.scatter_matrix(churn, figsize=(12, 12)) plt.show() ``` We see several features that essentially have 100% correlation with one another. Including these feature pairs in some machine learning algorithms can create catastrophic problems, while in others it will only introduce minor redundancy and bias. 
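As a hedged aside (toy data and a 0.99 threshold chosen purely for illustration — the notebook identifies its pairs by inspection), near-duplicate feature pairs like these can also be flagged programmatically from the correlation matrix:

```python
import numpy as np
import pandas as pd

# Toy frame where "Day Charge" is an exact linear function of "Day Mins".
rng = np.random.default_rng(0)
mins = rng.uniform(0, 300, size=100)
df = pd.DataFrame({
    "Day Mins": mins,
    "Day Charge": mins * 0.017,  # perfectly correlated by construction
    "Day Calls": rng.integers(0, 50, size=100),
})

corr = df.corr().abs()
# Keep only the upper triangle so each pair is reported once.
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
high = upper.stack()
pairs = high[high > 0.99]
print(list(pairs.index))  # [('Day Mins', 'Day Charge')]
```

On the real dataset, each flagged pair would then lose one member, as the notebook does in the next cell.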
Let's remove one feature from each of the highly correlated pairs: `Day Charge` from the pair with `Day Mins`, `Eve Charge` from the pair with `Eve Mins`, `Night Charge` from the pair with `Night Mins`, and `Intl Charge` from the pair with `Intl Mins`: ``` churn = churn.drop(["Day Charge", "Eve Charge", "Night Charge", "Intl Charge"], axis=1) ``` Now that we've cleaned up our dataset, let's determine which algorithm to use. As mentioned above, there appear to be some variables where both high and low (but not intermediate) values are predictive of churn. In order to accommodate this in an algorithm like linear regression, we'd need to generate polynomial (or bucketed) terms. Instead, let's attempt to model this problem using gradient boosted trees. Amazon SageMaker provides an XGBoost container that we can use to train in a managed, distributed setting, and then host as a real-time prediction endpoint. XGBoost uses gradient boosted trees which naturally account for non-linear relationships between features and the target variable, as well as accommodating complex interactions between features. Amazon SageMaker XGBoost can train on data in either a CSV or LibSVM format. For this example, we'll stick with CSV. It should: - Have the predictor variable in the first column - Not have a header row But first, let's convert our categorical features into numeric features. ``` model_data = pd.get_dummies(churn) model_data = pd.concat( [model_data["Churn?_True."], model_data.drop(["Churn?_False.", "Churn?_True."], axis=1)], axis=1 ) ``` And now let's split the data into training, validation, and test sets. This will help prevent us from overfitting the model, and allow us to test the model's accuracy on data it hasn't already seen.
``` train_data, validation_data, test_data = np.split( model_data.sample(frac=1, random_state=1729), [int(0.7 * len(model_data)), int(0.9 * len(model_data))], ) train_data.to_csv("train.csv", header=False, index=False) validation_data.to_csv("validation.csv", header=False, index=False) len(train_data.columns) ``` Now we'll upload these files to S3. ``` boto3.Session().resource("s3").Bucket(bucket).Object( os.path.join(prefix, "train/train.csv") ).upload_file("train.csv") boto3.Session().resource("s3").Bucket(bucket).Object( os.path.join(prefix, "validation/validation.csv") ).upload_file("validation.csv") ``` --- ## Train Moving onto training, first we'll need to specify the locations of the XGBoost algorithm containers. ``` container = sagemaker.image_uris.retrieve("xgboost", sess.boto_region_name, "latest") display(container) ``` Then, because we're training with the CSV file format, we'll create `TrainingInput`s that our training function can use as a pointer to the files in S3. ``` s3_input_train = TrainingInput( s3_data="s3://{}/{}/train".format(bucket, prefix), content_type="csv" ) s3_input_validation = TrainingInput( s3_data="s3://{}/{}/validation/".format(bucket, prefix), content_type="csv" ) ``` Now, we can specify a few parameters like what type of training instances we'd like to use and how many, as well as our XGBoost hyperparameters. A few key hyperparameters are: - `max_depth` controls how deep each tree within the algorithm can be built. Deeper trees can lead to better fit, but are more computationally expensive and can lead to overfitting. There is typically some trade-off in model performance that needs to be explored between numerous shallow trees and a smaller number of deeper trees. - `subsample` controls sampling of the training data. This technique can help reduce overfitting, but setting it too low can also starve the model of data. - `num_round` controls the number of boosting rounds. 
Each round essentially trains a subsequent model on the residuals of previous iterations. Again, more rounds should produce a better fit on the training data, but can be computationally expensive or lead to overfitting. - `eta` controls how aggressive each round of boosting is. Smaller values lead to more conservative boosting. - `gamma` controls how aggressively trees are grown. Larger values lead to more conservative models. More detail on XGBoost's hyper-parameters can be found on their GitHub [page](https://github.com/dmlc/xgboost/blob/master/doc/parameter.md). ``` sess = sagemaker.Session() xgb = sagemaker.estimator.Estimator( container, role, instance_count=1, instance_type="ml.m4.xlarge", output_path="s3://{}/{}/output".format(bucket, prefix), sagemaker_session=sess, ) xgb.set_hyperparameters( max_depth=5, eta=0.2, gamma=4, min_child_weight=6, subsample=0.8, silent=0, objective="binary:logistic", num_round=100, ) xgb.fit({"train": s3_input_train, "validation": s3_input_validation}) ``` --- ## Host Now that we've trained the algorithm, let's create a model and deploy it to a hosted endpoint. ``` xgb_predictor = xgb.deploy( initial_instance_count=1, instance_type="ml.m4.xlarge", serializer=CSVSerializer() ) ``` ### Evaluate Now that we have a hosted endpoint running, we can make real-time predictions from our model very easily, simply by making an HTTP POST request. But first, we'll need to set up serializers and deserializers for passing our `test_data` NumPy arrays to the model behind the endpoint. Now, we'll use a simple function to: 1. Loop over our test dataset 1. Split it into mini-batches of rows 1. Convert those mini-batches to CSV string payloads 1. Retrieve mini-batch predictions by invoking the XGBoost endpoint 1.
Collect predictions and convert from the CSV output our model provides into a NumPy array ``` def predict(data, rows=500): split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1)) predictions = "" for array in split_array: predictions = ",".join([predictions, xgb_predictor.predict(array).decode("utf-8")]) return np.fromstring(predictions[1:], sep=",") predictions = predict(test_data.to_numpy()[:, 1:]) print(predictions) ``` There are many ways to compare the performance of a machine learning model, but let's start simply by comparing actual to predicted values. In this case, we're simply predicting whether the customer churned (`1`) or not (`0`), which produces a confusion matrix. ``` pd.crosstab( index=test_data.iloc[:, 0], columns=np.round(predictions), rownames=["actual"], colnames=["predictions"], ) ``` _Note, due to randomized elements of the algorithm, your results may differ slightly._ Of the 48 churners, we've correctly predicted 39 of them (true positives). We also incorrectly predicted 4 customers would churn who then ended up not doing so (false positives). There are also 9 customers who ended up churning that we predicted would not (false negatives). An important point here is that because of the `np.round()` function above, we are using a simple threshold (or cutoff) of 0.5. Our predictions from `xgboost` yield continuous values between 0 and 1, and we force them into the binary classes that we began with. However, because a customer that churns is expected to cost the company more than proactively trying to retain a customer who we think might churn, we should consider lowering this cutoff. That will almost certainly increase the number of false positives, but it can also be expected to increase the number of true positives and reduce the number of false negatives. To get a rough intuition here, let's look at the continuous values of our predictions.
``` plt.hist(predictions) plt.xlabel("Predicted churn probability") plt.ylabel("Number of customers") plt.show() ``` The continuous valued predictions coming from our model tend to skew toward 0 or 1, but there is sufficient mass between 0.1 and 0.9 that adjusting the cutoff should indeed shift a number of customers' predictions. For example... ``` pd.crosstab(index=test_data.iloc[:, 0], columns=np.where(predictions > 0.3, 1, 0)) ``` We can see that lowering the cutoff from 0.5 to 0.3 results in 1 more true positive, 3 more false positives, and 1 fewer false negative. The numbers are small overall here, but that's 6-10% of customers overall that are shifting because of a change to the cutoff. Was this the right decision? We may end up retaining 3 extra customers, but we also unnecessarily incentivized 5 more customers who would have stayed anyway. Determining optimal cutoffs is a key step in properly applying machine learning in a real-world setting. Let's discuss this more broadly and then apply a specific, hypothetical solution for our current problem. ### Relative cost of errors Any practical binary classification problem is likely to produce a similarly sensitive cutoff. That by itself isn’t a problem. After all, if the scores for two classes are really easy to separate, the problem probably isn’t very hard to begin with and might even be solvable with deterministic rules instead of ML. More important, if we put an ML model into production, there are costs associated with the model erroneously assigning false positives and false negatives. We also need to look at similar costs associated with correct predictions of true positives and true negatives. Because the choice of the cutoff affects all four of these statistics, we need to consider the relative costs to the business for each of these four outcomes for each prediction. #### Assigning costs What are the costs for our problem of mobile operator churn? 
The costs, of course, depend on the specific actions that the business takes. Let's make some assumptions here. First, assign the true negatives the cost of \$0. Our model essentially correctly identified a happy customer in this case, and we don’t need to do anything. False negatives are the most problematic, because they incorrectly predict that a churning customer will stay. We lose the customer and will have to pay all the costs of acquiring a replacement customer, including foregone revenue, advertising costs, administrative costs, point of sale costs, and likely a phone hardware subsidy. A quick search on the Internet reveals that such costs typically run in the hundreds of dollars so, for the purposes of this example, let's assume \$500. This is the cost of false negatives. Finally, for customers that our model identifies as churning, let's assume a retention incentive in the amount of \$100. If a provider offered a customer such a concession, they may think twice before leaving. This is the cost of both true positive and false positive outcomes. In the case of false positives (the customer is happy, but the model mistakenly predicted churn), we will “waste” the \$100 concession. We probably could have spent that \$100 more effectively, but it's possible we increased the loyalty of an already loyal customer, so that’s not so bad. #### Finding the optimal cutoff It’s clear that false negatives are substantially more costly than false positives. Instead of optimizing for error based on the number of customers, we should be minimizing a cost function that looks like this: ``` $500 * FN(C) + $0 * TN(C) + $100 * FP(C) + $100 * TP(C) ``` FN(C) means that the false negative percentage is a function of the cutoff, C, and similar for TN, FP, and TP. We need to find the cutoff, C, where the result of the expression is smallest. A straightforward way to do this is to simply run a simulation over numerous possible cutoffs.
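Before sweeping over cutoffs, here is a hedged, self-contained sketch of the cost expression at a single cutoff (the labels and scores are synthetic stand-ins, not the model's actual predictions):

```python
import numpy as np

def churn_cost(y_true, scores, cutoff, fn_cost=500.0, fp_cost=100.0, tp_cost=100.0, tn_cost=0.0):
    """Dollar cost of predictions at one cutoff, per the cost assumptions above."""
    y_pred = (scores > cutoff).astype(int)
    fn = np.sum((y_true == 1) & (y_pred == 0))  # churner we missed
    fp = np.sum((y_true == 0) & (y_pred == 1))  # wasted incentive
    tp = np.sum((y_true == 1) & (y_pred == 1))  # incentive that may retain
    tn = np.sum((y_true == 0) & (y_pred == 0))  # happy customer, no action
    return fn_cost * fn + fp_cost * fp + tp_cost * tp + tn_cost * tn

y_true = np.array([1, 0, 0, 1])
scores = np.array([0.8, 0.6, 0.1, 0.4])
print(churn_cost(y_true, scores, 0.5))  # 1 FN + 1 FP + 1 TP -> 700.0
print(churn_cost(y_true, scores, 0.3))  # lowering the cutoff: 2 TP + 1 FP -> 300.0
```

Lowering the cutoff converts the expensive false negative into a cheap true positive, which is exactly the effect the cutoff sweep searches for.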
We test 100 possible values in the for-loop below. ``` cutoffs = np.arange(0.01, 1, 0.01) costs = [] for c in cutoffs: costs.append( np.sum( np.sum( np.array([[0, 100], [500, 100]]) * pd.crosstab(index=test_data.iloc[:, 0], columns=np.where(predictions > c, 1, 0)) ) ) ) costs = np.array(costs) plt.plot(cutoffs, costs) plt.xlabel("Cutoff") plt.ylabel("Cost") plt.show() print( "Cost is minimized near a cutoff of:", cutoffs[np.argmin(costs)], "for a cost of:", np.min(costs), ) ``` The above chart shows how picking a threshold too low results in costs skyrocketing as all customers are given a retention incentive. Meanwhile, setting the threshold too high results in too many lost customers, which ultimately grows to be nearly as costly. The overall cost can be minimized at \$8400 by setting the cutoff to 0.46, which is substantially better than the \$20k+ we would expect to lose by not taking any action. --- ## Extensions This notebook showcased how to build a model that predicts whether a customer is likely to churn, and then how to optimally set a threshold that accounts for the cost of true positives, false positives, and false negatives. There are several means of extending it including: - Some customers who receive retention incentives will still churn. Including a probability of churning despite receiving an incentive in our cost function would provide a better ROI on our retention programs. - Customers who switch to a lower-priced plan or who deactivate a paid feature represent different kinds of churn that could be modeled separately. - Modeling the evolution of customer behavior. If usage is dropping and the number of calls placed to Customer Service is increasing, you are more likely to experience churn than if the trend is the opposite. A customer profile should incorporate behavior trends. - Actual training data and monetary cost assignments could be more complex. - Multiple models for each type of churn could be needed.
Regardless of additional complexity, similar principles described in this notebook are likely applied. ### (Optional) Clean-up If you're ready to be done with this notebook, please run the cell below. This will remove the hosted endpoint you created and avoid any charges from a stray instance being left on. ``` xgb_predictor.delete_endpoint() ```
``` # Necessary imports import re import emoji from gtrans import translate_text, translate_html import random import pandas as pd import numpy as np from multiprocessing import Pool import time # Function to remove emojis in text, since these conflict during translation def remove_emoji(text): return emoji.get_emoji_regexp().sub(u'', text) def approximate_emoji_insert(string, index,char): if(index<(len(string)-1)): while(string[index]!=' ' ): if(index+1==len(string)): break index=index+1 return string[:index] + ' '+char + ' ' + string[index:] else: return string + ' '+char + ' ' def extract_emojis(str1): try: return [(c,i) for i,c in enumerate(str1) if c in emoji.UNICODE_EMOJI] except AttributeError: return [] # Use multiprocessing framework for speeding up translation process def parallelize_dataframe(df, func, n_cores=4): '''parallelize the dataframe''' df_split = np.array_split(df, n_cores) pool = Pool(n_cores) df = pd.concat(pool.map(func, df_split)) pool.close() pool.join() return df # Main function for translation def translate(x,lang): '''provide the translation given text and the language''' #x=preprocess_lib.preprocess_multi(x,lang,multiple_sentences=False,stop_word_remove=False, tokenize_word=False, tokenize_sentence=False) emoji_list=extract_emojis(x) try: translated_text=translate_text(x,lang,'en') except: translated_text=x for ele in emoji_list: translated_text=approximate_emoji_insert(translated_text, ele[1],ele[0]) return translated_text def add_features(df): '''adding new features to the dataframe''' translated_text=[] for index,row in df.iterrows(): if(row['lang']in ['en','unk']): translated_text.append(row['text']) else: translated_text.append(translate(row['text'],row['lang'])) df["translated"]=translated_text return df import glob train_files = glob.glob('train/*.csv') test_files = glob.glob('test/*.csv') val_files = glob.glob('val/*.csv') files= train_files+test_files+val_files from tqdm import tqdm_notebook size=10 for file in files: 
wp_data=pd.read_csv(file) list_df=[] for i in tqdm_notebook(range(0,100,size)): print(i,"_iteration") df_new=parallelize_dataframe(wp_data[i:i+size],add_features,n_cores=20) list_df.append(df_new) df_translated=pd.concat(list_df,ignore_index=True) file_name='translated'+file df_translated.to_csv(file_name) ```
``` import pandas import matplotlib.pyplot import numpy import seaborn import sys import os from os.path import join from scipy import stats from sklearn.svm import SVC from sklearn.metrics import accuracy_score, f1_score from sklearn.model_selection import train_test_split from sklearn.neighbors import KNeighborsClassifier ``` First, we will open the csv file and show its structure. ``` dataFrame=pandas.read_csv(join("./data", "vgsales.csv")) dataFrame.head() ``` # Project Detail ## Dataset The dataset used in this project is taken from https://www.kaggle.com/gregorut/videogamesales and concerns video game sales around the world. It contains 16,598 data points, each with 11 attributes. The attributes and their data types will be introduced in the data preprocessing section. ## Project Description In this project, we will explore the above-mentioned dataset, performing statistical analysis, hypothesis testing, and machine learning on it. Our main focus will be to explore relationships between the dataset's attributes, relate some of them to others, and ultimately predict the category of data points accurately from what we learn. # Data Preprocessing We will drop NaN values from the table. After that, we will add columns that give numeric representations of some string columns.
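As a hedged aside, such numeric codes can also be produced with pandas' `factorize`; below is a minimal sketch on a toy frame (the values are illustrative), while the notebook builds its codes with an explicit dictionary loop:

```python
import pandas as pd

toy = pd.DataFrame({
    "Platform": ["Wii", "PS2", "Wii"],
    "Genre": ["Sports", "Action", "Racing"],
})

# factorize assigns 0-based integer codes in order of first appearance;
# adding 1 mirrors the 1-based numbering used by the dictionary loop.
for col in ["Platform", "Genre"]:
    codes, uniques = pd.factorize(toy[col])
    toy[col + "Num"] = codes + 1

print(toy["PlatformNum"].tolist())  # [1, 2, 1]
print(toy["GenreNum"].tolist())     # [1, 2, 3]
```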
``` dataFrame=dataFrame.dropna(axis=0,how='any') platforms=dict() genres=dict() publishers=dict() pcount=1 gcount=1 pubcount=1 for index,row in dataFrame.iterrows(): if row['Platform'] not in platforms.keys(): platforms[row['Platform']]=pcount pcount+=1 if row['Genre'] not in genres.keys(): genres[row['Genre']]=gcount gcount+=1 if row['Publisher'] not in publishers.keys(): publishers[row['Publisher']]=pubcount pubcount+=1 dataFrame['PlatformNum']=0 dataFrame['GenreNum']=0 dataFrame['PublisherNum']=0 for index, row in dataFrame.iterrows(): dataFrame.loc[index,'PlatformNum'] = platforms[row['Platform']] dataFrame.loc[index,'GenreNum'] = genres[row['Genre']] dataFrame.loc[index,'PublisherNum'] = publishers[row['Publisher']] dataFrame ``` Now let's call the dtypes method to see the attributes of the final dataframe ``` dataFrame.dtypes ``` At this point, our dataset is ready to work with: it has no null data and a numeric representation of each of its attributes, except the name of each game, which need not be numeric since it has no representative quality. # Data Exploration ``` fig=matplotlib.pyplot.figure(figsize=(15,20)) matplotlib.pyplot.subplot(2, 1, 1) seaborn.distplot(dataFrame["Year"].values, norm_hist=True) matplotlib.pyplot.title("Distribution of Release Year of Games") matplotlib.pyplot.xlabel("Years") matplotlib.pyplot.ylabel("Density") matplotlib.pyplot.subplot(2, 1, 2) seaborn.distplot(dataFrame["Global_Sales"].values, norm_hist=True) matplotlib.pyplot.title("Distribution of Sales of Games") matplotlib.pyplot.xlabel("Sales in Millions") matplotlib.pyplot.ylabel("Density") matplotlib.pyplot.show() ``` - When we look at the plot of release years, we can see the explosion in video game releases. Notably, the peak coincides with smartphones becoming mainstream around 2009.
After that point, it is safe to assume that mobile phones became more accessible and less risky for game companies to invest in, and therefore other platforms in our dataset, such as the Wii, took a hit. - On the other hand, the sales graph does not tell us much beyond the fact that most games released sold only a few million copies. Now we will look at some relations between data attributes and try to deduce as much as we can from the graphs. ``` seaborn.pairplot(data=dataFrame, vars=["Year","Global_Sales"]) matplotlib.pyplot.show() ``` This pairplot between the Year and Global Sales attributes shows that the two are not strongly related. Even though the highest sales coincide with the peak year for game releases, this shows only a statistical relation, and is arguably sample-size bias. We can see that by looking at the number of high global sales around the 90s: even though the 90s were the least productive era of game releases, they apparently produced a couple of hit games. Similar to other industries, video game companies as well as video game platforms have peak times. Now, let's look at the following plots and try to deduce which platforms dominated which years. But first, let's recall the platform codes with the following. ``` platformNames=[] genreNames=[] for key in platforms: platformNames.append(str(key)) for key in genres: genreNames.append(str(key)) fig=matplotlib.pyplot.figure(figsize=(10,10)) matplotlib.pyplot.hexbin(dataFrame['GenreNum'].values, dataFrame['PlatformNum'].values , gridsize=(15,15),cmap=matplotlib.pyplot.cm.Greens ) matplotlib.pyplot.colorbar() matplotlib.pyplot.xticks(numpy.arange(1, len(genreNames)+1, step=1),genreNames,rotation=60) matplotlib.pyplot.yticks(numpy.arange(1, len(platformNames)+1, step=1),platformNames) matplotlib.pyplot.show() ``` - Action games are released more than any other genre. - PS2, PS3, Xbox 360, and Nintendo DS are the platforms with the largest number of games available.
- Except for the Action genre and the above-mentioned platforms, there is no visible pattern in the data. This can mean that our observations are heavily affected by some outliers in the data. # Hypothesis Testing In this section, we will perform some hypothesis tests to explore our dataset further. The selection of hypotheses and related attributes is arbitrary. We think that political tension all over the world affects video game genre selection, just as it affects every other cultural aspect. For this reason, we will test the following hypothesis: "After the year 2001 (the terrorist attack on the US), the Action and Shooter genres have higher average sales than before." For this hypothesis, we define the following sets: K : Sales of action and shooter genre video games after the year 2001 (exclusive) L : Sales of action and shooter genre video games before the year 2001 (exclusive) Thus, our hypothesis becomes - $H_0 : \mu_K = \mu_L$ - $H_1 : \mu_K > \mu_L$ First, we need to find data points for both sets. ``` K=dataFrame[(dataFrame["Year"]>2001) & ((dataFrame['Genre']=='Action') | (dataFrame['Genre']=='Shooter'))]['Global_Sales'] K L=dataFrame[(dataFrame["Year"]<=2001) & ((dataFrame['Genre']=='Action') | (dataFrame['Genre']=='Shooter'))]['Global_Sales'] L ``` Now, let's see their distributions on a plot ``` ax = seaborn.kdeplot(K.rename('After 2001'), shade=True) seaborn.kdeplot(L.rename('Before 2001'), ax=ax, shade=True) matplotlib.pyplot.show() ``` Now, at the 0.05 significance level, we call the difference-between-means hypothesis testing function ``` t, p = stats.ttest_ind(a=K.values, b=L.values, equal_var=False) print("T-statistic is ", t) print("P-value is ", p) ``` The p-value lets us reject the null hypothesis, but the t-statistic is negative: average sales after 2001 appear to be lower, not higher, than before 2001. Our one-sided hypothesis is therefore not supported.
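A hedged note on reading this result: `ttest_ind` reports a two-sided p-value, and for the one-sided alternative $H_1 : \mu_K > \mu_L$ the corresponding p-value is $p/2$ when $t > 0$ but $1 - p/2$ when $t < 0$, so a negative t-statistic can never support this alternative. A toy sketch (synthetic samples, not the game data):

```python
import numpy as np
from scipy import stats

# Toy samples where the "after" mean is clearly below the "before" mean.
K = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # stand-in for sales after 2001
L = np.array([3.0, 4.0, 5.0, 6.0, 7.0])  # stand-in for sales before 2001

t, p_two_sided = stats.ttest_ind(K, L, equal_var=False)

# One-sided p-value for H1: mean(K) > mean(L).
p_one_sided = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2
print(t < 0, p_one_sided > 0.5)  # True True: no evidence for mu_K > mu_L
```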
# Regression Analysis In this section, we will look at the correlation between NA Sales and EU Sales. As we said earlier, they should increase and decrease together, with a strong correlation coefficient. ``` a, b, correlation, p, sigma = stats.linregress(dataFrame['NA_Sales'],dataFrame['EU_Sales']) print("Slope is ",a) print("Intercept is ", b) print("Correlation is ", correlation) ``` As we can see, the correlation coefficient is high, which means the two variables are strongly correlated. Now let's see it on a graph; to do this, we will plot the fitted line over a sample range. ``` x_vals=[i for i in range(1,20)] y_vals=[a*i+b for i in x_vals] matplotlib.pyplot.plot(x_vals,y_vals) matplotlib.pyplot.xlabel("NA Sales") matplotlib.pyplot.ylabel("EU Sales") matplotlib.pyplot.title("Correlation of NA and EU Sales") matplotlib.pyplot.show() ``` # Machine Learning As machine learning algorithms, we have chosen Support Vector Machine and Naive Bayes. We will classify our data by genre. ``` from sklearn.svm import SVC from sklearn.metrics import accuracy_score, f1_score from sklearn.model_selection import train_test_split from sklearn.naive_bayes import GaussianNB import warnings warnings.filterwarnings('ignore') Classes=dataFrame['GenreNum'].values Features=dataFrame.drop(['GenreNum','Genre','Name','Platform','Publisher'],axis=1).values ``` ## Support Vector Machine ``` X_train, X_test, y_train, y_test = train_test_split(Features, Classes, test_size=0.33, random_state=42) clf = SVC(kernel="rbf") clf.fit(X_train, y_train) y_pred = clf.predict(X_test) print("Accuracy is ",accuracy_score(y_test, y_pred)) print("F1 score is ",f1_score(y_test, y_pred,average='weighted')) ``` ## Naive Bayes ``` clf = GaussianNB() X_train, X_test, y_train, y_test = train_test_split(Features, Classes, test_size=0.33, random_state=42) clf.fit(X_train, y_train) y_pred = clf.predict(X_test) print("Accuracy is ",accuracy_score(y_test, y_pred)) print("F1 score is ",f1_score(y_test, y_pred,average='weighted')) ``` As we can see in the above
calculations, both machine learning algorithms perform quite poorly on the data. This can mean that the attributes in our dataset do not contain sufficient information about the genre of games. Adding other features to the dataset might increase the accuracy of the algorithms.
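To put "poorly" in context, a hedged yardstick is the majority-class baseline — always predicting the most frequent genre. A sketch with synthetic labels (12 uniform classes standing in for `GenreNum`, not the real test split):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(42)
y_test = rng.integers(1, 13, size=1000)  # synthetic genre labels 1..12

# Majority-class baseline: predict the single most common label for everything.
majority_label = Counter(y_test.tolist()).most_common(1)[0][0]
baseline_acc = float(np.mean(y_test == majority_label))
print(round(baseline_acc, 3))
```

A classifier is only informative to the extent that it beats this number on the real labels; with roughly balanced genres the baseline sits near 1/12.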
# Character level language model - Dinosaurus land Welcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go berserk, so choose wisely! <table> <td> <img src="images/dino.jpg" style="width:250;height:300px;"> </td> </table> Luckily you have learned some deep learning and you will use it to save the day. Your assistant has collected a list of all the dinosaur names they could find, and compiled them into this [dataset](cities.txt). (Feel free to take a look by clicking the previous link.) To create new dinosaur names, you will build a character level language model to generate new names. Your algorithm will learn the different name patterns, and randomly generate new names. Hopefully this algorithm will keep you and your team safe from the dinosaurs' wrath! By completing this assignment you will learn: - How to store text data for processing using an RNN - How to synthesize data, by sampling predictions at each time step and passing it to the next RNN-cell unit - How to build a character-level text generation recurrent neural network - Why clipping the gradients is important We will begin by loading in some functions that we have provided for you in `rnn_utils`. Specifically, you have access to functions such as `rnn_forward` and `rnn_backward` which are equivalent to those you've implemented in the previous assignment. ``` import numpy as np from utils import * import random ``` ## 1 - Problem Statement ### 1.1 - Dataset and Preprocessing Run the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size.
``` data = open('Britishcities.txt', 'r').read() data = data.lower() chars = list(set(data)) data_size, vocab_size = len(data), len(chars) print('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size)) ``` The characters are a-z (26 characters) plus the "\n" (or newline character), which in this assignment plays a role similar to the `<EOS>` (or "End of sentence") token we had discussed in lecture, only here it indicates the end of the dinosaur name rather than the end of a sentence. In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26. We also create a second python dictionary that maps each index back to the corresponding character. This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer. Below, `char_to_ix` and `ix_to_char` are the python dictionaries. ``` char_to_ix = { ch:i for i,ch in enumerate(sorted(chars)) } ix_to_char = { i:ch for i,ch in enumerate(sorted(chars)) } print(ix_to_char) ``` ### 1.2 - Overview of the model Your model will have the following structure: - Initialize parameters - Run the optimization loop - Forward propagation to compute the loss function - Backward propagation to compute the gradients with respect to the loss function - Clip the gradients to avoid exploding gradients - Using the gradients, update your parameter with the gradient descent update rule. - Return the learned parameters <img src="images/rnn1.png" style="width:450;height:300px;"> <caption><center> **Figure 1**: Recurrent Neural Network, similar to what you had built in the previous notebook "Building a RNN - Step by Step". </center></caption> At each time-step, the RNN tries to predict what is the next character given the previous characters.
The dataset $X = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is a list of characters in the training set, while $Y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$ is such that at every time-step $t$, we have $y^{\langle t \rangle} = x^{\langle t+1 \rangle}$. ## 2 - Building blocks of the model In this part, you will build two important blocks of the overall model: - Gradient clipping: to avoid exploding gradients - Sampling: a technique used to generate characters You will then apply these two functions to build the model. ### 2.1 - Clipping the gradients in the optimization loop In this section you will implement the `clip` function that you will call inside of your optimization loop. Recall that your overall loop structure usually consists of a forward pass, a cost computation, a backward pass, and a parameter update. Before updating the parameters, you will perform gradient clipping when needed to make sure that your gradients are not "exploding," meaning taking on overly large values. In the exercise below, you will implement a function `clip` that takes in a dictionary of gradients and returns a clipped version of gradients if needed. There are different ways to clip gradients; we will use a simple element-wise clipping procedure, in which every element of the gradient vector is clipped to lie between some range [-N, N]. More generally, you will provide a `maxValue` (say 10). In this example, if any component of the gradient vector is greater than 10, it would be set to 10; and if any component of the gradient vector is less than -10, it would be set to -10. If it is between -10 and 10, it is left alone. <img src="images/clip.png" style="width:400;height:150px;"> <caption><center> **Figure 2**: Visualization of gradient descent with and without gradient clipping, in a case where the network is running into slight "exploding gradient" problems. 
</center></caption> **Exercise**: Implement the function below to return the clipped gradients of your dictionary `gradients`. Your function takes in a maximum threshold and returns the clipped versions of your gradients. You can check out this [hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.clip.html) for examples of how to clip in numpy. You will need to use the argument `out = ...`. ``` ### GRADED FUNCTION: clip def clip(gradients, maxValue): ''' Clips the gradients' values between minimum and maximum. Arguments: gradients -- a dictionary containing the gradients "dWaa", "dWax", "dWya", "db", "dby" maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue Returns: gradients -- a dictionary with the clipped gradients. ''' dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby'] ### START CODE HERE ### # clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. 
(≈2 lines) for gradient in [dWax, dWaa, dWya, db, dby]: np.clip(gradient, -maxValue, maxValue, out=gradient) ### END CODE HERE ### gradients = {"dWaa": dWaa, "dWax": dWax, "dWya": dWya, "db": db, "dby": dby} return gradients np.random.seed(3) dWax = np.random.randn(5,3)*10 dWaa = np.random.randn(5,5)*10 dWya = np.random.randn(2,5)*10 db = np.random.randn(5,1)*10 dby = np.random.randn(2,1)*10 gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby} gradients = clip(gradients, 10) print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2]) print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1]) print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2]) print("gradients[\"db\"][4] =", gradients["db"][4]) print("gradients[\"dby\"][1] =", gradients["dby"][1]) ``` **Expected output:** <table> <tr> <td> **gradients["dWaa"][1][2]** </td> <td> 10.0 </td> </tr> <tr> <td> **gradients["dWax"][3][1]** </td> <td> -10.0 </td> </tr> <tr> <td> **gradients["dWya"][1][2]** </td> <td> 0.29713815361 </td> </tr> <tr> <td> **gradients["db"][4]** </td> <td> [ 10.] </td> </tr> <tr> <td> **gradients["dby"][1]** </td> <td> [ 8.45833407] </td> </tr> </table> ### 2.2 - Sampling Now assume that your model is trained. You would like to generate new text (characters). The process of generation is explained in the picture below: <img src="images/dinos3.png" style="width:500px;height:300px;"> <caption><center> **Figure 3**: In this picture, we assume the model is already trained. We pass in $x^{\langle 1\rangle} = \vec{0}$ at the first time step, and have the network then sample one character at a time. </center></caption> **Exercise**: Implement the `sample` function below to sample characters. You need to carry out 4 steps: - **Step 1**: Pass the network the first "dummy" input $x^{\langle 1 \rangle} = \vec{0}$ (the vector of zeros). This is the default input before we've generated any characters. 
We also set $a^{\langle 0 \rangle} = \vec{0}$ - **Step 2**: Run one step of forward propagation to get $a^{\langle 1 \rangle}$ and $\hat{y}^{\langle 1 \rangle}$. Here are the equations: $$ a^{\langle t+1 \rangle} = \tanh(W_{ax} x^{\langle t \rangle } + W_{aa} a^{\langle t \rangle } + b)\tag{1}$$ $$ z^{\langle t + 1 \rangle } = W_{ya} a^{\langle t + 1 \rangle } + b_y \tag{2}$$ $$ \hat{y}^{\langle t+1 \rangle } = softmax(z^{\langle t + 1 \rangle })\tag{3}$$ Note that $\hat{y}^{\langle t+1 \rangle }$ is a (softmax) probability vector (its entries are between 0 and 1 and sum to 1). $\hat{y}^{\langle t+1 \rangle}_i$ represents the probability that the character indexed by "i" is the next character. We have provided a `softmax()` function that you can use. - **Step 3**: Carry out sampling: Pick the next character's index according to the probability distribution specified by $\hat{y}^{\langle t+1 \rangle }$. This means that if $\hat{y}^{\langle t+1 \rangle }_i = 0.16$, you will pick the index "i" with 16% probability. To implement it, you can use [`np.random.choice`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.choice.html). Here is an example of how to use `np.random.choice()`: ```python np.random.seed(0) p = np.array([0.1, 0.0, 0.7, 0.2]) index = np.random.choice([0, 1, 2, 3], p = p.ravel()) ``` This means that you will pick the `index` according to the distribution: $P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$. - **Step 4**: The last step to implement in `sample()` is to overwrite the variable `x`, which currently stores $x^{\langle t \rangle }$, with the value of $x^{\langle t + 1 \rangle }$. You will represent $x^{\langle t + 1 \rangle }$ by creating a one-hot vector corresponding to the character you've chosen as your prediction. 
You will then forward propagate $x^{\langle t + 1 \rangle }$ in Step 1 and keep repeating the process until you get a "\n" character, indicating you've reached the end of the dinosaur name. ``` # GRADED FUNCTION: sample def sample(parameters, char_to_ix, seed): """ Sample a sequence of characters according to a sequence of probability distributions output of the RNN Arguments: parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b. char_to_ix -- python dictionary mapping each character to an index. seed -- used for grading purposes. Do not worry about it. Returns: indices -- a list of length n containing the indices of the sampled characters. """ # Retrieve parameters and relevant shapes from "parameters" dictionary Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b'] vocab_size = by.shape[0] n_a = Waa.shape[1] ### START CODE HERE ### # Step 1: Create the one-hot vector x for the first character (initializing the sequence generation). (≈1 line) x = np.zeros((vocab_size, 1)) # Step 1': Initialize a_prev as zeros (≈1 line) a_prev = np.zeros((n_a, 1)) # Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (≈1 line) indices = [] # Idx is a flag to detect a newline character, we initialize it to -1 idx = -1 # Loop over time-steps t. At each time-step, sample a character from a probability distribution and append # its index to "indices". We'll stop if we reach 50 characters (which should be very unlikely with a well # trained model), which helps debugging and prevents entering an infinite loop. 
counter = 0 newline_character = char_to_ix['\n'] while (idx != newline_character and counter != 50): # Step 2: Forward propagate x using the equations (1), (2) and (3) a = np.tanh(np.dot(Wax, x) + np.dot(Waa, a_prev) + b) z = np.dot(Wya, a) + by y = softmax(z) # for grading purposes np.random.seed(counter + seed) # Step 3: Sample the index of a character within the vocabulary from the probability distribution y idx = np.random.choice(list(range(vocab_size)), p=y.ravel()) # Append the index to "indices" indices.append(idx) # Step 4: Overwrite the input character as the one corresponding to the sampled index. x = np.zeros((vocab_size, 1)) x[idx] = 1 # Update "a_prev" to be "a" a_prev = a # for grading purposes seed += 1 counter +=1 ### END CODE HERE ### if (counter == 50): indices.append(char_to_ix['\n']) return indices np.random.seed(2) _, n_a = 20, 100 Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a) b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1) parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by} indices = sample(parameters, char_to_ix, 0) print("Sampling:") print("list of sampled indices:", indices) print("list of sampled characters:", [ix_to_char[i] for i in indices]) ``` ** Expected output:** <table> <tr> <td> **list of sampled indices:** </td> <td> [12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, <br> 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 5, 6, 12, 25, 0, 0] </td> </tr><tr> <td> **list of sampled characters:** </td> <td> ['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', <br> 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', <br> 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'e', 'f', 'l', 'y', '\n', '\n'] </td> </tr> </table> ## 3 - Building the language model It is time to build the character-level language 
model for text generation. ### 3.1 - Gradient descent In this section you will implement a function performing one step of stochastic gradient descent (with clipped gradients). You will go through the training examples one at a time, so the optimization algorithm will be stochastic gradient descent. As a reminder, here are the steps of a common optimization loop for an RNN: - Forward propagate through the RNN to compute the loss - Backward propagate through time to compute the gradients of the loss with respect to the parameters - Clip the gradients if necessary - Update your parameters using gradient descent **Exercise**: Implement this optimization process (one step of stochastic gradient descent). We provide you with the following functions: ```python def rnn_forward(X, Y, a_prev, parameters): """ Performs the forward propagation through the RNN and computes the cross-entropy loss. It returns the loss' value as well as a "cache" storing values to be used in the backpropagation.""" .... return loss, cache def rnn_backward(X, Y, parameters, cache): """ Performs the backward propagation through time to compute the gradients of the loss with respect to the parameters. It returns also all the hidden states.""" ... return gradients, a def update_parameters(parameters, gradients, learning_rate): """ Updates parameters using the Gradient Descent Update Rule.""" ... return parameters ``` ``` # GRADED FUNCTION: optimize def optimize(X, Y, a_prev, parameters, learning_rate = 0.01): """ Execute one step of the optimization to train the model. Arguments: X -- list of integers, where each integer is a number that maps to a character in the vocabulary. Y -- list of integers, exactly the same as X but shifted one index to the left. a_prev -- previous hidden state. 
parameters -- python dictionary containing: Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x) Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a) Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a) b -- Bias, numpy array of shape (n_a, 1) by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1) learning_rate -- learning rate for the model. Returns: loss -- value of the loss function (cross-entropy) gradients -- python dictionary containing: dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x) dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a) dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a) db -- Gradients of bias vector, of shape (n_a, 1) dby -- Gradients of output bias vector, of shape (n_y, 1) a[len(X)-1] -- the last hidden state, of shape (n_a, 1) """ ### START CODE HERE ### # Forward propagate through time (≈1 line) loss, cache = rnn_forward(X, Y, a_prev, parameters) # Backpropagate through time (≈1 line) gradients, a = rnn_backward(X, Y, parameters, cache) # Clip your gradients between -5 (min) and 5 (max) (≈1 line) gradients = clip(gradients, 5) # Update parameters (≈1 line) parameters = update_parameters(parameters, gradients, learning_rate) ### END CODE HERE ### return loss, gradients, a[len(X)-1] np.random.seed(1) vocab_size, n_a = 27, 100 a_prev = np.random.randn(n_a, 1) Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a) b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1) parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by} X = [12,3,5,11,22,3] Y = [4,14,11,22,25, 26] loss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01) print("Loss =", loss) print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2]) print("np.argmax(gradients[\"dWax\"]) =", np.argmax(gradients["dWax"])) 
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2]) print("gradients[\"db\"][4] =", gradients["db"][4]) print("gradients[\"dby\"][1] =", gradients["dby"][1]) print("a_last[4] =", a_last[4]) ``` **Expected output:** <table> <tr> <td> **Loss** </td> <td> 126.503975722 </td> </tr> <tr> <td> **gradients["dWaa"][1][2]** </td> <td> 0.194709315347 </td> </tr> <tr> <td> **np.argmax(gradients["dWax"])** </td> <td> 93 </td> </tr> <tr> <td> **gradients["dWya"][1][2]** </td> <td> -0.007773876032 </td> </tr> <tr> <td> **gradients["db"][4]** </td> <td> [-0.06809825] </td> </tr> <tr> <td> **gradients["dby"][1]** </td> <td> [ 0.01538192] </td> </tr> <tr> <td> **a_last[4]** </td> <td> [-1.] </td> </tr> </table> ### 3.2 - Training the model Given the dataset of dinosaur names, we use each line of the dataset (one name) as one training example. Every 2000 steps of stochastic gradient descent, you will sample 10 names to see how the algorithm is doing. Remember to shuffle the dataset, so that stochastic gradient descent visits the examples in random order. **Exercise**: Follow the instructions and implement `model()`. When `examples[index]` contains one dinosaur name (string), to create an example (X, Y), you can use this: ```python index = j % len(examples) X = [None] + [char_to_ix[ch] for ch in examples[index]] Y = X[1:] + [char_to_ix["\n"]] ``` Note that we use `index = j % len(examples)`, where `j = 1, ..., num_iterations`, to make sure that `examples[index]` is always a valid index (`index` is smaller than `len(examples)`). The first entry of `X` being `None` will be interpreted by `rnn_forward()` as setting $x^{\langle 0 \rangle} = \vec{0}$. Further, this ensures that `Y` is equal to `X` but shifted one step to the left, and with an additional "\n" appended to signify the end of the dinosaur name. 
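As a quick sanity check, the (X, Y) construction above can be exercised on its own with a toy example (the name `"york"` and the hand-built mapping are illustrative, not drawn from the dataset):

```python
# Rebuild the character mapping exactly as in Section 1.1.
chars = sorted(set("abcdefghijklmnopqrstuvwxyz\n"))
char_to_ix = {ch: i for i, ch in enumerate(chars)}  # '\n' sorts first, so it maps to 0

name = "york"  # one illustrative training example
X = [None] + [char_to_ix[ch] for ch in name]
Y = X[1:] + [char_to_ix["\n"]]

print(X)  # [None, 25, 15, 18, 11]
print(Y)  # [25, 15, 18, 11, 0] -- X shifted one step left, "\n" marking the end of the name
```

Note how `Y[t]` is always the character the model should predict after seeing `X[t]`.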
``` # GRADED FUNCTION: model def model(data, ix_to_char, char_to_ix, num_iterations = 200000, n_a = 50, dino_names = 10, vocab_size = 27): """ Trains the model and generates dinosaur names. Arguments: data -- text corpus ix_to_char -- dictionary that maps the index to a character char_to_ix -- dictionary that maps a character to an index num_iterations -- number of iterations to train the model for n_a -- number of units of the RNN cell dino_names -- number of dinosaur names you want to sample at each iteration. vocab_size -- number of unique characters found in the text, size of the vocabulary Returns: parameters -- learned parameters """ # Retrieve n_x and n_y from vocab_size n_x, n_y = vocab_size, vocab_size # Initialize parameters parameters = initialize_parameters(n_a, n_x, n_y) # Initialize loss (this is required because we want to smooth our loss, don't worry about it) loss = get_initial_loss(vocab_size, dino_names) # Build list of all dinosaur names (training examples). with open("Britishcities.txt") as f: examples = f.readlines() examples = [x.lower().strip() for x in examples] # Shuffle list of all dinosaur names np.random.seed(0) np.random.shuffle(examples) # Initialize the hidden state of your RNN a_prev = np.zeros((n_a, 1)) # Optimization loop for j in range(num_iterations): ### START CODE HERE ### # Use the hint above to define one training example (X,Y) (≈ 2 lines) index = j % len(examples) X = [None] + [char_to_ix[ch] for ch in examples[index]] Y = X[1:] + [char_to_ix["\n"]] # Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters # Choose a learning rate of 0.01 curr_loss, gradients, a_prev = optimize(X, Y, a_prev, parameters) ### END CODE HERE ### # Smooth the loss with a moving average to keep the printed training curve stable. 
loss = smooth(loss, curr_loss) # Every 2000 iterations, generate "n" characters thanks to sample() to check if the model is learning properly if j % 2000 == 0: print('Iteration: %d, Loss: %f' % (j, loss) + '\n') # The number of dinosaur names to print seed = 0 for name in range(dino_names): # Sample indices and print them sampled_indices = sample(parameters, char_to_ix, seed) print_sample(sampled_indices, ix_to_char) seed += 1 # To get the same result for grading purposes, increment the seed by one. print('\n') return parameters ``` Run the following cell; you should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, your model should learn to generate reasonable-looking names. ``` parameters = model(data, ix_to_char, char_to_ix) ``` ## Conclusion You can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see if you can get even better results. Our implementation generated some really cool names like `maconucon`, `marloralus` and `macingsersaurus`. Your model hopefully also learned that dinosaur names tend to end in `saurus`, `don`, `aura`, `tor`, etc. If your model generates some non-cool names, don't blame the model entirely--not all actual dinosaur names sound cool. (For example, `dromaeosauroides` is an actual dinosaur name and is in the training set.) But this model should give you a set of candidates from which you can pick the coolest! This assignment used a relatively small dataset, so that you could train an RNN quickly on a CPU. Training a model of the English language requires a much bigger dataset, usually needs much more computation, and could run for many hours on GPUs. 
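One detail from the training loop worth unpacking: the `smooth()` helper keeps the printed loss stable by blending each noisy per-example loss into a running average. A minimal sketch, assuming the conventional exponential-moving-average form (the 0.999 weighting is an assumption, not taken from the provided utilities):

```python
def smooth(loss, cur_loss, beta=0.999):
    # Exponential moving average: the running loss dominates, so one
    # noisy SGD example barely moves the reported curve.
    return beta * loss + (1 - beta) * cur_loss

running = 100.0
for cur_loss in [90.0, 80.0, 70.0]:
    running = smooth(running, cur_loss)

print(round(running, 5))  # 99.94004 -- drifts only slowly toward the recent losses
```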
We ran our dinosaur name model for quite some time, and so far our favorite name is the great, undefeatable, and fierce: Mangosaurus! <img src="images/mangosaurus.jpeg" style="width:250px;height:300px;"> ## 4 - Writing like Shakespeare The rest of this notebook is optional and is not graded, but we hope you'll do it anyway since it's quite fun and informative. A similar (but more complicated) task is to generate Shakespeare poems. Instead of learning from a dataset of dinosaur names you can use a collection of Shakespearian poems. Using LSTM cells, you can learn longer-term dependencies that span many characters in the text--e.g., where a character appearing somewhere in a sequence can influence what should be a different character much later in the sequence. These long-term dependencies were less important with dinosaur names, since the names were quite short. <img src="images/shakespeare.jpg" style="width:500px;height:400px;"> <caption><center> Let's become poets! </center></caption> We have implemented a Shakespeare poem generator with Keras. Run the following cell to load the required packages and models. This may take a few minutes. ``` from __future__ import print_function from keras.callbacks import LambdaCallback from keras.models import Model, load_model, Sequential from keras.layers import Dense, Activation, Dropout, Input, Masking from keras.layers import LSTM from keras.utils.data_utils import get_file from keras.preprocessing.sequence import pad_sequences from shakespeare_utils import * import sys import io ``` To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called [*"The Sonnets"*](shakespeare.txt). Let's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run `generate_output`, which will prompt you for an input (<40 characters). 
The poem will start with your sentence, and our RNN-Shakespeare will complete the rest of the poem for you! For example, try "Forsooth this maketh no sense " (don't enter the quotation marks). Depending on whether you include the space at the end, your results might also differ--try it both ways, and try other inputs as well. ``` print_callback = LambdaCallback(on_epoch_end=on_epoch_end) model.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback]) # Run this cell to try with different inputs without having to re-train the model generate_output() ``` The RNN-Shakespeare model is very similar to the one you have built for dinosaur names. The only major differences are: - LSTMs instead of the basic RNN to capture longer-range dependencies - The model is a deeper, stacked LSTM model (2 layer) - Using Keras instead of python to simplify the code If you want to learn more, you can also check out the Keras Team's text generation implementation on GitHub: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py. Congratulations on finishing this notebook! **References**: - This exercise took inspiration from Andrej Karpathy's implementation: https://gist.github.com/karpathy/d4dee566867f8291f086. To learn more about text generation, also check out Karpathy's [blog post](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). - For the Shakespearian poem generator, our implementation was based on the implementation of an LSTM text generator by the Keras team: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py
``` import numpy as np import matplotlib.pyplot as plt import warnings warnings.filterwarnings('ignore') from ipywidgets import interactive,FloatSlider import matplotlib def sigmoid(x,x_0,L,y_0,k): return (x-x_0)+(L/2)+y_0 - L/(np.exp(-k*(x-x_0))+1) def speed_sigmoid_func(x,x_0a,x_0b,L,k,y_0): output = np.zeros_like(x) output[(x>=x_0a)&(x<=x_0b)] = y_0 output[x<x_0a] = sigmoid(x[x<x_0a],x_0a,L,y_0,k) output[x>x_0b] = sigmoid(x[x>x_0b],x_0b,L,y_0,k) return output def f(x_0a,x_0b,L,k,y_0): yl,yu = -4,8 inputs = np.linspace(-5,8,100) fig = plt.figure(figsize=(12,8)) plt.plot(inputs,speed_sigmoid_func(inputs,x_0a,x_0b,L,k,y_0)) plt.ylim([yl,yu]) def slider(start,stop,step,init):#,init): return FloatSlider( value=init, min=start, max=stop, step=step, disabled=False, continuous_update=False, orientation='horizontal', readout=True, readout_format='.2f', ) interactive_plot = interactive(f, x_0a =slider(-2,6,0.1,-0.4),x_0b =slider(-2,6,0.1,3.6), L=slider(0,4,0.1,0.8),k=slider(.1,10,0.01,4.),y_0=slider(0,4,0.1,1.6)) output = interactive_plot.children[-1] output.layout.height = '1000px' interactive_plot ``` **Will's explanation of the perfect PID controller for windspeed/groundspeed.** Start with the force equation: $$m\dot{v} = -c(v+w)$$ where $v$ is the fly's groundspeed, and $w$, the wind speed (along the fly body axis), is positive when the wind is blowing against the fly's direction of motion. Before we consider the force the fly exerts, the force the fly experiences (right side) is a constant $c$ times the sum of the groundspeed and the windspeed. For example, in the case where the groundspeed $v=1$ and the windspeed is $-1$ (wind going with the fly), the force is 0. If $v=1$ and $w=1$ (wind going against the fly), the fly is experiencing a force of magnitude $2c$. Then add in the force (thrust) the fly can produce: $$m\dot{v} = -c(v+w) + F(v,v_{sp})$$ $F$ is some function, $v$ is the current fly speed, $v_{sp}$ is the set point velocity the fly wants. 
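To see this force balance in action, it can be integrated with forward Euler. Everything numeric here (m, c, the wind, and the saturating proportional thrust standing in for F) is an illustrative assumption, not a value from the text:

```python
import numpy as np

m, c = 1.0, 0.5        # illustrative mass and drag constant
w, v_sp = 1.0, 2.0     # wind blowing against the fly; desired groundspeed
F_max = 3.0            # assume the fly's thrust saturates at +/- F_max

def thrust(v, gain=5.0):
    # A simple saturating proportional controller standing in for F(v, v_sp).
    return np.clip(gain * (v_sp - v), -F_max, F_max)

v, dt = 0.0, 0.01
for _ in range(5000):
    v += dt * (-c * (v + w) + thrust(v)) / m   # m*dv/dt = -c(v+w) + F

print(round(v, 3))  # 1.727 -- settles where c*(v+w) exactly balances the thrust
```

With these constants the fly cannot quite hold $v_{sp}=2$ against this wind; it settles at the groundspeed where drag and thrust balance.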
If we set the acceleration in the above to 0, $$c(v+w) = F(v,v_{sp})$$ $$ v = \frac{F}{c} - w $$ If we plot groundspeed as a function of windspeed, the system described above will look like this: <center><img src="files/simple_airspeed_controller.JPG" width=400px></center> There is a range of wind values for which the fly's thrust can completely compensate for the wind and achieve equilibrium $\dot{v} = 0$. $w_1$ is the maximum positive (into the fly) wind velocity for which the fly can produce a fully compensating counter-force (call this $F_{max}$) into the wind. After this point, the sum of forces becomes negative and so then does $\dot{v}$. (why is it linear with respect to $w$?) As we head towards $w_2$, the thrust decreases and could become negative, i.e., the fly is applying force backwards to stop from being pushed forwards (negative $w$) by the wind. At $w_2$, we have the largest backward force the fly can produce in the face of a negative wind (wind going in the direction of the fly), after which point the fly starts getting pushed forward. 
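The figure's branches (including the answer to the parenthetical question) fall straight out of $v = F/c - w$ once the thrust saturates at $\pm F_{max}$: on the outer branches $F$ is pinned at a constant, so $v$ is linear in $w$ with slope $-1$. A small sketch with illustrative constants:

```python
import numpy as np

c, v_sp, F_max = 0.5, 2.0, 3.0   # illustrative drag constant, set point, thrust limit

def equilibrium_groundspeed(w):
    # Thrust needed to hold v_sp exactly is F = c*(v_sp + w); then saturate it.
    F = np.clip(c * (v_sp + w), -F_max, F_max)
    # Between w_2 and w_1 this returns v_sp; outside, v = +/-F_max/c - w,
    # i.e. linear in w with slope -1, since F is constant once saturated.
    return F / c - w

winds = np.array([-10.0, 0.0, 10.0])
print(equilibrium_groundspeed(winds))  # v = 4, 2, -4 at these winds
```

With these constants, $w_1 = F_{max}/c - v_{sp} = 4$ and $w_2 = -F_{max}/c - v_{sp} = -8$, matching the three regimes described above.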
``` def pre_sigmoid(x,x_0,L,y_0,k): return (L/2)+y_0 - L/(np.exp(-k*(x-x_0))+1) def sigmoid(x,x_0,L,y_0,k,m): return m*(x-x_0)+(L/2)+y_0 - L/(np.exp(-k*(x-x_0))+1) def speed_sigmoid_func(x,x_0a,x_0b,L,k,y_0,m): output = np.zeros_like(x) output[(x>=x_0a)&(x<=x_0b)] = y_0 output[x<x_0a] = sigmoid(x[x<x_0a],x_0a,L,y_0,k,m) output[x>x_0b] = sigmoid(x[x>x_0b],x_0b,L,y_0,k,m) return output def f(x_0a,x_0b,L,k,y_0,m): yl,yu = -4,8 inputs = np.linspace(-5,8,100) fig = plt.figure(figsize=(12,8)) plt.subplot(2,1,1) plt.plot(inputs,pre_sigmoid(inputs,x_0a,L,y_0,k)) plt.plot(inputs,y_0*np.ones_like(inputs),'--') plt.ylim([yl,yu]) ax= plt.subplot(2,1,2) plt.plot(inputs,sigmoid(inputs,x_0a,L,y_0,k,m)) plt.plot(inputs,sigmoid(inputs,x_0b,L,y_0,k,m)) plt.plot(inputs,speed_sigmoid_func(inputs,x_0a,x_0b,L,k,y_0,m),label='final curve',color='blue') plt.plot(inputs,m*(inputs-x_0a)+(L/2)+y_0,'--') plt.plot(inputs,m*(inputs-x_0b)-(L/2)+y_0,'--') ax.spines['left'].set_position('center') ax.spines['bottom'].set_position('center') # Eliminate upper and right axes ax.spines['right'].set_color('none') ax.spines['top'].set_color('none') # Show ticks in the left and lower axes only ax.xaxis.set_ticks_position('bottom') ax.yaxis.set_ticks_position('left') ax.spines['bottom'].set_position('zero') ax.spines['left'].set_position('zero') plt.ylim([yl,yu]) plt.legend() def slider(start,stop,step,init):#,init): return FloatSlider( value=init, min=start, max=stop, step=step, disabled=False, continuous_update=False, orientation='horizontal', readout=True, readout_format='.2f', ) interactive_plot = interactive(f, x_0a =slider(-2,6,0.01,-0.4),x_0b =slider(-2,6,0.01,3.6), L=slider(0,4,0.1,0.8),k=slider(.1,10,0.01,4.),y_0=slider(0,4,0.1,1.6), m=slider(0,4,0.1,1.)) output = interactive_plot.children[-1] output.layout.height = '600px' interactive_plot def f(x_0a,x_0b,L,k,y_0,m,theta): yl,yu = -4,8 inputs = np.linspace(-5,8,100) fig = plt.figure(figsize=(12,8)) ax= plt.subplot(1,3,1) ax.set_aspect('equal') 
plt.plot(inputs,pre_sigmoid(inputs,x_0a,L,y_0,k)) plt.plot(inputs,y_0*np.ones_like(inputs),'--') plt.ylim([yl,yu]) ax = plt.subplot(1,3,2) ax.set_aspect('equal') plt.plot(inputs,sigmoid(inputs,x_0a,L,y_0,k,m)) plt.plot(inputs,sigmoid(inputs,x_0b,L,y_0,k,m)) outputs = speed_sigmoid_func(inputs,x_0a,x_0b,L,k,y_0,m) plt.plot(inputs,m*(inputs-x_0a)+(L/2)+y_0,'--') plt.plot(inputs,m*(inputs-x_0b)-(L/2)+y_0,'--') plt.plot(inputs,outputs,label='final curve',color='blue') plt.ylim([yl,yu]) xlim = ax.get_xlim() plt.legend() rot_mat = np.array([[np.cos(theta),-1.*np.sin(theta)],[np.sin(theta),np.cos(theta)]]) rotation_origin = np.array([x_0a+(x_0b-x_0a)/2,y_0]) plt.plot(rotation_origin[0],rotation_origin[1],'o',color='r') rotation_origin_ones = np.repeat(rotation_origin[:,None],100,axis=1) inputs1,outputs1 = np.dot(rot_mat,np.vstack((inputs,outputs))-rotation_origin_ones)+rotation_origin_ones ax = plt.subplot(1,3,3) ax.set_aspect('equal') plt.plot(inputs,outputs,color='blue') plt.plot(inputs1,outputs1,label='rotated curve') plt.ylim([yl,yu]) plt.xlim(xlim) plt.legend() interactive_plot = interactive(f, x_0a =slider(-2,6,0.01,-0.4),x_0b =slider(-2,6,0.01,3.6), L=slider(0,4,0.1,0.8),k=slider(.1,10,0.01,4.),y_0=slider(0,4,0.1,1.6), m=slider(0,4,0.1,1.),theta=slider(0,np.pi/2,0.1,np.pi/6)) output = interactive_plot.children[-1] output.layout.height = '300px' interactive_plot #Concept-proofing the rotation input-output def find_nearest(array, value): #For each element in value, returns the index of array it is closest to. #array should be 1 x n and value should be m x 1 idx = (np.abs(array - value)).argmin(axis=1) #this rounds up and down ( #of the two values in array closest to value, picks the closer. 
(not the larger or the smaller) return idx def f(x_0a,x_0b,L,k,y_0,m,theta): yl,yu = -4,8 buffer = 10 num_points = 1000 inputs = np.linspace(yl-buffer,yu+buffer,num_points) outputs = speed_sigmoid_func(inputs,x_0a,x_0b,L,k,y_0,m) fig = plt.figure(figsize=(12,8)) rot_mat = np.array([[np.cos(theta),-1.*np.sin(theta)],[np.sin(theta),np.cos(theta)]]) rotation_origin = np.array([x_0a+(x_0b-x_0a)/2,y_0]) plt.plot(rotation_origin[0],rotation_origin[1],'o',color='r') rotation_origin_ones = np.repeat(rotation_origin[:,None],num_points,axis=1) inputs1,outputs1 = np.dot(rot_mat,np.vstack((inputs,outputs))-rotation_origin_ones)+rotation_origin_ones ax = plt.subplot() ax.set_aspect('equal') plt.plot(inputs,outputs,color='blue') plt.plot(inputs1,outputs1,label='rotated curve') which_inputs = find_nearest(inputs1,inputs[:,None]) plt.plot(inputs,outputs1[which_inputs],'o',color='orange') plt.ylim([yl,yu]) # plt.xlim(xlim) plt.legend() interactive_plot = interactive(f, x_0a =slider(-2,6,0.1,-0.4),x_0b =slider(-2,6,0.1,3.6), L=slider(0,4,0.1,0.8),k=slider(.1,10,0.01,4.),y_0=slider(0,4,0.1,1.6), m=slider(0,4,0.1,1.),theta=slider(0,np.pi/2,0.1,np.pi/6)) output = interactive_plot.children[-1] output.layout.height = '300px' interactive_plot ``` It follows then that we can define the rotated function as ``` def f_rotated(inputs,x_0a,x_0b,L,k,y_0,m,theta): yl,yu = -4,8 buffer = 10 num_points = len(inputs) outputs = speed_sigmoid_func(inputs,x_0a,x_0b,L,k,y_0,m) rot_mat = np.array([[np.cos(theta),-1.*np.sin(theta)],[np.sin(theta),np.cos(theta)]]) rotation_origin = np.array([x_0a+(x_0b-x_0a)/2,y_0]) rotation_origin_ones = np.repeat(rotation_origin[:,None],num_points,axis=1) inputs1,outputs1 = np.dot(rot_mat,np.vstack((inputs,outputs))-rotation_origin_ones)+rotation_origin_ones which_inputs = find_nearest(inputs1,inputs[:,None]) return outputs1[which_inputs] def plot_f_rotated(x_0a,x_0b,L,k,y_0,m,theta): yl,yu = -4,8 buffer = 10 num_points = 1000 inputs = 
np.linspace(yl-buffer,yu+buffer,num_points) outputs = f_rotated(inputs,x_0a,x_0b,L,k,y_0,m,theta) plt.figure(figsize=(8,8)) ax = plt.subplot() ax.set_aspect('equal') plt.plot(inputs,outputs,'o',color='orange') plt.xlim([-10,10]) plt.ylim([-10,10]) ax.spines['left'].set_position('center') ax.spines['bottom'].set_position('center') # Eliminate upper and right axes ax.spines['right'].set_color('none') ax.spines['top'].set_color('none') # Show ticks in the left and lower axes only ax.xaxis.set_ticks_position('bottom') ax.yaxis.set_ticks_position('left') interactive_plot = interactive(plot_f_rotated, x_0a =slider(-2,6,0.1,-0.4),x_0b =slider(-2,6,0.1,3.6), L=slider(0,4,0.1,0.8),k=slider(.1,10,0.01,4.),y_0=slider(0,4,0.1,1.6), m=slider(0,4,0.1,1.),theta=slider(0,np.pi/2,0.1,np.pi/6)) output = interactive_plot.children[-1] output.layout.height = '500px' interactive_plot ``` Now, replicate the above, and add in the un-rotated version with fixed parameters (the working version of the sigmoid function up till this point), and drag to find the parameters that best work for the rotated to match up with it in the left and right limit sections. ``` def plot_f_rotated(x_0a,x_0b,L,k,y_0,m,theta): yl,yu = -4,8 buffer = 10 num_points = 1000 inputs = np.linspace(yl-buffer,yu+buffer,num_points) plt.figure(figsize=(8,8)) ax = plt.subplot() ax.set_aspect('equal') #The updating part of the plot is the (scatter) plot of the rotated function outputs = f_rotated(inputs,x_0a,x_0b,L,k,y_0,m,theta) plt.plot(inputs,outputs,'o',color='orange') #The fixed part is the non-rotated plot of the sigmoid with the previously determined parameters outputs1 = f_rotated(inputs, x_0a = -0.4, x_0b= 1.45, L=0.8, k=4., y_0=1.6, m=1., theta=0.) 
plt.plot(inputs,outputs1,color='blue') plt.xlim([-10,10]) plt.ylim([-10,10]) ax.spines['left'].set_position('center') ax.spines['bottom'].set_position('center') # Eliminate upper and right axes ax.spines['right'].set_color('none') ax.spines['top'].set_color('none') # Show ticks in the left and lower axes only ax.xaxis.set_ticks_position('bottom') ax.yaxis.set_ticks_position('left') ax.set_xticks(np.arange(-10,10,1)) ax.set_yticks(np.arange(-10,10,1)) interactive_plot = interactive(plot_f_rotated, x_0a =slider(-2,6,0.01,-0.4),x_0b =slider(-2,6,0.01,1.45), L=slider(0,4,0.1,0.8),k=slider(.1,10,0.01,4.),y_0=slider(0,4,0.1,1.6), m=slider(0,1,0.01,1.),theta=slider(0,np.pi/4,0.01,np.pi/6)) output = interactive_plot.children[-1] output.layout.height = '500px' interactive_plot ``` Final plot to check the selected parameter values: x_0a = -0.4, x_0b = 1.45, L = 0.8, k = 2.4, y_0 = 1.6, m = 0.43, theta = 0.37. ``` def plot_f_rotated(x_0a,x_0b,L,k,y_0,m,theta): yl,yu = -4,8 buffer = 10 num_points = 1000 inputs = np.linspace(yl-buffer,yu+buffer,num_points) plt.figure(figsize=(8,8)) ax = plt.subplot() ax.set_aspect('equal') #The updating part of the plot is the (scatter) plot of the rotated function outputs = f_rotated(inputs,x_0a,x_0b,L,k,y_0,m,theta) plt.plot(inputs,outputs,'o',color='orange',label='Leaky Controller \'A\'') #The fixed part is the non-rotated plot of the sigmoid with the previously determined parameters outputs1 = f_rotated(inputs, x_0a = -0.4, x_0b= 1.45, L=0.8, k=4., y_0=1.6, m=1., theta=0.) 
plt.plot(inputs,outputs1,color='blue',label='Perfect Controller') plt.xlim([-10,10]) plt.ylim([-10,10]) ax.spines['left'].set_position('center') ax.spines['bottom'].set_position('center') # Eliminate upper and right axes ax.spines['right'].set_color('none') ax.spines['top'].set_color('none') # Show ticks in the left and lower axes only ax.xaxis.set_ticks_position('bottom') ax.yaxis.set_ticks_position('left') ax.set_xticks(np.arange(-10,10,1)) ax.set_yticks(np.arange(-10,10,1)) plt.legend() interactive_plot = interactive(plot_f_rotated, x_0a =slider(-2,6,0.01,-0.4),x_0b =slider(-2,6,0.01,1.45), L=slider(0,4,0.1,0.8),k=slider(.1,10,0.01,2.4),y_0=slider(0,4,0.1,1.6), m=slider(0,1,0.01,0.43),theta=slider(0,np.pi/4,0.01,0.37)) output = interactive_plot.children[-1] output.layout.height = '500px' interactive_plot ``` Now let's do the blue one as the first modified map we use in the direct arrival computations, and second try using the orange one.
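The rotation behind `f_rotated` can be sketched in isolation: take the points `(x, f(x))` on a plain logistic sigmoid and multiply them by a 2-D rotation matrix. A minimal sketch under assumed parameter conventions (`sigmoid` and `rotate_curve` below are illustrative names; the notebook's `f_rotated` additionally folds in the piecewise offset/slope parameters `x_0a`, `x_0b`, `m`):

```python
import numpy as np

def sigmoid(x, L=0.8, k=4.0, x0=0.0, y0=1.6):
    """Logistic curve with amplitude L, steepness k, centre x0, vertical offset y0."""
    return y0 + L / (1.0 + np.exp(-k * (x - x0)))

def rotate_curve(x, y, theta):
    """Rotate the point cloud (x, y) about the origin by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return c * x - s * y, s * x + c * y

x = np.linspace(-10.0, 10.0, 1000)
y = sigmoid(x)
xr, yr = rotate_curve(x, y, np.pi / 6)   # theta = pi/6, as in the slider default
```

Because a rotation is rigid, distances from the origin are preserved, which is a quick sanity check when tuning `theta` against the un-rotated blue reference curve.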
# REWARD-MODULATED SELF ORGANISING RECURRENT NEURAL NETWORK https://www.frontiersin.org/articles/10.3389/fncom.2015.00036/full ### IMPORT REQUIRED LIBRARIES ``` from __future__ import division import numpy as np from scipy.stats import norm import random import tqdm import pandas as pd from collections import OrderedDict import matplotlib.pyplot as plt import heapq import pickle import torch as torch from sorn.utils import Initializer torch.manual_seed(1) random.seed(1) np.random.seed(1) ``` ### UTILS ``` def normalize_weight_matrix(weight_matrix): # Applied only while initializing the weight. Later Synaptic scalling applied on weight matrices """ Normalize the weights in the matrix such that incoming connections to a neuron sum up to 1 Args: weight_matrix(array) -- Incoming Weights from W_ee or W_ei or W_ie Returns: weight_matrix(array) -- Normalized weight matrix""" normalized_weight_matrix = weight_matrix / np.sum(weight_matrix,axis = 0) return normalized_weight_matrix ``` ### Implement lambda incoming connections for Excitatory neurons and outgoing connections per Inhibitory neuron ``` def generate_lambd_connections(synaptic_connection,ne,ni, lambd_w,lambd_std): """ Args: synaptic_connection - Type of sysnpatic connection (EE,EI or IE) ne - Number of excitatory units ni - Number of inhibitory units lambd_w - Average number of incoming connections lambd_std - Standard deviation of average number of connections per neuron Returns: connection_weights - Weight matrix """ if synaptic_connection == 'EE': """Choose random lamda connections per neuron""" # Draw normally distribued ne integers with mean lambd_w lambdas_incoming = norm.ppf(np.random.random(ne), loc=lambd_w, scale=lambd_std).astype(int) # lambdas_outgoing = norm.ppf(np.random.random(ne), loc=lambd_w, scale=lambd_std).astype(int) # List of neurons list_neurons= list(range(ne)) # Connection weights connection_weights = np.zeros((ne,ne)) # For each lambd value in the above list, # generate weights for 
incoming and outgoing connections #-------------Gaussian Distribution of weights -------------- # weight_matrix = np.random.randn(Sorn.ne, Sorn.ni) + 2 # Small random values from gaussian distribution # Centered around 2 to make all values positive # ------------Uniform Distribution -------------------------- global_incoming_weights = np.random.uniform(0.0,0.1,sum(lambdas_incoming)) # Index Counter global_incoming_weights_idx = 0 # Choose the neurons in order [0 to 199] for neuron in list_neurons: ### Choose ramdom unique (lambdas[neuron]) neurons from list_neurons possible_connections = list_neurons.copy() possible_connections.remove(neuron) # Remove the selected neuron from possible connections i!=j # Choose random presynaptic neurons possible_incoming_connections = random.sample(possible_connections,lambdas_incoming[neuron]) incoming_weights_neuron = global_incoming_weights[global_incoming_weights_idx:global_incoming_weights_idx+lambdas_incoming[neuron]] # ---------- Update the connection weight matrix ------------ # Update incoming connection weights for selected 'neuron' for incoming_idx,incoming_weight in enumerate(incoming_weights_neuron): connection_weights[possible_incoming_connections[incoming_idx]][neuron] = incoming_weight global_incoming_weights_idx += lambdas_incoming[neuron] return connection_weights if synaptic_connection == 'EI': """Choose random lamda connections per neuron""" # Draw normally distribued ni integers with mean lambd_w lambdas = norm.ppf(np.random.random(ni), loc=lambd_w, scale=lambd_std).astype(int) # List of neurons list_neurons= list(range(ni)) # Each i can connect with random ne neurons # Initializing connection weights variable connection_weights = np.zeros((ni,ne)) # ------------Uniform Distribution ----------------------------- global_outgoing_weights = np.random.uniform(0.0,0.1,sum(lambdas)) # Index Counter global_outgoing_weights_idx = 0 # Choose the neurons in order [0 to 40] for neuron in list_neurons: ### Choose ramdom 
unique (lambdas[neuron]) neurons from list_neurons possible_connections = list(range(ne)) possible_outgoing_connections = random.sample(possible_connections,lambdas[neuron]) # possible_outgoing connections to the neuron # Update weights outgoing_weights = global_outgoing_weights[global_outgoing_weights_idx:global_outgoing_weights_idx+lambdas[neuron]] # ---------- Update the connection weight matrix ------------ # Update outgoing connections for the neuron for outgoing_idx,outgoing_weight in enumerate(outgoing_weights): # Update the columns in the connection matrix connection_weights[neuron][possible_outgoing_connections[outgoing_idx]] = outgoing_weight # Update the global weight values index global_outgoing_weights_idx += lambdas[neuron] return connection_weights ``` ### More Util functions ``` def get_incoming_connection_dict(weights): # Get the non-zero entires in columns is the incoming connections for the neurons # Indices of nonzero entries in the columns connection_dict=dict.fromkeys(range(1,len(weights)+1),0) for i in range(len(weights[0])): # For each neuron connection_dict[i] = list(np.nonzero(weights[:,i])[0]) return connection_dict def get_outgoing_connection_dict(weights): # Get the non-zero entires in rows is the outgoing connections for the neurons # Indices of nonzero entries in the rows connection_dict=dict.fromkeys(range(1,len(weights)+1),1) for i in range(len(weights[0])): # For each neuron connection_dict[i] = list(np.nonzero(weights[i,:])[0]) return connection_dict def prune_small_weights(weights,cutoff_weight): """ Prune the connections with negative connection strength""" weights[weights <= cutoff_weight] = cutoff_weight return weights def set_max_cutoff_weight(weights, cutoff_weight): """ Set cutoff limit for the values in given array""" weights[weights > cutoff_weight] = cutoff_weight return weights def get_unconnected_indexes(wee): """ Helper function for Structural plasticity to randomly select the unconnected units Args: wee - Weight 
matrix Returns: list (indices) // indices = (row_idx,col_idx)""" i,j = np.where(wee <= 0.) indices = list(zip(i,j)) self_conn_removed = [] for i,idxs in enumerate(indices): if idxs[0] != idxs[1]: self_conn_removed.append(indices[i]) return self_conn_removed def white_gaussian_noise(mu, sigma,t): """Generates white gaussian noise with mean mu, standard deviation sigma and the noise length equals t """ noise = np.random.normal(mu, sigma, t) return np.expand_dims(noise,1) ### SANITY CHECK EACH WEIGHTS #### Note this function has no influence in weight matrix, will be deprecated in next version def zero_sum_incoming_check(weights): zero_sum_incomings = np.where(np.sum(weights,axis = 0) == 0.) if len(zero_sum_incomings[-1]) == 0: return weights else: for zero_sum_incoming in zero_sum_incomings[-1]: rand_indices = np.random.randint(40,size = 2) # 5 because each excitatory neuron connects with 5 inhibitory neurons # given the probability of connections 0.2 rand_values = np.random.uniform(0.0,0.1,2) for i,idx in enumerate(rand_indices): weights[:,zero_sum_incoming][idx] = rand_values[i] return weights ``` ### SORN ``` class Sorn(object): """SORN 1 network model Initialization""" def __init__(self): pass """Initialize network variables as class variables of SORN""" nu = 4 # Number of input units ne = 30 # Number of excitatory units ni = int(0.2*ne) # Number of inhibitory units in the network no = 1 eta_stdp = 0.004 eta_inhib = 0.001 eta_ip = 0.01 te_max = 1.0 ti_max = 0.5 ti_min = 0.0 te_min = 0.0 mu_ip = 0.1 sigma_ip = 0.0 # Standard deviation, variance == 0 # Initialize weight matrices def initialize_weight_matrix(self, network_type,synaptic_connection, self_connection, lambd_w): """ Args: network_type(str) - Spare or Dense synaptic_connection(str) - EE,EI,IE: Note that Spare connection is defined only for EE connections self_connection(str) - True or False: i-->i ; Network is tested only using j-->i lambd_w(int) - Average number of incoming and outgoing connections per 
neuron Returns: weight_matrix(array) - Array of connection strengths """ if (network_type == "Sparse") and (self_connection == "False"): """Generate weight matrix for E-E/ E-I connections with mean lamda incoming and outgiong connections per neuron""" weight_matrix = generate_lambd_connections(synaptic_connection,Sorn.ne,Sorn.ni,lambd_w,lambd_std = 1) # Dense matrix for W_ie elif (network_type == 'Dense') and (self_connection == 'False'): # Gaussian distribution of weights # weight_matrix = np.random.randn(Sorn.ne, Sorn.ni) + 2 # Small random values from gaussian distribution # Centered around 1 # weight_matrix.reshape(Sorn.ne, Sorn.ni) # weight_matrix *= 0.01 # Setting spectral radius # Uniform distribution of weights weight_matrix = np.random.uniform(0.0,0.1,(Sorn.ne, Sorn.ni)) weight_matrix.reshape((Sorn.ne,Sorn.ni)) elif (network_type == 'Dense_output') and (self_connection == 'False'): # Gaussian distribution of weights # weight_matrix = np.random.randn(Sorn.ne, Sorn.ni) + 2 # Small random values from gaussian distribution # Centered around 1 # weight_matrix.reshape(Sorn.ne, Sorn.ni) # weight_matrix *= 0.01 # Setting spectral radius # Uniform distribution of weights weight_matrix = np.random.uniform(0.0,0.1,(Sorn.no, Sorn.ne)) weight_matrix.reshape((Sorn.no,Sorn.ne)) return weight_matrix def initialize_threshold_matrix(self, te_min,te_max, ti_min,ti_max): # Initialize the threshold for excitatory and inhibitory neurons """Args: te_min(float) -- Min threshold value for excitatory units ti_min(float) -- Min threshold value for inhibitory units te_max(float) -- Max threshold value for excitatory units ti_max(float) -- Max threshold value for inhibitory units Returns: te(vector) -- Threshold values for excitatory units ti(vector) -- Threshold values for inhibitory units""" te = np.random.uniform(0., te_max, (Sorn.ne, 1)) ti = np.random.uniform(0., ti_max, (Sorn.ni, 1)) # For patter recognition task: Heavyside step function with fixed threshold to = 0.5 return te, 
ti,to def initialize_activity_vector(self,ne, ni, no): # Initialize the activity vectors X and Y for excitatory and inhibitory neurons """Args: ne(int) -- Number of excitatory neurons ni(int) -- Number of inhibitory neurons Returns: x(array) -- Array of activity vectors of excitatory population y(array) -- Array of activity vectors of inhibitory population""" x = np.zeros((ne, 2)) y = np.zeros((ni, 2)) o = np.zeros((no, 2)) return x, y, o class Plasticity(Sorn): """ Instance of class Sorn. Inherits the variables and functions defined in class Sorn Encapsulates all plasticity mechanisms mentioned in the article """ # Initialize the global variables for the class //Class attributes def __init__(self): super().__init__() self.nu = Sorn.nu # Number of input units self.ne = Sorn.ne # Number of excitatory units self.no = Sorn.no self.eta_stdp = Sorn.eta_stdp # STDP plasticity Learning rate constant; SORN1 and SORN2 self.eta_ip = Sorn.eta_ip # Intrinsic plasticity learning rate constant; SORN1 and SORN2 self.eta_inhib = Sorn.eta_inhib # Intrinsic plasticity learning rate constant; SORN2 only self.h_ip = 2 * Sorn.nu / Sorn.ne # Target firing rate self.mu_ip = Sorn.mu_ip # Mean target firing rate self.ni = Sorn.ni # Number of inhibitory units in the network self.time_steps = Sorn.time_steps # Total time steps of simulation self.te_min = Sorn.te_min # Excitatory minimum Threshold self.te_max = Sorn.te_max # Excitatory maximum Threshold def stdp(self, wee, x, mr, cutoff_weights): """ Apply STDP rule : Regulates synaptic strength between the pre(Xj) and post(Xi) synaptic neurons""" x = np.asarray(x) xt_1 = x[:,0] xt = x[:,1] wee_t = wee.copy() # STDP applies only on the neurons which are connected. for i in range(len(wee_t[0])): # Each neuron i, Post-synaptic neuron for j in range(len(wee_t[0:])): # Incoming connection from jth pre-synaptic neuron to ith neuron if wee_t[j][i] != 0. 
: # Check connectivity # Get the change in weight delta_wee_t = mr*self.eta_stdp * (xt[i] * xt_1[j] - xt_1[i]*xt[j]) # Update the weight between jth neuron to i ""Different from notation in article wee_t[j][i] = wee[j][i] + delta_wee_t """ Prune the smallest weights induced by plasticity mechanisms; Apply lower cutoff weight""" wee_t = prune_small_weights(wee_t,cutoff_weights[0]) """Check and set all weights < upper cutoff weight """ wee_t = set_max_cutoff_weight(wee_t,cutoff_weights[1]) return wee_t def ostdp(self,woe, x, mo): """ Apply STDP rule : Regulates synaptic strength between the pre(Xj) and post(Xi) synaptic neurons""" x = np.asarray(x) xt_1 = x[:, 0] xt = x[:, 1] woe_t = woe.copy() # STDP applies only on the neurons which are connected. for i in range(len(woe_t[0])): # Each neuron i, Post-synaptic neuron for j in range(len(woe_t[0:])): # Incoming connection from jth pre-synaptic neuron to ith neuron if woe_t[j][i] != 0.: # Check connectivity # Get the change in weight delta_woe_t = mo*self.eta_stdp * (xt[i] * xt_1[j] - xt_1[i] * xt[j]) # Update the weight between jth neuron to i ""Different from notation in article woe_t[j][i] = woe[j][i] + delta_woe_t return woe_t def ip(self, te, x): # IP rule: Active unit increases its threshold and inactive decreases its threshold. xt = x[:, 1] te_update = te + self.eta_ip * (xt.reshape(self.ne, 1) - self.h_ip) """ Check whether all te are in range [0.0,1.0] and update acordingly""" # Update te < 0.0 ---> 0.0 # te_update = prune_small_weights(te_update,self.te_min) # Set all te > 1.0 --> 1.0 # te_update = set_max_cutoff_weight(te_update,self.te_max) return te_update def ss(self, wee_t): """Synaptic Scaling or Synaptic Normalization""" wee_t = wee_t / np.sum(wee_t,axis=0) return wee_t @staticmethod def modulation_factor(reward_history, current_reward ,window_sizes): """ Grid search for Modulation factor. 
Returns the maximum moving average over history of rewards with corresponding window Args: reward_history (list): List with the history of rewards window_sizes (list): List of window sizes for gridsearch Returns: [int]: Modulation factor """ def running_mean(x, K): cumsum = np.cumsum(np.insert(x, 0, 0)) return (cumsum[K:] - cumsum[:-K]) / float(K) reward_avgs = [] # Holds the mean of all rolling averages for each window for window_size in window_sizes: if window_size<=len(reward_history): reward_avgs.append(np.mean(running_mean(reward_history, window_size))) best_reward= np.max(reward_avgs) best_reward_window = window_sizes[np.argmax(best_reward)] print("current_reward %s | Best rolling avergage reward %s | Best Rolling average window %s"%(current_reward, best_reward, best_reward_window )) mo = current_reward - best_reward mr = mo.copy() # TODO: What if mo != mr ? return mo, mr, best_reward, best_reward_window ########################################################### @staticmethod def initialize_plasticity(): """NOTE: DO NOT TRANSPOSE THE WEIGHT MATRIX WEI FOR SORN 2 MODEL""" # Create and initialize sorn object and variables sorn_init = Sorn() WEE_init = sorn_init.initialize_weight_matrix(network_type="Sparse", synaptic_connection='EE', self_connection='False', lambd_w=20) WEI_init = sorn_init.initialize_weight_matrix(network_type="Dense", synaptic_connection='EI', self_connection='False', lambd_w=100) WIE_init = sorn_init.initialize_weight_matrix(network_type="Dense", synaptic_connection='IE', self_connection='False', lambd_w=100) WOE_init = sorn_init.initialize_weight_matrix(network_type="Dense_output", synaptic_connection='OE', self_connection='False', lambd_w=100) Wee_init = Initializer.zero_sum_incoming_check(WEE_init) Wei_init = Initializer.zero_sum_incoming_check(WEI_init.T) # For SORN 1 # Wei_init = Initializer.zero_sum_incoming_check(WEI_init) Wie_init = Initializer.zero_sum_incoming_check(WIE_init) Woe_init = 
Initializer.zero_sum_incoming_check(WOE_init.T) c = np.count_nonzero(Wee_init) v = np.count_nonzero(Wei_init) b = np.count_nonzero(Wie_init) d = np.count_nonzero(Woe_init) print('Network Initialized') print('Number of connections in Wee %s , Wei %s, Wie %s Woe %s' %(c, v, b, d)) print('Shapes Wee %s Wei %s Wie %s Woe %s' % (Wee_init.shape, Wei_init.shape, Wie_init.shape, Woe_init.shape)) # Normalize the incoming weights normalized_wee = Initializer.normalize_weight_matrix(Wee_init) normalized_wei = Initializer.normalize_weight_matrix(Wei_init) normalized_wie = Initializer.normalize_weight_matrix(Wie_init) te_init, ti_init, to_init = sorn_init.initialize_threshold_matrix(Sorn.te_min, Sorn.te_max, Sorn.ti_min, Sorn.ti_max) x_init, y_init, o_init = sorn_init.initialize_activity_vector(Sorn.ne, Sorn.ni,Sorn.no) return Wee_init, Wei_init, Wie_init,Woe_init, te_init, ti_init, to_init,x_init, y_init, o_init @staticmethod def reorganize_network(): pass class MatrixCollection(Sorn): def __init__(self,phase, matrices = None): super().__init__() self.phase = phase self.matrices = matrices if self.phase == 'Plasticity' and self.matrices == None : self.time_steps = Sorn.time_steps + 1 # Total training steps self.Wee, self.Wei, self.Wie,self.Woe, self.Te, self.Ti, self.To, self.X, self.Y, self.O = [0] * self.time_steps, [0] * self.time_steps, \ [0] * self.time_steps, [0] * self.time_steps, \ [0] * self.time_steps, [0] * self.time_steps, \ [0] * self.time_steps, [0] * self.time_steps, \ [0] * self.time_steps, [0] * self.time_steps wee, wei, wie, woe, te, ti, to, x, y, o = Plasticity.initialize_plasticity() # Assign initial matrix to the master matrices self.Wee[0] = wee self.Wei[0] = wei self.Wie[0] = wie self.Woe[0] = woe self.Te[0] = te self.Ti[0] = ti self.To[0] = to self.X[0] = x self.Y[0] = y self.O[0] = o elif self.phase == 'Plasticity' and self.matrices != None: self.time_steps = Sorn.time_steps + 1 # Total training steps self.Wee, self.Wei, self.Wie,self.Woe, self.Te, 
self.Ti,self.To, self.X, self.Y,self.O = [0] * self.time_steps, [0] * self.time_steps, \ [0] * self.time_steps, [0] * self.time_steps, \ [0] * self.time_steps, [0] * self.time_steps, \ [0] * self.time_steps, [0] * self.time_steps, [0] * self.time_steps, \ [0] * self.time_steps # Assign matrices from plasticity phase to the new master matrices for training phase self.Wee[0] = matrices['Wee'] self.Wei[0] = matrices['Wei'] self.Wie[0] = matrices['Wie'] self.Woe[0] = matrices['Woe'] self.Te[0] = matrices['Te'] self.Ti[0] = matrices['Ti'] self.To[0] = matrices['To'] self.X[0] = matrices['X'] self.Y[0] = matrices['Y'] self.O[0] = matrices['O'] elif self.phase == 'Training': """NOTE: time_steps here is diferent for plasticity or trianing phase""" self.time_steps = Sorn.time_steps + 1 # Total training steps self.Wee, self.Wei, self.Wie,self.Woe, self.Te, self.Ti,self.To, self.X, self.Y,self.O = [0] * self.time_steps, [0] * self.time_steps, \ [0] * self.time_steps, [0] * self.time_steps, \ [0] * self.time_steps, [0] * self.time_steps, \ [0] * self.time_steps, [0] * self.time_steps, [0] * self.time_steps, \ [0] * self.time_steps # Assign matrices from plasticity phase to new respective matrices for training phase self.Wee[0] = matrices['Wee'] self.Wei[0] = matrices['Wei'] self.Wie[0] = matrices['Wie'] self.Woe[0] = matrices['Woe'] self.Te[0] = matrices['Te'] self.Ti[0] = matrices['Ti'] self.To[0] = matrices['To'] self.X[0] = matrices['X'] self.Y[0] = matrices['Y'] self.O[0] = matrices['O'] # @staticmethod def weight_matrix(self, wee, wei, wie, woe, i): # Get delta_weight from Plasticity.stdp # i - training step self.Wee[i + 1] = wee self.Wei[i + 1] = wei self.Wie[i + 1] = wie self.Woe[i + 1] = woe return self.Wee, self.Wei, self.Wie, self.Woe # @staticmethod def threshold_matrix(self, te, ti,to, i): self.Te[i + 1] = te self.Ti[i + 1] = ti self.To[i + 1] = to return self.Te, self.Ti, self.To # @staticmethod def network_activity_t(self, excitatory_net, inhibitory_net, 
output_net, i): self.X[i + 1] = excitatory_net self.Y[i + 1] = inhibitory_net self.O[i + 1] = output_net return self.X, self.Y, self.O # @staticmethod def network_activity_t_1(self, x, y,o, i): x_1, y_1, o_1 = [0] * self.time_steps, [0] * self.time_steps, [0] * self.time_steps x_1[i] = x y_1[i] = y o_1[i] = o return x_1, y_1, o_1 class NetworkState(Plasticity): """The evolution of network states""" def __init__(self, v_t): super().__init__() self.v_t = v_t def incoming_drive(self,weights,activity_vector): # Broadcasting weight*acivity vectors incoming = weights* activity_vector incoming = np.array(incoming.sum(axis=0)) return incoming def excitatory_network_state(self, wee, wei, te, x, y,white_noise_e): """ Activity of Excitatory neurons in the network""" xt = x[:, 1] xt = xt.reshape(self.ne, 1) yt = y[:, 1] yt = yt.reshape(self.ni, 1) incoming_drive_e = np.expand_dims(self.incoming_drive(weights = wee,activity_vector=xt),1) incoming_drive_i = np.expand_dims(self.incoming_drive(weights = wei,activity_vector=yt),1) if self.v_t.shape[0] < self.ne: inp = [0]*self.ne inp[:len(self.v_t)] = self.v_t self.v_t = inp.copy() tot_incoming_drive = incoming_drive_e - incoming_drive_i + white_noise_e + np.expand_dims(np.asarray(self.v_t),1) - te """Heaviside step function""" """Implement Heaviside step function""" heaviside_step = np.expand_dims([0.] * len(tot_incoming_drive),1) heaviside_step[tot_incoming_drive > 0] = 1. xt_next = np.asarray(heaviside_step.copy()) return xt_next def inhibitory_network_state(self, wie, ti, x,white_noise_i): # Activity of inhibitory neurons wie = np.asarray(wie) xt = x[:, 1] xt = xt.reshape(Sorn.ne, 1) incoming_drive_e = np.expand_dims(self.incoming_drive(weights = wie, activity_vector=xt),1) tot_incoming_drive = incoming_drive_e + white_noise_i - ti """Implement Heaviside step function""" heaviside_step = np.expand_dims([0.] * len(tot_incoming_drive),1) heaviside_step[tot_incoming_drive > 0] = 1. 
yt_next = np.asarray(heaviside_step.copy()) return yt_next def recurrent_drive(self, wee, wei, te, x, y,white_noise_e): """Network state due to recurrent drive received by the each unit at time t+1""" xt = x[:, 1] xt = xt.reshape(self.ne, 1) yt = y[:, 1] yt = yt.reshape(self.ni, 1) incoming_drive_e = np.expand_dims(self.incoming_drive(weights = wee,activity_vector=xt),1) incoming_drive_i = np.expand_dims(self.incoming_drive(weights = wei,activity_vector=yt),1) tot_incoming_drive = incoming_drive_e - incoming_drive_i + white_noise_e - te """Implement Heaviside step function""" heaviside_step = np.expand_dims([0.] * len(tot_incoming_drive),1) heaviside_step[tot_incoming_drive > 0] = 1. xt_next = np.asarray(heaviside_step.copy()) return xt_next def output_network_state(self,woe, to, x): """ Output layer states Args: woe (array): Connection weights between Reurrent network and Output layer to (array): Threshold of Ouput layer neurons x (array): Excitatory recurrent network states """ woe = np.asarray(woe) xt = x[:, 1] xt = xt.reshape(Sorn.ne, 1) incoming_drive_o = np.expand_dims(self.incoming_drive(weights=woe, activity_vector=xt), 1) tot_incoming_drive = incoming_drive_o - to # TODO: If output neuron is 1, the use Heavyside step function if type(to) == list: """Winner takes all""" ot_next = np.where(tot_incoming_drive == tot_incoming_drive.max(), tot_incoming_drive, 0.) return ot_next else: """Implement Heaviside step function""" heaviside_step = np.expand_dims([0.] * len(tot_incoming_drive),1) heaviside_step[tot_incoming_drive > 0] = 1. 
return heaviside_step ``` ### Helper class for training SORN #### Build separate class to feed inputs to SORN with plasticity ON ``` class SimulateRMSorn(Sorn): """ Args: inputs - one hot vector of inputs Returns: matrix_collection - collection of all weight matrices in dictionaries """ def __init__(self,phase,matrices,inputs,sequence_length, targets, reward_window_sizes, epochs): super().__init__() self.time_steps = np.shape(inputs)[0]*sequence_length * epochs Sorn.time_steps = np.shape(inputs)[0]*sequence_length* epochs # self.inputs = np.asarray(np.tile(inputs,(1,epochs))) self.inputs = inputs self.phase = phase self.matrices = matrices self.epochs = epochs self.reward_window_sizes = reward_window_sizes self.sequence_length = sequence_length def train_sorn(self): # Collect the network activity at all time steps X_all = [0]*self.time_steps Y_all = [0]*self.time_steps R_all = [0]*self.time_steps O_all = [0]*self.time_steps Rewards,Mo,Mr = [],[],[] frac_pos_active_conn = [] """ DONOT INITIALIZE WEIGHTS""" matrix_collection = MatrixCollection(phase = self.phase, matrices = self.matrices) time_steps_counter= 0 """ Generate white noise""" white_noise_e = white_gaussian_noise(mu= 0., sigma = 0.04,t = Sorn.ne) white_noise_i = white_gaussian_noise(mu= 0., sigma = 0.04,t = Sorn.ni) # Buffers to get the resulting x, y and o vectors at the current time step and update the master matrix x_buffer, y_buffer, o_buffer = np.zeros(( Sorn.ne, 2)), np.zeros((Sorn.ni, 2)), np.zeros(( Sorn.no, 2)) te_buffer, ti_buffer, to_buffer = np.zeros((Sorn.ne, 1)), np.zeros((Sorn.ni, 1)), np.zeros(( Sorn.no, 1)) # Get the matrices and rename them for ease of reading Wee, Wei, Wie,Woe = matrix_collection.Wee, matrix_collection.Wei, matrix_collection.Wie, matrix_collection.Woe Te, Ti,To = matrix_collection.Te, matrix_collection.Ti,matrix_collection.To X, Y, O = matrix_collection.X, matrix_collection.Y, matrix_collection.O i = 0 for k in tqdm.tqdm(range(self.inputs.shape[0])): for j in 
range(self.sequence_length): """ Fraction of active connections between E-E network""" frac_pos_active_conn.append((Wee[i] > 0.0).sum()) network_state = NetworkState(self.inputs[k][j]) # Feed Input as an argument to the class # Recurrent drive,excitatory, inhibitory and output network states r = network_state.recurrent_drive(Wee[i], Wei[i], Te[i], X[i], Y[i], white_noise_e = 0.) excitatory_state_xt_buffer = network_state.excitatory_network_state(Wee[i], Wei[i], Te[i], X[i], Y[i],white_noise_e = 0.) inhibitory_state_yt_buffer = network_state.inhibitory_network_state(Wie[i], Ti[i], X[i],white_noise_i = 0.) output_state_ot_buffer = network_state.output_network_state(Woe[i], To[i], X[i]) """ Update X and Y """ x_buffer[:, 0] = X[i][:, 1] # xt -->(becomes) xt_1 x_buffer[:, 1] = excitatory_state_xt_buffer.T # New_activation; x_buffer --> xt y_buffer[:, 0] = Y[i][:, 1] y_buffer[:, 1] = inhibitory_state_yt_buffer.T o_buffer[:, 0] = O[i][:, 1] o_buffer[:, 1] = output_state_ot_buffer.T """Plasticity phase""" plasticity = Plasticity() # Reward and mo, mr current_reward = output_state_ot_buffer*targets[k][j] Rewards.extend(current_reward) mo, mr, best_reward, best_reward_window = plasticity.modulation_factor(Rewards, current_reward, self.reward_window_sizes) print('Input %s | Target %s | predicted %s | mr %s, mo %s'%(self.inputs[k].tolist(), targets[k][j],output_state_ot_buffer, mr, mo)) Mo.append(mo) Mr.append(mr) # STDP, Intrinsic plasticity and Synaptic scaling Wee_t = plasticity.stdp(Wee[i],x_buffer,mr, cutoff_weights = (0.0,1.0)) Woe_t = plasticity.ostdp(Woe[i],x_buffer,mo) Te_t = plasticity.ip(Te[i],x_buffer) Wee_t = Plasticity().ss(Wee_t) Woe_t = Plasticity().ss(Woe_t) """Assign the matrices to the matrix collections""" matrix_collection.weight_matrix(Wee_t, Wei[i], Wie[i],Woe_t, i) matrix_collection.threshold_matrix(Te_t, Ti[i],To[i], i) matrix_collection.network_activity_t(x_buffer, y_buffer,o_buffer, i) X_all[i] = x_buffer[:,1] Y_all[i] = y_buffer[:,1] R_all[i] = r 
O_all[i] = o_buffer[:,1] i+=1 plastic_matrices = {'Wee':matrix_collection.Wee[-1], 'Wei': matrix_collection.Wei[-1], 'Wie':matrix_collection.Wie[-1], 'Woe':matrix_collection.Woe[-1], 'Te': matrix_collection.Te[-1], 'Ti': matrix_collection.Ti[-1], 'X': X[-1], 'Y': Y[-1]} return plastic_matrices,X_all,Y_all,R_all,frac_pos_active_conn training_sequence = np.repeat(np.array([[[0,0,0,1], [0,0,1,0], [0,1,0,0], [1,0,0,0]], [[1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,0,1]], [[1,0,0,0], [0,0,1,0], [0,0,0,1], [0,1,0,0]], [[0,0,1,0], [0,0,0,0], [0,1,0,0], [0,0,0,1]]]), repeats=1000, axis=0) sequence_targets = np.repeat(np.array([1,0,0,0]),repeats=1000,axis=0) input_str = ['1234','4321', '4213', '2431'] training_input = [] targets = [] for i in range(100): idx = random.randint(0,3) inp = input_str[idx] if inp == '1234': input_seq = [[0,0,0,1], [0,0,1,0], [0,1,0,0], [1,0,0,0]] target = [1,1,1,1] elif inp == '4321': input_seq = [[1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,0,1]] target = [0,0,0,0] elif inp == '4213': input_seq = [[1,0,0,0], [0,0,1,0], [0,0,0,1], [0,1,0,0]] target = [0,0,0,0] else: input_seq = [[0,0,1,0], [0,0,0,0], [0,1,0,0], [0,0,0,1]] target = [0,0,0,0] training_input.append(input_seq) targets.append(target) print(np.asarray(training_input).shape, targets) train_plast_inp_mat,X_all_inp,Y_all_inp,R_all, frac_pos_active_conn = SimulateRMSorn(phase = 'Plasticity', matrices = None, inputs = np.asarray(training_input),sequence_length = 4, targets = targets, reward_window_sizes = [1,5,10,20], epochs = 1).train_sorn() ```
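The reward-modulation step above hinges on the cumulative-sum running mean inside `Plasticity.modulation_factor`: the modulation factor is the current reward minus the best mean rolling average over the reward history. A minimal standalone version of that computation, with illustrative numbers:

```python
import numpy as np

def running_mean(x, k):
    """Mean of every length-k window of x, via the cumulative-sum trick."""
    cumsum = np.cumsum(np.insert(np.asarray(x, dtype=float), 0, 0.0))
    return (cumsum[k:] - cumsum[:-k]) / float(k)

def modulation(reward_history, current_reward, window_sizes):
    """mo = current reward minus the best mean rolling average over the history.

    Mirrors the grid search in modulation_factor: only windows that fit
    inside the history are considered.
    """
    avgs = [np.mean(running_mean(reward_history, w))
            for w in window_sizes if w <= len(reward_history)]
    return current_reward - max(avgs)

rewards = [0, 1, 1, 0, 1, 1, 1, 0]
mo = modulation(rewards, current_reward=1.0, window_sizes=[1, 2, 4])
```

One caveat in the notebook's version: `best_reward_window = window_sizes[np.argmax(best_reward)]` takes the argmax of a scalar, so it always reports `window_sizes[0]`; `np.argmax(reward_avgs)` is presumably what was intended. This only affects the printed diagnostic, not `mo` or `mr`.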
``` # Mount the drive from google.colab import drive drive.mount('/content/drive') # Import the necessary libraries import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns import tensorflow as tf from IPython.display import display from keras.preprocessing.text import Tokenizer # Model preprocessing APIs from sklearn import preprocessing from sklearn.utils import resample # Model accuracy plotting APIs from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn.metrics import f1_score from sklearn.metrics import precision_score from sklearn.metrics import recall_score from sklearn.metrics import roc_auc_score # Model building APIs from tensorflow.keras.layers import Activation from tensorflow.keras.layers import Bidirectional from tensorflow.keras.layers import Dropout from tensorflow.keras.layers import Dense from tensorflow.keras.layers import Embedding from tensorflow.keras.layers import Input from tensorflow.keras.layers import LSTM from tensorflow.keras.models import Model from tensorflow.keras.models import Sequential from tensorflow.keras.preprocessing.sequence import pad_sequences from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.callbacks import ModelCheckpoint from tensorflow.keras.callbacks import ReduceLROnPlateau from tensorflow.keras.utils import plot_model from zipfile import ZipFile pd.options.display.max_columns = None pd.options.display.max_rows = None # Set path variables project_path = '/content/drive/My Drive/Colab/' file_name ='TempOutput_1.xlsx' # Import the dataframe unsampled_df=pd.read_excel(project_path+file_name) unsampled_df.info() # Drop columns not needed for Model building unsampled_df.drop(["Unnamed: 0","Short description", "Description", "Language"],axis=1,inplace=True) unsampled_df.head(5) unsampled_df = 
# Drop rows with missing values (assign back so the result is kept)
unsampled_df = unsampled_df.dropna(axis=0)

# Non-GRP_0 dataframe
others_df = unsampled_df[unsampled_df['Assignment group'] != 'GRP_0']

# Get the upper/lower limit to resample
maxOthers = others_df['Assignment group'].value_counts().max()
maxOthers

# Upsample the minority classes and downsample the majority classes
df_to_process = unsampled_df[0:0]
for grp in unsampled_df['Assignment group'].unique():
    assign_grp_df = unsampled_df[unsampled_df['Assignment group'] == grp]
    resampled = resample(assign_grp_df, replace=True, n_samples=maxOthers, random_state=123)
    df_to_process = df_to_process.append(resampled)

# Label encode the assignment groups
label_encoder = preprocessing.LabelEncoder()
df_to_process['Assignment group ID'] = label_encoder.fit_transform(df_to_process['Assignment group'])
df_to_process['Assignment group ID'].unique()

# Function to generate word tokens
def wordTokenizer(dataframe):
    tokenizer = Tokenizer(num_words=numWords,
                          filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n',
                          lower=True, split=' ', char_level=False)
    tokenizer.fit_on_texts(dataframe)
    dataframe = tokenizer.texts_to_sequences(dataframe)
    return tokenizer, dataframe

# GloVe the dataframe and store the embedding result
glove_file = project_path + "glove.6B.zip"
print(glove_file)

# Extract the GloVe embedding zip file
with ZipFile(glove_file, 'r') as z:
    z.extractall()

# Perform GloVe embeddings
EMBEDDING_FILE = './glove.6B.100d.txt'
embeddings_glove = {}
for o in open(EMBEDDING_FILE):
    word = o.split(" ")[0]
    embd = o.split(" ")[1:]
    embd = np.asarray(embd, dtype='float32')
    embeddings_glove[word] = embd

results = pd.DataFrame()
predictedResults = pd.DataFrame()

max_len = 300
tokenizer = Tokenizer(split=' ')
tokenizer.fit_on_texts(df_to_process["New Description"].values)
X_seq = tokenizer.texts_to_sequences(df_to_process["New Description"].values)
X_padded = pad_sequences(X_seq, maxlen=max_len)
numWords = len(tokenizer.word_index) + 1
epochs = 10
batch_size = 100
numWords

# Try the BiLSTM model on the sampled data and predict the accuracy
tokenizer, X = wordTokenizer(df_to_process['New Description'])
y = np.asarray(df_to_process['Assignment group ID'])
X = pad_sequences(X, maxlen=max_len)

# Create the embedding matrix
embedding_matrix = np.zeros((numWords + 1, 100))
for i, word in tokenizer.index_word.items():
    if i < numWords + 1:
        embedding_vector = embeddings_glove.get(word)
        if embedding_vector is not None:
            embedding_matrix[i] = embedding_vector
embedding_matrix

# Perform the train-test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
X_train, X_test, y_train, y_test

# Build the Bi-LSTM model
input_layer = Input(shape=(max_len,), dtype=tf.int64)
embed = Embedding(numWords + 1, output_dim=100, input_length=max_len,
                  weights=[embedding_matrix], trainable=True)(input_layer)
lstm = Bidirectional(LSTM(128))(embed)
drop = Dropout(0.3)(lstm)
dense = Dense(100, activation='relu')(drop)
out = Dense(len(pd.Series(y_train).unique()) + 1, activation='softmax')(dense)
model = Model(input_layer, out)
model.compile(loss='sparse_categorical_crossentropy', optimizer="adam", metrics=['accuracy'])
model.summary()
plot_model(model, to_file=project_path + "Bi-LSTM(GloVe)_Sampled_Model.jpg")

checkpoint = ModelCheckpoint('model-{epoch:03d}-{val_accuracy:03f}.h5', verbose=1,
                             monitor='val_accuracy', save_best_only=True, mode='auto')
reduceLoss = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=2, min_lr=0.0001)

# Run the model and get the model history
model_history = model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs,
                          callbacks=[checkpoint, reduceLoss],
                          validation_data=(X_test, y_test))

# Predict probabilities for the test set
yhat_probs = model.predict(X_test, verbose=0)
# Predict crisp classes for the test set: use np.argmax, since
# model.predict_classes is no longer available in newer Keras.
# https://machinelearningmastery.com/how-to-calculate-precision-recall-f1-and-more-for-deep-learning-models/
yhat_classes = np.argmax(yhat_probs, axis=1)

# Generate the target labels to plot the confusion matrix
target_names = df_to_process['Assignment group'].unique()
target_names

# Generate the confusion matrix
matrix = confusion_matrix(y_test, yhat_classes)
print(matrix)

# Generate the classification report to print the per-class accuracies
# for this multiclass classification task
print('Classification Report')
print(classification_report(y_test, yhat_classes, target_names=target_names))

# Plot the confusion matrix
ax = plt.subplot()
sns.heatmap(matrix, annot=True, ax=ax, cmap='Blues', fmt='d')
ax.set_xlabel('Predicted labels')
ax.set_ylabel('True/Actual labels')
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(target_names)
ax.yaxis.set_ticklabels(target_names)

# Summarize the accuracies
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(y_test, yhat_classes)
print('Accuracy: %f' % accuracy)
precision = precision_score(y_test, yhat_classes, average='micro')
print('Precision: %f' % precision)
recall = recall_score(y_test, yhat_classes, average='micro')
print('Recall: %f' % recall)
f1 = f1_score(y_test, yhat_classes, average='micro')
print('F1 score: %f' % f1)

# Plot model losses
loss_values = model_history.history['loss']
val_loss_values = model_history.history['val_loss']
epochs = range(1, len(loss_values) + 1)
plt.plot(epochs, loss_values, 'bo', label="Training Loss")
plt.plot(epochs, val_loss_values, 'b', label="Validation Loss")
plt.title('Bi-LSTM(GloVe) on sampled data - Training and Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss Value')
plt.legend()
plt.show()

# Plot training and validation accuracies
acc_values = model_history.history['accuracy']
val_acc_values = model_history.history['val_accuracy']
epochs = range(1, len(loss_values) + 1)
plt.plot(epochs, acc_values, 'ro', label="Training Accuracy")
plt.plot(epochs, val_acc_values, 'r', label="Validation Accuracy")
plt.title('Bi-LSTM(GloVe) on sampled data - Training and Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()

# Summarize the model history
model_df = pd.DataFrame(model_history.history)
model_df

# Thank you.
```
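The summary cell above reports micro-averaged precision, recall, and F1. As a quick aside (toy labels only, not the notebook's variables): for single-label multiclass predictions, micro-averaging pools true/false positives across all classes, and every misclassification counts as one false positive (for the predicted class) and one false negative (for the true class), so micro-precision, micro-recall, micro-F1, and plain accuracy all coincide.

```python
def micro_scores(y_true, y_pred):
    # Pool TP/FP/FN over all classes before computing the scores.
    # In single-label multiclass data, each wrong prediction is one FP
    # (for the predicted class) and one FN (for the true class).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != p)
    fn = fp
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy labels for three assignment groups (illustrative only)
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]
p, r, f = micro_scores(y_true, y_pred)
```

This is also how scikit-learn's `precision_score`, `recall_score`, and `f1_score` behave with `average='micro'` in the single-label case, which is why the four printed numbers above should match each other.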
# Hybrid quantum-classical Neural Networks with PyTorch and Qiskit

Machine learning (ML) has established itself as a successful interdisciplinary field which seeks to mathematically extract generalizable information from data. Throwing in quantum computing gives rise to interesting areas of research which seek to leverage the principles of quantum mechanics to augment machine learning or vice-versa. Whether you're aiming to enhance classical ML algorithms by outsourcing difficult calculations to a quantum computer or optimise quantum algorithms using classical ML architectures - both fall under the diverse umbrella of quantum machine learning (QML).

In this chapter, we explore how a classical neural network can be partially quantized to create a hybrid quantum-classical neural network. We will code up a simple example that integrates **Qiskit** with a state-of-the-art open-source software package - **[PyTorch](https://pytorch.org/)**. The purpose of this example is to demonstrate the ease of integrating Qiskit with existing ML tools and to encourage ML practitioners to explore what is possible with quantum computing.

## Contents

1. [How Does it Work?](#how)
   1.1 [Preliminaries](#prelims)
2. [So How Does Quantum Enter the Picture?](#quantumlayer)
3. [Let's code!](#code)
   3.1 [Imports](#imports)
   3.2 [Create a "Quantum Class" with Qiskit](#q-class)
   3.3 [Create a "Quantum-Classical Class" with PyTorch](#qc-class)
   3.4 [Data Loading and Preprocessing](#data-loading-preprocessing)
   3.5 [Creating the Hybrid Neural Network](#hybrid-nn)
   3.6 [Training the Network](#training)
   3.7 [Testing the Network](#testing)
4. [What Now?](#what-now)

## 1. How does it work? <a id='how'></a>

<img src="hybridnetwork.png" />

**Fig.1** Illustrates the framework we will construct in this chapter. Ultimately, we will create a hybrid quantum-classical neural network that seeks to classify hand drawn digits.
Note that the edges shown in this image are all directed downward; however, the directionality is not visually indicated.

### 1.1 Preliminaries <a id='prelims'></a>

The background presented here on classical neural networks is included to establish relevant ideas and shared terminology; however, it is still extremely high-level. __If you'd like to dive one step deeper into classical neural networks, see the well made video series by youtuber__ [3Blue1Brown](https://youtu.be/aircAruvnKk). Alternatively, if you are already familiar with classical networks, you can [skip to the next section](#quantumlayer).

###### Neurons and Weights

A neural network is ultimately just an elaborate function that is built by composing smaller building blocks called neurons. A ***neuron*** is typically a simple, easy-to-compute, and nonlinear function that maps one or more inputs to a single real number. The single output of a neuron is typically copied and fed as input into other neurons. Graphically, we represent neurons as nodes in a graph and we draw directed edges between nodes to indicate how the output of one neuron will be used as input to other neurons.

It's also important to note that each edge in our graph is often associated with a scalar-value called a [***weight***](https://en.wikipedia.org/wiki/Artificial_neural_network#Connections_and_weights). The idea here is that each of the inputs to a neuron will be multiplied by a different scalar before being collected and processed into a single value. The objective when training a neural network consists primarily of choosing our weights such that the network behaves in a particular way.

###### Feed Forward Neural Networks

It is also worth noting that the particular type of neural network we will concern ourselves with is called a **[feed-forward neural network (FFNN)](https://en.wikipedia.org/wiki/Feedforward_neural_network)**.
This means that as data flows through our neural network, it will never return to a neuron it has already visited. Equivalently, you could say that the graph which describes our neural network is a **[directed acyclic graph (DAG)](https://en.wikipedia.org/wiki/Directed_acyclic_graph)**. Furthermore, we will stipulate that neurons within the same layer of our neural network will not have edges between them.

###### IO Structure of Layers

The input to a neural network is a classical (real-valued) vector. Each component of the input vector is multiplied by a different weight and fed into a layer of neurons according to the graph structure of the network. After each neuron in the layer has been evaluated, the results are collected into a new vector where the i'th component records the output of the i'th neuron. This new vector can then be treated as input for a new layer, and so on. We will use the standard term ***hidden layer*** to describe all but the first and last layers of our network.

## 2. So How Does Quantum Enter the Picture? <a id='quantumlayer'></a>

To create a quantum-classical neural network, one can implement a hidden layer for our neural network using a parameterized quantum circuit. By "parameterized quantum circuit", we mean a quantum circuit where the rotation angles for each gate are specified by the components of a classical input vector. The outputs from our neural network's previous layer will be collected and used as the inputs for our parameterized circuit. The measurement statistics of our quantum circuit can then be collected and used as inputs for the following layer. A simple example is depicted below:

<img src="neuralnetworkQC.png" />

Here, $\sigma$ is a [nonlinear function](https://en.wikipedia.org/wiki/Activation_function) and $h_i$ is the value of neuron $i$ at each hidden layer. $R(h_i)$ represents any rotation gate about an angle equal to $h_i$ and $y$ is the final prediction value generated from the hybrid network.
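To make the weights-and-activation picture from section 1.1 concrete, here is a toy single neuron. This is an illustrative sketch only, not code from this chapter: each input is scaled by its weight, the results are summed, and a nonlinear activation squashes the total into a single real number.

```python
import math

def neuron(inputs, weights, bias=0.0):
    # Weighted sum of the inputs...
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # ...passed through a nonlinear activation (sigmoid here)
    return 1.0 / (1.0 + math.exp(-z))

# Weights chosen so the weighted sum is exactly 0, where sigmoid returns 0.5
out = neuron([1.0, 2.0], [0.5, -0.25])
```

A layer is just many such neurons evaluated on the same input vector, each with its own weights; the hybrid idea below swaps one of these layers for a parameterized quantum circuit.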
### What about backpropagation?

If you're familiar with classical ML, you may immediately be wondering *how do we calculate gradients when quantum circuits are involved?* This would be necessary to enlist powerful optimisation techniques such as **[gradient descent](https://en.wikipedia.org/wiki/Gradient_descent)**. It gets a bit technical, but in short, we can view a quantum circuit as a black box and the gradient of this black box with respect to its parameters can be calculated as follows:

<img src="quantumgradient.png" />

where $\theta$ represents the parameters of the quantum circuit and $s$ is a macroscopic shift. The gradient is then simply the difference between our quantum circuit evaluated at $\theta+s$ and $\theta - s$. Thus, we can systematically differentiate our quantum circuit as part of a larger backpropagation routine. This closed form rule for calculating the gradient of quantum circuit parameters is known as **[the parameter shift rule](https://arxiv.org/pdf/1905.13311.pdf)**.

## 3. Let's code! <a id='code'></a>

### 3.1 Imports <a id='imports'></a>

First, we import some handy packages that we will need, including Qiskit and PyTorch.

```
import numpy as np
import matplotlib.pyplot as plt

import torch
from torch.autograd import Function
from torchvision import datasets, transforms
import torch.optim as optim
import torch.nn as nn
import torch.nn.functional as F

import qiskit
from qiskit.visualization import *
```

### 3.2 Create a "Quantum Class" with Qiskit <a id='q-class'></a>

We can conveniently put our Qiskit quantum functions into a class. First, we specify how many trainable quantum parameters and how many shots we wish to use in our quantum circuit. In this example, we will keep it simple and use a 1-qubit circuit with one trainable quantum parameter $\theta$. We hard code the circuit for simplicity and use a $RY-$rotation by the angle $\theta$ to train the output of our circuit.
The circuit looks like this:

<img src="1qubitcirc.png" width="400"/>

In order to measure the output in the $z-$basis, we calculate the $\sigma_\mathbf{z}$ expectation.

$$\sigma_\mathbf{z} = \sum_i z_i p(z_i)$$

We will see later how this all ties into the hybrid neural network.

```
class QuantumCircuit:
    """
    This class provides a simple interface for interaction
    with the quantum circuit
    """

    def __init__(self, n_qubits, backend, shots):
        # --- Circuit definition ---
        self._circuit = qiskit.QuantumCircuit(n_qubits)

        all_qubits = [i for i in range(n_qubits)]
        self.theta = qiskit.circuit.Parameter('theta')

        self._circuit.h(all_qubits)
        self._circuit.barrier()
        self._circuit.ry(self.theta, all_qubits)

        self._circuit.measure_all()
        # ---------------------------

        self.backend = backend
        self.shots = shots

    def run(self, thetas):
        job = qiskit.execute(self._circuit,
                             self.backend,
                             shots=self.shots,
                             parameter_binds=[{self.theta: theta} for theta in thetas])
        result = job.result().get_counts(self._circuit)

        counts = np.array(list(result.values()))
        states = np.array(list(result.keys())).astype(float)

        # Compute probabilities for each state
        probabilities = counts / self.shots
        # Get state expectation
        expectation = np.sum(states * probabilities)

        return np.array([expectation])
```

Let's test the implementation

```
simulator = qiskit.Aer.get_backend('qasm_simulator')

circuit = QuantumCircuit(1, simulator, 100)
print('Expected value for rotation pi {}'.format(circuit.run([np.pi])[0]))
circuit._circuit.draw()
```

### 3.3 Create a "Quantum-Classical Class" with PyTorch <a id='qc-class'></a>

Now that our quantum circuit is defined, we can create the functions needed for backpropagation using PyTorch. [The forward and backward passes](http://www.ai.mit.edu/courses/6.034b/backprops.pdf) contain elements from our Qiskit class. The backward pass directly computes the analytical gradients using the finite difference formula we introduced above.
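Before wiring the circuit into PyTorch, the parameter shift rule from section 2 can be sanity-checked on a classical stand-in. The sketch below is illustrative only (it is not the notebook's code): it uses $f(\theta)=\cos\theta$, a sinusoid like the expectation value of a single rotation gate, and with a shift of $s=\pi/2$ the scaled difference $\big(f(\theta+s)-f(\theta-s)\big)/2$ reproduces the exact derivative.

```python
import math

def expectation(theta):
    # Classical stand-in for a circuit's sinusoidal expectation value
    return math.cos(theta)

def parameter_shift_grad(f, theta, s=math.pi / 2):
    # Evaluate at theta + s and theta - s; the scaled difference of the
    # two evaluations is the exact gradient for sinusoidal expectations.
    return (f(theta + s) - f(theta - s)) / 2

theta = 0.7
grad = parameter_shift_grad(expectation, theta)
# For cos, the analytic derivative is -sin(theta)
```

The `HybridFunction.backward` pass below applies the same two-evaluation idea to the real circuit (using the unscaled difference of the two shifted expectations).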
```
class HybridFunction(Function):
    """ Hybrid quantum - classical function definition """

    @staticmethod
    def forward(ctx, input, quantum_circuit, shift):
        """ Forward pass computation """
        ctx.shift = shift
        ctx.quantum_circuit = quantum_circuit

        expectation_z = ctx.quantum_circuit.run(input[0].tolist())
        result = torch.tensor([expectation_z])
        ctx.save_for_backward(input, result)

        return result

    @staticmethod
    def backward(ctx, grad_output):
        """ Backward pass computation """
        input, expectation_z = ctx.saved_tensors
        input_list = np.array(input.tolist())

        shift_right = input_list + np.ones(input_list.shape) * ctx.shift
        shift_left = input_list - np.ones(input_list.shape) * ctx.shift

        gradients = []
        for i in range(len(input_list)):
            expectation_right = ctx.quantum_circuit.run(shift_right[i])
            expectation_left = ctx.quantum_circuit.run(shift_left[i])

            gradient = torch.tensor([expectation_right]) - torch.tensor([expectation_left])
            gradients.append(gradient)
        gradients = np.array([gradients]).T
        return torch.tensor([gradients]).float() * grad_output.float(), None, None


class Hybrid(nn.Module):
    """ Hybrid quantum - classical layer definition """

    def __init__(self, backend, shots, shift):
        super(Hybrid, self).__init__()
        self.quantum_circuit = QuantumCircuit(1, backend, shots)
        self.shift = shift

    def forward(self, input):
        return HybridFunction.apply(input, self.quantum_circuit, self.shift)
```

### 3.4 Data Loading and Preprocessing <a id='data-loading-preprocessing'></a>

##### Putting this all together:

We will create a simple hybrid neural network to classify images of two types of digits (0 or 1) from the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). We first load MNIST and filter for pictures containing 0's and 1's. These will serve as inputs for our neural network to classify.
#### Training data

```
# Concentrating on the first 100 samples
n_samples = 100

X_train = datasets.MNIST(root='./data', train=True, download=True,
                         transform=transforms.Compose([transforms.ToTensor()]))

# Leaving only labels 0 and 1
idx = np.append(np.where(X_train.targets == 0)[0][:n_samples],
                np.where(X_train.targets == 1)[0][:n_samples])

X_train.data = X_train.data[idx]
X_train.targets = X_train.targets[idx]

train_loader = torch.utils.data.DataLoader(X_train, batch_size=1, shuffle=True)

n_samples_show = 6

data_iter = iter(train_loader)
fig, axes = plt.subplots(nrows=1, ncols=n_samples_show, figsize=(10, 3))

while n_samples_show > 0:
    images, targets = data_iter.__next__()

    axes[n_samples_show - 1].imshow(images[0].numpy().squeeze(), cmap='gray')
    axes[n_samples_show - 1].set_xticks([])
    axes[n_samples_show - 1].set_yticks([])
    axes[n_samples_show - 1].set_title("Labeled: {}".format(targets.item()))

    n_samples_show -= 1
```

#### Testing data

```
n_samples = 50

X_test = datasets.MNIST(root='./data', train=False, download=True,
                        transform=transforms.Compose([transforms.ToTensor()]))

idx = np.append(np.where(X_test.targets == 0)[0][:n_samples],
                np.where(X_test.targets == 1)[0][:n_samples])

X_test.data = X_test.data[idx]
X_test.targets = X_test.targets[idx]

test_loader = torch.utils.data.DataLoader(X_test, batch_size=1, shuffle=True)
```

So far, we have loaded the data and coded a class that creates our quantum circuit which contains 1 trainable parameter. This quantum parameter will be inserted into a classical neural network along with the other classical parameters to form the hybrid neural network. We also created backward and forward pass functions that allow us to do backpropagation and optimise our neural network. Lastly, we need to specify our neural network architecture such that we can begin to train our parameters using optimisation techniques provided by PyTorch.
### 3.5 Creating the Hybrid Neural Network <a id='hybrid-nn'></a>

We can use a neat PyTorch pipeline to create a neural network architecture. The network will need to be compatible in terms of its dimensionality when we insert the quantum layer (i.e. our quantum circuit). Since our quantum circuit in this example contains 1 parameter, we must ensure the network condenses neurons down to size 1. We create a typical Convolutional Neural Network with two fully-connected layers at the end. The value of the last neuron of the fully-connected layer is fed as the parameter $\theta$ into our quantum circuit. The circuit measurement then serves as the final prediction for 0 or 1 as provided by a $\sigma_z$ measurement.

```
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=5)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=5)
        self.dropout = nn.Dropout2d()
        self.fc1 = nn.Linear(256, 64)
        self.fc2 = nn.Linear(64, 1)
        self.hybrid = Hybrid(qiskit.Aer.get_backend('qasm_simulator'), 100, np.pi / 2)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.max_pool2d(x, 2)
        x = self.dropout(x)
        x = x.view(-1, 256)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        x = self.hybrid(x)
        return torch.cat((x, 1 - x), -1)
```

### 3.6 Training the Network <a id='training'></a>

We now have all the ingredients to train our hybrid network! We can specify any [PyTorch optimiser](https://pytorch.org/docs/stable/optim.html), [learning rate](https://en.wikipedia.org/wiki/Learning_rate) and [cost/loss function](https://en.wikipedia.org/wiki/Loss_function) in order to train over multiple epochs. In this instance, we use the [Adam optimiser](https://arxiv.org/abs/1412.6980), a learning rate of 0.001 and the [negative log-likelihood loss function](https://pytorch.org/docs/stable/_modules/torch/nn/modules/loss.html).
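As a quick refresher on the loss function (a toy sketch with made-up probabilities, not the notebook's code): the negative log-likelihood of the true class $c$ under predicted class probabilities $p$ is $-\log p_c$, which is small when the model is confident and correct, and large when it is confident and wrong. Note that PyTorch's `nn.NLLLoss` expects *log*-probabilities as its input rather than raw probabilities.

```python
import math

def nll(probs, target):
    # Negative log-likelihood of the true class under the predicted
    # probability distribution
    return -math.log(probs[target])

probs = [0.9, 0.1]           # toy prediction: 90% confidence in class 0
loss_correct = nll(probs, 0)  # small loss: confident and right
loss_wrong = nll(probs, 1)    # large loss: confident and wrong
```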
```
model = Net()
optimizer = optim.Adam(model.parameters(), lr=0.001)
loss_func = nn.NLLLoss()

epochs = 20
loss_list = []

model.train()
for epoch in range(epochs):
    total_loss = []
    for batch_idx, (data, target) in enumerate(train_loader):
        optimizer.zero_grad()
        # Forward pass
        output = model(data)
        # Calculating loss
        loss = loss_func(output, target)
        # Backward pass
        loss.backward()
        # Optimize the weights
        optimizer.step()

        total_loss.append(loss.item())
    loss_list.append(sum(total_loss) / len(total_loss))
    print('Training [{:.0f}%]\tLoss: {:.4f}'.format(
        100. * (epoch + 1) / epochs, loss_list[-1]))
```

Plot the training graph

```
plt.plot(loss_list)
plt.title('Hybrid NN Training Convergence')
plt.xlabel('Training Iterations')
plt.ylabel('Neg Log Likelihood Loss')
```

### 3.7 Testing the Network <a id='testing'></a>

```
model.eval()
with torch.no_grad():
    correct = 0
    for batch_idx, (data, target) in enumerate(test_loader):
        output = model(data)

        pred = output.argmax(dim=1, keepdim=True)
        correct += pred.eq(target.view_as(pred)).sum().item()

        loss = loss_func(output, target)
        total_loss.append(loss.item())

    print('Performance on test data:\n\tLoss: {:.4f}\n\tAccuracy: {:.1f}%'.format(
        sum(total_loss) / len(total_loss),
        correct / len(test_loader) * 100)
    )

n_samples_show = 6
count = 0
fig, axes = plt.subplots(nrows=1, ncols=n_samples_show, figsize=(10, 3))

model.eval()
with torch.no_grad():
    for batch_idx, (data, target) in enumerate(test_loader):
        if count == n_samples_show:
            break
        output = model(data)

        pred = output.argmax(dim=1, keepdim=True)

        axes[count].imshow(data[0].numpy().squeeze(), cmap='gray')
        axes[count].set_xticks([])
        axes[count].set_yticks([])
        axes[count].set_title('Predicted {}'.format(pred.item()))

        count += 1
```

## 4. What Now? <a id='what-now'></a>

#### While it is totally possible to create hybrid neural networks, does this actually have any benefit?

The classical layers of this network actually train perfectly well (better, in fact) without the quantum layer.
Furthermore, you may have noticed that the quantum layer we trained here **generates no entanglement**, and will, therefore, continue to be classically simulatable as we scale up this particular architecture. This means that if you hope to achieve a quantum advantage using hybrid neural networks, you'll need to start by extending this code to include a more sophisticated quantum layer.

The point of this exercise was to get you thinking about integrating techniques from ML and quantum computing in order to investigate if there is indeed some element of interest - and thanks to PyTorch and Qiskit, this becomes a little bit easier.

```
import qiskit
qiskit.__qiskit_version__
```
```
!pip install transformers

from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig

tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn')

sequence = ("In May, Churchill was still generally unpopular with many Conservatives and probably most of the Labour Party. Chamberlain "
            "remained Conservative Party leader until October when ill health forced his resignation. By that time, Churchill had won the "
            "doubters over and his succession as party leader was a formality."
            " "
            "He began his premiership by forming a five-man war cabinet which included Chamberlain as Lord President of the Council, "
            "Labour leader Clement Attlee as Lord Privy Seal (later as Deputy Prime Minister), Halifax as Foreign Secretary and Labour's "
            "Arthur Greenwood as a minister without portfolio. In practice, these five were augmented by the service chiefs and ministers "
            "who attended the majority of meetings. The cabinet changed in size and membership as the war progressed, one of the key "
            "appointments being the leading trades unionist Ernest Bevin as Minister of Labour and National Service. In response to "
            "previous criticisms that there had been no clear single minister in charge of the prosecution of the war, Churchill created "
            "and took the additional position of Minister of Defence, making him the most powerful wartime Prime Minister in British "
            "history. He drafted outside experts into government to fulfil vital functions, especially on the Home Front. These included "
            "personal friends like Lord Beaverbrook and Frederick Lindemann, who became the government's scientific advisor."
            " "
            "At the end of May, with the British Expeditionary Force in retreat to Dunkirk and the Fall of France seemingly imminent, "
            "Halifax proposed that the government should explore the possibility of a negotiated peace settlement using the still-neutral "
            "Mussolini as an intermediary. There were several high-level meetings from 26 to 28 May, including two with the French "
            "premier Paul Reynaud. Churchill's resolve was to fight on, even if France capitulated, but his position remained precarious "
            "until Chamberlain resolved to support him. Churchill had the full support of the two Labour members but knew he could not "
            "survive as Prime Minister if both Chamberlain and Halifax were against him. In the end, by gaining the support of his outer "
            "cabinet, Churchill outmanoeuvred Halifax and won Chamberlain over. Churchill believed that the only option was to fight on "
            "and his use of rhetoric hardened public opinion against a peaceful resolution and prepared the British people for a long war "
            "– Jenkins says Churchill's speeches were 'an inspiration for the nation, and a catharsis for Churchill himself'."
            " "
            "His first speech as Prime Minister, delivered to the Commons on 13 May was the 'blood, toil, tears and sweat' speech. It was "
            "little more than a short statement but, Jenkins says, 'it included phrases which have reverberated down the decades'.")

inputs = tokenizer([sequence], max_length=1024, return_tensors='pt')
summary_ids = model.generate(inputs['input_ids'])
summary = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False)
           for g in summary_ids]
summary

from transformers import pipeline

# `sequence` is the same Churchill passage defined above
summarizer = pipeline("summarization")
summarized = summarizer(sequence, min_length=75, max_length=1024)
summarized
```
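Under the hood, `model.generate` runs a token-by-token decoding loop over the model's next-token distributions. The sketch below is a toy illustration of the greedy variant with an invented transition table (it has nothing to do with BART's real weights): repeatedly pick the most probable next token until an end-of-sequence token appears or a length limit is hit.

```python
# Invented toy next-token distributions (illustration only)
NEXT = {
    "<s>": {"churchill": 0.6, "the": 0.4},
    "churchill": {"became": 0.7, "was": 0.3},
    "became": {"pm": 0.9, "</s>": 0.1},
    "pm": {"</s>": 1.0},
}

def greedy_decode(start="<s>", max_len=10):
    tokens = [start]
    while tokens[-1] != "</s>" and len(tokens) < max_len:
        dist = NEXT[tokens[-1]]
        tokens.append(max(dist, key=dist.get))  # pick the most probable token
    return tokens

summary_tokens = greedy_decode()
```

In practice, summarization models like BART default to beam search, which keeps several candidate sequences alive at each step instead of one; the greedy loop above is the simplest special case.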