Dataset columns (string lengths):

markdown: 0–1.02M
code: 0–832k
output: 0–1.02M
license: 3–36
path: 6–265
repo_name: 6–127
Exercise 1. Get a better fit to the data (create a better model and fit it); try using different optimizers in `scipy.optimize`.
import numpy as np

arr = np.arange(100)
arr[10:50]
arr[slice(10, 50)]  # equivalent to arr[10:50]
_____no_output_____
BSD-3-Clause
notebooks/MarchApril2016_TutorialSession/Notebook - March 18 - Part 1.ipynb
ESO-python/ESOPythonTutorials
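A minimal sketch of trying several `scipy.optimize` methods, in the spirit of the exercise above. The quadratic model, data, and starting point below are illustrative assumptions, not the notebook's actual data:

```python
import numpy as np
from scipy.optimize import minimize

# Toy data from a known quadratic plus a little noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x**2 - 1.0 * x + 0.5 + 0.01 * rng.standard_normal(50)

def loss(params):
    # Sum-of-squares misfit of the quadratic model a*x**2 + b*x + c
    a, b, c = params
    return np.sum((a * x**2 + b * x + c - y) ** 2)

# Compare a few optimizers; each should recover roughly (2, -1, 0.5)
for method in ["Nelder-Mead", "Powell", "BFGS"]:
    res = minimize(loss, x0=[0.0, 0.0, 0.0], method=method)
    print(method, res.x.round(2))
```

Gradient-based methods like BFGS converge quickly on a smooth least-squares objective; derivative-free methods like Nelder-Mead are slower but need no gradients.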
Cloning the pyprobml repo
!git clone https://github.com/probml/pyprobml
%cd pyprobml/scripts
_____no_output_____
MIT
notebooks/figures/chapter16_figures.ipynb
kzymgch/pyprobml
Installing required software (this may take a few minutes)
!apt-get install octave -qq > /dev/null
!apt-get install liboctave-dev -qq > /dev/null

%%capture
%load_ext autoreload
%autoreload 2

DISCLAIMER = 'WARNING : Editing in VM - changes lost after reboot!!'

from google.colab import files

def interactive_script(script, i=True):
    if i:
        s = open(script).read()
        if not s.split('\n', 1)[0] == "## " + DISCLAIMER:
            open(script, 'w').write(
                f'## {DISCLAIMER}\n' + '#' * (len(DISCLAIMER) + 3) + '\n\n' + s)
        files.view(script)
        %run $script
    else:
        %run $script

def show_image(img_path):
    from google.colab.patches import cv2_imshow
    import cv2
    img = cv2.imread(img_path, cv2.IMREAD_UNCHANGED)
    img = cv2.resize(img, (600, 600))
    cv2_imshow(img)
_____no_output_____
MIT
notebooks/figures/chapter16_figures.ipynb
kzymgch/pyprobml
Figure 16.1: (a) Illustration of a $K$-nearest neighbors classifier in 2d for $K=5$. The nearest neighbors of test point $\mathbf{x}$ have labels $\{1, 1, 1, 0, 0\}$, so we predict $p(y=1|\mathbf{x}, \mathcal{D}) = 3/5$. (b) Illustration of the Voronoi tessellation induced by 1-NN. Adapted from Figure 4.13 of [DHS01]. Figure(s) generated by [knn_voronoi_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/knn_voronoi_plot.py)
interactive_script("knn_voronoi_plot.py")
_____no_output_____
MIT
notebooks/figures/chapter16_figures.ipynb
kzymgch/pyprobml
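The $K=5$ vote described in the caption can be reproduced on hand-made data. The points below are hypothetical, arranged so that the five nearest neighbors of the test point carry labels $\{1,1,1,0,0\}$; scikit-learn is assumed to be available:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Seven labelled training points; the first five are close to the test point
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [2, 0],
              [10, 10], [11, 11]], dtype=float)
y = np.array([1, 1, 1, 0, 0, 0, 0])

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
x_test = np.array([[0.5, 0.5]])   # its 5 nearest neighbors have labels 1,1,1,0,0
print(knn.predict_proba(x_test))  # columns ordered as knn.classes_ = [0, 1]
```

The predicted probability for class 1 is the vote fraction 3/5, matching the caption.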
Figure 16.2: Decision boundaries induced by a KNN classifier. (a) $K=1$. (b) $K=2$. (c) $K=5$. (d) Train and test error vs $K$. Figure(s) generated by [knn_classify_demo.py](https://github.com/probml/pyprobml/blob/master/scripts/knn_classify_demo.py)
interactive_script("knn_classify_demo.py")
_____no_output_____
MIT
notebooks/figures/chapter16_figures.ipynb
kzymgch/pyprobml
Figure 16.3: Illustration of the curse of dimensionality. (a) We embed a small cube of side $s$ inside a larger unit cube. (b) We plot the edge length of a cube needed to cover a given volume of the unit cube as a function of the number of dimensions. Adapted from Figure 2.6 of [HTF09]. Figure(s) generated by [curse_dimensionality.py](https://github.com/probml/pyprobml/blob/master/scripts/curse_dimensionality.py)
interactive_script("curse_dimensionality.py")
_____no_output_____
MIT
notebooks/figures/chapter16_figures.ipynb
kzymgch/pyprobml
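The relationship plotted in panel (b) follows from volumes alone: to cover a fraction $f$ of the unit cube's volume in $D$ dimensions, a sub-cube needs edge length $f^{1/D}$. A quick numeric check (the values of $f$ and $D$ below are illustrative, not taken from the script):

```python
# Edge length needed to cover a fraction f of the unit cube in D dimensions
f = 0.1  # cover 10% of the volume
for D in [1, 2, 10, 100]:
    print(D, round(f ** (1 / D), 3))
```

Even for a tiny target fraction, the required edge length approaches 1 as $D$ grows, which is the curse: "local" neighborhoods must span nearly the whole range of every input.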
Figure 16.4: Illustration of latent coincidence analysis (LCA) as a directed graphical model. The inputs $\mathbf{x}, \mathbf{x}' \in \mathbb{R}^D$ are mapped into Gaussian latent variables $\mathbf{z}, \mathbf{z}' \in \mathbb{R}^L$ via a linear mapping $\mathbf{W}$. If the two latent points coincide (within length scale $\kappa$) then we set the similarity label to $y=1$, otherwise we set it to $y=0$. From Figure 1 of [ML12]. Used with kind permission of Lawrence Saul.
show_image("/content/pyprobml/notebooks/figures/images/LCA-PGM.png")
_____no_output_____
MIT
notebooks/figures/chapter16_figures.ipynb
kzymgch/pyprobml
Figure 16.5: Networks for deep metric learning. (a) Siamese network. (b) Triplet network. From Figure 5 of [MH19]. Used with kind permission of Mahmut Kaya.
show_image("/content/pyprobml/notebooks/figures/images/siameseNet.png")
show_image("/content/pyprobml/notebooks/figures/images/tripletNet.png")
_____no_output_____
MIT
notebooks/figures/chapter16_figures.ipynb
kzymgch/pyprobml
Figure 16.6: Speeding up triplet loss minimization. (a) Illustration of hard vs easy negatives. Here $a$ is the anchor point, $p$ is a positive point, and $n_i$ are negative points. Adapted from Figure 4 of [MH19]. (b) Standard triplet loss would take $8 \times 3 \times 4 = 96$ calculations, whereas using a proxy loss (with one proxy per class) takes $8 \times 2 = 16$ calculations. From Figure 1 of [Tha+19]. Used with kind permission of Gustavo Carneiro.
show_image("/content/pyprobml/notebooks/figures/images/hard-negative-mining.png")
show_image("/content/pyprobml/notebooks/figures/images/tripletBound.png")
_____no_output_____
MIT
notebooks/figures/chapter16_figures.ipynb
kzymgch/pyprobml
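The loss underlying these figures is the triplet loss, $\mathcal{L} = \max(0, d(a,p) - d(a,n) + m)$ for margin $m$. A minimal sketch in PyTorch; the embeddings and margin below are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Hinge on the gap between anchor-positive and anchor-negative distances
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative)
    return torch.clamp(d_ap - d_an + margin, min=0).mean()

a = torch.zeros(4, 8)        # batch of 4 anchor embeddings
p = torch.zeros(4, 8)        # positives coincide with the anchors
n = 3.0 * torch.ones(4, 8)   # negatives are far away
print(triplet_loss(a, p, n))
```

Here the negatives already sit further than the margin, so the loss is zero; the "hard negative" problem in panel (a) is precisely that most randomly sampled triplets end up in this uninformative regime.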
Figure 16.7: Adding spherical embedding constraint to a deep metric learning method. Used with kind permission of Dingyi Zhang.
show_image("/content/pyprobml/notebooks/figures/images/SEC.png")
_____no_output_____
MIT
notebooks/figures/chapter16_figures.ipynb
kzymgch/pyprobml
Figure 16.8: A comparison of some popular normalized kernels. Figure(s) generated by [smoothingKernelPlot.m](https://github.com/probml/pmtk3/blob/master/demos/smoothingKernelPlot.m)
!octave -W smoothingKernelPlot.m >> _
_____no_output_____
MIT
notebooks/figures/chapter16_figures.ipynb
kzymgch/pyprobml
Figure 16.9: A nonparametric (Parzen) density estimator in 1d estimated from 6 data points, denoted by x. Top row: uniform kernel. Bottom row: Gaussian kernel. Left column: bandwidth parameter $h=1$. Right column: bandwidth parameter $h=2$. Adapted from http://en.wikipedia.org/wiki/Kernel_density_estimation. Figure(s) generated by [parzen_window_demo2.py](https://github.com/probml/pyprobml/blob/master/scripts/parzen_window_demo2.py)
interactive_script("parzen_window_demo2.py")
_____no_output_____
MIT
notebooks/figures/chapter16_figures.ipynb
kzymgch/pyprobml
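A hand-rolled version of the Parzen estimator in the caption: the density estimate is an average of Gaussian kernels of bandwidth $h$ centred on each data point. The six data points are the ones from the Wikipedia example the figure is adapted from:

```python
import numpy as np

data = np.array([-2.1, -1.3, -0.4, 1.9, 5.1, 6.2])  # the 6 data points

def kde(x, data, h):
    # Average of Gaussian kernels of width h centred on each data point
    z = (x[:, None] - data[None, :]) / h
    return (np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)).mean(axis=1) / h

xs = np.linspace(-8, 12, 400)
dx = xs[1] - xs[0]
for h in (1.0, 2.0):  # the two bandwidths compared in the figure
    print(h, (kde(xs, data, h) * dx).sum())  # each estimate integrates to ~1
```

Larger $h$ smooths the bumps together, as in the right column of the figure, while the estimate remains a proper density for any bandwidth.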
Figure 16.10: An example of kernel regression in 1d using a Gaussian kernel. Figure(s) generated by [kernelRegressionDemo.m](https://github.com/probml/pmtk3/blob/master/demos/kernelRegressionDemo.m)
!octave -W kernelRegressionDemo.m >> _
_____no_output_____
MIT
notebooks/figures/chapter16_figures.ipynb
kzymgch/pyprobml
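A minimal Nadaraya-Watson kernel-regression sketch with a Gaussian kernel, analogous to the Octave demo; the sine data and bandwidth below are illustrative assumptions, not the demo's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

def nw_predict(xq, x, y, h=0.3):
    # Gaussian kernel weights between each query point and each training point
    w = np.exp(-0.5 * ((xq[:, None] - x[None, :]) / h) ** 2)
    # Prediction is the kernel-weighted average of the training targets
    return (w * y).sum(axis=1) / w.sum(axis=1)

xq = np.linspace(0, 10, 200)
yhat = nw_predict(xq, x, y)
print(yhat.shape)
```

The bandwidth `h` plays the same smoothing role as in kernel density estimation: small values track the noise, large values oversmooth the curve.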
Import the required libraries
import os
import datetime

import numpy as np
import pandas as pd
from sqlalchemy import create_engine
_____no_output_____
MIT
project1_sql/scripts/requests.ipynb
mikhailbadin/ds_couse_homeworks
Set up the database connection
engine = create_engine("postgresql://postgres:@{}:{}".format(
    os.environ["POSGRES_HOST"], os.environ["POSGRES_PORT"]))
_____no_output_____
MIT
project1_sql/scripts/requests.ipynb
mikhailbadin/ds_couse_homeworks
Query 2: count the total number of recipients
emailreceivers = pd.read_sql("select * from emailreceivers", engine)
req2_result = emailreceivers["personid"].drop_duplicates().count()
print(req2_result)
pd.DataFrame([req2_result], columns=["count"]).to_csv(
    os.environ["PANDAS_EXPORT_FOLDER"] + "req2.csv")
418
MIT
project1_sql/scripts/requests.ipynb
mikhailbadin/ds_couse_homeworks
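The `drop_duplicates().count()` pattern above is equivalent to `Series.nunique()`. A self-contained check on a toy frame (the data below is an assumption, not the real `emailreceivers` table):

```python
import pandas as pd

df = pd.DataFrame({"personid": [1, 2, 2, 3, 3, 3]})
n_unique = df["personid"].drop_duplicates().count()
print(n_unique)
# nunique() counts distinct non-null values directly
assert n_unique == df["personid"].nunique()
```

`nunique()` is the more idiomatic spelling and, unlike `count()` on the deduplicated series, states the intent directly.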
Query 3: count the number of emails sent in 2012
emails = pd.read_sql("select * from emails", engine)
req3_result = emails[
    (emails["metadatadatesent"] >= np.datetime64("2012-01-01"))
    & (emails["metadatadatesent"] < np.datetime64("2013-01-01"))
]["id"].count()
print(req3_result)
pd.DataFrame([req3_result], columns=["emails"]).to_csv(
    os.environ["PANDAS_EXPORT_FOLDER"] + "req3.csv")
1500
MIT
project1_sql/scripts/requests.ipynb
mikhailbadin/ds_couse_homeworks
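The half-open interval used above (`>= 2012-01-01` and `< 2013-01-01`) correctly includes every timestamp in 2012 and nothing else. A small check on toy data (the frame below is hypothetical, not the real `emails` table):

```python
import numpy as np
import pandas as pd

emails = pd.DataFrame({
    "id": [1, 2, 3],
    "metadatadatesent": pd.to_datetime(["2011-12-31", "2012-06-15", "2013-01-01"]),
})
# Half-open interval: the 2013-01-01 boundary is excluded
mask = ((emails["metadatadatesent"] >= np.datetime64("2012-01-01"))
        & (emails["metadatadatesent"] < np.datetime64("2013-01-01")))
print(emails[mask]["id"].count())  # only the 2012 email matches
```

Using `<` on the next year's first day avoids the off-by-one risk of `<= 2012-12-31`, which would drop any email with a time-of-day component on December 31.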
Query 5: list the emails in the following format: sender, recipient, subject, sorted by subject
emails = pd.read_sql("select * from emails", engine)
req5_df = emails[
    ["metadatafrom", "metadatato", "metadatasubject"]
].sort_values(by=["metadatasubject"])
print(req5_df.head())
req5_df.head().to_csv(os.environ["PANDAS_EXPORT_FOLDER"] + "req5.csv")
                  metadatafrom         metadatato       metadatasubject
35711          Mills, Cheryl D                  H  - MORE WHEN WE SPEAK
3951           Mills, Cheryl D                  H  - MORE WHEN WE SPEAK
1528   hrod17@clintonemail.com  millscd@state.gov  - MORE WHEN WE SPEAK
17504  hrod17@clintonemail.com  millscd@state.gov  - MORE WHEN WE SPEAK
25355  hrod17@clintonemail.com  millscd@state.gov  - MORE WHEN WE SPEAK
MIT
project1_sql/scripts/requests.ipynb
mikhailbadin/ds_couse_homeworks
Query 6: compute the average message length
emails = pd.read_sql("select * from emails", engine)
req6_result = emails["extractedbodytext"].str.len().mean()
print(req6_result)
pd.DataFrame([req6_result], columns=["avg"]).to_csv(
    os.environ["PANDAS_EXPORT_FOLDER"] + "req6.csv")
533.1624147137348
MIT
project1_sql/scripts/requests.ipynb
mikhailbadin/ds_couse_homeworks
Query 7: count the emails whose body contains the substring UNCLASSIFIED
emails = pd.read_sql("select * from emails", engine)
req7_result = emails[
    emails["extractedbodytext"].str.contains("UNCLASSIFIED") == True
]["extractedbodytext"].count()
print(req7_result)
pd.DataFrame([req7_result], columns=["text"]).to_csv(
    os.environ["PANDAS_EXPORT_FOLDER"] + "req7.csv")
78
MIT
project1_sql/scripts/requests.ipynb
mikhailbadin/ds_couse_homeworks
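The `== True` comparison above is doing real work: `str.contains` propagates NaN for missing message bodies, and comparing to `True` drops those rows. Passing `na=False` expresses the same thing directly. A small check on an illustrative series:

```python
import pandas as pd

s = pd.Series(["UNCLASSIFIED memo", "hello", None])
# str.contains returns NaN for the None entry; both spellings drop it
by_comparison = (s.str.contains("UNCLASSIFIED") == True).sum()
by_na_flag = s.str.contains("UNCLASSIFIED", na=False).sum()
print(by_comparison, by_na_flag)
```

`na=False` is generally preferred because it also yields a clean boolean mask that can be used for indexing without the extra comparison.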
20 Jan
import pandas as pd
import numpy as np

tips = pd.read_csv("/Users/amitkumarsahu/Desktop/pandas/tips.csv")
tips
tips.pivot_table(index=["day", "smoker"])
tips["tip_pct"] = tips["tip"] * 100 / tips["total_bill"]
tips[:6]
tips.pivot_table(["tip_pct", "size"], index=["time", "day"], columns="smoker")
_____no_output_____
CC-BY-3.0
Pivot_table and crosstab.ipynb
priyankakushi/machine-learning
21 Jan
tips.pivot_table(["tip_pct", "size"], index=["time", "day"],
                 columns="smoker", margins=True)
tips.pivot_table("tip", index=["time", "smoker"], columns="day",
                 aggfunc=len, margins=True)
tips.pivot_table("tip_pct", index=["time", "size", "smoker"], columns="day",
                 aggfunc="mean", fill_value=0)

from io import StringIO

data = """\
Sample Nationality Handedness
1 USA Right-handed
2 Japan Left-handed
3 USA Right-handed
4 Japan Right-handed
5 Japan Left-handed
6 Japan Right-handed
7 USA Right-handed
8 USA Left-handed
9 Japan Right-handed
10 USA Right-handed"""
data = pd.read_table(StringIO(data), sep=r"\s+")
data
pd.crosstab(data["Nationality"], data["Handedness"], margins=True)
pd.crosstab([tips.time, tips.day], tips.smoker, margins=True)
_____no_output_____
CC-BY-3.0
Pivot_table and crosstab.ipynb
priyankakushi/machine-learning
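A self-contained rerun of the `crosstab` idea above on a tiny frame, so the shape of the result with `margins=True` is easy to see (the values below are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "Nationality": ["USA", "Japan", "USA", "Japan", "Japan"],
    "Handedness": ["Right", "Left", "Right", "Right", "Left"],
})
# Counts of each Nationality/Handedness pair, plus "All" margin totals
ct = pd.crosstab(df["Nationality"], df["Handedness"], margins=True)
print(ct)
```

`crosstab` is a frequency-table shortcut: it is equivalent to a `pivot_table` with `aggfunc=len` and `fill_value=0`, as the tips example above demonstrates.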
Overview

Another data type that we haven't discussed yet is the list. Lists store values for us, and we can access these values using indices — that is, by knowing where an element is located in the list.

Questions
- How can I store multiple values?

Objectives
- Explain why programs need collections of values.
- Write programs that create flat lists, index them, slice them, and modify them through assignment and method calls.

Code

A list stores many values in a single structure.
pressures = [0.273, 0.275, 0.277, 0.276]
print(pressures[0])
print("pressures", pressures)
print("length:", len(pressures))
pressures [0.273, 0.275, 0.277, 0.276] length: 4
MIT
lessonNotebooks/1.11 Lists.ipynb
crowegian/SummerFellowsDBMIWorkshopMaterials
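The objectives above also mention slicing, which the cells below don't demonstrate; a quick sketch using the same `pressures` list:

```python
pressures = [0.273, 0.275, 0.277, 0.276]
print(pressures[1:3])   # slice: items at indices 1 and 2 (end index excluded)
print(pressures[:2])    # from the start up to, but not including, index 2
print(pressures[-1])    # negative indices count from the end
```

A slice always returns a new list, so modifying the slice leaves the original list unchanged.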
Use an item’s index to fetch it from a list.
print("Zeroth item:", pressures[0])
print("fourth item:", pressures[3])
Zeroth item: 0.273 fourth item: 0.276
MIT
lessonNotebooks/1.11 Lists.ipynb
crowegian/SummerFellowsDBMIWorkshopMaterials
Lists’ values can be replaced by assigning to them.
print(pressures)
pressures[0] = 0.265
print("current pressures", pressures)
current pressures [0.265, 0.275, 0.277, 0.276]
MIT
lessonNotebooks/1.11 Lists.ipynb
crowegian/SummerFellowsDBMIWorkshopMaterials
Appending items to a list lengthens it.
primes = [2, 3, 5]
print("current primes", primes)
primes.append(7)
primes.append(9)  # note: 9 is not actually prime; we remove it later
print("primes has become", primes)
# Careful: append modifies the list in place and returns None,
# so `primes = primes.append(13)` would overwrite primes with None.
teen_primes = [11, 13, 17, 19]
middle_aged_primes = [37, 41, 43, 47]
print("primes is currently", primes)
primes.extend(teen_primes)
print("primes is now", primes)
primes is currently [2, 3, 5, 7, 9] primes is now [2, 3, 5, 7, 9, 11, 13, 17, 19]
MIT
lessonNotebooks/1.11 Lists.ipynb
crowegian/SummerFellowsDBMIWorkshopMaterials
Use del to remove items from a list entirely.
primes = [2, 3, 5, 7, 9]
print("current primes", primes)
del primes[2]
print("primes now", primes)
current primes [2, 3, 5, 7, 9] primes now [2, 3, 7, 9]
MIT
lessonNotebooks/1.11 Lists.ipynb
crowegian/SummerFellowsDBMIWorkshopMaterials
Lists may contain values of different types.
goals = [1, "create lists.", 2, "extract items from lists", 3, "modify lists"]
goals
_____no_output_____
MIT
lessonNotebooks/1.11 Lists.ipynb
crowegian/SummerFellowsDBMIWorkshopMaterials
Character strings can be indexed like lists.
element = "carbon"
print(element[0])
c
MIT
lessonNotebooks/1.11 Lists.ipynb
crowegian/SummerFellowsDBMIWorkshopMaterials
Character strings are immutable.
print(element)
element[0] = "C"  # raises TypeError: strings are immutable
print(element)    # never reached
carbon
MIT
lessonNotebooks/1.11 Lists.ipynb
crowegian/SummerFellowsDBMIWorkshopMaterials
Indexing beyond the end of the collection is an error.
print("99th element of element is:", element[99])  # raises IndexError

primes = [2, 3, 7, 9]
primes.remove(7)  # remove the first occurrence of a value
primes
del primes[0]     # remove the item at an index
primes
_____no_output_____
MIT
lessonNotebooks/1.11 Lists.ipynb
crowegian/SummerFellowsDBMIWorkshopMaterials
Inference and Validation

Now that you have a trained network, you can use it for making predictions. This is typically called **inference**, a term borrowed from statistics. However, neural networks have a tendency to perform *too well* on the training data and aren't able to generalize to data that hasn't been seen before. This is called **overfitting** and it impairs inference performance. To test for overfitting while training, we measure the performance on data not in the training set, called the **validation** set. We avoid overfitting through regularization such as dropout while monitoring the validation performance during training. In this notebook, I'll show you how to do this in PyTorch.

As usual, let's start by loading the dataset through torchvision. You'll learn more about torchvision and loading data in a later part. This time we'll be taking advantage of the test set, which you can get by setting `train=False` here:

```python
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
```

The test set contains images just like the training set. Typically you'll see 10-20% of the original dataset held out for testing and validation, with the rest being used for training.
import torch
from torchvision import datasets, transforms

# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,), (0.5,))])

# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)

# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
_____no_output_____
Apache-2.0
01_introduction/5_Inference_and_Validation.ipynb
thearonn/dtu_mlops
Here I'll create a model like normal, using the same one from my solution for part 4.
from torch import nn, optim
import torch.nn.functional as F

class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, 64)
        self.fc4 = nn.Linear(64, 10)

    def forward(self, x):
        # make sure input tensor is flattened
        x = x.view(x.shape[0], -1)

        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = F.log_softmax(self.fc4(x), dim=1)

        return x
_____no_output_____
Apache-2.0
01_introduction/5_Inference_and_Validation.ipynb
thearonn/dtu_mlops
The goal of validation is to measure the model's performance on data that isn't part of the training set. Performance here is up to the developer to define, though. Typically this is just accuracy, the percentage of classes the network predicted correctly. Other options are [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context)) and top-5 error rate. We'll focus on accuracy here. First I'll do a forward pass with one batch from the test set.
model = Classifier()

images, labels = next(iter(testloader))
# Get the class probabilities
ps = torch.exp(model(images))
# Make sure the shape is appropriate; we should get 10 class probabilities for 64 examples
print(ps.shape)
torch.Size([64, 10])
Apache-2.0
01_introduction/5_Inference_and_Validation.ipynb
thearonn/dtu_mlops
With the probabilities, we can get the most likely class using the `ps.topk` method. This returns the $k$ highest values. Since we just want the most likely class, we can use `ps.topk(1)`. This returns a tuple of the top-$k$ values and the top-$k$ indices. If the highest value is the fifth element, we'll get back 4 as the index.
top_p, top_class = ps.topk(1, dim=1)
# Look at the most likely classes for the first 10 examples
print(top_class[:10, :])
tensor([[1], [1], [1], [1], [1], [1], [1], [1], [1], [1]])
Apache-2.0
01_introduction/5_Inference_and_Validation.ipynb
thearonn/dtu_mlops
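A tiny self-contained check of the `topk` behaviour described above, on a hand-made probability row (the values are illustrative):

```python
import torch

ps = torch.tensor([[0.1, 0.05, 0.05, 0.1, 0.7]])
# topk(1) returns the top value and its index along the chosen dimension
top_p, top_class = ps.topk(1, dim=1)
print(top_class)  # the highest value is the fifth element, so index 4
```

For the special case of $k=1$, `ps.argmax(dim=1)` gives the same index without the values.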
Now we can check if the predicted classes match the labels. This is simple to do by equating `top_class` and `labels`, but we have to be careful of the shapes. Here `top_class` is a 2D tensor with shape `(64, 1)` while `labels` is 1D with shape `(64)`. To get the equality to work out the way we want, `top_class` and `labels` must have the same shape.

If we do

```python
equals = top_class == labels
```

`equals` will have shape `(64, 64)`; try it yourself. What it's doing is comparing the one element in each row of `top_class` with each element in `labels`, which returns 64 True/False boolean values for each row.
equals = top_class == labels.view(*top_class.shape)
# print(equals)
_____no_output_____
Apache-2.0
01_introduction/5_Inference_and_Validation.ipynb
thearonn/dtu_mlops
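The shape pitfall described above, made concrete on dummy tensors (64 is the batch size used in this notebook):

```python
import torch

top_class = torch.zeros(64, 1, dtype=torch.long)
labels = torch.zeros(64, dtype=torch.long)
# Without reshaping, broadcasting compares every pair of elements
print((top_class == labels).shape)
# Reshaping labels to (64, 1) gives the intended element-wise comparison
print((top_class == labels.view(*top_class.shape)).shape)
```

`labels.unsqueeze(1)` would produce the same `(64, 1)` shape as the `view` call.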
Now we need to calculate the percentage of correct predictions. `equals` has binary values, either 0 or 1. This means that if we just sum up all the values and divide by the number of values, we get the percentage of correct predictions. This is the same operation as taking the mean, so we can get the accuracy with a call to `torch.mean`. If only it were that simple. If you try `torch.mean(equals)`, you'll get an error

```
RuntimeError: mean is not implemented for type torch.ByteTensor
```

This happens because `equals` has type `torch.ByteTensor` but `torch.mean` isn't implemented for tensors with that type. So we'll need to convert `equals` to a float tensor. Note that when we take `torch.mean` it returns a scalar tensor; to get the actual value as a float we'll need to do `accuracy.item()`.
accuracy = torch.mean(equals.type(torch.FloatTensor))
print(f'Accuracy: {accuracy.item()*100}%')
Accuracy: 10.9375%
Apache-2.0
01_introduction/5_Inference_and_Validation.ipynb
thearonn/dtu_mlops
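The conversion-then-mean step described above, on a toy `equals` tensor (the values are illustrative):

```python
import torch

# Three of four predictions correct
equals = torch.tensor([[True], [False], [True], [True]])
# Cast to float so torch.mean is defined, then extract the scalar
accuracy = torch.mean(equals.type(torch.FloatTensor))
print(f'Accuracy: {accuracy.item()*100}%')
```

On recent PyTorch versions `equals.float().mean()` works directly, but the explicit cast shown in the notebook is safe everywhere.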
The network is untrained so it's making random guesses and we should see an accuracy around 10%. Now let's train our network and include our validation pass so we can measure how well the network is performing on the test set. Since we're not updating our parameters in the validation pass, we can speed up our code by turning off gradients using `torch.no_grad()`:

```python
# turn off gradients
with torch.no_grad():
    # validation pass here
    for images, labels in testloader:
        ...
```

>**Exercise:** Implement the validation loop below and print out the total accuracy after the loop. You can largely copy and paste the code from above, but I suggest typing it in because writing it out yourself is essential for building the skill. In general you'll always learn more by typing it rather than copy-pasting. You should be able to get an accuracy above 80%.
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)

epochs = 30
steps = 0

train_losses, test_losses = [], []
for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        optimizer.zero_grad()
        log_ps = model(images)
        loss = criterion(log_ps, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    else:
        # validation pass: no parameter updates, so turn off gradients
        with torch.no_grad():
            for images, labels in testloader:
                ps = torch.exp(model(images))
                top_p, top_class = ps.topk(1, dim=1)
                equals = top_class == labels.view(*top_class.shape)
                accuracy = torch.mean(equals.type(torch.FloatTensor))
                print(f'Accuracy: {accuracy.item()*100}%')
Accuracy: 82.8125% Accuracy: 85.9375% Accuracy: 75.0% Accuracy: 90.625% ... (one per-batch accuracy line is printed for every test batch in each of the 30 epochs, mostly between 75% and 95%)
Accuracy: 92.1875% Accuracy: 82.8125% Accuracy: 89.0625% Accuracy: 89.0625% Accuracy: 92.1875% Accuracy: 87.5% Accuracy: 85.9375% Accuracy: 82.8125% Accuracy: 84.375% Accuracy: 89.0625% Accuracy: 81.25% Accuracy: 89.0625% Accuracy: 82.8125% Accuracy: 85.9375% Accuracy: 84.375% Accuracy: 89.0625% Accuracy: 84.375% Accuracy: 84.375% Accuracy: 85.9375% Accuracy: 89.0625% Accuracy: 90.625% Accuracy: 79.6875% Accuracy: 87.5% Accuracy: 78.125% Accuracy: 84.375% Accuracy: 89.0625% Accuracy: 85.9375% Accuracy: 90.625% Accuracy: 92.1875% Accuracy: 84.375% Accuracy: 84.375% Accuracy: 81.25% Accuracy: 89.0625% Accuracy: 84.375% Accuracy: 87.5% Accuracy: 89.0625% Accuracy: 89.0625% Accuracy: 89.0625% Accuracy: 87.5% Accuracy: 85.9375% Accuracy: 92.1875% Accuracy: 92.1875% Accuracy: 87.5% Accuracy: 92.1875% Accuracy: 85.9375% Accuracy: 85.9375% Accuracy: 92.1875% Accuracy: 89.0625% Accuracy: 90.625% Accuracy: 89.0625% Accuracy: 82.8125% Accuracy: 96.875% Accuracy: 82.8125% Accuracy: 89.0625% Accuracy: 85.9375% Accuracy: 92.1875% Accuracy: 93.75% Accuracy: 95.3125% Accuracy: 85.9375% Accuracy: 92.1875% Accuracy: 96.875% Accuracy: 85.9375% Accuracy: 93.75% Accuracy: 84.375% Accuracy: 100.0% Accuracy: 95.3125% Accuracy: 92.1875% Accuracy: 85.9375% Accuracy: 85.9375% Accuracy: 87.5% Accuracy: 84.375% Accuracy: 98.4375% Accuracy: 84.375% Accuracy: 92.1875% Accuracy: 85.9375% Accuracy: 79.6875% Accuracy: 89.0625% Accuracy: 89.0625% Accuracy: 92.1875% Accuracy: 85.9375% Accuracy: 92.1875% Accuracy: 85.9375% Accuracy: 82.8125% Accuracy: 90.625% Accuracy: 85.9375% Accuracy: 82.8125% Accuracy: 90.625% Accuracy: 85.9375% Accuracy: 89.0625% Accuracy: 84.375% Accuracy: 84.375% Accuracy: 84.375% Accuracy: 89.0625% Accuracy: 82.8125% Accuracy: 90.625% Accuracy: 79.6875% Accuracy: 93.75% Accuracy: 89.0625% Accuracy: 81.25% Accuracy: 93.75% Accuracy: 87.5% Accuracy: 90.625% Accuracy: 84.375% Accuracy: 90.625% Accuracy: 82.8125% Accuracy: 82.8125% Accuracy: 87.5% Accuracy: 84.375% Accuracy: 
81.25% Accuracy: 89.0625% Accuracy: 89.0625% Accuracy: 95.3125% Accuracy: 89.0625% Accuracy: 90.625% Accuracy: 84.375% Accuracy: 92.1875% Accuracy: 90.625% Accuracy: 84.375% Accuracy: 87.5% Accuracy: 92.1875% Accuracy: 85.9375% Accuracy: 75.0% Accuracy: 84.375% Accuracy: 92.1875% Accuracy: 92.1875% Accuracy: 93.75% Accuracy: 87.5% Accuracy: 84.375% Accuracy: 85.9375% Accuracy: 90.625% Accuracy: 90.625% Accuracy: 85.9375% Accuracy: 87.5% Accuracy: 92.1875% Accuracy: 84.375% Accuracy: 87.5% Accuracy: 85.9375% Accuracy: 92.1875% Accuracy: 93.75% Accuracy: 81.25% Accuracy: 85.9375% Accuracy: 81.25% Accuracy: 84.375% Accuracy: 92.1875% Accuracy: 90.625% Accuracy: 87.5% Accuracy: 84.375% Accuracy: 92.1875% Accuracy: 82.8125% Accuracy: 89.0625% Accuracy: 89.0625% Accuracy: 81.25% Accuracy: 81.25% Accuracy: 85.9375% Accuracy: 81.25% Accuracy: 84.375% Accuracy: 84.375% Accuracy: 82.8125% Accuracy: 81.25% Accuracy: 90.625% Accuracy: 92.1875% Accuracy: 82.8125% Accuracy: 84.375% Accuracy: 82.8125% Accuracy: 90.625% Accuracy: 90.625% Accuracy: 87.5% Accuracy: 81.25% Accuracy: 90.625% Accuracy: 85.9375% Accuracy: 85.9375% Accuracy: 90.625% Accuracy: 93.75% Accuracy: 95.3125% Accuracy: 84.375% Accuracy: 92.1875% Accuracy: 85.9375% Accuracy: 89.0625% Accuracy: 85.9375% Accuracy: 89.0625% Accuracy: 89.0625% Accuracy: 85.9375% Accuracy: 75.0% Accuracy: 84.375% Accuracy: 89.0625% Accuracy: 84.375% Accuracy: 87.5% Accuracy: 90.625% Accuracy: 87.5% Accuracy: 78.125% Accuracy: 87.5% Accuracy: 92.1875% Accuracy: 84.375% Accuracy: 75.0% Accuracy: 92.1875% Accuracy: 90.625% Accuracy: 87.5% Accuracy: 90.625% Accuracy: 93.75% Accuracy: 87.5% Accuracy: 89.0625% Accuracy: 92.1875% Accuracy: 89.0625% Accuracy: 90.625% Accuracy: 78.125% Accuracy: 87.5% Accuracy: 89.0625% Accuracy: 78.125% Accuracy: 82.8125% Accuracy: 85.9375% Accuracy: 90.625% Accuracy: 89.0625% Accuracy: 90.625% Accuracy: 90.625% Accuracy: 90.625% Accuracy: 79.6875% Accuracy: 85.9375% Accuracy: 87.5% Accuracy: 90.625% 
Accuracy: 89.0625% Accuracy: 93.75% Accuracy: 93.75%
Apache-2.0
01_introduction/5_Inference_and_Validation.ipynb
thearonn/dtu_mlops
## Overfitting

If we look at the training and validation losses as we train the network, we can see a phenomenon known as overfitting. The network learns the training set better and better, resulting in lower training losses. However, it starts having problems generalizing to data outside the training set, leading to an increasing validation loss. The ultimate goal of any deep learning model is to make predictions on new data, so we should strive to get the lowest validation loss possible. One option is to use the version of the model with the lowest validation loss, here the one around 8-10 training epochs. This strategy is called *early stopping*. In practice, you'd save the model frequently as you're training, then later choose the model with the lowest validation loss.

The most common method to reduce overfitting (outside of early stopping) is *dropout*, where we randomly drop input units. This forces the network to share information between weights, increasing its ability to generalize to new data. Adding dropout in PyTorch is straightforward using the [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout) module.

```python
class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, 64)
        self.fc4 = nn.Linear(64, 10)

        # Dropout module with 0.2 drop probability
        self.dropout = nn.Dropout(p=0.2)

    def forward(self, x):
        # make sure input tensor is flattened
        x = x.view(x.shape[0], -1)

        # Now with dropout
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = self.dropout(F.relu(self.fc3(x)))

        # output, so no dropout here
        x = F.log_softmax(self.fc4(x), dim=1)

        return x
```

During training we want to use dropout to prevent overfitting, but during inference we want to use the entire network. So, we need to turn off dropout during validation, testing, and whenever we're using the network to make predictions. To do this, you use `model.eval()`. This sets the model to evaluation mode where the dropout probability is 0. You can turn dropout back on by setting the model to train mode with `model.train()`. In general, the pattern for the validation loop will look like this, where you turn off gradients, set the model to evaluation mode, calculate the validation loss and metric, then set the model back to train mode.

```python
# turn off gradients
with torch.no_grad():

    # set model to evaluation mode
    model.eval()

    # validation pass here
    for images, labels in testloader:
        ...

# set model back to train mode
model.train()
```

> **Exercise:** Add dropout to your model and train it on Fashion-MNIST again. See if you can get a lower validation loss or higher accuracy.
import torch
from torch import nn, optim
import torch.nn.functional as F

class Classifier2(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, 64)
        self.fc4 = nn.Linear(64, 10)
        self.dropout = nn.Dropout(p=0.2)

    def forward(self, x):
        # make sure input tensor is flattened
        x = x.view(x.shape[0], -1)
        # Now with dropout
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = self.dropout(F.relu(self.fc3(x)))
        # output layer, so no dropout here
        x = F.log_softmax(self.fc4(x), dim=1)
        return x

model = Classifier2()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)

epochs = 30
train_losses, test_losses = [], []
for e in range(epochs):
    running_loss = 0
    model.train()
    for images, labels in trainloader:
        optimizer.zero_grad()
        log_ps = model(images)
        loss = criterion(log_ps, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    else:
        # validation pass: dropout off, no gradients needed
        model.eval()
        with torch.no_grad():
            images, labels = next(iter(testloader))
            ps = torch.exp(model(images))
            top_p, top_class = ps.topk(1, dim=1)
            equals = top_class == labels.view(*top_class.shape)
            accuracy = torch.mean(equals.type(torch.FloatTensor))
        print(f'Accuracy: {accuracy.item()*100}%')
_____no_output_____
Apache-2.0
01_introduction/5_Inference_and_Validation.ipynb
thearonn/dtu_mlops
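The train/eval distinction above can be illustrated without PyTorch. Below is a minimal pure-Python sketch of inverted dropout, the scheme `nn.Dropout` uses: at train time each unit is zeroed with probability `p` and the survivors are scaled by `1/(1-p)` so the expected activation is unchanged, while in eval mode the layer is the identity. The function and variable names are illustrative, not part of the notebook.

```python
import random

def dropout(xs, p=0.2, training=True, rng=None):
    """Inverted dropout: zero each unit with probability p and rescale
    the survivors by 1/(1-p); a no-op when training is False."""
    if not training:
        return list(xs)
    rng = rng or random.Random(0)
    return [0.0 if rng.random() < p else x / (1.0 - p) for x in xs]

x = [1.0] * 10
train_out = dropout(x, p=0.2, training=True)   # each unit is either 0.0 or scaled to 1.25
eval_out = dropout(x, p=0.2, training=False)   # identical to the input
```

Because of the `1/(1-p)` rescaling at train time, nothing needs to be rescaled at inference, which is exactly why `model.eval()` can simply turn the layer off.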
## Inference

Now that the model is trained, we can use it for inference. We've done this before, but now we need to remember to set the model in inference mode with `model.eval()`. You'll also want to turn off autograd with the `torch.no_grad()` context.
# Import helper module (should be in the repo)
import helper

# Test out your network!
model.eval()

dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[0]
# Convert 2D image to 1D vector
img = img.view(1, 784)

# Calculate the class probabilities (softmax) for img
with torch.no_grad():
    output = model.forward(img)

ps = torch.exp(output)

# Plot the image and probabilities
helper.view_classify(img.view(1, 28, 28), ps, version='Fashion')
_____no_output_____
Apache-2.0
01_introduction/5_Inference_and_Validation.ipynb
thearonn/dtu_mlops
TimeEval shared parameter optimization result analysis
# Automatically reload packages:
%load_ext autoreload
%autoreload 2

# imports
import json
import warnings

import pandas as pd
import numpy as np
import scipy as sp

import plotly.offline as py
import plotly.graph_objects as go
import plotly.figure_factory as ff
import plotly.express as px
from plotly.subplots import make_subplots

from pathlib import Path

from timeeval import Datasets
_____no_output_____
MIT
notebooks/TimeEval shared param optimization analysis 2.ipynb
HPI-Information-Systems/TimeEval
## Configuration

Target parameters that were optimized in this run (per algorithm):
algo_param_mapping = { "HBOS": ["n_bins"], "MultiHMM": ["n_bins"], "MTAD-GAT": ["context_window_size", "mag_window_size", "score_window_size"], "PST": ["n_bins"] }
_____no_output_____
MIT
notebooks/TimeEval shared param optimization analysis 2.ipynb
HPI-Information-Systems/TimeEval
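The mapping above implies a fixed set of (algorithm, parameter) combinations under optimization; enumerating them is a quick sanity check. The `pairs` name below is illustrative:

```python
algo_param_mapping = {
    "HBOS": ["n_bins"],
    "MultiHMM": ["n_bins"],
    "MTAD-GAT": ["context_window_size", "mag_window_size", "score_window_size"],
    "PST": ["n_bins"],
}

# one entry per (algorithm, target parameter) pair
pairs = [(algo, param)
         for algo, params in sorted(algo_param_mapping.items())
         for param in params]
```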
Define data and results folder:
# constants and configuration
data_path = Path("../../data") / "test-cases"
result_root_path = Path("../timeeval_experiments/results")
experiment_result_folder = "2021-10-04_shared-optim2"

# build paths
result_paths = [d for d in result_root_path.iterdir() if d.is_dir()]
print("Available result directories:")
display(result_paths)

result_path = result_root_path / experiment_result_folder
print("\nSelecting:")
print(f"Data path: {data_path.resolve()}")
print(f"Result path: {result_path.resolve()}")
Available result directories:
MIT
notebooks/TimeEval shared param optimization analysis 2.ipynb
HPI-Information-Systems/TimeEval
Load results and dataset metadata:
def extract_hyper_params(param_names):
    def extract(value):
        params = json.loads(value)
        result = None
        for name in param_names:
            try:
                value = params[name]
                result = pd.Series([name, value], index=["optim_param_name", "optim_param_value"])
                break
            except KeyError:
                pass
        if result is None:
            raise ValueError(f"Parameters {param_names} not found in '{value}'")
        return result
    return extract

# load results
print(f"Reading results from {result_path.resolve()}")
df = pd.read_csv(result_path / "results.csv")

# add dataset_name column
df["dataset_name"] = df["dataset"].str.split(".").str[0]

# add optim_params column
df[["optim_param_name", "optim_param_value"]] = ""
for algo in algo_param_mapping:
    df_algo = df.loc[df["algorithm"] == algo]
    df.loc[df_algo.index, ["optim_param_name", "optim_param_value"]] = df_algo["hyper_params"].apply(extract_hyper_params(algo_param_mapping[algo]))

# load dataset metadata
dmgr = Datasets(data_path)
Reading results from /home/sebastian/Documents/Projects/akita/timeeval/timeeval_experiments/results/2021-10-04_shared-optim2
MIT
notebooks/TimeEval shared param optimization analysis 2.ipynb
HPI-Information-Systems/TimeEval
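The core of `extract_hyper_params` is simply "find the first candidate name present in a JSON dict". A standalone sketch of that logic without the pandas wrapper (the helper name and sample JSON are illustrative):

```python
import json

def extract_optim_param(hyper_params_json, candidate_names):
    """Return (name, value) for the first candidate name found in the JSON string."""
    params = json.loads(hyper_params_json)
    for name in candidate_names:
        if name in params:
            return name, params[name]
    raise ValueError(f"Parameters {candidate_names} not found in '{hyper_params_json}'")

name, value = extract_optim_param('{"n_bins": 10, "window_size": 50}', ["n_bins"])
```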
Define plotting functions:
def load_scores_df(algorithm_name, dataset_id, optim_params, repetition=1):
    params_id = df.loc[
        (df["algorithm"] == algorithm_name) &
        (df["collection"] == dataset_id[0]) &
        (df["dataset"] == dataset_id[1]) &
        (df["optim_param_name"] == optim_params[0]) &
        (df["optim_param_value"] == optim_params[1]),
        "hyper_params_id"
    ].item()
    path = (
        result_path /
        algorithm_name /
        params_id /
        dataset_id[0] /
        dataset_id[1] /
        str(repetition) /
        "anomaly_scores.ts"
    )
    return pd.read_csv(path, header=None)


def plot_scores(algorithm_name, dataset_name):
    if isinstance(algorithm_name, tuple):
        algorithms = [algorithm_name]
    elif not isinstance(algorithm_name, list):
        raise ValueError("Please supply a tuple (algorithm_name, optim_param_name, optim_param_value) or a list thereof as first argument!")
    else:
        algorithms = algorithm_name

    # construct dataset ID
    dataset_id = ("GutenTAG", f"{dataset_name}.unsupervised")

    # load dataset details
    df_dataset = dmgr.get_dataset_df(dataset_id)

    # check if dataset is multivariate
    dataset_dim = df.loc[df["dataset_name"] == dataset_name, "dataset_input_dimensionality"].unique().item()
    dataset_dim = dataset_dim.lower()

    auroc = {}
    df_scores = pd.DataFrame(index=df_dataset.index)
    skip_algos = []
    algos = []
    for algo, optim_param_name, optim_param_value in algorithms:
        optim_params = f"{optim_param_name}={optim_param_value}"
        algos.append((algo, optim_params))

        # get algorithm metric results
        try:
            auroc[(algo, optim_params)] = df.loc[
                (df["algorithm"] == algo) &
                (df["dataset_name"] == dataset_name) &
                (df["optim_param_name"] == optim_param_name) &
                (df["optim_param_value"] == optim_param_value),
                "ROC_AUC"
            ].item()
        except ValueError:
            warnings.warn(f"No ROC_AUC score found! Probably {algo} with params {optim_params} was not executed on {dataset_name}.")
            auroc[(algo, optim_params)] = -1
            skip_algos.append((algo, optim_params))
            continue

        # load scores
        training_type = df.loc[df["algorithm"] == algo, "algo_training_type"].values[0].lower().replace("_", "-")
        try:
            df_scores[(algo, optim_params)] = load_scores_df(algo, ("GutenTAG", f"{dataset_name}.{training_type}"), (optim_param_name, optim_param_value)).iloc[:, 0]
        except (ValueError, FileNotFoundError):
            warnings.warn(f"No anomaly scores found! Probably {algo} was not executed on {dataset_name} with params {optim_params}.")
            df_scores[(algo, optim_params)] = np.nan
            skip_algos.append((algo, optim_params))
    algorithms = [a for a in algos if a not in skip_algos]

    # Create plot
    fig = make_subplots(2, 1)
    if dataset_dim == "multivariate":
        for i in range(1, df_dataset.shape[1] - 1):
            fig.add_trace(go.Scatter(x=df_dataset.index, y=df_dataset.iloc[:, i], name=f"channel-{i}"), 1, 1)
    else:
        fig.add_trace(go.Scatter(x=df_dataset.index, y=df_dataset.iloc[:, 1], name="timeseries"), 1, 1)
    fig.add_trace(go.Scatter(x=df_dataset.index, y=df_dataset["is_anomaly"], name="label"), 2, 1)
    for item in algorithms:
        algo, optim_params = item
        fig.add_trace(go.Scatter(x=df_scores.index, y=df_scores[item], name=f"{algo}={auroc[item]:.4f} ({optim_params})"), 2, 1)
    fig.update_xaxes(matches="x")
    fig.update_layout(
        title=f"Results of {','.join(np.unique([a for a, _ in algorithms]))} on {dataset_name}",
        height=400
    )
    return py.iplot(fig)
_____no_output_____
MIT
notebooks/TimeEval shared param optimization analysis 2.ipynb
HPI-Information-Systems/TimeEval
Analyze TimeEval results
df[["algorithm", "dataset_name", "status", "AVERAGE_PRECISION", "PR_AUC", "RANGE_PR_AUC", "ROC_AUC", "execute_main_time", "optim_param_name", "optim_param_value"]]
_____no_output_____
MIT
notebooks/TimeEval shared param optimization analysis 2.ipynb
HPI-Information-Systems/TimeEval
--- Errors
df_error_counts = df.pivot_table(index=["algo_training_type", "algorithm"], columns=["status"], values="repetition", aggfunc="count")
df_error_counts = df_error_counts.fillna(value=0).astype(np.int64)
_____no_output_____
MIT
notebooks/TimeEval shared param optimization analysis 2.ipynb
HPI-Information-Systems/TimeEval
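The pivot table above boils down to counting (algorithm, status) occurrences. A standard-library sketch of the same aggregation, on made-up run records (the `runs` data is illustrative, not the real results):

```python
from collections import Counter, defaultdict

# one (algorithm, status) entry per executed run -- illustrative data only
runs = [
    ("HBOS", "Status.OK"), ("HBOS", "Status.OK"), ("HBOS", "Status.ERROR"),
    ("PST", "Status.OK"), ("PST", "Status.TIMEOUT"),
]

# algorithm -> Counter of statuses, i.e. one row of the pivot table per algorithm
error_counts = defaultdict(Counter)
for algo, status in runs:
    error_counts[algo][status] += 1
```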
Aggregation of errors per algorithm grouped by algorithm training type
for tpe in ["SEMI_SUPERVISED", "SUPERVISED", "UNSUPERVISED"]:
    if tpe in df_error_counts.index:
        print(tpe)
        display(df_error_counts.loc[tpe])
SEMI_SUPERVISED
MIT
notebooks/TimeEval shared param optimization analysis 2.ipynb
HPI-Information-Systems/TimeEval
### Slow algorithms

Algorithms for which more than 50% of all executions ran into the timeout.
df_error_counts[df_error_counts["Status.TIMEOUT"] > (df_error_counts["Status.ERROR"] + df_error_counts["Status.OK"])]
_____no_output_____
MIT
notebooks/TimeEval shared param optimization analysis 2.ipynb
HPI-Information-Systems/TimeEval
### Broken algorithms

Algorithms that failed for at least 50% of their executions.
error_threshold = 0.5
df_error_counts[df_error_counts["Status.ERROR"] > error_threshold*(
    df_error_counts["Status.TIMEOUT"] + df_error_counts["Status.ERROR"] + df_error_counts["Status.OK"]
)]
_____no_output_____
MIT
notebooks/TimeEval shared param optimization analysis 2.ipynb
HPI-Information-Systems/TimeEval
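The same 50% filter can be written directly against per-algorithm status counts. A sketch with illustrative counts (`is_broken` is a hypothetical helper, not part of the notebook):

```python
def is_broken(counts, error_threshold=0.5):
    """True if errored runs exceed `error_threshold` of all finished runs."""
    total = sum(counts.get(s, 0) for s in ("Status.OK", "Status.ERROR", "Status.TIMEOUT"))
    return counts.get("Status.ERROR", 0) > error_threshold * total

broken = is_broken({"Status.OK": 2, "Status.ERROR": 8})    # 8 of 10 runs errored
healthy = is_broken({"Status.OK": 9, "Status.ERROR": 1})   # 1 of 10 runs errored
```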
Detail errors
algo_list = ["MTAD-GAT", "MultiHMM"]
error_list = ["OOM", "Segfault", "ZeroDivisionError", "IncompatibleParameterConfig", "WrongDBNState", "SyntaxError", "other"]
errors = pd.DataFrame(0, index=error_list, columns=algo_list, dtype=np.int_)
for algo in algo_list:
    df_tmp = df[(df["algorithm"] == algo) & (df["status"] == "Status.ERROR")]
    for i, run in df_tmp.iterrows():
        path = result_path / run["algorithm"] / run["hyper_params_id"] / run["collection"] / run["dataset"] / str(run["repetition"]) / "execution.log"
        with path.open("r") as fh:
            log = fh.read()
        if "status code '139'" in log:
            errors.loc["Segfault", algo] += 1
        elif "status code '137'" in log:
            errors.loc["OOM", algo] += 1
        elif "Expected n_neighbors <= n_samples" in log:
            errors.loc["IncompatibleParameterConfig", algo] += 1
        elif "ZeroDivisionError" in log:
            errors.loc["ZeroDivisionError", algo] += 1
        elif "does not have key" in log:
            errors.loc["WrongDBNState", algo] += 1
        elif "NameError" in log:
            errors.loc["SyntaxError", algo] += 1
        else:
            print(f'\n\n#### {run["dataset"]} ({run["optim_param_name"]}:{run["optim_param_value"]})')
            print(log)
            errors.loc["other", algo] += 1
errors.T
_____no_output_____
MIT
notebooks/TimeEval shared param optimization analysis 2.ipynb
HPI-Information-Systems/TimeEval
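The chain of `elif "..." in log` checks above is a first-match rule table; keeping the rules as data makes it easier to extend. A compact sketch of the same idea (`ERROR_RULES` and `classify_error` are illustrative names, with the needles taken from the cell above):

```python
# (label, substring) pairs, checked in order; first match wins
ERROR_RULES = [
    ("Segfault", "status code '139'"),
    ("OOM", "status code '137'"),
    ("IncompatibleParameterConfig", "Expected n_neighbors <= n_samples"),
    ("ZeroDivisionError", "ZeroDivisionError"),
    ("WrongDBNState", "does not have key"),
    ("SyntaxError", "NameError"),
]

def classify_error(log):
    for label, needle in ERROR_RULES:
        if needle in log:
            return label
    return "other"
```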
--- Parameter assessment
sort_by = ("ROC_AUC", "mean")
metric_agg_type = ["mean", "median"]
time_agg_type = "mean"
aggs = {
    "AVERAGE_PRECISION": metric_agg_type,
    "RANGE_PR_AUC": metric_agg_type,
    "PR_AUC": metric_agg_type,
    "ROC_AUC": metric_agg_type,
    "train_main_time": time_agg_type,
    "execute_main_time": time_agg_type,
    "repetition": "count"
}

df_tmp = df.reset_index()
df_tmp = df_tmp.groupby(by=["algorithm", "optim_param_name", "optim_param_value"]).agg(aggs)
df_tmp = df_tmp.reset_index()
df_tmp = df_tmp.sort_values(by=["algorithm", "optim_param_name", sort_by], ascending=False)
df_tmp = df_tmp.set_index(["algorithm", "optim_param_name", "optim_param_value"])

with pd.option_context("display.max_rows", None, "display.max_columns", None):
    display(df_tmp)
_____no_output_____
MIT
notebooks/TimeEval shared param optimization analysis 2.ipynb
HPI-Information-Systems/TimeEval
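Conceptually, the groupby-aggregate above computes per-(algorithm, parameter name, parameter value) summary statistics of each metric. A standard-library sketch of the same grouping on illustrative ROC_AUC rows:

```python
from collections import defaultdict
from statistics import mean, median

# (algorithm, optim_param_name, optim_param_value, ROC_AUC) -- illustrative rows only
rows = [
    ("HBOS", "n_bins", 10, 0.90), ("HBOS", "n_bins", 10, 0.80),
    ("HBOS", "n_bins", 20, 0.95), ("HBOS", "n_bins", 20, 0.85),
]

groups = defaultdict(list)
for algo, pname, pval, roc in rows:
    groups[(algo, pname, pval)].append(roc)

summary = {key: {"mean": mean(v), "median": median(v)} for key, v in groups.items()}
```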
### Selected parameters

- HBOS: `n_bins=20` (more is better)
- MultiHMM: `n_bins=5` (8 is slightly better, but takes way longer. The scores are very bad anyway!)
- MTAD-GAT: `context_window_size=30,mag_window_size=40,score_window_size=52` (very slow)
- PST: `n_bins=5` (less is better)

> **Note**
>
> MTAD-GAT is very slow! Exclude it from further runs!
plot_scores([("MultiHMM", "n_bins", 5), ("MultiHMM", "n_bins", 8)], "sinus-type-mean")
plot_scores([("MTAD-GAT", "context_window_size", 30), ("MTAD-GAT", "context_window_size", 40)], "sinus-type-mean")
_____no_output_____
MIT
notebooks/TimeEval shared param optimization analysis 2.ipynb
HPI-Information-Systems/TimeEval
## Tacotron 2 inference code

Edit the variables **checkpoint_path** and **text** to match yours and run the entire code to generate plots of mel outputs, alignments and audio synthesis from the generated mel spectrogram using Griffin-Lim.

#### Import libraries and setup matplotlib
import matplotlib
%matplotlib inline
import matplotlib.pylab as plt

import IPython.display as ipd

import sys
sys.path.append('waveglow/')
import numpy as np
import torch

from hparams import create_hparams
from model import Tacotron2
from layers import TacotronSTFT, STFT
from audio_processing import griffin_lim
from train import load_model
from text import text_to_sequence
from denoiser import Denoiser

def plot_data(data, figsize=(16, 4)):
    fig, axes = plt.subplots(1, len(data), figsize=figsize)
    for i in range(len(data)):
        axes[i].imshow(data[i], aspect='auto', origin='bottom', interpolation='none')
_____no_output_____
BSD-3-Clause
inference.ipynb
ncantrell/tacotron2
Setup hparams
hparams = create_hparams()
hparams.sampling_rate = 22050
WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see: * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md * https://github.com/tensorflow/addons If you depend on functionality not listed there, please file an issue.
BSD-3-Clause
inference.ipynb
ncantrell/tacotron2
Load model from checkpoint
checkpoint_path = "tacotron2_statedict.pt"
model = load_model(hparams)
model.load_state_dict(torch.load(checkpoint_path)['state_dict'])
_ = model.cuda().eval().half()
_____no_output_____
BSD-3-Clause
inference.ipynb
ncantrell/tacotron2
Load WaveGlow for mel2audio synthesis and denoiser
waveglow_path = 'waveglow_256channels.pt'
waveglow = torch.load(waveglow_path)['model']
waveglow.cuda().eval().half()
for m in waveglow.modules():
    if 'Conv' in str(type(m)):
        setattr(m, 'padding_mode', 'zeros')
for k in waveglow.convinv:
    k.float()
denoiser = Denoiser(waveglow)
/home/kabakov/VOICE/venv/lib/python3.6/site-packages/torch/serialization.py:454: SourceChangeWarning: source code of class 'torch.nn.modules.conv.ConvTranspose1d' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes. warnings.warn(msg, SourceChangeWarning) /home/kabakov/VOICE/venv/lib/python3.6/site-packages/torch/serialization.py:454: SourceChangeWarning: source code of class 'torch.nn.modules.conv.Conv1d' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes. warnings.warn(msg, SourceChangeWarning) waveglow/glow.py:162: RuntimeWarning: nn.functional.tanh is deprecated. Use torch.tanh instead. torch.IntTensor([self.n_channels])) waveglow/glow.py:162: RuntimeWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead. torch.IntTensor([self.n_channels]))
BSD-3-Clause
inference.ipynb
ncantrell/tacotron2
Prepare text input
#%%timeit 77.9 µs ± 237 ns
text = "Waveglow is really awesome!"
sequence = np.array(text_to_sequence(text, ['english_cleaners']))[None, :]
sequence = torch.autograd.Variable(
    torch.from_numpy(sequence)).cuda().long()
_____no_output_____
BSD-3-Clause
inference.ipynb
ncantrell/tacotron2
Decode text input and plot results
#%%timeit 240 ms ± 9.72 ms
mel_outputs, mel_outputs_postnet, _, alignments = model.inference(sequence)
plot_data((mel_outputs.float().data.cpu().numpy()[0],
           mel_outputs_postnet.float().data.cpu().numpy()[0],
           alignments.float().data.cpu().numpy()[0].T))
_____no_output_____
BSD-3-Clause
inference.ipynb
ncantrell/tacotron2
Synthesize audio from spectrogram using WaveGlow
#%%timeit 193 ms ± 4.87 ms
with torch.no_grad():
    audio = waveglow.infer(mel_outputs_postnet, sigma=0.666)
ipd.Audio(audio[0].data.cpu().numpy(), rate=hparams.sampling_rate)
waveglow/glow.py:162: RuntimeWarning: nn.functional.tanh is deprecated. Use torch.tanh instead. torch.IntTensor([self.n_channels])) waveglow/glow.py:162: RuntimeWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead. torch.IntTensor([self.n_channels]))
BSD-3-Clause
inference.ipynb
ncantrell/tacotron2
(Optional) Remove WaveGlow bias
audio_denoised = denoiser(audio, strength=0.01)[:, 0]
ipd.Audio(audio_denoised.cpu().numpy(), rate=hparams.sampling_rate)
_____no_output_____
BSD-3-Clause
inference.ipynb
ncantrell/tacotron2
Save result as wav
import librosa

# save
librosa.output.write_wav('./out.wav', audio[0].data.cpu().numpy().astype(np.float32), 22050)

# check
y, sr = librosa.load('out.wav')
ipd.Audio(y, rate=sr)
_____no_output_____
BSD-3-Clause
inference.ipynb
ncantrell/tacotron2
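Note that `librosa.output.write_wav` was removed from librosa in version 0.8. As a dependency-free alternative, a 16-bit mono WAV can be written with only the standard library; the sine tone below is a placeholder standing in for the synthesized samples, and the file name is illustrative:

```python
import math
import struct
import wave

sr = 22050
# 0.1 s of a 440 Hz sine stands in for the synthesized float samples in [-1, 1]
samples = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr // 10)]

with wave.open("out_sketch.wav", "wb") as f:
    f.setnchannels(1)    # mono
    f.setsampwidth(2)    # 16-bit PCM
    f.setframerate(sr)
    # clip to [-1, 1] and quantize each sample to a signed 16-bit integer
    f.writeframes(b"".join(
        struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767)) for s in samples))
```

Within the scientific-Python stack, `soundfile.write` is the commonly recommended replacement for the removed librosa function.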
## Table of Contents

1. label identity hairstyle
2. Prepare hairstyle images
3. prepare hairstyle manifest
from query.models import Video, FaceIdentity, Identity
from esper.widget import *
from esper.prelude import collect, esper_widget
import pickle
import os
import random

get_ipython().magic('matplotlib inline')
get_ipython().magic('reload_ext autoreload')
get_ipython().magic('autoreload 2')
_____no_output_____
Apache-2.0
app/notebooks/hairstyle.ipynb
scanner-research/esper-tv
label identity hairstyle
identity_hair_dict = {}

identities = Identity.objects.all()
identity_list = [(i.id, i.name) for i in identities]
identity_list.sort()

# 154
hair_color_3 = {0: 'black', 1: 'white', 2: 'blond'}
hair_color_5 = {0: 'black', 1: 'white', 2: 'blond', 3: 'brown', 4: 'gray'}
hair_length = {0: 'long', 1: 'medium', 2: 'short', 3: 'bald'}

identity_label = [id for id in identity_label if id not in identity_hair_dict]

# idx += 1
# iid = identity_list[idx][0]
# name = identity_list[idx][1]
# iid = identity_label[idx]
# print(name)
print(iid)
result = qs_to_result(
    FaceIdentity.objects \
        .filter(identity__id=1365) \
        .filter(probability__gt=0.8),
    limit=30)
esper_widget(result)

'''
{'black' : 0, 'white': 1, 'blond' : 2},  # hair_color_3
{'black' : 0, 'white': 1, 'blond' : 2, 'brown' : 3, 'gray' : 4},  # hair_color_5
{'long' : 0, 'medium' : 1, 'short' : 2, 'bald' : 3}  # hair_length
'''
label = identity_hair_dict[iid] = (2,2,0)
print(hair_color_3[label[0]], hair_color_5[label[1]], hair_length[label[2]])

pickle.dump(identity_hair_dict, open('/app/data/identity_hair_dict.pkl', 'wb'))
_____no_output_____
Apache-2.0
app/notebooks/hairstyle.ipynb
scanner-research/esper-tv
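The label tuple stored in `identity_hair_dict` (e.g. `(2, 2, 0)`) can be decoded back into attribute names using the three lookup tables from the labeling cell. A small sketch (`decode_hair_label` is an illustrative helper name):

```python
hair_color_3 = {0: 'black', 1: 'white', 2: 'blond'}
hair_color_5 = {0: 'black', 1: 'white', 2: 'blond', 3: 'brown', 4: 'gray'}
hair_length = {0: 'long', 1: 'medium', 2: 'short', 3: 'bald'}

def decode_hair_label(label):
    """(color3, color5, length) indices -> human-readable attribute names."""
    c3, c5, length = label
    return hair_color_3[c3], hair_color_5[c5], hair_length[length]

decoded = decode_hair_label((2, 2, 0))
```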
Prepare hairstyle images
faceIdentities = FaceIdentity.objects \
    .filter(identity__name='melania trump') \
    .filter(probability__gt=0.9) \
    .select_related('face__frame__video')
faceIdentities_sampled = random.sample(list(faceIdentities), 1000)
print("Load %d face identities" % len(faceIdentities_sampled))

identity_grouped = collect(list(faceIdentities_sampled), lambda identity: identity.face.frame.video.id)
print("Group into %d videos" % len(identity_grouped))

face_dict = {}
for video_id, fis in identity_grouped.items():
    video = Video.objects.filter(id=video_id)[0]
    face_list = []
    for i in fis:
        face_id = i.face.id
        frame_id = i.face.frame.number
        identity_id = i.identity.id
        x1, y1, x2, y2 = i.face.bbox_x1, i.face.bbox_y1, i.face.bbox_x2, i.face.bbox_y2
        bbox = (x1, y1, x2, y2)
        face_list.append((frame_id, face_id, identity_id, bbox))
    face_list.sort()
    face_dict[video.path] = face_list
print("Preload face bbox done")

if __name__ == "__main__":
    solve_parallel(face_dict, res_dict_path='/app/result/clothing/fina_dict.pkl', workers=10)
_____no_output_____
Apache-2.0
app/notebooks/hairstyle.ipynb
scanner-research/esper-tv
prepare hairstyle manifest
img_list = os.listdir('/app/result/clothing/images/')
len(img_list)

group_by_identity = {}
for name in img_list:
    iid = int(name.split('_')[0])
    if iid not in group_by_identity:
        group_by_identity[iid] = []
    # always append, so the first image of each identity is kept as well
    group_by_identity[iid].append(name)

identity_label = [id for id, img_list in group_by_identity.items() if len(img_list) > 10]
identity_label.sort()

identity_hair_dict = pickle.load(open('/app/data/identity_hair_dict.pkl', 'rb'))

NUM_PER_ID = 1000
hairstyle_manifest = []
for iid, img_list in group_by_identity.items():
    if len(img_list) > 10 and iid in identity_hair_dict:
        if len(img_list) < NUM_PER_ID:
            img_list_sample = img_list
        else:
            img_list_sample = random.sample(img_list, NUM_PER_ID)
        attrib = identity_hair_dict[iid]
        hairstyle_manifest += [(path, attrib) for path in img_list_sample]

random.shuffle(hairstyle_manifest)
len(hairstyle_manifest)
pickle.dump(hairstyle_manifest, open('/app/result/clothing/hairstyle_manifest.pkl', 'wb'))
_____no_output_____
Apache-2.0
app/notebooks/hairstyle.ipynb
scanner-research/esper-tv
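The identity-grouping step in the cell above (bucketing filenames by the integer id before the first underscore) can be sketched with `collections.defaultdict`. Appending unconditionally keeps every file, including the first one seen per identity — a detail that is easy to get wrong with a manual if/else. The filenames below are made up for illustration:

```python
from collections import defaultdict

def group_by_identity_id(filenames):
    """Group image filenames by the integer identity id encoded before the first '_'."""
    groups = defaultdict(list)
    for name in filenames:
        iid = int(name.split('_')[0])
        groups[iid].append(name)  # append unconditionally: the first file per id is kept too
    return dict(groups)

groups = group_by_identity_id(['7_a.jpg', '7_b.jpg', '3_a.jpg'])
```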
loading dataset
data = pd.read_csv("student-data.csv") data.head() data.shape type(data)
_____no_output_____
MIT
020_NIRMAL.ipynb
NirmalSilwal/Machine-Learning
Exploratory data analysis
import matplotlib.pyplot as plt import seaborn as sns a = data.plot() data.info() data.isnull().sum() a = sns.heatmap(data.isnull(),cmap='Blues') a = sns.heatmap(data.isnull(),cmap='Blues',yticklabels=False)
_____no_output_____
MIT
020_NIRMAL.ipynb
NirmalSilwal/Machine-Learning
this indicates that there are no null values in the dataset
a = sns.heatmap(data.isna(),yticklabels=False)
_____no_output_____
MIT
020_NIRMAL.ipynb
NirmalSilwal/Machine-Learning
this heatmap indicates that there are no 'NA' values in the dataset
sns.set(style='darkgrid') sns.countplot(data=data,x='reason')
_____no_output_____
MIT
020_NIRMAL.ipynb
NirmalSilwal/Machine-Learning
This shows how often each reason for choosing the school occurs. A count plot can be thought of as a histogram across a categorical, instead of a quantitative, variable.
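The tallies that `sns.countplot` draws as bars can be computed directly with `collections.Counter`; the reason values below are a toy stand-in for `data['reason']`:

```python
from collections import Counter

reasons = ['course', 'home', 'course', 'reputation', 'course', 'home']  # toy stand-in for data['reason']
counts = Counter(reasons)  # one tally per category, exactly what a count plot visualizes
top_reason, top_count = counts.most_common(1)[0]
```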
data.head(7)
_____no_output_____
MIT
020_NIRMAL.ipynb
NirmalSilwal/Machine-Learning
calculating the total number of students who passed
passed = data.loc[data.passed == 'yes'] passed.shape tot_passed=passed.shape[0] print('total passed students is: {} '.format(tot_passed))
total passed students is: 265
MIT
020_NIRMAL.ipynb
NirmalSilwal/Machine-Learning
calculating the total number of students who failed
failed = data.loc[data.passed == 'no'] print('total failed students is: {}'.format(failed.shape[0]))
total failed students is: 130
MIT
020_NIRMAL.ipynb
NirmalSilwal/Machine-Learning
Feature Engineering
data.head()
_____no_output_____
MIT
020_NIRMAL.ipynb
NirmalSilwal/Machine-Learning
To identify the feature and target variables, let's first do some feature engineering!
data.columns data.columns[-1]
_____no_output_____
MIT
020_NIRMAL.ipynb
NirmalSilwal/Machine-Learning
Here 'passed' is our target variable, since we need to develop a model that predicts the likelihood that a given student will pass, quantifying whether an intervention is necessary.
target = data.columns[-1] data.columns[:-1] #initially taking all columns as our feature variables feature = list(data.columns[:-1]) data[target].head() data[feature].head()
_____no_output_____
MIT
020_NIRMAL.ipynb
NirmalSilwal/Machine-Learning
Now we take the feature and target data in separate dataframes
featuredata = data[feature] targetdata = data[target]
_____no_output_____
MIT
020_NIRMAL.ipynb
NirmalSilwal/Machine-Learning
Now we need to convert several non-numeric columns like 'internet' into numerical form for the model to process
def preprocess_features(X): output = pd.DataFrame(index = X.index) for col, col_data in X.iteritems(): if col_data.dtype == object: col_data = col_data.replace(['yes', 'no'], [1, 0]) if col_data.dtype == object: col_data = pd.get_dummies(col_data, prefix = col) output = output.join(col_data) return output featuredata = preprocess_features(featuredata) type(featuredata) featuredata.head() featuredata.drop(['address_R','sex_F'],axis=1,inplace=True) featuredata.columns featuredata.drop(['famsize_GT3','Pstatus_A',],axis=1,inplace=True)
_____no_output_____
MIT
020_NIRMAL.ipynb
NirmalSilwal/Machine-Learning
MODEL IMPLEMENTATION Decision tree
from sklearn.tree import DecisionTreeClassifier from sklearn.model_selection import train_test_split model=DecisionTreeClassifier() X_train, X_test, y_train, y_test = train_test_split(featuredata, targetdata, test_size=0.33, random_state=6) model.fit(X_train,y_train) from sklearn.metrics import accuracy_score predictions = model.predict(X_test) accuracy_score(y_test,predictions)*100
_____no_output_____
MIT
020_NIRMAL.ipynb
NirmalSilwal/Machine-Learning
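Under the hood, `DecisionTreeClassifier` picks splits by minimizing an impurity criterion such as Gini. A minimal stdlib sketch of Gini impurity (an illustration of the criterion, not scikit-learn's actual implementation):

```python
def gini(labels):
    """Gini impurity 1 - sum_c p_c^2 over the class proportions p_c."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

pure = gini(['yes'] * 5)      # a pure node has impurity 0
mixed = gini(['yes', 'no'])   # a maximally mixed binary node has impurity 0.5
```

A split is chosen to maximize the impurity decrease between the parent node and the weighted impurities of its children.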
K-Nearest Neighbours
from sklearn.neighbors import KNeighborsClassifier new_classifier = KNeighborsClassifier(n_neighbors=7) new_classifier.fit(X_train,y_train) predictions2 = new_classifier.predict(X_test) accuracy_score(y_test,predictions2)*100
_____no_output_____
MIT
020_NIRMAL.ipynb
NirmalSilwal/Machine-Learning
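The idea behind `KNeighborsClassifier` — a majority vote among the k closest training points — can be sketched in plain Python. The 2-D points below are toy data, not the student dataset:

```python
from collections import Counter
import math  # math.dist requires Python 3.8+

def knn_predict(train, query, k=3):
    """train: list of (point, label) pairs; predict the majority label among the k nearest."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0, 0), 'no'), ((0, 1), 'no'), ((1, 0), 'no'), ((5, 5), 'yes'), ((6, 5), 'yes')]
pred = knn_predict(train, (0.5, 0.5), k=3)
```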
SVM
from sklearn import svm clf = svm.SVC(random_state=6) clf.fit(featuredata,targetdata) clf.score(featuredata,targetdata) predictions3= clf.predict(X_test) accuracy_score(y_test,predictions3)*100
_____no_output_____
MIT
020_NIRMAL.ipynb
NirmalSilwal/Machine-Learning
k-means clustering
import warnings warnings.filterwarnings('ignore') %matplotlib inline import scipy as sc import scipy.stats as stats from scipy.spatial.distance import euclidean import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib.colors as mcolors plt.style.use('fivethirtyeight') plt.rcParams['font.family'] = 'sans-serif' plt.rcParams['font.serif'] = 'Ubuntu' plt.rcParams['font.monospace'] = 'Ubuntu Mono' plt.rcParams['font.size'] = 10 plt.rcParams['axes.labelsize'] = 10 plt.rcParams['axes.labelweight'] = 'bold' plt.rcParams['axes.titlesize'] = 10 plt.rcParams['xtick.labelsize'] = 8 plt.rcParams['ytick.labelsize'] = 8 plt.rcParams['legend.fontsize'] = 10 plt.rcParams['figure.titlesize'] = 12 plt.rcParams['image.cmap'] = 'jet' plt.rcParams['image.interpolation'] = 'none' plt.rcParams['figure.figsize'] = (16, 8) plt.rcParams['lines.linewidth'] = 2 plt.rcParams['lines.markersize'] = 8 colors = ['#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', '#810f7c', '#137e6d', '#be0119', '#3b638c', '#af6f09', '#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', '#810f7c', '#137e6d', '#be0119', '#3b638c', '#af6f09'] cmap = mcolors.LinearSegmentedColormap.from_list("", ["#82cafc", "#069af3", "#0485d1", colors[0], colors[8]]) rv0 = stats.multivariate_normal(mean=[3, 3], cov=[[.3, .3],[.3,.4]]) rv1 = stats.multivariate_normal(mean=[1.5, 1], cov=[[.5, -.5],[-.5,.7]]) rv2 = stats.multivariate_normal(mean=[0, 1.2], cov=[[.15, .1],[.1,.3]]) rv3 = stats.multivariate_normal(mean=[3.2, 1], cov=[[.2, 0],[0,.1]]) z0 = rv0.rvs(size=300) z1 = rv1.rvs(size=300) z2 = rv2.rvs(size=300) z3 = rv3.rvs(size=300) z=np.concatenate((z0, z1, z2, z3), axis=0) fig, ax = plt.subplots() ax.scatter(z0[:,0], z0[:,1], s=40, color='C0', alpha =.8, edgecolors='k', label=r'$C_0$') ax.scatter(z1[:,0], z1[:,1], s=40, color='C1', alpha =.8, edgecolors='k', label=r'$C_1$') ax.scatter(z2[:,0], z2[:,1], s=40, color='C2', alpha =.8, edgecolors='k', label=r'$C_2$') ax.scatter(z3[:,0], z3[:,1], 
s=40, color='C3', alpha =.8, edgecolors='k', label=r'$C_3$') plt.xlabel('$x$') plt.ylabel('$y$') plt.legend() plt.show() cc='xkcd:turquoise' fig = plt.figure(figsize=(16,8)) ax = fig.gca() plt.scatter(z[:,0], z[:,1], s=40, color=cc, edgecolors='k', alpha=.8) plt.ylabel('$x_2$', fontsize=12) plt.xlabel('$x_1$', fontsize=12) plt.title('Data set', fontsize=12) plt.show() # Number of clusters nc = 3 # X coordinates of random centroids C_x = np.random.sample(nc)*(np.max(z[:,0])-np.min(z[:,0]))*.7+np.min(z[:,0])*.7 # Y coordinates of random centroids C_y = np.random.sample(nc)*(np.max(z[:,1])-np.min(z[:,1]))*.7+np.min(z[:,0])*.7 C = np.array(list(zip(C_x, C_y)), dtype=np.float32) fig = plt.figure(figsize=(16,8)) ax = fig.gca() plt.scatter(z[:,0], z[:,1], s=40, color=cc, edgecolors='k', alpha=.5) for i in range(nc): plt.scatter(C_x[i], C_y[i], marker='*', s=500, c=colors[i], edgecolors='k', linewidth=1.5) plt.ylabel('$x_2$', fontsize=12) plt.xlabel('$x_1$', fontsize=12) plt.title('Data set', fontsize=12) plt.show() C_list = [] errors = [] # Cluster Labels(0, 1, 2, 3) clusters = np.zeros(z.shape[0]) C_list.append(C) # Error func. 
- Distance between new centroids and old centroids error = np.linalg.norm([euclidean(C[i,:], [0,0]) for i in range(nc)]) errors.append(error) print("Error: {0:3.5f}".format(error)) for l in range(10): # Assigning each value to its closest cluster for i in range(z.shape[0]): distances = [euclidean(z[i,:], C[j,:]) for j in range(nc)] cluster = np.argmin(distances) clusters[i] = cluster # Storing the old centroid values C = np.zeros([nc,2]) # Finding the new centroids by taking the average value for i in range(nc): points = [z[j,:] for j in range(z.shape[0]) if clusters[j] == i] C[i] = np.mean(points, axis=0) error = np.linalg.norm([euclidean(C[i,:], C_list[-1][i,:]) for i in range(nc)]) errors.append(error) C_list.append(C) fig = plt.figure(figsize=(16,8)) ax = fig.gca() for cl in range(nc): z1 = z[clusters==cl] plt.scatter(z1[:,0],z1[:,1], c=colors[cl], marker='o', s=40, edgecolors='k', alpha=.7) for i in range(nc): plt.scatter(C[i,0], C[i,1], marker='*', s=400, c=colors[i], edgecolors='k', linewidth=1.5) plt.ylabel('$x_2$', fontsize=12) plt.xlabel('$x_1$', fontsize=12) plt.title('Data set', fontsize=12) plt.show() C_list print("Error: {0:3.5f}".format(error)) errors
_____no_output_____
MIT
codici/kmeans.ipynb
tvml/ml2122
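The two alternating steps of the k-means loop above — assign each point to its nearest centroid, then recompute each centroid as the mean of its members — can be sketched without numpy (this sketch assumes every cluster keeps at least one member):

```python
import math  # math.dist requires Python 3.8+

def assign_clusters(points, centroids):
    """Assign each point the index of its nearest centroid."""
    return [min(range(len(centroids)), key=lambda j: math.dist(p, centroids[j]))
            for p in points]

def update_centroids(points, clusters, k):
    """New centroid j = coordinate-wise mean of the points assigned to cluster j."""
    cents = []
    for j in range(k):
        members = [p for p, c in zip(points, clusters) if c == j]
        cents.append(tuple(sum(coord) / len(members) for coord in zip(*members)))
    return cents

pts = [(0, 0), (0, 1), (10, 10), (10, 11)]
labels = assign_clusters(pts, [(0, 0), (10, 10)])
cents = update_centroids(pts, labels, 2)
```

Iterating these two steps until the centroids stop moving is exactly the loop the cell above runs ten times.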
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
!pip install git+https://github.com/google/starthinker
_____no_output_____
Apache-2.0
colabs/smartsheet_to_bigquery.ipynb
fivestones-apac/starthinker
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
CLOUD_PROJECT = 'PASTE PROJECT ID HERE' print("Cloud Project Set To: %s" % CLOUD_PROJECT)
_____no_output_____
Apache-2.0
colabs/smartsheet_to_bigquery.ipynb
fivestones-apac/starthinker
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE' print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
_____no_output_____
Apache-2.0
colabs/smartsheet_to_bigquery.ipynb
fivestones-apac/starthinker
4. Enter SmartSheet Sheet To BigQuery ParametersMove sheet data into a BigQuery table. 1. Specify SmartSheet token. 1. Locate the ID of a sheet by viewing its properties. 1. Provide a BigQuery dataset ( must exist ) and table to write the data into. 1. StarThinker will automatically map the correct schema.Modify the values below for your use case, can be done multiple times, then click play.
FIELDS = { 'auth_read': 'user', # Credentials used for reading data. 'auth_write': 'service', # Credentials used for writing data. 'token': '', # Retrieve from SmartSheet account settings. 'sheet': '', # Retrieve from sheet properties. 'dataset': '', # Existing BigQuery dataset. 'table': '', # Table to create from this report. 'schema': '', # Schema provided in JSON list format or leave empty to auto detect. 'link': True, # Add a link to each row as the first column. } print("Parameters Set To: %s" % FIELDS)
_____no_output_____
Apache-2.0
colabs/smartsheet_to_bigquery.ipynb
fivestones-apac/starthinker
5. Execute SmartSheet Sheet To BigQueryThis does NOT need to be modified unless you are changing the recipe, click play.
from starthinker.util.project import project from starthinker.script.parse import json_set_fields USER_CREDENTIALS = '/content/user.json' TASKS = [ { 'smartsheet': { 'auth': 'user', 'token': {'field': {'name': 'token','kind': 'string','order': 2,'default': '','description': 'Retrieve from SmartSheet account settings.'}}, 'sheet': {'field': {'name': 'sheet','kind': 'string','order': 3,'description': 'Retrieve from sheet properties.'}}, 'link': {'field': {'name': 'link','kind': 'boolean','order': 7,'default': True,'description': 'Add a link to each row as the first column.'}}, 'out': { 'bigquery': { 'auth': 'user', 'dataset': {'field': {'name': 'dataset','kind': 'string','order': 4,'default': '','description': 'Existing BigQuery dataset.'}}, 'table': {'field': {'name': 'table','kind': 'string','order': 5,'default': '','description': 'Table to create from this report.'}}, 'schema': {'field': {'name': 'schema','kind': 'json','order': 6,'description': 'Schema provided in JSON list format or leave empty to auto detect.'}} } } } } ] json_set_fields(TASKS, FIELDS) project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True) project.execute(_force=True)
_____no_output_____
Apache-2.0
colabs/smartsheet_to_bigquery.ipynb
fivestones-apac/starthinker
Tutorial: predicting the 2016 vote in the United States with decision trees and ensemble methodsToday's session is about forecasting the 2016 vote in the United States. Specifically, census data is provided with various pieces of information per county across the United States. The goal is to build predictors of their political color (Republican or Democrat) from these data. Run the following commands to load the environment.
%matplotlib inline from pylab import * import numpy as np import os import random import matplotlib.pyplot as plt
_____no_output_____
MIT
MI203-td2_tree_and_forest.ipynb
NataliaDiaz/colab
Data access* The data is available at: https://github.com/stepherbin/teaching/tree/master/ENSTA/TD2* Upload the file combined_data.csv to your drive, then mount it from Colab
USE_COLAB = True UPLOAD_OUTPUTS = False if USE_COLAB: # mount the google drive from google.colab import drive drive.mount('/content/drive', force_remount=True) # download data on GoogleDrive data_dir = "/content/drive/My Drive/teaching/ENSTA/TD_tree/" else: data_dir = "data/" import pandas as pd census_data = pd.read_csv( os.path.join(data_dir, 'combined_data.csv') )
_____no_output_____
MIT
MI203-td2_tree_and_forest.ipynb
NataliaDiaz/colab
Preliminary data analysisThe data is organized into fields:* fips = 5-digit county code; the first one or two digits indicate the state.* votes = number of voters* etc. Look at their structure, quantity, and nature. Where is the information needed to build the training and test sets? Where are the classes to predict? Visualize a few distributions. The pandas data format is described here: https://pandas.pydata.org/pandas-docs/stable/reference/frame.html
# Exemples de moyens d'accéder aux caractéristiques des données print(census_data.shape ) print(census_data.columns.values) print(census_data['fips']) print(census_data.head(3)) iattr = 10 attrname = census_data.columns[iattr] print("Mean of {} is {:.1f}".format(attrname,np.array(census_data[attrname]).mean())) ######################### ## METTRE VOTRE CODE ICI ######################### print("Nombre de données = {}".format(7878912123)) # à modifier print("Nombre d'attributs utiles = {}".format(4564564654)) # à modifier #hist....
_____no_output_____
MIT
MI203-td2_tree_and_forest.ipynb
NataliaDiaz/colab
The class to predict ('Democrat') is described by a single binary attribute. Compute the distribution of political colors (what is the prior probability that a county is Democrat vs. Republican?)
######################### ## METTRE VOTRE CODE ICI ######################### print("La probabilité qu'un comté soit démocrate est de {:.2f}%%".format(100*proba_dem))
La probabilité qu'un comté soit démocrate est de 15.45%%
MIT
MI203-td2_tree_and_forest.ipynb
NataliaDiaz/colab
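One possible way to fill in the stub above, sketched on toy 0/1 labels rather than the real 'Democrat' column:

```python
democrat = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]  # toy stand-in for census_data['Democrat']
proba_dem = sum(democrat) / len(democrat)  # empirical P(county is Democrat)
print("The probability that a county is Democrat is {:.2f}%".format(100 * proba_dem))
```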
Preparing the learning setupWe will prepare the training and test sets. To avoid data format problems, we pick a list of useful attributes in the list "feature_cols" below. The test set will consist of the counties of a single state. Info: https://scikit-learn.org/stable/model_selection.html List of states and their 2-digit FIPS codes: https://en.wikipedia.org/wiki/Federal_Information_Processing_Standard_state_code
## Sous ensembles d'attributs informatifs pour la suite feature_cols = ['BLACK_FEMALE_rate', 'BLACK_MALE_rate', 'Percent of adults with a bachelor\'s degree or higher, 2011-2015', 'ASIAN_MALE_rate', 'ASIAN_FEMALE_rate', '25-29_rate', 'age_total_pop', '20-24_rate', 'Deep_Pov_All', '30-34_rate', 'Density per square mile of land area - Population', 'Density per square mile of land area - Housing units', 'Unemployment_rate_2015', 'Deep_Pov_Children', 'PovertyAllAgesPct2014', 'TOT_FEMALE_rate', 'PerCapitaInc', 'MULTI_FEMALE_rate', '35-39_rate', 'MULTI_MALE_rate', 'Percent of adults completing some college or associate\'s degree, 2011-2015', '60-64_rate', '55-59_rate', '65-69_rate', 'TOT_MALE_rate', '85+_rate', '70-74_rate', '80-84_rate', '75-79_rate', 'Percent of adults with a high school diploma only, 2011-2015', 'WHITE_FEMALE_rate', 'WHITE_MALE_rate', 'Amish', 'Buddhist', 'Catholic', 'Christian Generic', 'Eastern Orthodox', 'Hindu', 'Jewish', 'Mainline Christian', 'Mormon', 'Muslim', 'Non-Catholic Christian', 'Other', 'Other Christian', 'Other Misc', 'Pentecostal / Charismatic', 'Protestant Denomination', 'Zoroastrian'] filtered_cols = ['Percent of adults with a bachelor\'s degree or higher, 2011-2015', 'Percent of adults completing some college or associate\'s degree, 2011-2015', 'Percent of adults with a high school diploma only, 2011-2015', 'Density per square mile of land area - Population', 'Density per square mile of land area - Housing units', 'WHITE_FEMALE_rate', 'WHITE_MALE_rate', 'BLACK_FEMALE_rate', 'BLACK_MALE_rate', 'ASIAN_FEMALE_rate', 'Catholic', 'Christian Generic', 'Jewish', '70-74_rate', 'D', 'R'] ## 1-state test split def county_data(census_data, fips_code=17): #fips_code 48=Texas, 34=New Jersey, 31=Nebraska, 17=Illinois, 06=California, 36=New York mask = census_data['fips'].between(fips_code*1000, fips_code*1000 + 999) census_data_train = census_data[~mask] census_data_test = census_data[mask] XTrain = census_data_train[feature_cols] yTrain = 
census_data_train['Democrat'] XTest = census_data_test[feature_cols] yTest = census_data_test['Democrat'] return XTrain, yTrain, XTest, yTest STATE_FIPS_CODE = 17 X_train, y_train, X_test, y_test = county_data(census_data, STATE_FIPS_CODE) #print(X_train.head(2)) #print(y_test.head(2))
BLACK_FEMALE_rate BLACK_MALE_rate ... Protestant Denomination Zoroastrian 0 0.067586 0.062079 ... 0 0 1 0.067586 0.062079 ... 0 0 [2 rows x 49 columns] 598 0 599 0 Name: Democrat, dtype: int64
MIT
MI203-td2_tree_and_forest.ipynb
NataliaDiaz/colab
Learning a decision treeWe will use the scikit-learn library. * Build the tree on the training data* Predict the vote on the test counties* Compute the error and the confusion matrixVary some parameters (max depth, purity, criterion, ...) and visualize their influence. Info: https://scikit-learn.org/stable/modules/tree.html Info: https://scikit-learn.org/stable/modules/model_evaluation.html
from sklearn import tree ######################### ## METTRE VOTRE CODE ICI #########################
_____no_output_____
MIT
MI203-td2_tree_and_forest.ipynb
NataliaDiaz/colab
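A confusion matrix like the one printed above can also be computed by hand; a minimal sketch for binary 0/1 labels, using toy vectors rather than the real state predictions:

```python
def confusion_matrix_2x2(y_true, y_pred):
    """m[t][p] counts samples with true class t and predicted class p (classes 0 and 1)."""
    m = [[0, 0], [0, 0]]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

y_true = [0, 0, 0, 1, 1, 0]
y_pred = [0, 1, 0, 1, 0, 0]
cm = confusion_matrix_2x2(y_true, y_pred)
accuracy = (cm[0][0] + cm[1][1]) / len(y_true)  # correct predictions lie on the diagonal
```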
The following instructions let you visualize the tree. Interpret the content of the representation.
import graphviz dot_data = tree.export_graphviz(clf, out_file=None) graph = graphviz.Source(dot_data) dot_data = tree.export_graphviz(clf, out_file=None, feature_names=X_train.columns.values, class_names=["R","D"], filled=True, rounded=True, special_characters=True) graph = graphviz.Source(dot_data) graph # Prédiction et évaluation ######################### ## METTRE VOTRE CODE ICI #########################
Predictions per county in state #34 are [0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 1 0 1 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 1 1] Votes per county in state #34 are [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1] 0.9251968503937008 0.7949910262685593 [[218 9] [ 10 17]]
MIT
MI203-td2_tree_and_forest.ipynb
NataliaDiaz/colab
--- BaggingThe goal of this part is to build a bagging approach **by hand**. The principle of the approach is to:* Learn and collect several trees trained on random samples of the training data* Aggregate the predictions by voting* Evaluate the aggregated predictions* Compare with the individual trees and the previous resultUse scikit-learn's train/test set construction functions https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html to generate the sampled subsets. **After the lecture, compare** with the scikit-learn functions: https://scikit-learn.org/stable/modules/ensemble.htmlNumpy tips: [np.arange](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.arange.html), [numpy.sum](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.sum.html), [numpy.mean](https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.mean.html), [numpy.where](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.where.html)
from sklearn.model_selection import train_test_split # Données d'apprentissage: X_train, y_train, idx_train # Données de test: X_test, y_test, idx_test # Les étapes de conception du prédicteur (apprentissage) sont les suivantes: # - Construction des sous-ensembles de données # - Apprentissage d'un arbre # - Agrégation de l'arbre dans la forêt # # Pour le test def learn_forest(XTrain, yTrain, nb_trees, depth=15): ######################### ## COMPLETER LE CODE ######################### forest = [] singleperf=[] for ss in range(nb_trees): # bagging for subset # single tree training # grow the forest # single tree evaluation return forest,singleperf def predict_forest(forest, XTest, yTest = None): singleperf=[] all_preds=[] nb_trees = len(forest) ######################### ## METTRE VOTRE CODE ICI ######################### if (yTest is not None): return final_pred,singleperf else: return final_pred ######################### ## METTRE VOTRE CODE ICI ######################### X_train, y_train, X_test, y_test = county_data(census_data, 6) F,singleperf = learn_forest(X_train, y_train, 20, depth=15) pred, singleperftest = predict_forest(F, X_test, y_test) acc = perf.balanced_accuracy_score( y_test, pred ) print("Taux de bonne prédiction = {:.2f}%".format(100*acc)) print(mean(singleperftest)) #print(singleperftest) #print(singleperf)
Taux de bonne prédiction = 76.32% 0.6837740384615385
MIT
MI203-td2_tree_and_forest.ipynb
NataliaDiaz/colab
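The aggregation-by-vote step of bagging can be sketched in plain Python: each tree produces one prediction per county, and the forest takes the column-wise majority:

```python
from collections import Counter

def majority_vote(predictions_per_tree):
    """predictions_per_tree: one prediction list per tree; aggregate column-wise by majority label."""
    return [Counter(column).most_common(1)[0][0]
            for column in zip(*predictions_per_tree)]

# three trees, three test counties: columns are voted on independently
agg = majority_vote([[0, 1, 1], [0, 1, 0], [1, 1, 0]])
```

The bootstrap sampling of the training set could similarly be sketched with `random.choices` (sampling with replacement).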
Osnabrück University - Machine Learning (Summer Term 2018) - Prof. Dr.-Ing. G. Heidemann, Ulf Krumnack Exercise Sheet 08 IntroductionThis week's sheet should be solved and handed in before the end of **Sunday, June 3, 2018**. If you need help (and Google and other resources were not enough), feel free to contact your groups' designated tutor or whomever of us you run into first. Please upload your results to your group's Stud.IP folder. Assignment 0: Math recap (Conditional Probability) [2 Bonus Points]This exercise is supposed to be very easy and is voluntary. There will be a similar exercise on every sheet. It is intended to revise some basic mathematical notions that are assumed throughout this class and to allow you to check if you are comfortable with them. Usually you should have no problem answering these questions offhand, but if you feel unsure, this is a good time to look them up again. You are always welcome to discuss questions with the tutors or in the practice session. Also, if you have a (math) topic you would like to recap, please let us know. **a)** Explain the idea of conditional probability. How is it defined? Conditional probability is the probability that an event A happens, given that another event B happened. For example: the probability of rain might be $$P(weather="rain") = 0.3$$, but if you observe that the street is wet, you would get the conditional probability $$P(weather= "rain" |~ street="wet") = 0.95$$The definition is:$$ P(A|B) = \frac{P(A,B)}{P(B)} $$ **b)** What is Bayes' theorem? What are its applications? Bayes' theorem states:$$ P(B|A) = \frac{P(A|B) \cdot P(B)}{P(A)} $$The most important application is in reasoning backwards from effect to cause (from data to the parameters of your distribution):$$ P(\Theta|Data) = \frac{P(Data|\Theta)P(\Theta)}{P(Data)}$$ **c)** What does the law of total probability state?
The law of total probability states that the probability of an event occurring is the same as the sum of the probabilities of this event occurring together with all possible states of another event:$$P(A) = \sum_b P(A,B=b) = \sum_b P(A|B=b) P(B=b)$$ Assignment 1: Multilayer Perceptron (MLP) [10 Points]Last week you implemented a simple perceptron. We discussed that one can use multiple perceptrons to build a network. This week you will build your own MLP. Again the following code cells are just a guideline. If you feel like it, just follow the algorithm steps and implement the MLP yourself. ImplementationIn the following you will be guided through implementing an MLP step by step. Instead of sticking to this guide, you are free to take a complete custom approach instead if you wish. We will take a bottom-up approach: Starting from an individual **perceptron** (aka neuron), we will derive a **layer of perceptrons** and end up with a **multilayer perceptron** (aka neural network). Each step will be implemented as its own python *class*. Such a class defines a type of element which can be instantiated multiple times. You can think of the relation between such instances and their designated classes as individuals of a specific population (e.g. Bernard and Bianca are both individuals of the population mice). Class definitions contain methods, which can be used to manipulate instances of that class or to make them perform specific actions — again, taking the population reference, each mouse of the mice population would for example have the method `eat_cheese()`.
For each method the [docstring](https://www.python.org/dev/peps/pep-0257/what-is-a-docstring) (the big comment contained by triple quotes at the beginning of the method) describes the arguments that each specific method accepts (`Args`) and the values it is expected to return (`Returns`). PerceptronSimilar to last week, you need to implement a perceptron here. But instead of directly applying it, we will define a class which is reusable to instantiate a theoretically infinite number of individual perceptrons. We will need the following three functionalities: Weight initializationThe weights are initialized by sampling values from a standard normal distribution. There are as many weights as there are values in the input vector and an additional one for the perceptron's bias. Forward-Propagation / ActivationCalculate the weighted sums of a neuron's inputs and apply its activation function $\sigma$. The output vector $o$ of perceptron $j$ of layer $k$ given an input $x$ (the output of the previous layer) in a neural network is given by the following formula. Note: $N$ gives the number of values of a given vector, $w_{j,0}(k)$ specifies the bias of perceptron $j$ in layer $k$ and $w_{j,1...N(x)}(k)$ the other weights of perceptron $j$ in layer $k$.$$o_{k,j}(x) = \sigma\left(w_{j,0}(k)+\sum\limits_{i=1}^{N(x)} x_i w_{j,i}(k)\right)$$Think of the weights $w(k)$ as a matrix being located in-between layer $k$ and the layer located *to its left* in the network. So values flowing from layer $k-1$ to layer $k$ are weighted by the values of $w(k)$. As activation function we will use the sigmoid function because of its nice derivative (needed later):$$\begin{align*}\sigma(x) &= \frac{1}{1 + \exp{(-x)}}\\\frac{d\sigma}{dx}(x) &= \sigma(x) \cdot (1 - \sigma(x))\end{align*}$$ Back-Propagation / AdaptationIn order to learn something the perceptron needs to slowly adjust its weights.
Each weight $w_{j,i}$ in layer $k$ is adjusted by a value $\Delta w_{j,i}$ given a learning rate $\epsilon$, the previous layer's output (or, for the first hidden layer, the network's input) $o_{k-1,i}(x)$ and the layer's error signals $\delta(k)$ (which will be calculated by the MultilayerPerceptron):$$\Delta w_{j,i}(k) = \epsilon\, \delta_j(k) o_{k-1,i}(x)$$
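The sigmoid and its derivative given above can be checked numerically with the standard library, comparing a central finite difference against $\sigma(x)(1-\sigma(x))$:

```python
import math

def sigmoid(x):
    """Logistic function sigma(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def dsigmoid(x):
    """Analytic derivative dsigma/dx = sigma(x) * (1 - sigma(x))."""
    s = sigmoid(x)
    return s * (1.0 - s)

# check the analytic derivative against a central finite difference at x = 1
h = 1e-6
numeric = (sigmoid(1.0 + h) - sigmoid(1.0 - h)) / (2.0 * h)
```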
import numpy as np # Activation function σ. # We use scipy's builtin because it fixes some NaN problems for us. # sigmoid = lambda x: 1 / (1 + np.exp(-x)) from scipy.special import expit as sigmoid class Perceptron: """Single neuron handling its own weights and bias.""" def __init__(self, dim_in, act_func=sigmoid): """Initialize a new neuron with its weights and bias. Args: dim_in (int): Dimensionality of the data coming into this perceptron. In a network of perceptrons this basically represents the number of neurons in the layer before this neuron's layer. Used for generating the perceptron's weights vector, which not only includes one weight per input but also an additional bias weight. act_fun (function): Function to apply on activation. """ self.act_func = act_func # Set self.weights ### BEGIN SOLUTION self.weights = np.random.normal(size=dim_in + 1) ### END SOLUTION def activate(self, x): """Activate this neuron with a specific input. Calculate the weighted sum of inputs and apply the activation function. Args: x (ndarray): Vector of input values. Returns: float: A real number representing the perceptron's activation after calculating the weighted sum of inputs and applying the perceptron's activation function. """ # Return the activation value ### BEGIN SOLUTION return self.act_func(self.weights @ np.append(1, x)) ### END SOLUTION def adapt(self, x, delta, rate=0.03): """Adapt this neuron's weights by a specific delta. Args: x (ndarray): Vector of input values. delta (float): Weight adaptation delta value. rate (float): Learning rate. """ # Adapt self.weights according to the update rule ### BEGIN SOLUTION self.weights += rate * delta * np.append(1, x) ### END SOLUTION _p = Perceptron(2) assert _p.weights.size == 3, "Should have a weight per input and a bias." assert isinstance(_p.activate([2, 1]), float), "Should activate as scalar." assert -1 <= _p.activate([100, 100]) <= 1, "Should activate using sigmoid." 
_p.weights = np.array([.5, .5, .5]) _p.adapt(np.array([2, 3]), np.array(.5)) assert np.allclose(_p.weights, [0.515, 0.53, 0.545]), \ "Should update weights correctly."
_____no_output_____
MIT
sheet_08/sheet_08_machine-learning_solution.ipynb
ArielMant0/ml2018
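The update rule implemented by `adapt` — each weight moves by the learning rate times the delta times the corresponding input, with a constant 1 prepended for the bias — can be replayed without numpy on the same numbers as the cell's final assertion:

```python
def adapt_weights(weights, x, delta, rate=0.03):
    """Return weights updated by rate * delta * x_i, with x extended by a leading 1 (bias input)."""
    extended = [1.0] + list(x)
    return [w + rate * delta * xi for w, xi in zip(weights, extended)]

new_w = adapt_weights([0.5, 0.5, 0.5], [2, 3], delta=0.5)
```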
PerceptronLayerA `PerceptronLayer` is a combination of multiple `Perceptron` instances. It is mainly concerned with passing input and delta values to its individual neurons. There is no math to be done here! InitializationWhen initializing a `PerceptronLayer` (like this: `layer = PerceptronLayer(5, 3)`), the `__init__` function is called. It creates a list of `Perceptron`s: For each output value there must be one perceptron. Each of those perceptrons receives the same inputs and the same activation function as the perceptron layer. ActivationDuring the activation step, the perceptron layer activates each of its perceptrons. These values will not only be needed for forward propagation but will also be needed for implementing backpropagation in the `MultilayerPerceptron` (coming up next). AdaptationTo update its perceptrons, the perceptron layer adapts each one with the corresponding delta. For this purpose, the MLP passes a list of input values and a list of deltas to the adaptation function. The inputs are passed to *all* perceptrons. The list of deltas is exactly as long as the list of perceptrons: The first delta is for the first perceptron, the second for the second, etc. The delta values themselves will be computed by the MLP.
class PerceptronLayer:
    """Layer of multiple neurons.

    Attributes:
        perceptrons (list): List of perceptron instances in the layer.
    """

    def __init__(self, dim_in, dim_out, act_func=sigmoid):
        """Initialize the layer as a list of individual neurons.

        A layer contains as many neurons as it has outputs, each neuron
        has as many input weights (+ bias) as the layer has inputs.

        Args:
            dim_in (int): Dimensionality of the expected input values,
                also the size of the previous layer of a neural network.
            dim_out (int): Dimensionality of the output, also the number
                of neurons in this layer and the input dimension of the
                next layer.
            act_func (function): Activation function to use in each
                perceptron of this layer.
        """
        # Set self.perceptrons to a list of Perceptrons
        ### BEGIN SOLUTION
        self.perceptrons = [Perceptron(dim_in, act_func)
                            for _ in range(dim_out)]
        ### END SOLUTION

    def activate(self, x):
        """Activate this layer by activating each individual neuron.

        Args:
            x (ndarray): Vector of input values.

        Returns:
            ndarray: Vector of output values which can be used as input
                to another PerceptronLayer instance.
        """
        # Return the vector of activation values
        ### BEGIN SOLUTION
        return np.array([p.activate(x) for p in self.perceptrons])
        ### END SOLUTION

    def adapt(self, x, deltas, rate=0.03):
        """Adapt this layer by adapting each individual neuron.

        Args:
            x (ndarray): Vector of input values.
            deltas (ndarray): Vector of delta values.
            rate (float): Learning rate.
        """
        # Update all the perceptrons in this layer
        ### BEGIN SOLUTION
        for perceptron, delta in zip(self.perceptrons, deltas):
            perceptron.adapt(x, delta, rate)
        ### END SOLUTION

    @property
    def weight_matrix(self):
        """Helper property for getting this layer's weight matrix.

        Returns:
            ndarray: All the weights for this perceptron layer.
        """
        return np.asarray([p.weights for p in self.perceptrons]).T


_l = PerceptronLayer(3, 2)
assert len(_l.perceptrons) == 2, "Should have as many perceptrons as outputs."
assert len(_l.activate([1,2,3])) == 2, "Should provide correct output amount."
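Conceptually, such a layer computes a single matrix-vector product: stacking each neuron's weight row (bias first) into a matrix and applying the activation elementwise yields one output per neuron. A standalone sketch with made-up weights (it does not use the class above):

```python
import numpy as np
from scipy.special import expit as sigmoid

rng = np.random.default_rng(1)

# Made-up layer of 2 neurons over 3 inputs: one weight row per neuron,
# with the bias weight in the first column.
W = rng.normal(size=(2, 4))
x = np.array([1.0, 2.0, 3.0])

out = sigmoid(W @ np.append(1, x))  # one activation per neuron
print(out.shape)  # (2,)
```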
_____no_output_____
MIT
sheet_08/sheet_08_machine-learning_solution.ipynb
ArielMant0/ml2018
## MultilayerPerceptron

### Forward-Propagation / Activation

Propagate the input value $x$ through each layer of the network, employing the output of the previous layer as input to the next layer.

### Back-Propagation / Adaptation

This is the most complex step of the whole task. Split it into three separate parts:

1. ***Forward propagation***: Compute the outputs for each individual layer – similar to the forward-propagation step above, but we need to keep track of the intermediate results to compute each layer's errors. That means: store the input as the first "output", then activate each of the network's layers using the *previous* layer's output and store each layer's activation result.
2. ***Backward propagation***: Calculate each layer's error signals $\delta_i(k)$. The important part here is to do so from the last to the first layer, because each layer's error depends on the error from its following layer. Note: the first factor of this formula is the activation function's derivative $\frac{d\sigma}{dx}(k)$.

   $$\delta_i(k) = o_i(k)\ (1 - o_i(k))\ \sum\limits_{j=1}^{N(k+1)} w_{ji}(k+1,k)\delta_j(k+1)$$

   (*Hint*: For the last layer (i.e. the first you calculate the $\delta$ for) the sum in the formula above is the total network error. For all preceding layers $k$ you need to recalculate `e` using the $\delta$ and weights of layer $k+1$. We already implemented a helper property for you to access the weights of a specific layer. Check the `PerceptronLayer` if you did not find it yet.)
3. ***Adaptation***: Call each layer's adaptation function with its input, its designated error signals and the given learning rate.

Hint: The last two steps can be performed in a single loop if you wish, but make sure to use the non-updated weights of a layer when calculating the error signals of the layer before it!
class MultilayerPerceptron:
    """Network of perceptrons, also a set of multiple perceptron layers.

    Attributes:
        layers (list): List of perceptron layers in the network.
    """

    def __init__(self, *layers):
        """Initialize a new network, made up of individual PerceptronLayers.

        Args:
            *layers: Arbitrarily many PerceptronLayer instances.
        """
        self.layers = layers

    def activate(self, x):
        """Activate network and return the last layer's output.

        Args:
            x (ndarray): Vector of input values.

        Returns:
            (ndarray): Vector of output values from the last layer of the
                network after propagating forward through the network.
        """
        # Propagate activation through the network
        # and return output for last layer
        ### BEGIN SOLUTION
        for layer in self.layers:
            x = layer.activate(x)
        return x
        ### END SOLUTION

    def adapt(self, x, t, rate=0.03):
        """Adapt the whole network given an input and expected output.

        Args:
            x (ndarray): Vector of input values.
            t (ndarray): Vector of target values (expected outputs).
            rate (float): Learning rate.
        """
        # Activate each layer and collect intermediate outputs.
        ### BEGIN SOLUTION
        outputs = [x]
        for layer in self.layers:
            outputs.append(layer.activate(outputs[-1]))
        ### END SOLUTION

        # Calculate error 'e' between t and network output.
        ### BEGIN SOLUTION
        e = t - outputs[-1]
        ### END SOLUTION

        # Backpropagate error through the network computing
        # intermediate delta and adapting each layer.
        ### BEGIN SOLUTION
        for k, layer in reversed(list(enumerate(self.layers, 1))):
            layer_input = outputs[k - 1]
            layer_output = outputs[k]
            delta = (layer_output * (1 - layer_output)) * e
            e = (layer.weight_matrix @ delta)[1:]
            layer.adapt(layer_input, delta, rate)
        ### END SOLUTION
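The delta rule implemented above is gradient descent on the squared error. As a sanity check, the backpropagated $\delta$ can be compared against a numerical gradient; the following standalone sketch (its network sizes and inputs are made up, and it does not use the classes from this notebook) does this for one hidden weight of a tiny 2-2-1 network:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))  # hidden layer: 2 neurons over [bias, x1, x2]
W2 = rng.normal(size=(1, 3))  # output layer: 1 neuron over [bias, h1, h2]
x = np.array([0.4, -0.7])
t = np.array([1.0])

def forward(W1, W2):
    h = sigmoid(W1 @ np.append(1, x))
    o = sigmoid(W2 @ np.append(1, h))
    return h, o

def loss(W1, W2):
    return 0.5 * np.sum((t - forward(W1, W2)[1]) ** 2)

# Error signals following the convention above: delta = o * (1 - o) * e,
# where e is t - o for the output layer and the backpropagated weighted
# sum (bias weight excluded) for the hidden layer.
h, o = forward(W1, W2)
delta_o = (t - o) * o * (1 - o)
delta_h = h * (1 - h) * (W2[:, 1:].T @ delta_o)

# dL/dW1[i, j] should equal -delta_h[i] * [1, x][j]; compare it against
# a central finite difference.
i, j, eps = 0, 1, 1e-6
W1p, W1m = W1.copy(), W1.copy()
W1p[i, j] += eps
W1m[i, j] -= eps
numeric = (loss(W1p, W2) - loss(W1m, W2)) / (2 * eps)
analytic = -delta_h[i] * np.append(1, x)[j]
print(abs(numeric - analytic))  # tiny: the two gradients agree
```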
## Classification Problem Definition

Before we start, we need a problem to solve. In the following cell we first generate some two-dimensional data (matching the network's input dimension) and label each point according to a binary classification: if a point is close to the center (distance from the origin < 2.5), it belongs to one class; if it is further away from the center, it belongs to the other class.

Below we visualize the data set.
def uniform(a, b, n=1):
    """Returns n floats uniformly distributed between a and b."""
    return (b - a) * np.random.random_sample(n) + a


n = 1000
radius = 5
r = np.append(uniform(0, radius * .5, n // 2),
              uniform(radius * .7, radius, n // 2))
angle = uniform(0, 2 * np.pi, n)
x = r * np.sin(angle) + uniform(-radius, radius, n)
y = r * np.cos(angle) + uniform(-radius, radius, n)

inputs = np.vstack((x, y)).T
targets = np.less(np.linalg.norm(inputs, axis=1), radius * .5)

%matplotlib inline
import matplotlib.pyplot as plt

fig, ax = plt.subplots(num='Data')
ax.set(title='Labeled Data')
ax.scatter(*inputs.T, 2, c=targets, cmap='RdYlBu')
plt.show()
## Model Design

The following cell already contains a simple model with a single layer. Play around with some different configurations!
MLP = MultilayerPerceptron(
    PerceptronLayer(2, 1),
)

# Adapt this MLP
### BEGIN SOLUTION
MLP = MultilayerPerceptron(
    PerceptronLayer(2, 4),
    PerceptronLayer(4, 2),
    PerceptronLayer(2, 1),
)
### END SOLUTION
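What is still missing is the training loop: repeatedly adapting the network on samples and watching the error drop. As a self-contained sketch (a vectorized batch equivalent of the classes above, with made-up layer sizes, learning rate and toy data rather than this notebook's objects), one such loop could look like this:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(42)

# Made-up toy data in the spirit of the data cell above: points inside
# the unit circle are one class, everything else the other.
X = rng.uniform(-2, 2, size=(400, 2))
y = (np.linalg.norm(X, axis=1) < 1).astype(float)[:, None]

# 2 -> 8 -> 1 network; the bias is handled by a leading column of ones.
W1 = rng.normal(size=(8, 3))
W2 = rng.normal(size=(1, 9))

def forward(X):
    H = sigmoid(np.c_[np.ones(len(X)), X] @ W1.T)
    O = sigmoid(np.c_[np.ones(len(X)), H] @ W2.T)
    return H, O

mse_before = np.mean((y - forward(X)[1]) ** 2)

rate = 0.5
for epoch in range(500):
    H, O = forward(X)
    delta_o = (y - O) * O * (1 - O)                # output error signals
    delta_h = H * (1 - H) * (delta_o @ W2[:, 1:])  # backpropagated signals
    W2 += rate * delta_o.T @ np.c_[np.ones(len(X)), H] / len(X)
    W1 += rate * delta_h.T @ np.c_[np.ones(len(X)), X] / len(X)

mse_after = np.mean((y - forward(X)[1]) ** 2)
print(f"MSE before: {mse_before:.3f}, after: {mse_after:.3f}")
```

With the classes above, the same idea is a loop over `zip(inputs, targets)` calling `MLP.adapt(x, t)` for each sample and epoch.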