# Analyzing data with Pandas
First a little setup: import the pandas library as ```pd```
```
import pandas as pd
```
Set some helpful display options for the notebook in this cell.
```
%matplotlib inline
pd.set_option("display.max_columns", 150)
pd.set_option("display.max_colwidth", 40)
pd.options.display.float_format = '{:,.2f}'.format
```
open and read in the `Master.csv` and `Salaries.csv` tables from the ```data/2017/``` directory
```
master = pd.read_csv('../project3/data/2017/Master.csv') # File with player details
salary = pd.read_csv('../project3/data/2017/Salaries.csv') #File with baseball players' salaries
```
check what type each object is with `type(table_name)`. You can also use the ```.info()``` method to explore the data's structure.
```
master.info()
salary.info()
```
print out sample data for each table with `table.head()`<br>
see additional options by pressing `tab` after you type the `head()` method
```
master.head()
salary.head()
```
Now we join the two CSVs using `pd.merge`.<br>
We want to keep all the players' names in the `master` data set<br>
even if their salary is missing from the `salary` data set.<br>
We can always filter the NaN values out later
```
joined = pd.merge(left=master, right=salary, how="left")
```
see what columns the `joined` table contains
```
joined.info()
```
check if every player has exactly one salary assigned. The easiest way is to subtract the length of the `joined` table from the length of the `master` table
```
len(master) - len(joined)
```
Something went wrong. There are now more players in the `joined` data set than in the `master` data set.<br>
Some entries probably got duplicated<br>
Let's check if we have duplicate `playerIDs` by using `.value_counts()`
```
joined["playerID"].value_counts()
```
Yep, we do.<br>
Let's filter for an arbitrary player to see why there is duplication
```
joined[joined["playerID"] == "moyerja01"]
```
As we can see, there is now a salary entry for each year of the player's career.<br>
We only want to keep the most recent salary, though.<br>
We therefore need to 'deduplicate' the data set.
But first, let's make sure the newest year comes last by sorting on `playerID` and `yearID`
```
joined = joined.sort_values(["playerID","yearID"])
```
Now we deduplicate
```
deduplicated = joined.drop_duplicates("playerID", keep="last")
```
And let's do the check again
```
len(master) - len(deduplicated)
```
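As an aside, the one-to-many surprise above could have been caught at merge time. The sketch below uses tiny invented frames (the names mirror the notebook's tables, but the data is made up) to show pandas' `indicator` and `validate` arguments:

```python
import pandas as pd

# tiny stand-in frames; the real notebook reads Master.csv and Salaries.csv
master = pd.DataFrame({"playerID": ["aardsda01", "moyerja01"],
                       "nameLast": ["Aardsma", "Moyer"]})
salary = pd.DataFrame({"playerID": ["moyerja01", "moyerja01"],
                       "yearID": [2009, 2010],
                       "salary": [6_500_000, 8_000_000]})

# indicator=True adds a _merge column recording where each row came from
joined = pd.merge(master, salary, how="left", indicator=True)
print(joined["_merge"].value_counts())

# validate raises immediately if the merge is not the shape you expect
try:
    pd.merge(master, salary, how="left", validate="one_to_one")
except pd.errors.MergeError as err:
    print("merge is not one-to-one:", err)
```

With `validate="one_to_one"`, the duplicate `playerID` rows in the salary table raise an error right away instead of silently inflating the result.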
Now we can get into the interesting part: analysis!
## What is the average (mean, median, max, min) salary?
```
deduplicated["salary"].describe()
```
## Who makes the most money?
```
max_salary = deduplicated["salary"].max()
deduplicated[deduplicated["salary"] == max_salary]
```
## What are the most common baseball players salaries?
Draw a histogram. <br>
```
deduplicated.hist("salary")
```
We can do the same with the column `yearID` to see how recent our data is.<br>
We have 30 years in our data set, so we need to do some minor tweaking
```
deduplicated.hist("yearID", bins=30)
```
## Who are the top 10% highest-paid players?
calculate the 90th percentile cutoff
```
top_10_p = deduplicated["salary"].quantile(q=0.9)
top_10_p
```
filter for players that make at least the cutoff amount
```
best_paid = deduplicated[deduplicated["salary"] >= top_10_p]
best_paid
```
use the `nlargest` to see the top 10 best paid players
```
best_paid_top_10 = best_paid.nlargest(10, "salary")
best_paid_top_10
```
draw a chart
```
best_paid_top_10.plot(kind="barh", x="nameLast", y="salary")
```
save the data
```
best_paid.to_csv('highest-paid.csv', index=False)
```
---
<div>
<img src="https://drive.google.com/uc?export=view&id=1vK33e_EqaHgBHcbRV_m38hx6IkG0blK_" width="350"/>
</div>
#**Artificial Intelligence - MSc**
##CS6462 - PROBABILISTIC AND EXPLAINABLE AI
###CS6462_Etivity_1a
###Author: Enrique Naredo
##Rolling a Die
The code in the cells is only a hint; you must verify its correct implementation.
Lab_1.1
###Imports
Add any library you require.
```
# Here your code
# one possible set of imports for this etivity
import random
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
```
###Random generator
Write code to get one integer from 1 to 6, randomly chosen, to simulate a die.
```
# Here your code
def DieRoller():
    # one integer from 1 to 6, uniformly at random
    # (assumes `random` was imported in the Imports cell)
    number = random.randint(1, 6)
    return number
```
### Roll a die N times
Write code to roll a six-sided die N times.
```
# Here your code
N = 10000000
face = [0] * N
frequency = [0] * 7  # index 0 unused; faces are 1..6
for i in range(N):
    # use your previous function
    face[i] = DieRoller()
    # increment counter
    frequency[face[i]] += 1
```
###Print frequency
Display frequency for each face.
```
# Here your code
print('Face', "Frequency")
for f in range(1, 7):
    print(f, frequency[f])
```
### Plot a histogram
```
# Here your code
# plot the individual rolls (not the per-face counts) to get a histogram
sns.displot(face, kde=False)
plt.title("Histogram of simulated dice rolls")
plt.xlim([0, 7])
```
### Dice rolling simulator
Use the nicer Dice rolling simulator from the Lab_1.1.
* Do not use the while loop (code line 63), because you already have a random generator.
```
# Here your code
# string var
x = "y"
# dictionary mapping each face to its ASCII art
face = {
    # case 1
    1: (
        "┌─────────┐\n"
        "│         │\n"
        "│    ●    │\n"
        "│         │\n"
        "└─────────┘\n"
    ),
    # ... cases 2 to 6 follow the same pattern (omitted here) ...
}
```
###Show first events
Show the first 10 events using the nicer Dice rolling simulator from the Lab_1.1
```
# Here your code
# assumes DieRoller() and the `face` art dictionary (all six cases) are defined above
for _ in range(10):
    number = DieRoller()
    print(face[number])
```
###Expected value
**Expected value of a dice roll.**
The expected value of a dice roll is
$$ \sum_{i=1}^6 i \times \frac{1}{6} = 3.5 $$
That means that if we toss a dice a large number of times, the mean value should converge to 3.5.
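The arithmetic above can be checked directly in one line:

```python
# expected value of a fair die: (1 + 2 + 3 + 4 + 5 + 6) / 6
ev = sum(range(1, 7)) / 6
print(ev)  # 3.5
```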
```
# Here your code
# N could be the same as in the previous cells; a smaller example value here
N = 10000
roll = np.zeros(N, dtype=int)
expectation = np.zeros(N)
# you could reuse your previous results (face[i]) instead of re-rolling
for i in range(1, N):
    roll[i] = DieRoller()
    expectation[i] = np.mean(roll[1:i + 1])
plt.plot(expectation)
plt.title("Expectation of a dice roll")
plt.show()
```
##Card Probabilities
```
# Here your code
import random
# Here your code
# define the number of events (simulated experiments); example value
M = 100000
```
###Deck of Cards
Use the code from the Lab_2.2 to run 4 experiments related to the questions.
* Address the questions using the code from the Lab_2.2
* Add code line to save the frequency of the event for each question.
* In the last cells add a code to print the frequency of the events for each question.
* Finally, plot a histogram with the results.

Generate a Deck of Cards using the code from the Lab_2.2
```
# Here your code
# make a deck of cards as (rank, suit) pairs
ranks = ['2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K', 'A']
suits = ['hearts', 'diamonds', 'clubs', 'spades']
deck = [(rank, suit) for rank in ranks for suit in suits]
```
###Question 1
What is the probability that, when two cards are drawn from a deck of cards without replacement, both of them will be Aces?
```
# Here your code
```
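One possible Monte Carlo sketch for this question (an assumption about the intended approach; the Lab_2.2 deck code may differ) draws two cards without replacement many times and compares the estimate against the exact answer 4/52 · 3/51 = 1/221:

```python
import random

random.seed(0)
# a simple deck representation: only the rank matters here, so four copies of each rank
deck = [rank for rank in ['2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K', 'A']
        for _ in range(4)]

M = 100_000
hits = 0
for _ in range(M):
    a, b = random.sample(deck, 2)  # two cards, without replacement
    if a == 'A' and b == 'A':
        hits += 1

print(hits / M)          # simulated probability
print(4 / 52 * 3 / 51)   # exact: 1/221 ≈ 0.004525
```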
###Question 2
What is the probability of getting exactly two Aces in a 5-card hand drawn without replacement?
```
# Here your code
```
###Question 3
What is the probability of being dealt a flush (5 cards of all the same suit) from the first 5 cards in a deck?
```
# Here your code
```
###Question 4
What is the probability of being dealt a royal flush from the first 5 cards in a deck?
```
# Here your code
```
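All four questions also have closed-form answers that are useful for sanity-checking the simulations. A sketch using `math.comb` to count combinations (the flush count below includes straight and royal flushes, since the question only asks for five cards of one suit):

```python
from math import comb

hands = comb(52, 5)                                    # all 5-card hands
p_two_aces_2cards = comb(4, 2) / comb(52, 2)           # Q1: 1/221
p_two_aces_5cards = comb(4, 2) * comb(48, 3) / hands   # Q2: exactly two aces
p_flush = 4 * comb(13, 5) / hands                      # Q3: five cards, one suit
p_royal_flush = 4 / hands                              # Q4: one royal flush per suit

print(p_two_aces_2cards, p_two_aces_5cards, p_flush, p_royal_flush)
```

The royal flush probability (about 1.5 in a million) is far too small to estimate reliably with a modest Monte Carlo run, which is exactly when an analytic cross-check earns its keep.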
###Print frequency
Display the event frequency for each question.
```
# Here your code
print('Questions', "Frequency")
for :
print()
```
### Plot a histogram
```
# Here your code
sns.displot(frequency, kde = False)
plt.title("Histogram of Questions")
plt.xlim([0,7])
```
---
```
import requests
from matplotlib import pyplot as plt
from bs4 import BeautifulSoup
def gasoline_price():
    """Gets current gasoline price in Swedish Krona"""
    try:
        page = requests.get('https://www.globalpetrolprices.com/Sweden/gasoline_prices/')
        soup = BeautifulSoup(page.content, 'html.parser')
        body = soup.body
        table = body.find('div', {'id': 'contentHolder'}).table
        entries = table.tbody.find_all('tr')
        # Scraped cost in liters
        cost_liters = [float(entry.find_all('td')[0].text) for entry in entries]
        return cost_liters[0]
    except Exception:
        return None

def electricity_price():
    """Gets current electricity price in Swedish Krona"""
    try:
        page = requests.get('https://www.vattenfall.se/elavtal/elmarknaden/elmarknaden-just-nu/')
        soup = BeautifulSoup(page.content, 'html.parser')
        body = soup.body
        tbody = body.find('table').tbody
        entries = tbody.find_all('tr')
        data = entries[1:]
        # Scraped values
        prices_2022 = [datum.find_all('td')[1].text for datum in data]
        prices_as_float = [float(price.split()[0].replace(',', '.')) for price in prices_2022]
        max_price_sek = max(prices_as_float) / 100
        return max_price_sek
    except Exception:
        return None

def gasoline_cost_sek(dist, sekpl=20.0, kmpl=9.4):
    """Gets cost of commute via car in Swedish Krona.
    Inputs:
        dist: distance in kilometers (numeric)
        sekpl: Swedish Krona (SEK) per liter (L). Obtained from gasoline_price() function.
        kmpl: Kilometers (km) per liter (L). (Fuel efficiency)
    kmpl estimation from:
    https://www.bts.gov/content/average-fuel-efficiency-us-passenger-cars-and-light-trucks
    """
    return sekpl * dist / kmpl

def electricity_cost_sek(dist, sekpkwh=.85, kmpkwh=100):
    """Gets cost of commute via ebike in Swedish Krona.
    Inputs:
        dist: distance in kilometers (numeric)
        sekpkwh: Swedish Krona (SEK) per kilowatt-hour (kWh). Obtained from electricity_price() function.
        kmpkwh: Kilometers (km) per kilowatt-hour (kWh).
    ebikes: 80-100 kilometers per kWh?
    https://www.ebikejourney.com/ebike/
    """
    return sekpkwh * dist / kmpkwh

def circle_plot(dist, gas_price, elec_price, kmpl=9.4, kmpkwh=100):
    """Generates a plot where the area of each circle is proportional to the cost of the commute."""
    gas_cost = gasoline_cost_sek(dist, gas_price, kmpl)
    elec_cost = electricity_cost_sek(dist, elec_price, kmpkwh)
    radius_g = gas_cost**.5
    radius_e = elec_cost**.5
    circle_g = plt.Circle((radius_g, radius_g), radius_g, color='r', label='gasoline cost', alpha=0.8)
    circle_e = plt.Circle((0.7*radius_g, 0.7*radius_g), radius_e, color='g', label='electricity cost', alpha=0.8)
    fig, ax = plt.subplots()
    ax.set_xlim([0, 2.02*radius_g])
    ax.set_ylim([0, 2.05*radius_g])
    ax.add_patch(circle_g)
    ax.add_patch(circle_e)
    ax.axis('off')
    ax.legend()
    plt.savefig('circleplot.png', transparent=True)
elec_price = electricity_price() or 0.85
gas_price = gasoline_price() or 20.0
circle_plot(50, gas_price, elec_price)
```
---
## Face and Facial Keypoint detection
After you've trained a neural network to detect facial keypoints, you can then apply this network to *any* image that includes faces. The neural network expects a Tensor of a certain size as input and, so, to detect any face, you'll first have to do some pre-processing.
1. Detect all the faces in an image using a face detector (we'll be using a Haar Cascade detector in this notebook).
2. Pre-process those face images so that they are grayscale, and transformed to a Tensor of the input size that your net expects. This step will be similar to the `data_transform` you created and applied in Notebook 2, whose job was to rescale, normalize, and turn any image into a Tensor to be accepted as input to your CNN.
3. Use your trained model to detect facial keypoints on the image.
---
In the next python cell we load in required libraries for this section of the project.
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
```
#### Select an image
Select an image to perform facial keypoint detection on; you can select any image of faces in the `images/` directory.
```
import cv2
# load in color image for face detection
image = cv2.imread('images/obamas.jpg')
# switch red and blue color channels
# --> by default OpenCV assumes BLUE comes first, not RED as in many images
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# plot the image
fig = plt.figure(figsize=(9,9))
plt.imshow(image)
```
## Detect all faces in an image
Next, you'll use one of OpenCV's pre-trained Haar Cascade classifiers, all of which can be found in the `detector_architectures/` directory, to find any faces in your selected image.
In the code below, we loop over each face in the original image and draw a red square on each face (in a copy of the original image, so as not to modify the original). You can even [add eye detections](https://docs.opencv.org/3.4.1/d7/d8b/tutorial_py_face_detection.html) as an *optional* exercise in using Haar detectors.
An example of face detection on a variety of images is shown below.
<img src='images/haar_cascade_ex.png' width=80% height=80%/>
```
# load in a haar cascade classifier for detecting frontal faces
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')
# run the detector
# the output here is an array of detections; the corners of each detection box
# if necessary, modify these parameters until you successfully identify every face in a given image
faces = face_cascade.detectMultiScale(image, 1.2, 2)
# make a copy of the original image to plot detections on
image_with_detections = image.copy()
# loop over the detected faces, mark the image where each face is found
for (x,y,w,h) in faces:
    # draw a rectangle around each detected face
    # you may also need to change the width of the rectangle drawn depending on image resolution
    cv2.rectangle(image_with_detections, (x, y), (x+w, y+h), (255, 0, 0), 3)
fig = plt.figure(figsize=(9,9))
plt.imshow(image_with_detections)
```
## Loading in a trained model
Once you have an image to work with (and, again, you can select any image of faces in the `images/` directory), the next step is to pre-process that image and feed it into your CNN facial keypoint detector.
First, load your best model by its filename.
```
import torch
from models import Net
net = Net().double()
## TODO: load the best saved model parameters (by your path name)
## You'll need to un-comment the line below and add the correct name for *your* saved model
net.load_state_dict(torch.load('keypoints_model_drop.pt'))
## print out your net and prepare it for testing (uncomment the line below)
net.eval()
```
## Keypoint detection
Now we'll loop over each detected face in the image (again!), only this time you'll transform those faces into Tensors that your CNN can accept as input images.
### TODO: Transform each detected face into an input Tensor
You'll need to perform the following steps for each detected face:
1. Convert the face from RGB to grayscale
2. Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255]
3. Rescale the detected face to be the expected square size for your CNN (224x224, suggested)
4. Reshape the numpy image into a torch image.
You may find it useful to consult the transformation code in `data_load.py` to help you perform these processing steps.
### TODO: Detect and display the predicted keypoints
After each face has been appropriately converted into an input Tensor for your network, you'll wrap that Tensor in a Variable() and can apply your `net` to each face. The output should be the predicted facial keypoints. These keypoints will need to be "un-normalized" for display, and you may find it helpful to write a helper function like `show_keypoints`. You should end up with an image like the following, with facial keypoints that closely match the facial features on each individual face:
<img src='images/michelle_detected.png' width=30% height=30%/>
```
from torchvision import transforms, utils
from data_load import FacialKeypointsDataset
from data_load import Rescale, RandomCrop, Normalize, ToTensor
image_copy = np.copy(image)

# loop over the detected faces from your haar cascade
for (x,y,w,h) in faces:

    # Select the region of interest that is the face in the image
    roi = image_copy[y:y+h, x:x+w]
    plt.imshow(roi)

    ## TODO: Convert the face region from RGB to grayscale
    roi = cv2.cvtColor(roi, cv2.COLOR_RGB2GRAY)

    ## TODO: Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255]
    roi = roi/255.0

    ## TODO: Rescale the detected face to be the expected square size for your CNN (224x224, suggested)
    roi = cv2.resize(roi, (224, 224))  # note: was (244, 244), which contradicts the 224x224 input size above

    ## TODO: Reshape the numpy image shape (H x W x C) into a torch image shape (C x H x W)
    if len(roi.shape) == 2:
        roi = roi.reshape(roi.shape[0], roi.shape[1], 1)
    roi = roi.transpose((2, 0, 1))
    roi = torch.from_numpy(roi)
    print(roi.shape)

    ## TODO: Make facial keypoint predictions using your loaded, trained network
    ## perform a forward pass to get the predicted facial keypoints
    output_pts = net(roi[None, ...])
    print(output_pts)

    ## TODO: Display each detected face and the corresponding keypoints
    predicted_key_pts = output_pts.data
    predicted_key_pts = predicted_key_pts.numpy()
    # undo normalization of keypoints
    predicted_key_pts = (predicted_key_pts*50.0)+100
    # plot ground truth points for comparison, if they exist
    predicted_key_pts = predicted_key_pts.astype('float').reshape(-1, 2)
    print(predicted_key_pts)
    plt.imshow(image, cmap='gray')
    plt.scatter(predicted_key_pts[:, 0], predicted_key_pts[:, 1], s=20, marker='.', c='m')
```
---
# K-Naive Bayes
code by xbwei, adapted for use by daviscj & mathi2ma.
## Import and Prepare the Data
[pandas](https://pandas.pydata.org/) provides an excellent data reading and querying module, [dataframe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html), which allows you to import structured data and perform SQL-like queries. We also use the [mglearn](https://github.com/amueller/mglearn) package to help us visualize the data and models.
Here we imported some house price records from [Trulia](https://www.trulia.com/?cid=sem|google|tbw_br_nat_x_x_nat!53f9be4f|Trulia-Exact_352364665_22475209465_aud-278383240986:kwd-1967776155_260498918114_). For more about extracting data from Trulia, please check [my previous tutorial](https://www.youtube.com/watch?v=qB418v3k2vk).
We use the house type as the [dependent variable](https://en.wikipedia.org/wiki/Dependent_and_independent_variables) and the house ages and house prices as the [independent variables](https://en.wikipedia.org/wiki/Dependent_and_independent_variables).
```
import sklearn
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt
%matplotlib inline
import pandas
import numpy as np
import mglearn
from collections import Counter
df = pandas.read_excel('house_price_label.xlsx')
# combine multiple columns into a 2D array
# also convert the integer data to float data
X = np.column_stack((df.built_in.astype(float),df.price.astype(float)))
y = df.house_type
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size =0.3,stratify = y, random_state=0)
# for classification, make sure a stratify splitting method is selected
mglearn.discrete_scatter(X[:,0],X[:,1],y) # use mglearn to visualize data
plt.legend(y,loc='best')
plt.xlabel('house age')
plt.ylabel('house price')
plt.show()
```
## Classification
The [Naive Bayes](http://scikit-learn.org/stable/modules/naive_bayes.html) model is used to classify the house types based on the house ages and prices. Specifically, the [Gaussian Naive Bayes](http://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.GaussianNB.html#sklearn.naive_bayes.GaussianNB) is selected in this classification. We also calculate the [Accuracy](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression.score) and the [Kappa](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.cohen_kappa_score.html) score of our classification on the training and test data.
```
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import cohen_kappa_score
gnb = GaussianNB()
gnb.fit(X_train,y_train)
print("Training set accuracy: {:.2f}".format(gnb.score(X_train, y_train)))
print ("Training Kappa: {:.3f}".format(cohen_kappa_score(y_train,gnb.predict(X_train))))
print("Test set accuracy: {:.2f}".format(gnb.score(X_test, y_test)))
print ("Test Kappa: {:.3f}".format(cohen_kappa_score(y_test,gnb.predict(X_test))))
```
## Visualize the Result
We plot the predicted results of the training data and the test data.
```
fig, axes = plt.subplots(1, 2, figsize=(20, 6))
for ax, data in zip(axes, [X_train, X_test]):
    mglearn.discrete_scatter(data[:,0], data[:,1], gnb.predict(data), ax=ax)  # use mglearn to visualize data
    ax.set_title('Predicted House Type')
    ax.set_xlabel("house age")
    ax.set_ylabel("house price")
    ax.legend(loc='best')
```
We check the distribution of the independent variables for each house type.
```
df.groupby('house_type').hist(figsize=(14,2),column=['price','built_in'])
```
The [Gaussian Naive Bayes](http://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.GaussianNB.html#sklearn.naive_bayes.GaussianNB) assumes that each feature follows a normal distribution. Here we plot the [pdf](https://en.wikipedia.org/wiki/Probability_density_function) of each feature in the training data for each house type.
```
import scipy.stats
house_type = ['condo','land and lot','single-family','townhouse']
house_feature = ['built_in', 'price']
fig, axes = plt.subplots(4, 2, figsize=(12, 10))
for j in range(4):
    for i in range(2):
        mu = gnb.theta_[j, i]              # mean of each feature for each class
        sigma = np.sqrt(gnb.sigma_[j, i])  # std of each feature for each class
        x = np.linspace(mu - 3*sigma, mu + 3*sigma, 100)
        axes[j][i].plot(x, scipy.stats.norm.pdf(x, mu, sigma))
        axes[j][i].set_title(house_type[j])
        axes[j][i].set_xlabel(house_feature[i])
plt.subplots_adjust(hspace=0.5)
```
# Bernoulli Model
```
from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import cohen_kappa_score
gnb = BernoulliNB()
gnb.fit(X_train,y_train)
print("Training set accuracy: {:.2f}".format(gnb.score(X_train, y_train)))
print ("Training Kappa: {:.3f}".format(cohen_kappa_score(y_train,gnb.predict(X_train))))
print("Test set accuracy: {:.2f}".format(gnb.score(X_test, y_test)))
print ("Test Kappa: {:.3f}".format(cohen_kappa_score(y_test,gnb.predict(X_test))))
```
# Multinomial Model
```
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import cohen_kappa_score
gnb = MultinomialNB()
gnb.fit(X_train,y_train)
print("Training set accuracy: {:.2f}".format(gnb.score(X_train, y_train)))
print ("Training Kappa: {:.3f}".format(cohen_kappa_score(y_train,gnb.predict(X_train))))
print("Test set accuracy: {:.2f}".format(gnb.score(X_test, y_test)))
print ("Test Kappa: {:.3f}".format(cohen_kappa_score(y_test,gnb.predict(X_test))))
```
---
```
import numpy as np
import functools
from sklearn.base import BaseEstimator, ClusterMixin, TransformerMixin
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from clustering_by_classification import ClusterByClassifier
from sklearn import cluster, datasets, mixture
import matplotlib.pyplot as plt
import seaborn as sns
from itertools import cycle, islice
%matplotlib inline
np.random.seed(0)
# ============
# Generate datasets. We choose the size big enough to see the scalability
# of the algorithms, but not too big to avoid too long running times
# ============
n_samples = 5000
noisy_circles = datasets.make_circles(n_samples=n_samples, factor=.5,
noise=.05)
noisy_moons = datasets.make_moons(n_samples=n_samples, noise=.05)
blobs = datasets.make_blobs(n_samples=n_samples, random_state=8)
no_structure = np.random.rand(n_samples, 2), None
# Anisotropicly distributed data
random_state = 170
X, y = datasets.make_blobs(n_samples=n_samples, random_state=random_state)
transformation = [[0.6, -0.6], [-0.4, 0.8]]
X_aniso = np.dot(X, transformation)
aniso = (X_aniso, y)
# blobs with varied variances
varied = datasets.make_blobs(n_samples=n_samples,
cluster_std=[1.0, 2.5, 0.5],
random_state=random_state)
dataset_types = [
noisy_circles,
noisy_moons,
blobs,
no_structure,
varied
]
plt.figure(figsize=(9 * 2 + 3, 12.5))
plt.subplots_adjust(left=.02, right=.98, bottom=.001, top=.96, wspace=.05,
hspace=.01)
lr = LogisticRegression()
dt = DecisionTreeClassifier()
svm = SVC()
rf = RandomForestClassifier()
for i, dataset in enumerate(dataset_types):
    X = dataset[0]
    cl = ClusterByClassifier(rf,
                             n_clusters=5,
                             max_iters=50,
                             soft_clustering=True)
    y_pred = cl.fit_predict(X)
    plt.subplot(len(dataset_types), 1, i+1)
    colors = np.array(list(islice(cycle(['#377eb8', '#ff7f00', '#4daf4a',
                                         '#f781bf', '#a65628', '#984ea3',
                                         '#999999', '#e41a1c', '#dede00']),
                                  int(max(y_pred) + 1))))
    # add black color for outliers (if any)
    colors = np.append(colors, ["#000000"])
    plt.scatter(X[:, 0], X[:, 1], s=10, color=colors[y_pred])
    plt.xlim(-2.5, 2.5)
    plt.ylim(-2.5, 2.5)
    plt.xticks(())
    plt.yticks(())
plt.show()
```
---
```
library(caret, quiet=TRUE);
library(base64enc)
library(httr, quiet=TRUE)
```
# Build a Model
```
set.seed(1960)
create_model = function() {
    model <- train(Species ~ ., data = iris, method = "ctree2")
    return(model)
}
# dataset
model = create_model()
# pred <- predict(model, as.matrix(iris[, -5]) , type="prob")
pred_labels <- predict(model, as.matrix(iris[, -5]) , type="raw")
sum(pred_labels != iris$Species)/length(pred_labels)
```
# SQL Code Generation
```
test_ws_sql_gen = function(mod) {
    WS_URL = "https://sklearn2sql.herokuapp.com/model"
    WS_URL = "http://localhost:1888/model"
    model_serialized <- serialize(mod, NULL)
    b64_data = base64encode(model_serialized)
    data = list(Name = "caret_rpart_test_model", SerializedModel = b64_data, SQLDialect = "postgresql", Mode = "caret")
    r = POST(WS_URL, body = data, encode = "json")
    # print(r)
    content = content(r)
    # print(content)
    lSQL = content$model$SQLGenrationResult[[1]]$SQL  # content["model"]["SQLGenrationResult"][0]["SQL"]
    return(lSQL);
}
lModelSQL = test_ws_sql_gen(model)
cat(lModelSQL)
```
# Execute the SQL Code
```
library(RODBC)
conn = odbcConnect("pgsql", uid="db", pwd="db", case="nochange")
odbcSetAutoCommit(conn , autoCommit = TRUE)
dataset = iris[,-5]
df_sql = as.data.frame(dataset)
names(df_sql) = sprintf("Feature_%d",0:(ncol(df_sql)-1))
df_sql$KEY = seq.int(nrow(dataset))
sqlDrop(conn , "INPUT_DATA" , errors = FALSE)
sqlSave(conn, df_sql, tablename = "INPUT_DATA", verbose = FALSE)
head(df_sql)
# colnames(df_sql)
# odbcGetInfo(conn)
# sqlTables(conn)
df_sql_out = sqlQuery(conn, lModelSQL)
head(df_sql_out)
```
# R Caret Rpart Output
```
# pred_proba = predict(model, as.matrix(iris[,-5]), type = "prob")
df_r_out = data.frame(seq.int(nrow(dataset))) # (pred_proba)
names(df_r_out) = c("KEY") # sprintf("Proba_%s",model$levels)
df_r_out$KEY = seq.int(nrow(dataset))
df_r_out$Score_setosa = NA
df_r_out$Score_versicolor = NA
df_r_out$Score_virginica = NA
df_r_out$Proba_setosa = NA
df_r_out$Proba_versicolor = NA
df_r_out$Proba_virginica = NA
df_r_out$LogProba_setosa = log(df_r_out$Proba_setosa)
df_r_out$LogProba_versicolor = log(df_r_out$Proba_versicolor)
df_r_out$LogProba_virginica = log(df_r_out$Proba_virginica)
df_r_out$Decision = predict(model, as.matrix(iris[,-5]), type = "raw")
df_r_out$DecisionProba = NA
head(df_r_out)
```
# Compare R and SQL output
```
df_merge = merge(x = df_r_out, y = df_sql_out, by = "KEY", all = TRUE, suffixes = c("_1","_2"))
head(df_merge)
diffs_df = df_merge[df_merge$Decision_1 != df_merge$Decision_2,]
head(diffs_df)
stopifnot(nrow(diffs_df) == 0)
summary(df_r_out)
summary(df_sql_out)
```
---
<a href="https://colab.research.google.com/github/ferdouszislam/pytorch-practice/blob/main/cnn_cifar10.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
# device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
# hyper parameters
num_epochs = 20
batch_size = 64
learning_rate = 0.001
# load dataset
transform = transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
]
)
train_dataset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
test_dataset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_dataset,
batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset,
batch_size=batch_size, shuffle=False)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
# convolutional neural network
class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        # 5x5 conv kernel/filter
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=5)
        # 2x2 maxpooling
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        # 5x5 conv kernel/filter
        self.conv2 = nn.Conv2d(6, 16, 5)
        # 3x32x32 --conv1--> 6x28x28 --maxpool--> 6x14x14
        # 6x14x14 --conv2--> 16x10x10 --maxpool--> 16x5x5
        self.fc1 = nn.Linear(16*5*5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)
        self.relu = nn.ReLU()

    def forward(self, input):
        # conv1 -> relu -> maxpool
        output = self.pool(self.relu(self.conv1(input)))
        # conv2 -> relu -> maxpool
        output = self.pool(self.relu(self.conv2(output)))
        # flatten
        output = output.reshape(-1, 16*5*5)
        # -> fc1 -> relu
        output = self.relu(self.fc1(output))
        # -> fc2 -> relu
        output = self.relu(self.fc2(output))
        # fc3 (no activation at end)
        output = self.fc3(output)
        return output
model = ConvNet().to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
for epoch in range(num_epochs):
    avg_batch_loss = 0
    for step, (images, labels) in enumerate(train_loader):
        images = images.to(device)
        labels = labels.to(device)

        # forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # accumulate batch loss (.item() detaches it from the autograd graph)
        avg_batch_loss += loss.item()

    avg_batch_loss /= len(train_loader)
    print(f'epoch {epoch+1}/{num_epochs}, loss = {avg_batch_loss:.4f}')
PATH = './cnn.pth'
torch.save(model.state_dict(), PATH)
with torch.no_grad():
    n_correct = 0
    n_samples = 0
    for images, labels in test_loader:
        images = images.to(device)
        labels = labels.to(device)
        outputs = model(images)

        # max returns (value, index)
        _, predicted = torch.max(outputs, 1)
        n_samples += labels.size(0)
        n_correct += (predicted == labels).sum().item()

    acc = 100.0 * n_correct / n_samples
    print(f'Accuracy of the network: {acc}%')
```
---
# Python Data Types and Markdown
This tutorial covers the very basics of Python and Markdown.
> Because this file is saved on GitHub, you should feel free to change whatever you want in this file and even try to break it. Breaking code and then trying to fix it again is a fantastic way to learn how code works—AS LONG AS you have a saved checkpoint or backup. It's always good practice to save works in progress frequently and, if you achieve a significant milestone, perhaps copy the file before going forward in case you need to reset. If you break anything in this repository, simply re-download the files that you need and start over.
## Markdown
Jupyter Notebook allows writers to declare cells as "code" or "markdown." Markdown is very useful for commenting on code and is a common syntax for blogging. Cells default to "code." To change a cell to markdown, simply click the dropdown menu at the top center that reads, "Code," and change it to "Markdown."
With the cell now set to markdown, you can now write like most text editors. There are some codes that can change formatting, such as headers, italics, underline, etc. You can also provide hyperlinks and add images. You can browse and become familiar with markdown codes here: https://guides.github.com/pdfs/markdown-cheatsheet-online.pdf.
## Python
## Data Types
Python recognizes different simple and complex data types.
Starting with simple data types, we will see how Python can do math and work with "strings," or text values.
1. Integers
2. Strings
3. Floats
#### Integers
To run a cell, you can press Shift + Enter or press the Run button in the top menu.
```
1 + 1
```
Python can perform mathematical operations using basic notation (+, -, /, \*, and \*\* for exponents; note that ^ is bitwise XOR in Python, not exponentiation) and follows the order of operations.
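A few examples of the order of operations:

```python
2 + 3 * 4    # multiplication before addition: 14
(2 + 3) * 4  # parentheses first: 20
2 ** 3       # exponentiation: 8
10 / 4       # division always returns a float: 2.5
```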
#### Strings
A string is a value between single or double quotation marks (" " or ' '). Python recognizes strings with these quotation marks. To include quotations marks within the string, you can either escape the quotation mark with a slash (\") or use single quotation marks within double and vice-versa ("...' '...").
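For example, both lines below print the same sentence:

```python
print("She said, \"hello\"")  # escape the inner double quotes with a backslash
print('She said, "hello"')    # or use single quotes around the string instead
```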
```
"Concatenating" + " " + "Strings"
```
Python can also use + to concatenate string types. "Concatenate" is the common coding term for joining data (often strings) together: concatenating two or more strings produces one longer string.
#### Floats
```
0.2 + 2
```
"Floats" are numerical values with decimal points. Some texts include numbers within them, which can cause problems when data you think is a string actually contains integer or float values, leading to a TypeError when you try to operate on it.
We won't need to deal with floats for the most part with the libraries we will eventually be using. But, it's important to know what a "float" is.
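Python's built-in `float()` function converts such a numeric string into a number you can do math with (the values below are purely illustrative):

```python
price_text = "19.99"       # a float trapped inside a string
price = float(price_text)  # convert it before doing arithmetic
total = price + 5
print(total)
```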
### Brief Note on Errors
```
# Integers and Strings >>> Notice, too, this comment, which is ignored when running code.
# Put a '#' in front of text on a line to create a comment.
1 + "1"
```
The error here, TypeError, occurs when a function fails to work with different data types. Here, "1" is not a numerical value. Error reports in Python can be quite confusing. For many errors, you can debug them by first looking at the very bottom of the report for the error type.
"TypeError: unsupported operand type(s) for +: 'int' and 'str'" means that an error occurred around the plus sign operand, "+." More specifically, the error occurred because "+" cannot join or add an integer ('int') with a string ('str').
Sometimes, the report will also tell you where within the cell the error happened. The '----> 2' symbol in the left margin indicates that the error happened on the second line of the cell.
Decoding error reports can be maddening. One common experience for debugging is examining each line for missing punctuation or typo. Learning how to read error reports can be very frustrating at first, but gradually gets better. Copying and pasting the error line ("TypeError: unsupported operand type(s) for +: 'int' and 'str'") can be a productive starting point. But, ultimately, reading errors only gets better with time and practice.
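When you anticipate that an operation might fail, Python's `try`/`except` lets you catch the error instead of crashing — a small sketch using the same TypeError:

```python
try:
    1 + "1"  # raises a TypeError, as shown above
except TypeError as err:
    message = str(err)
    print("Caught:", message)
```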
## Defining Variables
Python allows coders to create their own variables, which stores different value types. To create a variable, create a name, in this case "x", and use the equal sign (=) to assign a value to the name.
```
x = 1
1 + x
x = "words"
"concatenating" + " " + x
x + 1
```
We get another TypeError because we overwrote "x," changing it from an integer to a string, and we cannot join strings and integers with +.
Python offers a way to change value types.
```
x = "1"
# 1 + x : This will also fail because x is still a string value. But,
1 + int(x)
```
Here, we use a Python function to change "x" temporarily to an integer data type, which allows us to add 1 + 1.
Python functions follow certain grammatical rules. That is, functions typically have a name, in the case above "int," followed by parentheses. The function int() acts on the variable "x."
A later notebook will cover functions in more detail, but, here, we can see how data types can be changed.
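The conversion also works in the other direction: `str()` turns a number into a string so it can be concatenated:

```python
year = 2017
label = "Season " + str(year)  # str() makes the integer concatenable
print(label)
```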
## Complex Data Types
There are more data types that are more complex. Although the following data types are more complex, they build off simpler types. The following types gather simpler value types into recognizable structures. There are more data types than these, but we will be focusing on the two most common types.
1. Lists
2. Dictionaries
#### Lists
A list is a collection of values that occur between square brackets ([ ]). As a data structure, lists provide different functionality than other data types.
```
my_list = [1, 2, 3, 4] # Each value in a list is separated by a comma.
my_list
```
Lists are also indexed, so I can access a single value.
```
my_list[0]
```
Notice that Python starts counting at zero. The value in the zero position of my_list is 1.
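Lists also support negative indexes, which count from the end, and slices, which return a smaller list. A quick sketch using the same values as my_list above:

```python
my_list = [1, 2, 3, 4]
last = my_list[-1]       # negative indexes count from the end: 4
first_two = my_list[:2]  # a slice returns a new, smaller list: [1, 2]
print(last, first_two)
```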
#### Dictionaries
A list provides a range of values (integers, strings, even lists within a list) that do not necessarily have any relation other than being in a list. A dictionary, on the other hand, is similar to a list except that it has keys and values. That is, it conveys a relationship between a "key" and that key's "value." A dictionary is a collection of information between curly brackets { }.
```
my_dictionary = {
'name': 'Bill',
'grade': 'B'
}
my_dictionary
```
While we will be using dataframes, a data structure we'll discuss later, dictionaries can be very good at structuring data.
It's worth noting as well, that lists and dictionaries can work within each other and become nested. For example, I can put my_list into my_dictionary as a value.
```
my_dictionary['list'] = my_list
my_dictionary
```
The first line in the cell above does two things:
1. Create a new key named 'list'
2. Assign a value, my_list, to that key
I can also change the value of 'list' using the same syntax.
```
my_dictionary['list'] = [5, 6, 7, 8]
my_dictionary
```
I can retrieve keys and values as well using default functions.
```
my_dictionary.keys()
```
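The companion methods .values() and .items() retrieve the values and the key/value pairs; a sketch with a fresh dictionary:

```python
grades = {'name': 'Bill', 'grade': 'B'}
keys = list(grades.keys())      # ['name', 'grade']
values = list(grades.values())  # ['Bill', 'B']
for key, value in grades.items():  # .items() yields each key/value pair
    print(key, '->', value)
```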
The data types and structures here are not exhaustive. There are other types that you will likely encounter. And, the data types you do encounter will likely be much larger and more complex than the ones here. But, this notebook covers the very basics of data types in Python.
Knowing how to access and change the fundamental data structures and types of Python will become essential for adapting code from the internet. Often, functions require specific data types and structures as input that will likely differ from how your dataset is constructed.
____
# Sandbox
To gain familiarity with data types, try the following exercises:
1. Create a variable that stores a string value.
2. Concatenate that variable with another string.
3. Create a short list of variables.
4. Make a dictionary with a key and set its value to the short list you just created.
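One possible solution, for checking your own attempt against:

```python
# 1. A variable that stores a string value
greeting = "Hello"
# 2. Concatenate that variable with another string
sentence = greeting + ", world"
# 3. A short list of values
short_list = [1, 2, 3]
# 4. A dictionary with a key whose value is the short list
notes = {'numbers': short_list}
print(sentence, notes)
```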
<a href="https://colab.research.google.com/github/cocoisland/DS-Unit-1-Sprint-4-Statistical-Tests-and-Experiments/blob/master/module2-sampling-confidence-intervals-and-hypothesis-testing/LS_DS_142_Sampling_Confidence_Intervals_and_Hypothesis_Testing.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Lambda School Data Science Module 142
## Sampling, Confidence Intervals, and Hypothesis Testing
## Prepare - examine other available hypothesis tests
If you had to pick a single hypothesis test for your toolbox, the t-test would probably be the best choice - but the good news is you don't have to pick just one! Here are some of the others to be aware of:
```
import numpy as np
from scipy.stats import chisquare # One-way chi square test
# Chi square can take any crosstab/table and test the independence of rows/cols
# The null hypothesis is that the rows/cols are independent -> low chi square
# rows/cols independent -> statistic near the center of the null distribution
#   - p-value larger than 0.05 (or 0.01)
#   - we fail to reject the null hypothesis: the pattern is plausibly chance
#   - e.g. a good overall rating cannot be assumed to carry over to a specific drug's reviews
#
# rows/cols dependent -> statistic far out in the tail of the null distribution
#   - p-value smaller than 0.05 (or 0.01)
#   - null hypothesis rejected: the relationship is unlikely to be chance
#   - e.g. the drug review ratings genuinely vary with the drug
#
# The alternative is that there is a dependence -> high chi square
# Be aware! Chi square does *not* tell you direction/causation
ind_obs = np.array([[1, 1], [2, 2]]).T
print(ind_obs)
print(chisquare(ind_obs, axis=None))
dep_obs = np.array([[16, 18, 16, 14, 12, 12], [32, 24, 16, 28, 20, 24]]).T
print(dep_obs)
print(chisquare(dep_obs, axis=None))
# Distribution tests:
# We often assume that something is normal, but it can be important to *check*
# For example, later on with predictive modeling, a typical assumption is that
# residuals (prediction errors) are normal - checking is a good diagnostic
from scipy.stats import normaltest
# Poisson models arrival times and is related to the binomial (coinflip)
sample = np.random.poisson(5, 1000)
print(normaltest(sample)) # Pretty clearly not normal
# Kruskal-Wallis H-test - compare the median rank between 2+ groups
# Can be applied to ranking decisions/outcomes/recommendations
# The underlying math comes from chi-square distribution, and is best for n>5
from scipy.stats import kruskal
x1 = [1, 3, 5, 7, 9]
y1 = [2, 4, 6, 8, 10]
print(kruskal(x1, y1)) # x1 is a little better, but not "significantly" so
x2 = [1, 1, 1]
y2 = [2, 2, 2]
z = [2, 2] # Hey, a third group, and of different size!
print(kruskal(x2, y2, z)) # x clearly dominates
```
And there's many more! `scipy.stats` is fairly comprehensive, though there are even more available if you delve into the extended world of statistics packages. As tests get increasingly obscure and specialized, the importance of knowing them by heart becomes small - but being able to look them up and figure them out when they *are* relevant is still important.
## Live Lecture - let's explore some more of scipy.stats
```
# Taking requests! Come to lecture with a topic or problem and we'll try it.
```
### Confidence interval
* Similar to hypothesis testing, but centered at sample mean
* Reporting the 95% confidence interval is more informative than reporting only the point estimate (the sample mean).
```
import numpy as np
from scipy import stats
def confidence_interval(data, confidence=0.95):
'''
Calculate confidence_interval around a sample mean for a given data size.
Using t-distribution and two-tailed test, default 95% confidence
Arguments:
data - iterable(list or np.array) of sample observations
confidence - level of confidence for the interval
Return:
tuple of (mean, lower bound, upper bound)
'''
data = np.array(data)
mean = data.mean()
degree_of_freedom = len(data) - 1
stderr = stats.sem(data)
interval = stderr * stats.t.ppf( (1+confidence)/2., degree_of_freedom )
return(mean, mean-interval, mean+interval)
def report_confidence_interval(confidence_interval):
'''
Arguments:
tuple of (mean, lower bound, upper bound)
Return:
print report of confidence interval
'''
    s = "Sample mean {} in interval ({}, {})".format(
        confidence_interval[0],
        confidence_interval[1],
        confidence_interval[2])
    print(s)
```
## Assignment - Build a confidence interval
A confidence interval refers to a neighborhood around some point estimate, the size of which is determined by the desired p-value. For instance, we might say that 52% of Americans prefer tacos to burritos, with a 95% confidence interval of +/- 5%.
52% (0.52) is the point estimate, and +/- 5% (the interval $[0.47, 0.57]$) is the confidence interval. "95% confidence" means a p-value $\leq 1 - 0.95 = 0.05$.
In this case, the confidence interval includes $0.5$ - which is the natural null hypothesis (that half of Americans prefer tacos and half burritos, thus there is no clear favorite). So in this case, we could use the confidence interval to report that we've failed to reject the null hypothesis.
But providing the full analysis with a confidence interval, including a graphical representation of it, can be a helpful and powerful way to tell your story. Done well, it is also more intuitive to a layperson than simply saying "fail to reject the null hypothesis" - it shows that in fact the data does *not* give a single clear result (the point estimate) but a whole range of possibilities.
How is a confidence interval built, and how should it be interpreted? It does *not* mean that 95% of the data lies in that interval - instead, the frequentist interpretation is "if we were to repeat this experiment 100 times, constructing an interval each time, we would expect the true population value to fall inside those intervals ~95 times."
For a 95% confidence interval and a normal(-ish) distribution, you can simply remember that +/-2 standard deviations contains 95% of the probability mass, and so the 95% confidence interval based on a given sample is centered at the mean (point estimate) and has a range of +/- 2 (or technically 1.96) standard deviations.
Different distributions/assumptions (90% confidence, 99% confidence) will require different math, but the overall process and interpretation (with a frequentist approach) will be the same.
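As a quick sanity check (a sketch, not part of the assignment): `scipy.stats.norm.ppf` recovers the 1.96 z-score, and a small simulation — simplified by assuming the population standard deviation is known — shows the ~95% coverage:

```python
import numpy as np
from scipy import stats

z = stats.norm.ppf(0.975)  # two-tailed 95% -> the 0.975 quantile, ~1.96

rng = np.random.default_rng(42)
n, trials, mu, sigma = 50, 2000, 10.0, 3.0
covered = 0
for _ in range(trials):
    sample = rng.normal(mu, sigma, n)
    half_width = z * sigma / np.sqrt(n)  # known-sigma interval, for simplicity
    if abs(sample.mean() - mu) <= half_width:
        covered += 1
coverage = covered / trials  # fraction of intervals containing the true mean
print(z, coverage)
```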
Your assignment - using the data from the prior module ([congressional voting records](https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records)):
1. Generate and numerically represent a confidence interval
2. Graphically (with a plot) represent the confidence interval
3. Interpret the confidence interval - what does it tell you about the data and its distribution?
Stretch goals:
1. Write a summary of your findings, mixing prose and math/code/results. *Note* - yes, this is by definition a political topic. It is challenging but important to keep your writing voice *neutral* and stick to the facts of the data. Data science often involves considering controversial issues, so it's important to be sensitive about them (especially if you want to publish).
2. Apply the techniques you learned today to your project data or other data of your choice, and write/discuss your findings here.
```
# TODO - your code!
'''
Drug company testing
'''
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/00462/drugsCom_raw.zip'
!wget $url
!unzip drugsCom_raw.zip
import pandas as pd
from scipy.stats import chisquare
df = pd.read_table('drugsComTrain_raw.tsv')
df.head()
df.shape
'''
Given 161297 observations,
pvalue=0.7 is greater than 0.05,
statistic=733 is a fat-tailed sample data point.
Null hypothesis not rejected, so this drug's rating cannot be trusted with confidence.
'''
rating_liraglutide = df[ df['drugName']=='Liraglutide' ]['rating']
chisquare(rating_liraglutide, axis=None)
drugs=df['drugName'].unique()
len(drugs)
drug_rating = pd.DataFrame(columns=['drugName','statistic','pvalue'])
i=0
for drug in drugs:
rating = df[ df['drugName']== drug ]['rating']
s,p = chisquare(rating, axis=None)
drug_rating.loc[i] = [drug,s,p]
i = i + 1
drug_rating.dropna(inplace=True)
# keep only the clearly significant drugs; the plot below expects five rows
data_plot = drug_rating[ drug_rating['pvalue'] < 0.001 ][['drugName','pvalue']].sort_values('pvalue').head(5)
'''
Drugs with a lot of review ratings:
- chisquare is able to establish a dependent relationship
- the p-value is vanishingly small, giving high confidence
- these drugs' review ratings can be trusted because the ratings clearly relate to the drugs
'''
drug_rating.sort_values('pvalue').head()
'''
Drug ratings with few reviews have high p-values
- a p-value of 1.0 means blatantly no relationship between drug and review rating
- ratings for these drugs cannot be trusted
'''
drug_rating.sort_values('pvalue').tail()
'''
Drugs with the highest total rating (a proxy for the most rating entries)
'''
df.groupby('drugName').sum().sort_values('rating', ascending=False).head()
'''
Drugs with the lowest total rating (a proxy for the fewest rating entries)
'''
df.groupby('drugName').sum().sort_values('rating').head()
import matplotlib.pyplot as plt
import numpy as np
'''
order does not display right
'''
data_plot['order']= [10,8,6,4,2]
y_pos = np.arange(len(data_plot))
plt.barh(data_plot.drugName, data_plot.order)
plt.yticks(y_pos, data_plot.drugName)
plt.show()
```
# Predicting NYC Taxi Fares with RAPIDS
Process 380 million rides in NYC from 2015-2017.
RAPIDS is a suite of GPU-accelerated data science libraries with APIs that should be familiar to users of Pandas, Dask, and scikit-learn.
This notebook focuses on showing how to use cuDF with Dask & XGBoost to scale GPU DataFrame ETL-style operations & model training out to multiple GPUs.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import cupy
import cudf
import dask
import dask_cudf
import xgboost as xgb
from dask.distributed import Client, wait
from dask.utils import parse_bytes
from dask_cuda import LocalCUDACluster
cluster = LocalCUDACluster(rmm_pool_size=parse_bytes("25 GB"),
scheduler_port=9888,
dashboard_address=9787,
)
client = Client(cluster)
client
```
# Inspecting the Data
Now that we have a cluster of GPU workers, we'll use [dask-cudf](https://github.com/rapidsai/dask-cudf/) to load and parse a bunch of CSV files into a distributed DataFrame.
```
base_path = "/raid/vjawa/nyc_taxi/data/"
import dask_cudf
df_2014 = dask_cudf.read_csv(base_path+'2014/yellow_*.csv', chunksize='256 MiB')
df_2014.head()
```
# Data Cleanup
As usual, the data needs to be massaged a bit before we can start adding features that are useful to an ML model.
For example, in the 2014 taxi CSV files, there are `pickup_datetime` and `dropoff_datetime` columns. The 2015 CSVs have `tpep_pickup_datetime` and `tpep_dropoff_datetime`, which are the same columns. One year has `rate_code`, and another `RateCodeID`.
Also, some CSV files have column names with extraneous spaces in them.
Worst of all, starting in the July 2016 CSVs, pickup & dropoff latitude and longitude data were replaced by location IDs, making the second half of the year useless to us.
We'll do a little string manipulation, column renaming, and concatenating of DataFrames to sidestep the problems.
```
#Dictionary of required columns and their datatypes
must_haves = {
'pickup_datetime': 'datetime64[s]',
'dropoff_datetime': 'datetime64[s]',
'passenger_count': 'int32',
'trip_distance': 'float32',
'pickup_longitude': 'float32',
'pickup_latitude': 'float32',
'rate_code': 'int32',
'dropoff_longitude': 'float32',
'dropoff_latitude': 'float32',
'fare_amount': 'float32'
}
def clean(ddf, must_haves):
# strip extraneous spaces from column names and lowercase them
tmp = {col:col.strip().lower() for col in list(ddf.columns)}
ddf = ddf.rename(columns=tmp)
ddf = ddf.rename(columns={
'tpep_pickup_datetime': 'pickup_datetime',
'tpep_dropoff_datetime': 'dropoff_datetime',
'ratecodeid': 'rate_code'
})
ddf['pickup_datetime'] = ddf['pickup_datetime'].astype('datetime64[ms]')
ddf['dropoff_datetime'] = ddf['dropoff_datetime'].astype('datetime64[ms]')
for col in ddf.columns:
if col not in must_haves:
ddf = ddf.drop(columns=col)
continue
# if column was read as a string, recast as float
if ddf[col].dtype == 'object':
ddf[col] = ddf[col].str.fillna('-1')
ddf[col] = ddf[col].astype('float32')
else:
# downcast from 64bit to 32bit types
# Tesla T4 are faster on 32bit ops
if 'int' in str(ddf[col].dtype):
ddf[col] = ddf[col].astype('int32')
if 'float' in str(ddf[col].dtype):
ddf[col] = ddf[col].astype('float32')
ddf[col] = ddf[col].fillna(-1)
return ddf
```
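The renaming step in `clean` boils down to a dict comprehension over the column names; a stdlib-only sketch of the same normalization (the spaced-out column name is hypothetical, for illustration):

```python
columns = [' Trip_Distance ', 'tpep_pickup_datetime', 'RateCodeID']
# build the rename map the same way clean() does: strip stray spaces, lowercase
rename_map = {col: col.strip().lower() for col in columns}
print(rename_map)
```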
<b>NOTE:</b> Some of the 2015 files name the column `RateCodeID` and others use `RatecodeID`. When the columns are renamed inside the clean function, Dask does not internally pass `meta` when calling map_partitions(), which leads to a column-name mismatch error in the returned data. For this reason, we call the clean function through map_partitions and pass the `meta` to it explicitly. Here is the link to the bug filed for this: https://github.com/rapidsai/cudf/issues/5413
```
df_2014 = df_2014.map_partitions(clean, must_haves, meta=must_haves)
```
We still have 2015 and the first half of 2016's data to read and clean. Let's increase our dataset.
```
df_2015 = dask_cudf.read_csv(base_path+'2015/yellow_*.csv', chunksize='1024 MiB')
df_2015 = df_2015.map_partitions(clean, must_haves, meta=must_haves)
```
# Handling 2016's Mid-Year Schema Change
In 2016, only the January - June CSVs have the columns we need. If we try to read `base_path + '2016/yellow_*.csv'`, Dask will not appreciate having differing schemas in the same DataFrame.
Instead, we'll need to create a list of the valid months and read them independently.
```
months = [str(x).rjust(2, '0') for x in range(1, 7)]
valid_files = [base_path+'2016/yellow_tripdata_2016-'+month+'.csv' for month in months]
#read & clean 2016 data and concat all DFs
df_2016 = dask_cudf.read_csv(valid_files, chunksize='512 MiB').map_partitions(clean, must_haves, meta=must_haves)
#concatenate multiple DataFrames into one bigger one
taxi_df = dask.dataframe.multi.concat([df_2014, df_2015, df_2016])
```
## Exploratory Data Analysis (EDA)
Here, we check whether there are any nonsensical records or outliers; if there are, we need to remove them from the dataset.
```
# check out if there is any negative total trip time
taxi_df[taxi_df.dropoff_datetime <= taxi_df.pickup_datetime].head()
# check out if there is any abnormal data where trip distance is short, but the fare is very high.
taxi_df[(taxi_df.trip_distance < 10) & (taxi_df.fare_amount > 300)].head()
# check out if there is any abnormal data where trip distance is long, but the fare is very low.
taxi_df[(taxi_df.trip_distance > 50) & (taxi_df.fare_amount < 50)].head()
```
EDA visuals and additional analysis yield the filter logic below.
```
# apply a list of filter conditions to throw out records with missing or outlier values
query_frags = [
'fare_amount > 1 and fare_amount < 500',
'passenger_count > 0 and passenger_count < 6',
'pickup_longitude > -75 and pickup_longitude < -73',
'dropoff_longitude > -75 and dropoff_longitude < -73',
'pickup_latitude > 40 and pickup_latitude < 42',
'dropoff_latitude > 40 and dropoff_latitude < 42',
'trip_distance > 0 and trip_distance < 500',
'not (trip_distance > 50 and fare_amount < 50)',
'not (trip_distance < 10 and fare_amount > 300)',
'not dropoff_datetime <= pickup_datetime'
]
taxi_df = taxi_df.query(' and '.join(query_frags))
# reset_index and drop index column
taxi_df = taxi_df.reset_index(drop=True)
taxi_df.head()
```
# Adding Interesting Features
Dask & cuDF provide standard DataFrame operations, but also let you run "user defined functions" on the underlying data. Here we use [dask.dataframe's map_partitions](https://docs.dask.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.map_partitions) to apply user defined python function on each DataFrame partition.
We'll use a Haversine Distance calculation to find total trip distance, and extract additional useful variables from the datetime fields.
```
## add features
taxi_df['hour'] = taxi_df['pickup_datetime'].dt.hour
taxi_df['year'] = taxi_df['pickup_datetime'].dt.year
taxi_df['month'] = taxi_df['pickup_datetime'].dt.month
taxi_df['day'] = taxi_df['pickup_datetime'].dt.day
taxi_df['day_of_week'] = taxi_df['pickup_datetime'].dt.weekday
taxi_df['is_weekend'] = (taxi_df['day_of_week']>=5).astype('int32')
#calculate the time difference between dropoff and pickup.
taxi_df['diff'] = taxi_df['dropoff_datetime'].astype('int64') - taxi_df['pickup_datetime'].astype('int64')
taxi_df['diff']=(taxi_df['diff']/1000).astype('int64')
taxi_df['pickup_latitude_r'] = taxi_df['pickup_latitude']//.01*.01
taxi_df['pickup_longitude_r'] = taxi_df['pickup_longitude']//.01*.01
taxi_df['dropoff_latitude_r'] = taxi_df['dropoff_latitude']//.01*.01
taxi_df['dropoff_longitude_r'] = taxi_df['dropoff_longitude']//.01*.01
taxi_df = taxi_df.drop('pickup_datetime', axis=1)
taxi_df = taxi_df.drop('dropoff_datetime', axis=1)
import cupy as cp
def cudf_haversine_distance(lon1, lat1, lon2, lat2):
lon1, lat1, lon2, lat2 = map(cp.radians, [lon1, lat1, lon2, lat2])
newlon = lon2 - lon1
newlat = lat2 - lat1
haver_formula = cp.sin(newlat/2.0)**2 + cp.cos(lat1) * cp.cos(lat2) * cp.sin(newlon/2.0)**2
dist = 2 * cp.arcsin(cp.sqrt(haver_formula ))
km = 6367 * dist #6367 for distance in KM for miles use 3958
return km
def haversine_dist(df):
df['h_distance']= cudf_haversine_distance(
df['pickup_longitude'],
df['pickup_latitude'],
df['dropoff_longitude'],
df['dropoff_latitude']
)
df['h_distance']= df['h_distance'].astype('float32')
return df
taxi_df = taxi_df.map_partitions(haversine_dist)
taxi_df.head()
len(taxi_df)
%%time
taxi_df = taxi_df.persist()
x = wait(taxi_df);
```
# Pick a Training Set
Let's imagine you're making a trip to New York on the 25th and want to build a model to predict what fare prices will be like the last few days of the month based on the first part of the month. We'll use a query expression to identify the `day` of the month to use to divide the data into train and test sets.
The wall-time below represents how long it takes your GPU cluster to load data from the Google Cloud Storage bucket and the ETL portion of the workflow.
```
#since we calculated the h_distance let's drop the trip_distance column, and then do model training with XGB.
taxi_df = taxi_df.drop('trip_distance', axis=1)
# this is the original data partition for train and test sets.
X_train = taxi_df.query('day < 25')
# create a Y_train ddf with just the target variable
Y_train = X_train[['fare_amount']].persist()
# drop the target variable from the training ddf
X_train = X_train[X_train.columns.difference(['fare_amount'])].persist()
# this won't return until all data is in GPU memory
a = wait([X_train, Y_train]);
```
# Train the XGBoost Regression Model
The wall time output below indicates how long it took your GPU cluster to train an XGBoost model over the training set.
```
dtrain = xgb.dask.DaskDMatrix(client, X_train, Y_train)
%%time
params = {
'learning_rate': 0.3,
'max_depth': 8,
'objective': 'reg:squarederror',
'subsample': 0.6,
'gamma': 1,
'silent': False,
'verbose_eval': True,
'tree_method':'gpu_hist'
}
trained_model = xgb.dask.train(
client,
params,
dtrain,
num_boost_round=12,
evals=[(dtrain, 'train')]
)
ax = xgb.plot_importance(trained_model['booster'], height=0.8, max_num_features=10, importance_type="gain")
ax.grid(False, axis="y")
ax.set_title('Estimated feature importance')
ax.set_xlabel('importance')
plt.show()
```
# How Good is Our Model?
Now that we have a trained model, we need to test it with the records we held out (pickup days 25 and later).
Based on the filtering conditions applied to this dataset, many of the DataFrame partitions will wind up having 0 rows. This is a problem for XGBoost, which doesn't know what to do with 0-length arrays. We'll repartition the data.
```
def drop_empty_partitions(df):
lengths = df.map_partitions(len).compute()
nonempty = [length > 0 for length in lengths]
return df.partitions[nonempty]
X_test = taxi_df.query('day >= 25').persist()
X_test = drop_empty_partitions(X_test)
# Create Y_test with just the fare amount
Y_test = X_test[['fare_amount']].persist()
# Drop the fare amount from X_test
X_test = X_test[X_test.columns.difference(['fare_amount'])]
# this won't return until all data is in GPU memory
done = wait([X_test, Y_test])
# display test set size
len(X_test)
```
## Calculate Prediction
```
# generate predictions on the test set
booster = trained_model["booster"] # "Booster" is the trained model
booster.set_param({'predictor': 'gpu_predictor'})
prediction = xgb.dask.inplace_predict(client, booster, X_test).persist()
wait(prediction);
y = Y_test['fare_amount'].reset_index(drop=True)
# Calculate RMSE
squared_error = ((prediction-y)**2)
cupy.sqrt(squared_error.mean().compute())
```
## Save Trained Model for Later Use
We often need to store our models on a persistent filesystem for future deployment. Let's save our model.
```
trained_model
import joblib
# Save the booster to file
joblib.dump(trained_model, "xgboost-model")
len(taxi_df)
```
## Reload a Saved Model from Disk
You can also read the saved model back into a normal XGBoost model object.
```
with open("xgboost-model", 'rb') as fh:
loaded_model = joblib.load(fh)
# Generate predictions on the test set again, but this time using the reloaded model
loaded_booster = loaded_model["booster"]
loaded_booster.set_param({'predictor': 'gpu_predictor'})
new_preds = xgb.dask.inplace_predict(client, loaded_booster, X_test).persist()
# Verify that the predictions result in the same RMSE error
squared_error = ((new_preds - y)**2)
cp.sqrt(squared_error.mean().compute())
```
<img align="left" src="https://lever-client-logos.s3.amazonaws.com/864372b1-534c-480e-acd5-9711f850815c-1524247202159.png" width=200>
<br></br>
<br></br>
## *Data Science Unit 4 Sprint 3 Assignment 1*
# Recurrent Neural Networks and Long Short Term Memory (LSTM)

It is said that [infinite monkeys typing for an infinite amount of time](https://en.wikipedia.org/wiki/Infinite_monkey_theorem) will eventually type, among other things, the complete works of William Shakespeare. Let's see if we can get there a bit faster, with the power of Recurrent Neural Networks and LSTM.
This text file contains the complete works of Shakespeare: https://www.gutenberg.org/files/100/100-0.txt
Use it as training data for an RNN - you can keep it simple and train at the character level, which is the suggested initial approach.
Then, use that trained RNN to generate Shakespearean-ish text. Your goal - a function that can take, as an argument, the size of text (e.g. number of characters or lines) to generate, and returns generated text of that size.
Note - Shakespeare wrote an awful lot. It's OK, especially initially, to sample/use smaller data and parameters, so you can have a tighter feedback loop when you're trying to get things running. Then, once you've got a proof of concept - start pushing it more!
```
import requests
from tensorflow.keras.callbacks import LambdaCallback
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM
from tensorflow.keras.optimizers import RMSprop
import numpy as np
import random
import sys
import os
r = requests.get(url = 'https://www.gutenberg.org/files/100/100-0.txt')
r.text[0:1000]
giant_string = (str(r.text).split("Contents\r\n\r\n\r\n\r\n", 1)[1]).split("FINIS\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r", 1)[0]
chars = list(set(giant_string))
char_int = {c:i for i,c in enumerate(chars)}
int_char = {i:c for i,c in enumerate(chars)}
indices_char = int_char
char_indices = char_int
maxlen = 40
step = 5
encoded = [char_int[c] for c in giant_string]
sequences = [] # 40 characters
next_chars = [] # 1 character
for i in range(0, len(encoded) - maxlen, step):
sequences.append(encoded[i: i + maxlen])
next_chars.append(encoded[i + maxlen])
print('sequences: ', len(sequences))
# Specify x & y
X = np.zeros((len(sequences), maxlen, len(chars)), dtype=bool)  # np.bool is removed in newer NumPy; use the builtin bool
y = np.zeros((len(sequences), len(chars)), dtype=bool)
for i, sequence in enumerate(sequences):
for t, char in enumerate(sequence):
X[i, t, char] = 1
y[i, next_chars[i]] = 1
X.shape
# build the model: a single LSTM
model = Sequential()
model.add(LSTM(128, input_shape=(maxlen, len(chars))))
model.add(Dense(len(chars), activation='softmax'))
optimizer = RMSprop(learning_rate=0.01)
model.compile(loss='categorical_crossentropy', optimizer = optimizer)
def sample(preds, temperature=1.0):
# helper function to sample an index from a probability array
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
text = giant_string
def on_epoch_end(epoch, _):
# Function invoked at end of each epoch. Prints generated text.
print()
print('----- Generating text after Epoch: %d' % epoch)
start_index = random.randint(0, len(text) - maxlen - 1)
for diversity in [0.2, 0.5, 1.0, 1.2]:
print('----- diversity:', diversity)
generated = ''
sentence = text[start_index: start_index + maxlen]
generated += sentence
print('----- Generating with seed: "' + sentence + '"')
sys.stdout.write(generated)
for i in range(400):
x_pred = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x_pred[0, t, char_indices[char]] = 1.
preds = model.predict(x_pred, verbose=0)[0]
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
print()
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(X, y,
batch_size=64,
epochs=5,
callbacks=[print_callback])
```
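To see what the diversity (temperature) parameter does inside the `sample` helper above, here is the same reweighting applied to a toy distribution, minus the random draw — a low temperature sharpens the distribution toward the most likely character, while a high temperature flattens it toward uniform:

```python
import numpy as np

def reweight(preds, temperature):
    # identical math to the sample() helper, without the multinomial draw
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds)
    return exp_preds / np.sum(exp_preds)

preds = [0.1, 0.2, 0.7]
sharp = reweight(preds, 0.2)  # low temperature: nearly all mass on index 2
flat = reweight(preds, 5.0)   # high temperature: close to uniform
print(sharp, flat)
```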
# Resources and Stretch Goals
## Stretch goals:
- Refine the training and generation of text to be able to ask for different genres/styles of Shakespearean text (e.g. plays versus sonnets)
- Train a classification model that takes text and returns which work of Shakespeare it is most likely to be from
- Make it more performant! Many possible routes here - lean on Keras, optimize the code, and/or use more resources (AWS, etc.)
- Revisit the news example from class, and improve it - use categories or tags to refine the model/generation, or train a news classifier
- Run on bigger, better data
## Resources:
- [The Unreasonable Effectiveness of Recurrent Neural Networks](https://karpathy.github.io/2015/05/21/rnn-effectiveness/) - a seminal writeup demonstrating a simple but effective character-level NLP RNN
- [Simple NumPy implementation of RNN](https://github.com/JY-Yoon/RNN-Implementation-using-NumPy/blob/master/RNN%20Implementation%20using%20NumPy.ipynb) - Python 3 version of the code from "Unreasonable Effectiveness"
- [TensorFlow RNN Tutorial](https://github.com/tensorflow/models/tree/master/tutorials/rnn) - code for training a RNN on the Penn Tree Bank language dataset
- [4 part tutorial on RNN](http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/) - relates RNN to the vanishing gradient problem, and provides example implementation
- [RNN training tips and tricks](https://github.com/karpathy/char-rnn#tips-and-tricks) - some rules of thumb for parameterizing and training your RNN
**Chapter 1 – The Machine Learning landscape**
_This is the code used to generate some of the figures in chapter 1._
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml2/blob/master/01_the_machine_learning_landscape.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
# Code example 1-1
Although Python 2.x may work, it is deprecated, so we strongly recommend you use Python 3 instead.
```
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
```
This function just merges the OECD's life satisfaction data and the IMF's GDP per capita data. It's a bit too long and boring and it's not specific to Machine Learning, which is why I left it out of the book.
```
def prepare_country_stats(oecd_bli, gdp_per_capita):
    oecd_bli = oecd_bli[oecd_bli["INEQUALITY"]=="TOT"]
    oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")
    gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True)
    gdp_per_capita.set_index("Country", inplace=True)
    full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita,
                                  left_index=True, right_index=True)
    full_country_stats.sort_values(by="GDP per capita", inplace=True)
    remove_indices = [0, 1, 6, 8, 33, 34, 35]
    keep_indices = list(set(range(36)) - set(remove_indices))
    return full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[keep_indices]
```
The code in the book expects the data files to be located in the current directory. I just tweaked it here to fetch the files in datasets/lifesat.
```
import os
datapath = os.path.join("datasets", "lifesat", "")
# To plot pretty figures directly within Jupyter
%matplotlib inline
import matplotlib as mpl
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Download the data
import urllib
DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml2/master/"
os.makedirs(datapath, exist_ok=True)
for filename in ("oecd_bli_2015.csv", "gdp_per_capita.csv"):
    print("Downloading", filename)
    url = DOWNLOAD_ROOT + "datasets/lifesat/" + filename
    urllib.request.urlretrieve(url, datapath + filename)
# Code example
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn.linear_model
# Load the data
oecd_bli = pd.read_csv(datapath + "oecd_bli_2015.csv", thousands=',')
gdp_per_capita = pd.read_csv(datapath + "gdp_per_capita.csv",thousands=',',delimiter='\t',
encoding='latin1', na_values="n/a")
# Prepare the data
country_stats = prepare_country_stats(oecd_bli, gdp_per_capita)
X = np.c_[country_stats["GDP per capita"]]
y = np.c_[country_stats["Life satisfaction"]]
# Visualize the data
country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction')
plt.show()
# Select a linear model
model = sklearn.linear_model.LinearRegression()
# Train the model
model.fit(X, y)
# Make a prediction for Cyprus
X_new = [[22587]] # Cyprus' GDP per capita
print(model.predict(X_new)) # outputs [[ 5.96242338]]
```
# Note: you can ignore the rest of this notebook, it just generates many of the figures in chapter 1.
Create a function to save the figures.
```
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "fundamentals"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
    path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
    print("Saving figure", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format=fig_extension, dpi=resolution)
```
Make this notebook's output stable across runs:
```
np.random.seed(42)
```
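Seeding the global RNG means every run draws the same "random" numbers, which is what keeps the figures below stable across runs. A quick illustration of the effect:

```python
import numpy as np

np.random.seed(42)
first = np.random.rand(3)   # three pseudo-random draws

np.random.seed(42)          # reset the generator to the same state
second = np.random.rand(3)  # identical draws
```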
# Load and prepare Life satisfaction data
If you want, you can get fresh data from the OECD's website.
Download the CSV from http://stats.oecd.org/index.aspx?DataSetCode=BLI
and save it to `datasets/lifesat/`.
```
oecd_bli = pd.read_csv(datapath + "oecd_bli_2015.csv", thousands=',')
oecd_bli = oecd_bli[oecd_bli["INEQUALITY"]=="TOT"]
oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")
oecd_bli.head(2)
oecd_bli["Life satisfaction"].head()
```
# Load and prepare GDP per capita data
Just like above, you can update the GDP per capita data if you want. Just download data from http://goo.gl/j1MSKe (=> imf.org) and save it to `datasets/lifesat/`.
```
gdp_per_capita = pd.read_csv(datapath+"gdp_per_capita.csv", thousands=',', delimiter='\t',
encoding='latin1', na_values="n/a")
gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True)
gdp_per_capita.set_index("Country", inplace=True)
gdp_per_capita.head(2)
full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita, left_index=True, right_index=True)
full_country_stats.sort_values(by="GDP per capita", inplace=True)
full_country_stats
full_country_stats[["GDP per capita", 'Life satisfaction']].loc["United States"]
remove_indices = [0, 1, 6, 8, 33, 34, 35]
keep_indices = list(set(range(36)) - set(remove_indices))
sample_data = full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[keep_indices]
missing_data = full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[remove_indices]
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3))
plt.axis([0, 60000, 0, 10])
position_text = {
"Hungary": (5000, 1),
"Korea": (18000, 1.7),
"France": (29000, 2.4),
"Australia": (40000, 3.0),
"United States": (52000, 3.8),
}
for country, pos_text in position_text.items():
    pos_data_x, pos_data_y = sample_data.loc[country]
    country = "U.S." if country == "United States" else country
    plt.annotate(country, xy=(pos_data_x, pos_data_y), xytext=pos_text,
                 arrowprops=dict(facecolor='black', width=0.5, shrink=0.1, headwidth=5))
    plt.plot(pos_data_x, pos_data_y, "ro")
plt.xlabel("GDP per capita (USD)")
save_fig('money_happy_scatterplot')
plt.show()
sample_data.to_csv(os.path.join("datasets", "lifesat", "lifesat.csv"))
sample_data.loc[list(position_text.keys())]
import numpy as np
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3))
plt.xlabel("GDP per capita (USD)")
plt.axis([0, 60000, 0, 10])
X=np.linspace(0, 60000, 1000)
plt.plot(X, 2*X/100000, "r")
plt.text(40000, 2.7, r"$\theta_0 = 0$", fontsize=14, color="r")
plt.text(40000, 1.8, r"$\theta_1 = 2 \times 10^{-5}$", fontsize=14, color="r")
plt.plot(X, 8 - 5*X/100000, "g")
plt.text(5000, 9.1, r"$\theta_0 = 8$", fontsize=14, color="g")
plt.text(5000, 8.2, r"$\theta_1 = -5 \times 10^{-5}$", fontsize=14, color="g")
plt.plot(X, 4 + 5*X/100000, "b")
plt.text(5000, 3.5, r"$\theta_0 = 4$", fontsize=14, color="b")
plt.text(5000, 2.6, r"$\theta_1 = 5 \times 10^{-5}$", fontsize=14, color="b")
save_fig('tweaking_model_params_plot')
plt.show()
from sklearn import linear_model
lin1 = linear_model.LinearRegression()
Xsample = np.c_[sample_data["GDP per capita"]]
ysample = np.c_[sample_data["Life satisfaction"]]
lin1.fit(Xsample, ysample)
t0, t1 = lin1.intercept_[0], lin1.coef_[0][0]
t0, t1
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3))
plt.xlabel("GDP per capita (USD)")
plt.axis([0, 60000, 0, 10])
X=np.linspace(0, 60000, 1000)
plt.plot(X, t0 + t1*X, "b")
plt.text(5000, 3.1, r"$\theta_0 = 4.85$", fontsize=14, color="b")
plt.text(5000, 2.2, r"$\theta_1 = 4.91 \times 10^{-5}$", fontsize=14, color="b")
save_fig('best_fit_model_plot')
plt.show()
cyprus_gdp_per_capita = gdp_per_capita.loc["Cyprus"]["GDP per capita"]
print(cyprus_gdp_per_capita)
cyprus_predicted_life_satisfaction = lin1.predict([[cyprus_gdp_per_capita]])[0][0]
cyprus_predicted_life_satisfaction
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(5,3), s=1)
plt.xlabel("GDP per capita (USD)")
X=np.linspace(0, 60000, 1000)
plt.plot(X, t0 + t1*X, "b")
plt.axis([0, 60000, 0, 10])
plt.text(5000, 7.5, r"$\theta_0 = 4.85$", fontsize=14, color="b")
plt.text(5000, 6.6, r"$\theta_1 = 4.91 \times 10^{-5}$", fontsize=14, color="b")
plt.plot([cyprus_gdp_per_capita, cyprus_gdp_per_capita], [0, cyprus_predicted_life_satisfaction], "r--")
plt.text(25000, 5.0, r"Prediction = 5.96", fontsize=14, color="b")
plt.plot(cyprus_gdp_per_capita, cyprus_predicted_life_satisfaction, "ro")
save_fig('cyprus_prediction_plot')
plt.show()
sample_data[7:10]
(5.1+5.7+6.5)/3
backup = oecd_bli, gdp_per_capita
def prepare_country_stats(oecd_bli, gdp_per_capita):
    oecd_bli = oecd_bli[oecd_bli["INEQUALITY"]=="TOT"]
    oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")
    gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True)
    gdp_per_capita.set_index("Country", inplace=True)
    full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita,
                                  left_index=True, right_index=True)
    full_country_stats.sort_values(by="GDP per capita", inplace=True)
    remove_indices = [0, 1, 6, 8, 33, 34, 35]
    keep_indices = list(set(range(36)) - set(remove_indices))
    return full_country_stats[["GDP per capita", 'Life satisfaction']].iloc[keep_indices]
# Code example
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn.linear_model
# Load the data
oecd_bli = pd.read_csv(datapath + "oecd_bli_2015.csv", thousands=',')
gdp_per_capita = pd.read_csv(datapath + "gdp_per_capita.csv",thousands=',',delimiter='\t',
encoding='latin1', na_values="n/a")
# Prepare the data
country_stats = prepare_country_stats(oecd_bli, gdp_per_capita)
X = np.c_[country_stats["GDP per capita"]]
y = np.c_[country_stats["Life satisfaction"]]
# Visualize the data
country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction')
plt.show()
# Select a linear model
model = sklearn.linear_model.LinearRegression()
# Train the model
model.fit(X, y)
# Make a prediction for Cyprus
X_new = [[22587]] # Cyprus' GDP per capita
print(model.predict(X_new)) # outputs [[ 5.96242338]]
oecd_bli, gdp_per_capita = backup
missing_data
position_text2 = {
"Brazil": (1000, 9.0),
"Mexico": (11000, 9.0),
"Chile": (25000, 9.0),
"Czech Republic": (35000, 9.0),
"Norway": (60000, 3),
"Switzerland": (72000, 3.0),
"Luxembourg": (90000, 3.0),
}
sample_data.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(8,3))
plt.axis([0, 110000, 0, 10])
for country, pos_text in position_text2.items():
    pos_data_x, pos_data_y = missing_data.loc[country]
    plt.annotate(country, xy=(pos_data_x, pos_data_y), xytext=pos_text,
                 arrowprops=dict(facecolor='black', width=0.5, shrink=0.1, headwidth=5))
    plt.plot(pos_data_x, pos_data_y, "rs")
X=np.linspace(0, 110000, 1000)
plt.plot(X, t0 + t1*X, "b:")
lin_reg_full = linear_model.LinearRegression()
Xfull = np.c_[full_country_stats["GDP per capita"]]
yfull = np.c_[full_country_stats["Life satisfaction"]]
lin_reg_full.fit(Xfull, yfull)
t0full, t1full = lin_reg_full.intercept_[0], lin_reg_full.coef_[0][0]
X = np.linspace(0, 110000, 1000)
plt.plot(X, t0full + t1full * X, "k")
plt.xlabel("GDP per capita (USD)")
save_fig('representative_training_data_scatterplot')
plt.show()
full_country_stats.plot(kind='scatter', x="GDP per capita", y='Life satisfaction', figsize=(8,3))
plt.axis([0, 110000, 0, 10])
from sklearn import preprocessing
from sklearn import pipeline
poly = preprocessing.PolynomialFeatures(degree=60, include_bias=False)
scaler = preprocessing.StandardScaler()
lin_reg2 = linear_model.LinearRegression()
pipeline_reg = pipeline.Pipeline([('poly', poly), ('scal', scaler), ('lin', lin_reg2)])
pipeline_reg.fit(Xfull, yfull)
curve = pipeline_reg.predict(X[:, np.newaxis])
plt.plot(X, curve)
plt.xlabel("GDP per capita (USD)")
save_fig('overfitting_model_plot')
plt.show()
full_country_stats.loc[[c for c in full_country_stats.index if "W" in c.upper()]]["Life satisfaction"]
gdp_per_capita.loc[[c for c in gdp_per_capita.index if "W" in c.upper()]].head()
plt.figure(figsize=(8,3))
plt.xlabel("GDP per capita")
plt.ylabel('Life satisfaction')
plt.plot(list(sample_data["GDP per capita"]), list(sample_data["Life satisfaction"]), "bo")
plt.plot(list(missing_data["GDP per capita"]), list(missing_data["Life satisfaction"]), "rs")
X = np.linspace(0, 110000, 1000)
plt.plot(X, t0full + t1full * X, "r--", label="Linear model on all data")
plt.plot(X, t0 + t1*X, "b:", label="Linear model on partial data")
ridge = linear_model.Ridge(alpha=10**9.5)
Xsample = np.c_[sample_data["GDP per capita"]]
ysample = np.c_[sample_data["Life satisfaction"]]
ridge.fit(Xsample, ysample)
t0ridge, t1ridge = ridge.intercept_[0], ridge.coef_[0][0]
plt.plot(X, t0ridge + t1ridge * X, "b", label="Regularized linear model on partial data")
plt.legend(loc="lower right")
plt.axis([0, 110000, 0, 10])
plt.xlabel("GDP per capita (USD)")
save_fig('ridge_model_plot')
plt.show()
backup = oecd_bli, gdp_per_capita
def prepare_country_stats(oecd_bli, gdp_per_capita):
    return sample_data
# Replace this linear model:
import sklearn.linear_model
model = sklearn.linear_model.LinearRegression()
# with this k-neighbors regression model:
import sklearn.neighbors
model = sklearn.neighbors.KNeighborsRegressor(n_neighbors=3)
X = np.c_[country_stats["GDP per capita"]]
y = np.c_[country_stats["Life satisfaction"]]
# Train the model
model.fit(X, y)
# Make a prediction for Cyprus
X_new = np.array([[22587.0]]) # Cyprus' GDP per capita
print(model.predict(X_new)) # outputs [[ 5.76666667]]
```
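The k-neighbors prediction above is just the mean life satisfaction of the three countries whose GDP per capita is closest to Cyprus' (the 5.1, 5.7 and 6.5 seen a few cells earlier). A NumPy-only sketch, with made-up GDP values chosen so those three are the nearest:

```python
import numpy as np

# Hypothetical (GDP per capita, life satisfaction) pairs; only the
# satisfactions 5.1, 5.7 and 6.5 match the notebook's slice
gdp = np.array([9000., 20000., 22000., 24000., 35000., 42000., 55000.])
sat = np.array([4.9, 5.1, 5.7, 6.5, 5.9, 6.3, 7.2])
x_new = 22587.0  # Cyprus' GDP per capita

nearest = np.argsort(np.abs(gdp - x_new))[:3]  # indices of 3 closest GDPs
prediction = sat[nearest].mean()               # (5.7 + 6.5 + 5.1) / 3
```

This matches the `(5.1+5.7+6.5)/3` arithmetic computed by hand above.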
```
%%html
<!--Script block to left align Markdown Tables-->
<style>
table {margin-left: 0 !important;}
</style>
```
**Download** (right-click, save target as ...) this page as a jupyterlab notebook from: [Lab24](http://54.243.252.9/engr-1330-webroot/8-Labs/Lab24/Lab24.ipynb)
___
# <font color=green>Laboratory 24: "Predictor-Response Data Models"</font>
LAST NAME, FIRST NAME
R00000000
ENGR 1330 Laboratory 24
## Exercise: Watershed Response Metrics
### Background
Rainfall-Runoff response prediction is a vital step in engineering design for mitigating flood-induced infrastructure failure. One easy to measure characteristic of a watershed is its drainage area. Harder to quantify are its characteristic response time, and its conversion (of precipitation into runoff) factor.
### Study Database
The [watersheds.csv](http://54.243.252.9/engr-1330-webroot/4-Databases/watersheds.csv) dataset contains (measured) drainage area for 92 study watersheds in Texas from [Cleveland, et. al., 2006](https://192.168.1.75/documents/about-me/MyWebPapers/journal_papers/ASCE_Irrigation_Drainage_IR-022737/2006_0602_IUHEvalTexas.pdf), and the associated data:
|Columns|Info.|
|:---|:---|
|STATION_ID |USGS HUC-8 Station ID code|
|TDA |Total drainage area (sq. miles) |
|RCOEF|Runoff Ratio (Runoff Depth/Precipitation Depth)|
|TPEAK|Characteristic Time (minutes)|
|FPEAK|Peaking factor (same as NRCS factor)|
|QP_OBS|Observed peak discharge (measured)|
|QP_MOD|Modeled peak discharge (modeled)|
### Objective
Using the following steps, build a predictor-response type data model.
<hr/><hr/>
**Step 1:**
<hr/>
Read the "watersheds.csv" file as a dataframe. Explore the dataframe and briefly summarize its contents in a markdown cell. <br>
```
# import packages
import pandas, numpy
# read data file
mydata = pandas.read_csv("watersheds.csv")
# summarize contents + markdown cell as needed
mydata.head()
```
<hr/><hr/>
**Step 2:** <hr/>
Make a data model using **TDA** as a predictor of **TPEAK** ($T_{peak} = \beta_{0}+\beta_{1}*TDA$) <br> Plot your model and the data on the same plot. Report your values of the parameters.
```
predictor = mydata['TDA'].tolist()
response = mydata['TPEAK'].tolist()
# Our data model
def poly1(b0,b1,x):
    # return y = b0 + b1*x
    poly1 = b0 + b1*x
    return(poly1)
intercept = 200
slope = 6.0
sortedpred = sorted(predictor)
modelresponse = [] # empty list
for i in range(len(sortedpred)):
    modelresponse.append(poly1(intercept,slope,sortedpred[i]))
# Our plotting function
import matplotlib.pyplot as plt
def make2plot(listx1,listy1,listx2,listy2,strlablx,strlably,strtitle):
    mydata = plt.figure(figsize = (10,5)) # build a drawing canvas from the figure class
    plt.plot(listx1,listy1, c='red', marker='p',linewidth=0) # basic data plot
    plt.plot(listx2,listy2, c='blue',linewidth=1) # basic model plot
    plt.xlabel(strlablx)
    plt.ylabel(strlably)
    plt.legend(['Data','Model']) # modify for argument insertion
    plt.title(strtitle)
    plt.show()
# Plotting results
charttitle="Plot of y=b0+b1*x model and observations \n" + " Model equation: y = " + str(intercept) + " + " + str(slope) + "x"
make2plot(predictor,response,sortedpred,modelresponse,'TDA','TPEAK',charttitle)
# solving the linear system to make a model
##############################
import numpy
X = [numpy.ones(len(predictor)),numpy.array(predictor)] # build the design X matrix #
X = numpy.transpose(X) # get into correct shape for linear solver
Y = numpy.array(response) # build the response Y vector
A = numpy.transpose(X)@X # build the XtX matrix
b = numpy.transpose(X)@Y # build the XtY vector
x = numpy.linalg.solve(A,b) # avoid inversion and just solve the linear system
sortedpred = sorted(predictor)
modelresponse = [] # empty list
for i in range(len(sortedpred)):
    modelresponse.append(poly1(x[0],x[1],sortedpred[i]))
# Plotting results
charttitle="Plot of y=b0+b1*x model and observations \n" + " Model equation: y = " + str(x[0]) + " + " + str(x[1]) + "x"
make2plot(predictor,response,sortedpred,modelresponse,'TDA','TPEAK',charttitle)
```
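The normal-equations solve above can be cross-checked with `numpy.polyfit`, which fits the same least-squares line. A sketch on synthetic data (the slope, intercept, and noise level are made up; the point is only the equivalence of the two solvers):

```python
import numpy as np

# Synthetic (TDA, TPEAK)-style data scattered around a known line
rng = np.random.default_rng(0)
xdata = rng.uniform(1.0, 500.0, size=50)
ydata = 200.0 + 6.0 * xdata + rng.normal(0.0, 5.0, size=50)

# Normal-equations solve, as in the cell above
X = np.transpose([np.ones(len(xdata)), xdata])
beta = np.linalg.solve(X.T @ X, X.T @ ydata)   # [intercept, slope]

# polyfit returns coefficients highest degree first: [slope, intercept]
slope, intercept = np.polyfit(xdata, ydata, 1)
```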
<hr/><hr/>
**Step 3:**
<hr/>
Make a data model using **log(TDA)** as a predictor of **TPEAK** ($T_{peak} = \beta_{0}+\beta_{1}*log(TDA)$)
In your opinion which mapping of **TDA** (arithmetic or logarithmic) produces a more useful graph?
```
#
import math
predictor = mydata['TDA'].apply(math.log).tolist()
response = mydata['TPEAK'].tolist()
# Our data model
def poly1(b0,b1,x):
    # return y = b0 + b1*x
    poly1 = b0 + b1*x
    return(poly1)
intercept = 1
slope = 120.0
sortedpred = sorted(predictor)
modelresponse = [] # empty list
for i in range(len(sortedpred)):
    modelresponse.append(poly1(intercept,slope,sortedpred[i]))
# Our plotting function
import matplotlib.pyplot as plt
def make2plot(listx1,listy1,listx2,listy2,strlablx,strlably,strtitle):
    mydata = plt.figure(figsize = (10,5)) # build a drawing canvas from the figure class
    plt.plot(listx1,listy1, c='red', marker='p',linewidth=0) # basic data plot
    plt.plot(listx2,listy2, c='blue',linewidth=1) # basic model plot
    plt.xlabel(strlablx)
    plt.ylabel(strlably)
    plt.legend(['Data','Model']) # modify for argument insertion
    plt.title(strtitle)
    plt.show()
# Plotting results
charttitle="Plot of y=b0+b1*x model and observations \n" + " Model equation: y = " + str(intercept) + " + " + str(slope) + "x"
make2plot(predictor,response,sortedpred,modelresponse,'logTDA','TPEAK',charttitle)
# lets try a quadratic
def poly2(b0,b1,b2,x):
    # return y = b0 + b1*x + b2*x^2
    poly2 = b0 + b1*x + b2*x**2
    return(poly2)
# solving the linear system to make a model
##############################
import numpy
X = [numpy.ones(len(predictor)),numpy.array(predictor),numpy.array(predictor)**2] # build the design X matrix #
X = numpy.transpose(X) # get into correct shape for linear solver
Y = numpy.array(response) # build the response Y vector
A = numpy.transpose(X)@X # build the XtX matrix
b = numpy.transpose(X)@Y # build the XtY vector
x = numpy.linalg.solve(A,b) # avoid inversion and just solve the linear system
predictor.sort()
sortedpred = sorted(predictor)
modelresponse = [] # empty list
for i in range(len(sortedpred)):
    modelresponse.append(poly2(x[0],x[1],x[2],sortedpred[i]))
# Plotting results
charttitle="Plot of y=b0+b1*x model and observations \n" + " Model equation: y = " + str(x[0]) + " + " + str(x[1]) + "x +" + str(x[2]) + "x^2"
make2plot(predictor,response,sortedpred,modelresponse,'logTDA','TPEAK',charttitle)
```
# Instructor Turn Get Home Tweets
```
!pip install tweepy
# Dependencies
import json
import tweepy
# Import Twitter API Keys
from config import consumer_key, consumer_secret, access_token, access_token_secret
# Setup Tweepy API Authentication
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, parser=tweepy.parsers.JSONParser())
# Get all tweets from home feed
public_tweets = api.home_timeline()
print(public_tweets)
# Loop through all tweets
for tweet in public_tweets:
    # Utilize JSON dumps to generate a pretty-printed json
    print(json.dumps(tweet, sort_keys=True, indent=4))
```
# Students Turn Activity 2
# Get My Tweets
## Instructions
* In this activity, you will retrieve the most recent tweets sent out from your Twitter account using the Tweepy library.
* Import the necessary libraries needed to talk to Twitter's API.
* Import your keys from your `config.py` file.
* Set up Tweepy authentication.
* Write code to fetch all tweets from your home feed.
* Loop through and print out tweets.
## Hint:
* Consult the [Tweepy](http://docs.tweepy.org/en/v3.5.0/api.html?) documentation for the method used to accomplish this task.
## Bonus
* If you finish the activity early, try tweaking the parameters. For example, can you fetch ten tweets with the script? If you haven't yet sent out more than ten tweets from your account, do that first.
```
# Dependencies
import tweepy
import json
# Import Twitter API Keys
from config import consumer_key, consumer_secret, access_token, access_token_secret
# Setup Tweepy API Authentication
# YOUR CODE HERE
# Get all tweets from home feed
# YOUR CODE HERE
# Loop through all tweets and print a prettified JSON of each tweet
# YOUR CODE HERE
```
# Students Turn Activity 3
# Get Other Tweets
* In this activity, we will retrieve tweets sent out by any account of our choosing!
## Instructions
* Choose a twitter account and save the screen name to a variable.
* Retrieve the latest tweets sent out from that account.
* Use the code from the previous activities to get you started!
* Consult the [Tweepy Docs](http://docs.tweepy.org/en/v3.5.0/api.html) as needed.
```
# Dependencies
import tweepy
import json
from config import consumer_key, consumer_secret, access_token, access_token_secret
# Setup Tweepy API Authentication
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, parser=tweepy.parsers.JSONParser())
# Target User
# CODE HERE
# Get all tweets from home feed
# CODE HERE
# Loop through all tweets
# CODE HERE
```
# Everyone Turn Activity 4
```
# Dependencies
import json
# Read JSONs
sample = "./Resources/SampleX.json"
def load_json(jsonfile):
    """Load JSON from a file"""
    with open(jsonfile) as file_handle:
        return json.load(file_handle)
sample_data = load_json(sample)
print(json.dumps(sample_data[0], indent=4, sort_keys=True))
print("------------------------------------------------------------------")
# Using the Sample_Data provided above, write code to answer each of the
# following questions:
# Question 1: What user account is the tweets in the Sample associated
# with?
user_account = sample_data[0]["user"]["name"]
print(f"User Account: {user_account}")
# Question 2: How many followers does this account have associated with it?
follower_count = sample_data[0]["user"]["followers_count"]
print(f"Follower Count: {follower_count}")
# Question 3: How many tweets are included in the Sample?
print(f"Tweet Count (In Sample): {len(sample_data)}")
# Question 4: How many tweets total has this account made?
total_tweets = sample_data[0]["user"]["statuses_count"]
print(f"Tweet Count (Total): {total_tweets}")
# Question 5: What was the text in the most recent tweet?
recent_tweet = sample_data[0]["text"]
print(f"Most Recent Tweet: {recent_tweet}")
# Question 6: What was the text associated with each of the tweets
# included in this sample data?
print("All Tweets:")
for tweet in sample_data:
    print(f'{tweet["text"]}')
# Question 7 (Bonus): Which of the user's tweets was most frequently
# retweeted? How many retweets were there?
top_tweet = ""
top_tweet_follower_count = 0
for tweet in sample_data:
    if tweet["retweet_count"] > top_tweet_follower_count:
        top_tweet = tweet["text"]
        top_tweet_follower_count = tweet["retweet_count"]
print(f"Most Popular Tweet: {top_tweet}")
print(f"Number of Retweets: {top_tweet_follower_count}")
```
# Students Turn Activity 5
### Popular Users
### Instructions
* In this activity, you are given an incomplete CSV file of Twitter's most popular accounts. You will use this CSV file in conjunction with Tweepy's API to create a pandas DataFrame.
* Consult the Jupyter Notebook file for instructions, but here are your tasks:
* The "PopularAccounts.csv" file has columns whose info needs to be filled in.
* Import the CSV into a pandas DataFrame.
* Iterate over the rows and use Tweepy to retrieve the info for the missing columns. Add the information to the DataFrame.
* Export the results to a new CSV file called "PopularAccounts_New.csv"
* Calculate the averages of each column and create a DataFrame of the averages.
### Hints:
* Make sure to use the appropriate method from the [Tweepy Docs](http://docs.tweepy.org/en/v3.5.0/api.html)
* You may have to use try/except to avoid missing users.
```
# Import Dependencies
import tweepy
import json
import pandas as pd
from config import consumer_key, consumer_secret, access_token, access_token_secret
# Setup Tweepy API Authentication
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, parser=tweepy.parsers.JSONParser())
# Import CSV file into Data Frame
# Iterate through DataFrame
# Export the new CSV
# View the DataFrame
# Calculate Averages
# Create DataFrame
# Create a Dataframe of the averages
```
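The iterate-and-fill pattern the instructions describe can be sketched with a stand-in lookup in place of the Tweepy call (the dictionary, screen names, and column names below are all hypothetical):

```python
import pandas as pd

# Stand-in for api.get_user(); in the activity this is a Tweepy call
fake_directory = {
    "alice": {"followers_count": 120, "statuses_count": 45},
}

def lookup(screen_name):
    return fake_directory[screen_name]   # raises KeyError if the user is missing

df = pd.DataFrame({"Screen Name": ["alice", "ghost_user"]})
df["Followers"] = None
df["Tweets"] = None

# Iterate over rows, filling in the missing columns; try/except
# skips users the lookup cannot find, as the hint suggests
for index, row in df.iterrows():
    try:
        info = lookup(row["Screen Name"])
        df.loc[index, "Followers"] = info["followers_count"]
        df.loc[index, "Tweets"] = info["statuses_count"]
    except KeyError:
        print(f"Skipping missing user: {row['Screen Name']}")
```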
# Students Turn Activity 6
# Plot Popular Accounts
## Instructions
* In this activity, you will use MatPlotLib to render three scatterplot charts of the results from the last activity.
1. The first scatterplot will plot Tweet Counts (x-axis) vs Follower Counts (y-axis) to determine a relationship, if any, between the two sets of values. It should look like this:

2. Likewise, build a scatterplot for Number Following (x-axis) vs Follower Counts (y-axis).
3. Finally, build a scatterplot for Number of Favorites (x-axis) vs Follower Counts (y-axis).
```
# Dependencies
import tweepy
import json
import pandas as pd
import matplotlib.pyplot as plt
from config import consumer_key, consumer_secret, access_token, access_token_secret
# Setup Tweepy API Authentication
# Import CSV file into Data Frame
popular_tweeters = pd.read_csv("./Resources/PopularAccounts.csv", dtype=str)
# Iterate through DataFrame
# Export the new CSV
# View the DataFrame
# Calculate Averages
# Create DataFrame
# Create a Dataframe of the averages
# Extract Tweet Counts and Follower Counts
# Easy preview of headers
# Tweet Counts vs Follower Counts
# Number Following vs Follower Counts
# Number of Favorites vs Follower Counts
```
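A minimal sketch of the first scatterplot, using the non-interactive Agg backend and made-up counts standing in for the CSV columns:

```python
import matplotlib
matplotlib.use("Agg")          # render off-screen; no display needed
import matplotlib.pyplot as plt
import os
import tempfile

# Hypothetical tweet/follower counts in place of the CSV values
tweet_counts = [1000, 5000, 20000, 45000, 90000]
follower_counts = [2e6, 8e6, 30e6, 60e6, 100e6]

plt.scatter(tweet_counts, follower_counts, marker="o")
plt.xlabel("Tweet Counts")
plt.ylabel("Follower Counts")
plt.title("Tweet Counts vs Follower Counts")

out = os.path.join(tempfile.gettempdir(), "tweets_vs_followers.png")
plt.savefig(out)
```

The other two charts follow the same pattern with Number Following and Number of Favorites on the x-axis.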
# Instructors Turn Activity 7 Pagination
```
# Dependencies
import tweepy
import json
from config import consumer_key, consumer_secret, access_token, access_token_secret
# Setup Tweepy API Authentication
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, parser=tweepy.parsers.JSONParser())
# Target User
target_user = "GuardianData"
# Tweet Texts
tweet_texts = []
# Get all tweets from home feed
public_tweets = api.user_timeline(target_user)
# Loop through all tweets
for tweet in public_tweets:
    # Print Tweet
    print(tweet["text"])
    # Store Tweet in Array
    tweet_texts.append(tweet["text"])
# Print the Tweet Count
print(f"Tweet Count: {len(tweet_texts)}")
# With pagination
# Dependencies
import tweepy
import json
from config import consumer_key, consumer_secret, access_token, access_token_secret
# Setup Tweepy API Authentication
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, parser=tweepy.parsers.JSONParser())
# Target User
target_user = "GuardianData"
# Tweet Texts
tweet_texts = []
# Create a loop to iteratively run API requests
for x in range(1, 11):
    # Get all tweets from home feed (for each page specified)
    public_tweets = api.user_timeline(target_user, page=x)
    # Loop through all tweets
    for tweet in public_tweets:
        # Print Tweet
        print(tweet["text"])
        # Store Tweet in Array
        tweet_texts.append(tweet["text"])

# Print the Tweet Count
print(f"Tweet Count: {len(tweet_texts)}")
```
# Everyone Turn Activity 8 Datetime Objects
```
# New Dependency
from datetime import datetime
# Dependencies
import tweepy
import json
import numpy as np
import matplotlib.pyplot as plt
from config import consumer_key, consumer_secret, access_token, access_token_secret
# Setup Tweepy API Authentication
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, parser=tweepy.parsers.JSONParser())
# Target User
target_user = 'latimes'
# Get all tweets from home feed
public_tweets = api.user_timeline(target_user)
# A list to hold tweet timestamps
tweet_times = []
# Loop through all tweets
for tweet in public_tweets:
    raw_time = tweet["created_at"]
    print(raw_time)
    tweet_times.append(raw_time)
# Convert tweet timestamps to datetime objects that can be manipulated by
# Python
converted_timestamps = []
for raw_time in tweet_times:
# https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior
# http://strftime.org/
converted_time = datetime.strptime(raw_time, "%a %b %d %H:%M:%S %z %Y")
converted_timestamps.append(converted_time)
print(tweet_times[0])
print(tweet_times[1])
print(converted_timestamps[0])
print(converted_timestamps[1])
diff = converted_timestamps[0] - converted_timestamps[1]
print("Time difference: ", diff)
print('seconds: {}'.format(diff.seconds))
converted_length = len(converted_timestamps)
print(f"length of converted timestamps list: {converted_length}")
time_diffs = []
for x in range(converted_length - 1):
    time_diff = converted_timestamps[x] - converted_timestamps[x + 1]
    # print(f'time diff: {time_diff}')
    # print(f'time diff (in seconds): {time_diff.seconds}')
    # print(f'time diff (in minutes): {time_diff.seconds / 60}')
    # print(f'time diff (in hours): {time_diff.seconds / 3600}')
    # convert time_diff to hours
    time_diff = time_diff.seconds / 3600
    time_diffs.append(time_diff)
print(f"Avg. Hours Between Tweets: {np.mean(time_diffs)}")
```
# Students Turn Activity 9
# Twitter Velocity
## Instructions
* You are a political data consultant, and have been asked to evaluate how frequently Donald Trump tweets. As a savvy data visualization specialist, you decide on the following course of action: first, you will collect the timestamps of the 500 most recent tweets sent out by Trump. After making a list of the timestamps, you will convert the timestamps into datetime objects. Then you will calculate the time difference from one tweet to the next, and plot those data points in a scatterplot chart.
* The tools you will use for this task are Tweepy and MatPlotLib. You will also need to use the `datetime` library to convert Twitter timestamps to Python datetime objects.
* Your plot should look something like this:

* This handy chart visually demonstrates Trump's tweet pattern: the majority of his tweets are sent within five hours of each other, but he sometimes goes up to 24 hours without tweeting!
* See, in contrast, the tweet pattern of a major news organization, the LA Times, whose tweets are sent out much more frequently:

* **Note**: Feel free to plot the tweets of another Twitter account. It does not have to be Donald Trump's!
```
# Dependencies
import tweepy
import json
import numpy as np
from datetime import datetime
import matplotlib.pyplot as plt
from config import consumer_key, consumer_secret, access_token, access_token_secret
# Setup Tweepy API Authentication
# Target User
# Create array to record all date-times of tweets
# Create a counter for viewing every 100 tweets
# Loop through 500 tweets
# Confirm tweet counts
# Convert all tweet times into datetime objects
# Add each datetime object into the array
# Calculate the time between tweets
# Calculate the time in between each tweet
# Hours Between Tweets
# Plot Time Between Tweets
```
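As a sanity check of the timestamp arithmetic the skeleton above asks for, here is an offline sketch using a few made-up `created_at` strings (no Twitter API needed; the timestamp values are invented for illustration):

```python
from datetime import datetime

# Invented Twitter-style timestamps, newest first (as user_timeline returns them)
sample_times = [
    "Tue Jan 09 18:00:00 +0000 2018",
    "Tue Jan 09 15:00:00 +0000 2018",
    "Tue Jan 09 09:00:00 +0000 2018",
]

converted = [datetime.strptime(t, "%a %b %d %H:%M:%S %z %Y") for t in sample_times]

# Since timestamps come newest-first, subtract each one from the one before it
diffs_hours = [
    (converted[i] - converted[i + 1]).total_seconds() / 3600
    for i in range(len(converted) - 1)
]

avg_hours = sum(diffs_hours) / len(diffs_hours)
print(f"Avg. Hours Between Tweets: {avg_hours}")
```

The same per-gap values would feed the scatterplot in the live version; only the data collection differs.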
# Instructor Turn Activity 10
```
# Dependencies
import tweepy
import json
from config import consumer_key, consumer_secret, access_token, access_token_secret
# Setup Tweepy API Authentication
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, parser=tweepy.parsers.JSONParser())
# Search Term
search_term = input("Which term would you like to search for? ")
# Search for all tweets
public_tweets = api.search(search_term)
public_tweets
# View Search Object
# print(public_tweets)
# Loop through all tweets
for tweet in public_tweets["statuses"]:
# Utilize JSON dumps to generate a pretty-printed json
# print(json.dumps(tweet, sort_keys=True, indent=4, separators=(',', ': ')))
print(tweet["text"])
```
# Students Turn Activity 11
# Hashtag Popularity
## Instructions
* In this activity, you will calculate the frequency of tweets containing the following hashtags: **#bigdata, #ai, #vr, #foreverchuck**
* Accomplish this task by using Tweepy to search for tweets containing these search terms (Hint: use a loop), then identifying how frequently tweets are sent that contain these keywords.
* You may, of course, use other hashtags.
* You can do this. Good luck!
```
# Dependencies
import tweepy
import json
import numpy as np
from datetime import datetime
import matplotlib.pyplot as plt
from config import consumer_key, consumer_secret, access_token, access_token_secret
# Setup Tweepy API Authentication
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, parser=tweepy.parsers.JSONParser())
# Target Hashtags
# Loop through each hashtag
```
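One possible shape for the frequency calculation, sketched offline with invented timestamps (a live version would collect `created_at` values from `api.search` for each hashtag; the data below is made up):

```python
from datetime import datetime

# Made-up search results: hashtag -> list of tweet timestamps, newest first
fake_results = {
    "#bigdata": ["Tue Jan 09 12:00:00 +0000 2018",
                 "Tue Jan 09 11:30:00 +0000 2018",
                 "Tue Jan 09 11:00:00 +0000 2018"],
    "#ai":      ["Tue Jan 09 12:00:00 +0000 2018",
                 "Tue Jan 09 06:00:00 +0000 2018"],
}

avg_gap_minutes = {}
for hashtag, raw_times in fake_results.items():
    times = [datetime.strptime(t, "%a %b %d %H:%M:%S %z %Y") for t in raw_times]
    gaps = [(times[i] - times[i + 1]).total_seconds() / 60
            for i in range(len(times) - 1)]
    avg_gap_minutes[hashtag] = sum(gaps) / len(gaps)
    print(f"{hashtag}: one tweet every {avg_gap_minutes[hashtag]:.1f} minutes on average")
```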
# Students Turn Activity 12
# Subway Delays in New York City
## Instructions
* In this activity, you will use data gathered from Twitter to plot which trains in the NYC subway system most frequently cause delays.
* The Twitter account **SubwayStats** announces delays and changes in the NYC subway system.
* Your goal is to pull the 1,000 most recent tweets from that account and use MatPlotLib to generate a bar chart of the number of delays per each train:

* Accomplish this task by first compiling a Python dictionary whose key-value pairs consist of each train and its number of delays:

* In order to build such a dictionary, you will need to filter the tweet texts.
* See the Jupyter Notebook file for more specific instructions at each step. Good luck!
```
# Dependencies
import tweepy
import json
import pandas as pd
import matplotlib.pyplot as plt
from config import consumer_key, consumer_secret, access_token, access_token_secret
# Setup Tweepy API Authentication
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, parser=tweepy.parsers.JSONParser())
# Array of Trains
delayed_trains = {}
# Target User
target_user = "SubwayStats"
# Loop through 50 pages of tweets
# Print the Train Delay counts
# Convert Train Delay object into a series
# Preview the results
# Create a plot
```
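A rough offline sketch of the counting step (the tweet texts and train list below are invented; the live version would pull the texts from the SubwayStats timeline):

```python
# Made-up tweet texts standing in for real SubwayStats tweets
sample_tweets = [
    "Delays on the 6 train due to signal problems",
    "A train service has resumed",
    "Delays on the 6 train and the L train",
    "Delays on the L train",
]

trains = ["1", "2", "3", "4", "5", "6", "7", "A", "C", "E", "L", "N", "Q", "R"]
delayed_trains = {}

for tweet_text in sample_tweets:
    # Only count tweets that actually announce a delay
    if "delay" not in tweet_text.lower():
        continue
    words = tweet_text.split()
    for train in trains:
        # Exact word match so "L" doesn't match inside other words
        if train in words:
            delayed_trains[train] = delayed_trains.get(train, 0) + 1

print(delayed_trains)
```

From here the dictionary can be converted to a pandas Series and plotted as a bar chart, as the instructions describe.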
```
#importing modules
import os
import codecs
import numpy as np
import string
import pandas as pd
```
# **Data Preprocessing**
```
#downloading and extracting the files on colab server
import urllib.request
urllib.request.urlretrieve ("https://archive.ics.uci.edu/ml/machine-learning-databases/20newsgroups-mld/20_newsgroups.tar.gz", "a.tar.gz")
import tarfile
tar = tarfile.open("a.tar.gz")
tar.extractall()
tar.close()
#making a list of all the file paths and their corresponding class
f_paths=[]
i=-1
path="20_newsgroups"
folderlist=os.listdir(path)
if ".DS_Store" in folderlist:
folderlist.remove('.DS_Store')
for folder in folderlist:
i+=1
filelist=os.listdir(path+'/'+folder)
for file in filelist:
f_paths.append((path+'/'+folder+'/'+file,i))
len(f_paths)
#splitting the list of paths into training and testing data
from sklearn import model_selection
x_train,x_test=model_selection.train_test_split(f_paths)
len(x_train),len(x_test)
#Making the lists X_train and X_test containing only the paths of the files in training and testing data
#First making lists Y_train and Y_test containing the classes of the training and testing data
X_train=[]
X_test=[]
Y_train=[]
Y_test=[]
for i in range(len(x_train)):
X_train.append(x_train[i][0])
Y_train.append(x_train[i][1])
for i in range(len(x_test)):
X_test.append(x_test[i][0])
Y_test.append(x_test[i][1])
#Transforming Y_train and Y_test into 1 dimensional np arrays
Y_train=(np.array([Y_train])).reshape(-1)
Y_test=(np.array([Y_test])).reshape(-1)
#shape of Y_train and Y_test np arrays
Y_train.shape,Y_test.shape
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop=set(stopwords.words("english"))
#combining the stop words with punctuation characters
stop_words=list(stop)+list(set(string.punctuation))
len(stop_words)
#making vocabulary from the files in X_train i.e. training data
vocab={}
count =0
for filename in X_train:
count+=1
f = open(filename,'r',errors='ignore')
record=f.read()
words=record.split()
for word in words:
if len(word)>2:
if word.lower() not in stop_words:
if word.lower() in vocab:
vocab[word.lower()]+=1
else:
vocab[word.lower()]=1
f.close()
#length of the vocabulary
len(vocab)
#sorting the vocabulary on the basis of the frequency of the word
#making the sorted vocabulary
import operator
sorted_vocab = sorted(vocab.items(), key= operator.itemgetter(1), reverse= True) # sort the vocab based on frequency
#making the list feature_names containing every word whose frequency is at least that of the 2000th most frequent word
feature_names = []
for i in range(len(sorted_vocab)):
if(sorted_vocab[2000][1] <= sorted_vocab[i][1]):
feature_names.append(sorted_vocab[i][0])
#length of the feature_names i.e. number of our features
print(len(feature_names))
#making dataframes df_train and df_test with columns having the feature names i.e. the words
df_train=pd.DataFrame(columns=feature_names)
df_test=pd.DataFrame(columns=feature_names)
count_train,count_test=0,0
#transforming each file in X_train into a row in the dataframe df_train having columns as feature names and values as the frequency of that feature name i.e. that word
for filename in X_train:
count_train+=1
#adding a row of zeros for each file
df_train.loc[len(df_train)]=np.zeros(len(feature_names))
f = open(filename,'r',errors='ignore')
record=f.read()
words=record.split()
#parsing through all the words of the file
for word in words:
if word.lower() in df_train.columns:
df_train[word.lower()][len(df_train)-1]+=1 #if the word is in the column names then adding 1 to the frequency of that word in the row
f.close()
#transforming each file in X_test into a row in the dataframe df_test having columns as feature names and values as the frequency of that feature name i.e. that word
for filename in X_test:
count_test+=1
#adding a row of zeros for each file
df_test.loc[len(df_test)]=np.zeros(len(feature_names))
f = open(filename,'r',errors='ignore')
record=f.read()
words=record.split()
#parsing through all the words of the file
for word in words:
if word.lower() in df_test.columns:
df_test[word.lower()][len(df_test)-1]+=1 #if the word is in the column names then adding 1 to the frequency of that word in the row
f.close()
#printing the number of files transformed in training and testing data
print(count_train,count_test)
#putting the values of the datafames into X_train and X_test
X_train=df_train.values
X_test=df_test.values
```
# **Using the inbuilt Multinomial Naive Bayes classifier from sklearn**
```
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report,confusion_matrix
clf=MultinomialNB()
#fitting the classifier on training data
clf.fit(X_train,Y_train)
#predicting the classes of the testing data
Y_pred=clf.predict(X_test)
#classification report
print(classification_report(Y_test,Y_pred))
#testing score
print("Testing: ",clf.score(X_test,Y_test))
```
# **Self implemented Multinomial Naive Bayes**
```
#makes the nested dictionary required for NB using the training data
def fit(X,Y):
dictionary={}
y_classes=set(Y)
#iterating over each class of y
for y_class in y_classes:
#adding the class as a key to the dictionary
dictionary[y_class]={}
n_features=X.shape[1]
rows=(Y==y_class)
#making the arrays having only those rows where class is y_class
X_y_class=X[rows]
Y_y_class=Y[rows]
#adding the total number of files as total_data
dictionary["total_data"]=X.shape[0]
#iterating over each feature
for i in range(n_features):
#adding the feature as a key which has the count of that word in Y=y_class as its value
dictionary[y_class][i]=X_y_class[:,i].sum()
#adding the number of files in this class as total_class
dictionary[y_class]["total_class"]=X_y_class.shape[0]
#adding the sum of all the words in Y=y_class i.e. total no. of words in Y=y_class
dictionary[y_class]["total_words"]=X_y_class.sum()
return dictionary
#calculates the probability of the feature vector belonging to a particular class and the probability of the class
#returns the product of the above 2 probabilities
def probability(x,dictionary,y_class):
#output intially has probability of the particular class in log terms
output=np.log(dictionary[y_class]["total_class"])-np.log(dictionary["total_data"])
n_features=len(dictionary[y_class].keys())-2
#calculates probability of x being in a particular class by calculating probability of each word being in that class
for i in range(n_features):
if x[i]>0:
#probability of the ith word being in this class in terms of log
p_i=x[i]*(np.log(dictionary[y_class][i] + 1) - np.log(dictionary[y_class]["total_words"]+n_features))
output+=p_i
return output
#predicts the class to which a single file feature vector belongs to
def predictSinglePoint(x,dictionary):
classes=dictionary.keys()
#contains the class having the max probability
best_class=1
#max probability
best_prob=-1000
first=True
#iterating over all the classes
for y_class in classes:
if y_class=="total_data":
continue
#finding probability of this file feature vector belonging to y_class
p_class=probability(x,dictionary,y_class)
if(first or p_class>best_prob):
best_prob=p_class
best_class=y_class
first=False
return best_class
#predicts the classes to which all the file feature vectors belong in the testing data
def predict(X_test,dictionary):
y_pred=[]
#iterates over all the file feature vectors
for x in X_test:
#predicts the class of a particular file feature vector
x_class=predictSinglePoint(x,dictionary)
y_pred.append(x_class)
return y_pred
dictionary=fit(X_train,Y_train) #makes the required dictionary
y_pred=predict(X_test,dictionary)# predicts the classes
print(classification_report(Y_test,y_pred)) #classification report for testing data
```
# **Comparison of results between inbuilt and self implemented Multinomial NB**
```
print("----------------------------------------------------------------------------")
print("Classification report for inbuilt Multinomial NB on testing data: ")
print("----------------------------------------------------------------------------")
print(classification_report(Y_test,Y_pred))
print("----------------------------------------------------------------------------")
print("Classification report for self implemented Multinomial NB on testing data: ")
print("----------------------------------------------------------------------------")
print(classification_report(Y_test,y_pred))
```
# Exception handling
It is likely that you have raised Exceptions if you have
typed all the previous commands of the tutorial. For example, you may
have raised an exception if you entered a command with a typo.
Exceptions are raised by different kinds of errors arising when executing
Python code. In your own code, you may also catch errors, or define custom
error types. You may want to look at the descriptions of the
[the built-in Exceptions](https://docs.python.org/2/library/exceptions.html)
when looking for the right exception type.
## Exceptions
Exceptions are raised by errors in Python:
```python
In [1]: 1/0
---------------------------------------------------------------------------
ZeroDivisionError: integer division or modulo by zero
In [2]: 1 + 'e'
---------------------------------------------------------------------------
TypeError: unsupported operand type(s) for +: 'int' and 'str'
In [3]: d = {1:1, 2:2}
In [4]: d[3]
---------------------------------------------------------------------------
KeyError: 3
In [5]: l = [1, 2, 3]
In [6]: l[4]
---------------------------------------------------------------------------
IndexError: list index out of range
In [7]: l.foobar
---------------------------------------------------------------------------
AttributeError: 'list' object has no attribute 'foobar'
```
As you can see, there are **different types** of exceptions for different errors.
## Catching exceptions
### try/except
```python
In [10]: while True:
....: try:
....: x = int(raw_input('Please enter a number: '))
....: break
....: except ValueError:
....: print('That was no valid number. Try again...')
....:
Please enter a number: a
That was no valid number. Try again...
Please enter a number: 1
In [9]: x
Out[9]: 1
```
### try/finally
```python
In [10]: try:
....: x = int(raw_input('Please enter a number: '))
....: finally:
....: print('Thank you for your input')
....:
....:
Please enter a number: a
Thank you for your input
---------------------------------------------------------------------------
ValueError: invalid literal for int() with base 10: 'a'
```
Important for resource management (e.g. closing a file)
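For instance, a file handle opened before a `try` block is guaranteed to close in the `finally` clause; a small self-contained sketch (the file path here is a throwaway temporary file):

```python
import os
import tempfile

# A scratch file for the demo; the path itself is not meaningful
fd, path = tempfile.mkstemp()
os.close(fd)

f = open(path, 'w')
try:
    f.write('some data')
finally:
    f.close()  # runs even if the write raised an exception

# The `with` statement is the idiomatic shorthand for this pattern
with open(path) as f:
    contents = f.read()

os.remove(path)
print(contents)
```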
### Easier to ask for forgiveness than for permission
```python
In [11]: def print_sorted(collection):
....: try:
....: collection.sort()
....: except AttributeError:
....: pass # The pass statement does nothing
....: print(collection)
....:
....:
In [12]: print_sorted([1, 3, 2])
[1, 2, 3]
In [13]: print_sorted(set((1, 3, 2)))
set([1, 2, 3])
In [14]: print_sorted('132')
132
```
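The opposite style, "look before you leap", checks for the capability up front instead of catching the exception afterwards. A rough equivalent of `print_sorted` in that style (the function name is ours, not part of the tutorial):

```python
def sort_if_possible(collection):
    # Check for the sort method first rather than catching AttributeError
    if hasattr(collection, 'sort'):
        collection.sort()
    return collection

print(sort_if_possible([1, 3, 2]))   # [1, 2, 3]
print(sort_if_possible('132'))       # 132
```

The try/except version is generally preferred in Python because it handles the common case without an extra check and copes with objects whose `sort` exists but fails.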
## Raising exceptions
* Capturing and reraising an exception:
```python
In [15]: def filter_name(name):
....: try:
....: name = name.encode('ascii')
....: except UnicodeError as e:
....: if name == 'Gaël':
....: print('OK, Gaël')
....: else:
....: raise e
....: return name
....:
In [16]: filter_name('Gaël')
OK, Gaël
Out[16]: 'Ga\xc3\xabl'
In [17]: filter_name('Stéfan')
---------------------------------------------------------------------------
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 2: ordinal not in range(128)
```
* Exceptions to pass messages between parts of the code:
```python
In [17]: def achilles_arrow(x):
....: if abs(x - 1) < 1e-3:
....: raise StopIteration
....: x = 1 - (1-x)/2.
....: return x
....:
In [18]: x = 0
In [19]: while True:
....: try:
....: x = achilles_arrow(x)
....: except StopIteration:
....: break
....:
....:
In [20]: x
Out[20]: 0.9990234375
```
Use exceptions to signal that certain conditions are met (e.g. `StopIteration`) or violated (e.g. by raising a custom error type).
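A minimal sketch of such a custom error type (the class and function names here are invented for illustration):

```python
class ConvergenceError(Exception):
    """Raised when an iteration fails to converge in time."""
    pass

def halve_until_small(x, max_steps=10):
    # Halve x until it drops below the tolerance, or give up
    for _ in range(max_steps):
        if abs(x) < 1e-3:
            return x
        x = x / 2.0
    raise ConvergenceError(f"still at {x} after {max_steps} steps")

print(halve_until_small(1.0, max_steps=20))
try:
    halve_until_small(1.0, max_steps=3)
except ConvergenceError as e:
    print("caught:", e)
```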
# Locality Sensitive Hashing
Locality Sensitive Hashing (LSH) provides for a fast, efficient approximate nearest neighbor search. The algorithm scales well with respect to the number of data points as well as dimensions.
In this assignment, you will
* Implement the LSH algorithm for approximate nearest neighbor search
* Examine the accuracy for different documents by comparing against brute force search, and also contrast runtimes
* Explore the role of the algorithm’s tuning parameters in the accuracy of the method
**Note to Amazon EC2 users**: To conserve memory, make sure to stop all the other notebooks before running this notebook.
## Import necessary packages
The following code block will check if you have the correct version of GraphLab Create. Any version later than 1.8.5 will do. To upgrade, read [this page](https://turi.com/download/upgrade-graphlab-create.html).
```
import numpy as np
import graphlab
from scipy.sparse import csr_matrix
from sklearn.metrics.pairwise import pairwise_distances
import time
from copy import copy
import matplotlib.pyplot as plt
%matplotlib inline
'''Check GraphLab Create version'''
from distutils.version import StrictVersion
assert (StrictVersion(graphlab.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.'
'''compute norm of a sparse vector
Thanks to: Jaiyam Sharma'''
def norm(x):
sum_sq=x.dot(x.T)
norm=np.sqrt(sum_sq)
return(norm)
```
## Load in the Wikipedia dataset
```
wiki = graphlab.SFrame('people_wiki.gl/')
wiki
```
For this assignment, let us assign a unique ID to each document.
```
wiki = wiki.add_row_number()
wiki
```
## Extract TF-IDF matrix
We first use GraphLab Create to compute a TF-IDF representation for each document.
```
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text'])
wiki
```
For the remainder of the assignment, we will use sparse matrices. Sparse matrices are [matrices](https://en.wikipedia.org/wiki/Matrix_%28mathematics%29) that have a small number of nonzero entries. A good data structure for sparse matrices would only store the nonzero entries to save space and speed up computation. SciPy provides a highly-optimized library for sparse matrices. Many matrix operations available for NumPy arrays are also available for SciPy sparse matrices.
We first convert the TF-IDF column (in dictionary format) into the SciPy sparse matrix format.
```
def sframe_to_scipy(column):
"""
Convert a dict-typed SArray into a SciPy sparse matrix.
Returns
-------
mat : a SciPy sparse matrix where mat[i, j] is the value of word j for document i.
mapping : a dictionary where mapping[j] is the word whose values are in column j.
"""
# Create triples of (row_id, feature_id, count).
x = graphlab.SFrame({'X1':column})
# 1. Add a row number.
x = x.add_row_number()
# 2. Stack will transform x to have a row for each unique (row, key) pair.
x = x.stack('X1', ['feature', 'value'])
# Map words into integers using a OneHotEncoder feature transformation.
f = graphlab.feature_engineering.OneHotEncoder(features=['feature'])
# We first fit the transformer using the above data.
f.fit(x)
# The transform method will add a new column that is the transformed version
# of the 'word' column.
x = f.transform(x)
# Get the feature mapping.
mapping = f['feature_encoding']
# Get the actual word id.
x['feature_id'] = x['encoded_features'].dict_keys().apply(lambda x: x[0])
# Create numpy arrays that contain the data for the sparse matrix.
i = np.array(x['id'])
j = np.array(x['feature_id'])
v = np.array(x['value'])
width = x['id'].max() + 1
height = x['feature_id'].max() + 1
# Create a sparse matrix.
mat = csr_matrix((v, (i, j)), shape=(width, height))
return mat, mapping
```
The conversion should take a few minutes to complete.
```
start=time.time()
corpus, mapping = sframe_to_scipy(wiki['tf_idf'])
end=time.time()
print end-start
```
**Checkpoint**: The following code block should print 'Check passed correctly!', indicating that your matrix contains TF-IDF values for 59071 documents and 547979 unique words. Otherwise, the assertion will fail.
```
assert corpus.shape == (59071, 547979)
print 'Check passed correctly!'
```
## Train an LSH model
LSH performs an efficient neighbor search by randomly partitioning all reference data points into different bins. Today we will build a popular variant of LSH known as random binary projection, which approximates cosine distance. There are other variants we could use for other choices of distance metrics.
The first step is to generate a collection of random vectors from the standard Gaussian distribution.
```
def generate_random_vectors(num_vector, dim):
return np.random.randn(dim, num_vector)
```
To visualize these Gaussian random vectors, let's look at an example in low-dimensions. Below, we generate 3 random vectors each of dimension 5.
```
# Generate 3 random vectors of dimension 5, arranged into a single 5 x 3 matrix.
np.random.seed(0) # set seed=0 for consistent results
generate_random_vectors(num_vector=3, dim=5)
```
We now generate random vectors of the same dimensionality as our vocabulary size (547979). Each vector can be used to compute one bit in the bin encoding. We generate 16 vectors, leading to a 16-bit encoding of the bin index for each document.
```
# Generate 16 random vectors of dimension 547979
np.random.seed(0)
random_vectors = generate_random_vectors(num_vector=16, dim=547979)
random_vectors.shape
```
Next, we partition data points into bins. Instead of using explicit loops, we'd like to utilize matrix operations for greater efficiency. Let's walk through the construction step by step.
We'd like to decide which bin document 0 should go. Since 16 random vectors were generated in the previous cell, we have 16 bits to represent the bin index. The first bit is given by the sign of the dot product between the first random vector and the document's TF-IDF vector.
```
doc = corpus[0, :] # vector of tf-idf values for document 0
doc.dot(random_vectors[:, 0]) >= 0 # True if positive sign; False if negative sign
```
Similarly, the second bit is computed as the sign of the dot product between the second random vector and the document vector.
```
doc.dot(random_vectors[:, 1]) >= 0 # True if positive sign; False if negative sign
```
We can compute all of the bin index bits at once as follows. Note the absence of the explicit `for` loop over the 16 vectors. Matrix operations let us batch dot-product computation in a highly efficient manner, unlike the `for` loop construction. Given the relative inefficiency of loops in Python, the advantage of matrix operations is even greater.
```
doc.dot(random_vectors) >= 0 # should return an array of 16 True/False bits
np.array(doc.dot(random_vectors) >= 0, dtype=int) # display index bits in 0/1's
```
All documents that obtain exactly this vector will be assigned to the same bin. We'd like to repeat the identical operation on all documents in the Wikipedia dataset and compute the corresponding bin indices. Again, we use matrix operations so that no explicit loop is needed.
```
corpus[0:2].dot(random_vectors) >= 0 # compute bit indices of first two documents
corpus.dot(random_vectors) >= 0 # compute bit indices of ALL documents
```
We're almost done! To make it convenient to refer to individual bins, we convert each binary bin index into a single integer:
```
Bin index integer
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0] => 0
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1] => 1
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0] => 2
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1] => 3
...
[1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0] => 65532
[1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1] => 65533
[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0] => 65534
[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1] => 65535 (= 2^16-1)
```
By the [rules of binary number representation](https://en.wikipedia.org/wiki/Binary_number#Decimal), we just need to compute the dot product between the document vector and the vector consisting of powers of 2:
```
doc = corpus[0, :] # first document
index_bits = (doc.dot(random_vectors) >= 0)
powers_of_two = (1 << np.arange(15, -1, -1))
print index_bits
print powers_of_two
print index_bits.dot(powers_of_two)
```
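As a quick sanity check of this encoding, the same conversion on a small bit list in pure Python (illustrative only; the notebook's version uses the NumPy dot product above):

```python
def bits_to_index(bits):
    # Fold bits (most significant first) into an integer: shift left, add bit
    index = 0
    for bit in bits:
        index = (index << 1) | int(bit)
    return index

print(bits_to_index([1, 0, 1, 1]))   # 8 + 0 + 2 + 1 = 11
print(bits_to_index([1] * 16))       # 65535 = 2**16 - 1
```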
Since it's the dot product again, we batch it with a matrix operation:
```
index_bits = corpus.dot(random_vectors) >= 0
index_bits.dot(powers_of_two)
```
This array gives us the integer index of the bins for all documents.
Now we are ready to complete the following function. Given the integer bin indices for the documents, you should compile a list of document IDs that belong to each bin. Since a list is to be maintained for each unique bin index, a dictionary of lists is used.
1. Compute the integer bin indices. This step is already completed.
2. For each document in the dataset, do the following:
* Get the integer bin index for the document.
* Fetch the list of document ids associated with the bin; if no list yet exists for this bin, assign the bin an empty list.
* Add the document id to the end of the list.
```
def train_lsh(data, num_vector=16, seed=None):
dim = data.shape[1]
if seed is not None:
np.random.seed(seed)
random_vectors = generate_random_vectors(num_vector, dim)
powers_of_two = 1 << np.arange(num_vector-1, -1, -1)
table = {}
# Partition data points into bins
bin_index_bits = (data.dot(random_vectors) >= 0)
# Encode bin index bits into integers
bin_indices = bin_index_bits.dot(powers_of_two)
# Update `table` so that `table[i]` is the list of document ids with bin index equal to i.
for data_index, bin_index in enumerate(bin_indices):
if bin_index not in table:
# If no list yet exists for this bin, assign the bin an empty list.
table[bin_index] = [] # YOUR CODE HERE
# Fetch the list of document ids associated with the bin and add the document id to the end.
table[bin_index].append(data_index) # YOUR CODE HERE
model = {'data': data,
'bin_index_bits': bin_index_bits,
'bin_indices': bin_indices,
'table': table,
'random_vectors': random_vectors,
'num_vector': num_vector}
return model
```
**Checkpoint**.
```
model = train_lsh(corpus, num_vector=16, seed=143)
table = model['table']
if 0 in table and table[0] == [39583] and \
143 in table and table[143] == [19693, 28277, 29776, 30399]:
print 'Passed!'
else:
print 'Check your code.'
```
**Note.** We will be using the model trained here in the following sections, unless otherwise indicated.
## Inspect bins
Let us look at some documents and see which bins they fall into.
```
wiki[wiki['name'] == 'Barack Obama']
```
**Quiz Question**. What is the document `id` of Barack Obama's article?
**Quiz Question**. Which bin contains Barack Obama's article? Enter its integer index.
```
obama_l = index_bits[35817]
print obama_l.dot(powers_of_two)
```
Recall from the previous assignment that Joe Biden was a close neighbor of Barack Obama.
```
wiki[wiki['name'] == 'Joe Biden']
```
**Quiz Question**. Examine the bit representations of the bins containing Barack Obama and Joe Biden. In how many places do they agree?
1. 16 out of 16 places (Barack Obama and Joe Biden fall into the same bin)
2. 14 out of 16 places
3. 12 out of 16 places
4. 10 out of 16 places
5. 8 out of 16 places
```
print np.array(model['bin_index_bits'][35817], dtype=int) # list of 0/1's
print np.array(model['bin_index_bits'][24478], dtype=int) # list of 0/1's
model['bin_index_bits'][35817] == model['bin_index_bits'][24478]
```
Compare the result with a former British diplomat, whose bin representation agrees with Obama's in only 8 out of 16 places.
```
wiki[wiki['name']=='Wynn Normington Hugh-Jones']
print np.array(model['bin_index_bits'][22745], dtype=int) # list of 0/1's
print model['bin_indices'][22745] # integer format
model['bin_index_bits'][35817] == model['bin_index_bits'][22745]
```
How about the documents in the same bin as Barack Obama? Are they necessarily more similar to Obama than Biden? Let's look at which documents are in the same bin as the Barack Obama article.
```
model['table'][model['bin_indices'][35817]]
```
There are four other documents that belong to the same bin. Which documents are they?
```
doc_ids = list(model['table'][model['bin_indices'][35817]])
doc_ids.remove(35817) # display documents other than Obama
docs = wiki.filter_by(values=doc_ids, column_name='id') # filter by id column
docs
```
It turns out that Joe Biden is much closer to Barack Obama than any of the four documents, even though Biden's bin representation differs from Obama's by 2 bits.
```
def cosine_distance(x, y):
xy = x.dot(y.T)
dist = xy/(norm(x)*norm(y))
return 1-dist[0,0]
obama_tf_idf = corpus[35817,:]
biden_tf_idf = corpus[24478,:]
print '================= Cosine distance from Barack Obama'
print 'Barack Obama - {0:24s}: {1:f}'.format('Joe Biden',
cosine_distance(obama_tf_idf, biden_tf_idf))
for doc_id in doc_ids:
doc_tf_idf = corpus[doc_id,:]
print 'Barack Obama - {0:24s}: {1:f}'.format(wiki[doc_id]['name'],
cosine_distance(obama_tf_idf, doc_tf_idf))
```
**Moral of the story**. Similar data points will in general _tend to_ fall into _nearby_ bins, but that's all we can say about LSH. In a high-dimensional space such as text features, we often get unlucky with our selection of only a few random vectors such that dissimilar data points go into the same bin while similar data points fall into different bins. **Given a query document, we must consider all documents in the nearby bins and sort them according to their actual distances from the query.**
## Query the LSH model
Let us first implement the logic for searching nearby neighbors, which goes like this:
```
1. Let L be the bit representation of the bin that contains the query document.
2. Consider all documents in bin L.
3. Consider documents in the bins whose bit representation differs from L by 1 bit.
4. Consider documents in the bins whose bit representation differs from L by 2 bits.
...
```
To obtain candidate bins that differ from the query bin by some number of bits, we use `itertools.combinations`, which produces all possible subsets of a given list. See [this documentation](https://docs.python.org/3/library/itertools.html#itertools.combinations) for details.
```
1. Decide on the search radius r. This will determine the number of different bits between the two vectors.
2. For each subset (n_1, n_2, ..., n_r) of the list [0, 1, 2, ..., num_vector-1], do the following:
* Flip the bits (n_1, n_2, ..., n_r) of the query bin to produce a new bit vector.
* Fetch the list of documents belonging to the bin indexed by the new bit vector.
* Add those documents to the candidate set.
```
Each line of output from the following cell is a 3-tuple indicating where the candidate bin would differ from the query bin. For instance,
```
(0, 1, 3)
```
indicates that the candidate bin differs from the query bin in the first, second, and fourth bits.
```
from itertools import combinations
num_vector = 16
search_radius = 3
for diff in combinations(range(num_vector), search_radius):
print diff
```
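To see how one of these tuples turns into a neighboring bin, here is a small pure-Python illustration on a toy 4-bit index (not part of the assignment code):

```python
from itertools import combinations

query_bits = [1, 0, 1, 1]   # a toy 4-bit bin index

# Enumerate every bin differing from query_bits in exactly 2 positions
neighbors = []
for diff in combinations(range(len(query_bits)), 2):
    alternate = list(query_bits)
    for i in diff:
        alternate[i] = 1 - alternate[i]   # flip this bit
    neighbors.append(alternate)

print(len(neighbors))   # C(4, 2) = 6 candidate bins
```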
With this output in mind, implement the logic for nearby bin search:
```
def search_nearby_bins(query_bin_bits, table, search_radius=2, initial_candidates=set()):
"""
For a given query vector and trained LSH model, return all candidate neighbors for
the query among all bins within the given search radius.
Example usage
-------------
>>> model = train_lsh(corpus, num_vector=16, seed=143)
>>> q = model['bin_index_bits'][0] # vector for the first document
>>> candidates = search_nearby_bins(q, model['table'])
"""
num_vector = len(query_bin_bits)
powers_of_two = 1 << np.arange(num_vector-1, -1, -1)
# Allow the user to provide an initial set of candidates.
candidate_set = copy(initial_candidates)
for different_bits in combinations(range(num_vector), search_radius):
# Flip the bits (n_1,n_2,...,n_r) of the query bin to produce a new bit vector.
## Hint: you can iterate over a tuple like a list
alternate_bits = copy(query_bin_bits)
for i in different_bits:
alternate_bits[i] = not alternate_bits[i] # YOUR CODE HERE
# Convert the new bit vector to an integer index
nearby_bin = alternate_bits.dot(powers_of_two)
# Fetch the list of documents belonging to the bin indexed by the new bit vector.
# Then add those documents to candidate_set
# Make sure that the bin exists in the table!
# Hint: update() method for sets lets you add an entire list to the set
if nearby_bin in table:
candidate_set.update(table[nearby_bin]) # YOUR CODE HERE: Update candidate_set with the documents in this bin.
return candidate_set
```
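As a standalone sanity check of the `powers_of_two` trick used above: the dot product of a bit vector with descending powers of two recovers the integer bin index.

```python
import numpy as np

num_vector = 4
powers_of_two = 1 << np.arange(num_vector - 1, -1, -1)  # array([8, 4, 2, 1])
bits = np.array([True, False, True, True])              # binary 1011
print(bits.dot(powers_of_two))  # 11
```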
**Checkpoint**. Running the function with `search_radius=0` should yield the list of documents belonging to the same bin as the query.
```
obama_bin_index = model['bin_index_bits'][35817] # bin index of Barack Obama
candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=0)
if candidate_set == set([35817, 21426, 53937, 39426, 50261]):
print 'Passed test'
else:
print 'Check your code'
print 'List of documents in the same bin as Obama: 35817, 21426, 53937, 39426, 50261'
```
**Checkpoint**. Running the function with `search_radius=1` brings more documents into the candidate set.
```
candidate_set = search_nearby_bins(obama_bin_index, model['table'], search_radius=1, initial_candidates=candidate_set)
if candidate_set == set([39426, 38155, 38412, 28444, 9757, 41631, 39207, 59050, 47773, 53937, 21426, 34547,
23229, 55615, 39877, 27404, 33996, 21715, 50261, 21975, 33243, 58723, 35817, 45676,
19699, 2804, 20347]):
print 'Passed test'
else:
print 'Check your code'
```
**Note**. Don't be surprised if few of the candidates look similar to Obama. This is why we add as many candidates as our computational budget allows and sort them by their distance to the query.
Now we have a function that can return all the candidates from neighboring bins. Next we write a function to collect all candidates and compute their true distance to the query.
```
def query(vec, model, k, max_search_radius):
data = model['data']
table = model['table']
random_vectors = model['random_vectors']
num_vector = random_vectors.shape[1]
# Compute bin index for the query vector, in bit representation.
bin_index_bits = (vec.dot(random_vectors) >= 0).flatten()
# Search nearby bins and collect candidates
candidate_set = set()
for search_radius in xrange(max_search_radius+1):
candidate_set = search_nearby_bins(bin_index_bits, table, search_radius, initial_candidates=candidate_set)
# Sort candidates by their true distances from the query
nearest_neighbors = graphlab.SFrame({'id':candidate_set})
candidates = data[np.array(list(candidate_set)),:]
nearest_neighbors['distance'] = pairwise_distances(candidates, vec, metric='cosine').flatten()
return nearest_neighbors.topk('distance', k, reverse=True), len(candidate_set)
```
Let's try it out with Obama:
```
query(corpus[35817,:], model, k=10, max_search_radius=3)
```
To identify the documents, it's helpful to join this table with the Wikipedia table:
```
query(corpus[35817,:], model, k=10, max_search_radius=3)[0].join(wiki[['id', 'name']], on='id').sort('distance')
```
We now have a working LSH implementation!
# Experimenting with your LSH implementation
In the following sections we have implemented a few experiments so that you can gain intuition for how your LSH implementation behaves in different situations. This will help you understand the effect of searching nearby bins and the performance of LSH versus computing nearest neighbors using a brute force search.
## Effect of nearby bin search
How does nearby bin search affect the outcome of LSH? There are three variables that are affected by the search radius:
* Number of candidate documents considered
* Query time
* Distance of approximate neighbors from the query
Let us run LSH multiple times, each with different radii for nearby bin search. We will measure the three variables as discussed above.
```
wiki[wiki['name']=='Barack Obama']
num_candidates_history = []
query_time_history = []
max_distance_from_query_history = []
min_distance_from_query_history = []
average_distance_from_query_history = []
for max_search_radius in xrange(17):
start=time.time()
result, num_candidates = query(corpus[35817,:], model, k=10,
max_search_radius=max_search_radius)
end=time.time()
query_time = end-start
print 'Radius:', max_search_radius
print result.join(wiki[['id', 'name']], on='id').sort('distance')
average_distance_from_query = result['distance'][1:].mean()
max_distance_from_query = result['distance'][1:].max()
min_distance_from_query = result['distance'][1:].min()
num_candidates_history.append(num_candidates)
query_time_history.append(query_time)
average_distance_from_query_history.append(average_distance_from_query)
max_distance_from_query_history.append(max_distance_from_query)
min_distance_from_query_history.append(min_distance_from_query)
```
Notice that the top 10 query results become more relevant as the search radius grows. Let's plot the three variables:
```
plt.figure(figsize=(7,4.5))
plt.plot(num_candidates_history, linewidth=4)
plt.xlabel('Search radius')
plt.ylabel('# of documents searched')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(query_time_history, linewidth=4)
plt.xlabel('Search radius')
plt.ylabel('Query time (seconds)')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(average_distance_from_query_history, linewidth=4, label='Average of 10 neighbors')
plt.plot(max_distance_from_query_history, linewidth=4, label='Farthest of 10 neighbors')
plt.plot(min_distance_from_query_history, linewidth=4, label='Closest of 10 neighbors')
plt.xlabel('Search radius')
plt.ylabel('Cosine distance of neighbors')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
```
Some observations:
* As we increase the search radius, we find more neighbors that are a smaller distance away.
* With increased search radius comes a greater number of documents that have to be searched. Query time is higher as a consequence.
* With sufficiently high search radius, the results of LSH begin to resemble the results of brute-force search.
**Quiz Question**. What was the smallest search radius that yielded the correct nearest neighbor, namely Joe Biden?
**Quiz Question**. Suppose our goal was to produce 10 approximate nearest neighbors whose average distance from the query document is within 0.01 of the average for the true 10 nearest neighbors. For Barack Obama, the average distance to the true 10 nearest neighbors is about 0.77. What was the smallest search radius for Barack Obama that produced an average distance of 0.78 or better?
```
#2 and 7
```
## Quality metrics for neighbors
The above analysis is limited by the fact that it was run with a single query, namely Barack Obama. We should repeat the analysis for the entire dataset. Iterating over all documents would take a long time, so let us randomly choose 10 documents for our analysis.
For each document, we first compute the true 25 nearest neighbors, and then run LSH multiple times. We look at two metrics:
* Precision@10: How many of the 10 neighbors given by LSH are among the true 25 nearest neighbors?
* Average cosine distance of the neighbors from the query
Then we run LSH multiple times with different search radii.
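To make the first metric concrete, here is a toy Precision@10 computation with made-up document ids (the ids are illustrative, not from the corpus):

```python
# hypothetical ids returned by LSH, and a hypothetical true top-25 set
lsh_top10 = {3, 7, 12, 19, 24, 31, 40, 55, 61, 78}
true_top25 = set(range(0, 50, 2))  # even ids 0..48, for illustration
precision_at_10 = len(lsh_top10 & true_top25) / 10.0
print(precision_at_10)  # 0.3
```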
```
def brute_force_query(vec, data, k):
num_data_points = data.shape[0]
# Compute distances for ALL data points in training set
nearest_neighbors = graphlab.SFrame({'id':range(num_data_points)})
nearest_neighbors['distance'] = pairwise_distances(data, vec, metric='cosine').flatten()
return nearest_neighbors.topk('distance', k, reverse=True)
```
The following cell will run LSH with multiple search radii and compute the quality metrics for each run. Allow a few minutes to complete.
```
max_radius = 17
precision = {i:[] for i in xrange(max_radius)}
average_distance = {i:[] for i in xrange(max_radius)}
query_time = {i:[] for i in xrange(max_radius)}
np.random.seed(0)
num_queries = 10
for i, ix in enumerate(np.random.choice(corpus.shape[0], num_queries, replace=False)):
print('%s / %s' % (i, num_queries))
ground_truth = set(brute_force_query(corpus[ix,:], corpus, k=25)['id'])
# Get the set of 25 true nearest neighbors
for r in xrange(1,max_radius):
start = time.time()
result, num_candidates = query(corpus[ix,:], model, k=10, max_search_radius=r)
end = time.time()
query_time[r].append(end-start)
# precision = (# of neighbors both in result and ground_truth)/10.0
precision[r].append(len(set(result['id']) & ground_truth)/10.0)
average_distance[r].append(result['distance'][1:].mean())
plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(average_distance[i]) for i in xrange(1,17)], linewidth=4, label='Average over 10 neighbors')
plt.xlabel('Search radius')
plt.ylabel('Cosine distance')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(precision[i]) for i in xrange(1,17)], linewidth=4, label='Precision@10')
plt.xlabel('Search radius')
plt.ylabel('Precision')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(1,17), [np.mean(query_time[i]) for i in xrange(1,17)], linewidth=4, label='Query time')
plt.xlabel('Search radius')
plt.ylabel('Query time (seconds)')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
```
The observations for Barack Obama generalize to the entire dataset.
## Effect of number of random vectors
Let us now turn our focus to the remaining parameter: the number of random vectors. We run LSH with different numbers of random vectors, ranging from 5 to 20. We fix the search radius to 3.
Allow a few minutes for the following cell to complete.
```
precision = {i:[] for i in xrange(5,20)}
average_distance = {i:[] for i in xrange(5,20)}
query_time = {i:[] for i in xrange(5,20)}
num_candidates_history = {i:[] for i in xrange(5,20)}
ground_truth = {}
np.random.seed(0)
num_queries = 10
docs = np.random.choice(corpus.shape[0], num_queries, replace=False)
for i, ix in enumerate(docs):
ground_truth[ix] = set(brute_force_query(corpus[ix,:], corpus, k=25)['id'])
# Get the set of 25 true nearest neighbors
for num_vector in xrange(5,20):
print('num_vector = %s' % (num_vector))
model = train_lsh(corpus, num_vector, seed=143)
for i, ix in enumerate(docs):
start = time.time()
result, num_candidates = query(corpus[ix,:], model, k=10, max_search_radius=3)
end = time.time()
query_time[num_vector].append(end-start)
precision[num_vector].append(len(set(result['id']) & ground_truth[ix])/10.0)
average_distance[num_vector].append(result['distance'][1:].mean())
num_candidates_history[num_vector].append(num_candidates)
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(average_distance[i]) for i in xrange(5,20)], linewidth=4, label='Average over 10 neighbors')
plt.xlabel('# of random vectors')
plt.ylabel('Cosine distance')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(precision[i]) for i in xrange(5,20)], linewidth=4, label='Precision@10')
plt.xlabel('# of random vectors')
plt.ylabel('Precision')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(query_time[i]) for i in xrange(5,20)], linewidth=4, label='Query time (seconds)')
plt.xlabel('# of random vectors')
plt.ylabel('Query time (seconds)')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
plt.figure(figsize=(7,4.5))
plt.plot(range(5,20), [np.mean(num_candidates_history[i]) for i in xrange(5,20)], linewidth=4,
label='# of documents searched')
plt.xlabel('# of random vectors')
plt.ylabel('# of documents searched')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size':16})
plt.tight_layout()
```
We see a similar trade-off between quality and performance: as the number of random vectors increases, the query time goes down because each bin contains fewer documents on average, but the neighbors found are, on average, farther from the query. On the other hand, with a small enough number of random vectors, LSH becomes very similar to brute-force search: many documents fall into a single bin, so searching the query bin alone covers much of the corpus; including neighboring bins might then result in searching all documents, just as in the brute-force approach.
# Convolutional Neural Networks
---
In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.
The images in this database are small color images that fall into one of ten classes; some example images are pictured below.
<img src='notebook_ims/cifar_data.png' width=70% height=70% />
### Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)
Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
```
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
```
---
## Load the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)
Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data.
```
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
```
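Before building the samplers, it can help to verify the index-split arithmetic itself. The following standalone sketch reproduces the split logic above without downloading any data (`num_train` is hard-coded to CIFAR-10's 50,000 training images):

```python
import numpy as np

num_train, valid_size = 50000, 0.2  # CIFAR-10 training-set size, 20% validation
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
print(len(train_idx), len(valid_idx))  # 40000 10000
```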
### Visualize a Batch of Training Data
```
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
```
### View an Image in More Detail
Here, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
```
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
```
---
## Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)
This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:
* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.
* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.
* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.
A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer.
<img src='notebook_ims/2_layer_conv.png' height=50% width=50% />
#### TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.
The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting.
It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure.
#### Output volume for a convolutional layer
To compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):
> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The number of neurons along one spatial dimension of the output is given by `(W−F+2P)/S+1`.
For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0, we would get a 5x5 output. With stride 2 we would get a 3x3 output.
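A tiny helper (the function name is mine) makes it easy to check layer sizes before writing the model:

```python
def conv_output_size(W, F, S=1, P=0):
    """Spatial output size of a conv/pool layer: (W - F + 2P) / S + 1."""
    return (W - F + 2 * P) // S + 1

print(conv_output_size(7, 3, S=1))   # 5
print(conv_output_size(7, 3, S=2))   # 3
print(conv_output_size(32, 5))       # 28: a 5x5 conv on a 32x32 CIFAR image
```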
```
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
        # convolutional layer 1: input (3, 32, 32) -> output (6, 28, 28)
        self.conv1 = nn.Conv2d(3, 6, 5)
        # max pooling halves each spatial dimension, e.g. (6, 28, 28) -> (6, 14, 14)
        self.pool = nn.MaxPool2d(2, 2)
        # convolutional layer 2: (6, 14, 14) -> (16, 10, 10), pooled to (16, 5, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # fully-connected layers down to the 10 class scores
        self.fc1 = nn.Linear(16 * 5 * 5, 64)
self.fc2 = nn.Linear(64, 32)
self.fc3 = nn.Linear(32, 10)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
```
### Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)
Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate**, as this value determines how your model converges to a small error.
#### TODO: Define the loss and optimizer and see how these choices change the loss over time.
```
import torch.optim as optim
# specify loss function
criterion = torch.nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
```
---
## Train the Network
Remember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
```
# number of epochs to train the model
n_epochs = 10  # you may increase this number to train a final model
valid_loss_min = np.inf  # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for data, target in train_loader:
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for data, target in valid_loader:
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_cifar.pt')
valid_loss_min = valid_loss
```
### Load the Model with the Lowest Validation Loss
```
model.load_state_dict(torch.load('model_cifar.pt'))
```
---
## Test the Trained Network
Test your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
```
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for data, target in test_loader:
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
```
### Question: What are your model's weaknesses and how might they be improved?
**Answer**: (double-click to edit and add an answer)
### Visualize Sample Test Results
```
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
images_np = images.numpy()  # keep a CPU copy for plotting
# move model inputs to cuda, if GPU available
if train_on_gpu:
    images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
    imshow(images_np[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
```
# INTRODUCTION TO UNSUPERVISED LEARNING
Unsupervised learning is the training of a machine using information that is neither classified nor labeled and allowing the algorithm to act on that information without guidance. Here the task of the machine is to group unsorted information according to similarities, patterns, and differences without any prior training of data.
Unlike supervised learning, no teacher is provided that means no training will be given to the machine. Therefore the machine is restricted to find the hidden structure in unlabeled data by itself.
# Example of Unsupervised Machine Learning
For instance, suppose the machine is given a set of images containing both dogs and cats that it has never seen before.
The machine has no idea about the features of dogs and cats, so it can't label the images as 'dogs' and 'cats'. But it can categorize them according to their similarities, patterns, and differences, i.e., it can easily split the pictures into two parts. The first part may contain all pics having dogs in it and the second part may contain all pics having cats in it. Here the machine didn't learn anything beforehand, which means there is no training data or examples.
It allows the model to work on its own to discover patterns and information that was previously undetected. It mainly deals with unlabelled data.
# Why Unsupervised Learning?
Here are the prime reasons for using Unsupervised Learning in Machine Learning:
>Unsupervised machine learning finds all kinds of unknown patterns in data.
>Unsupervised methods help you to find features which can be useful for categorization.
>It takes place in real time, so all the input data can be analyzed and labeled in the presence of learners.
>It is easier to get unlabeled data from a computer than labeled data, which needs manual intervention.
# Unsupervised Learning Algorithms
Unsupervised Learning Algorithms allow users to perform more complex processing tasks compared to supervised learning, although unsupervised learning can be more unpredictable than other learning methods. Unsupervised learning algorithms include clustering, anomaly detection, neural networks, etc.
# Unsupervised learning is classified into two categories of algorithms:
>Clustering: A clustering problem is where you want to discover the inherent groupings in the data, such as grouping customers by purchasing behavior.
>Association: An association rule learning problem is where you want to discover rules that describe large portions of your data, such as people that buy X also tend to buy Y.
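The association idea can be illustrated with a tiny confidence computation over made-up transactions (the items and rule are hypothetical, chosen only for illustration):

```python
# hypothetical market-basket transactions
transactions = [
    {"bread", "butter"},
    {"bread", "butter", "milk"},
    {"bread", "milk"},
    {"butter"},
]
# confidence of the rule {bread} -> {butter}:
# among transactions containing bread, what fraction also contain butter?
with_bread = [t for t in transactions if "bread" in t]
confidence = sum("butter" in t for t in with_bread) / len(with_bread)
print(round(confidence, 3))  # 0.667
```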
# a) Clustering
Clustering is an important concept when it comes to unsupervised learning.
It mainly deals with finding a structure or pattern in a collection of uncategorized data.
Unsupervised Learning Clustering algorithms will process your data and find natural clusters(groups) if they exist in the data.
You can also modify how many clusters your algorithms should identify. It allows you to adjust the granularity of these groups.
# There are different types of clustering you can utilize:
# 1. Exclusive (partitioning)
In this clustering method, data are grouped in such a way that each data point can belong to one cluster only.
Example: K-means
# 2. Agglomerative
In this clustering technique, every data point starts as its own cluster. Iterative unions between the two nearest clusters reduce the number of clusters.
Example: Hierarchical clustering
# 3. Overlapping
In this technique, fuzzy sets are used to cluster data. Each point may belong to two or more clusters with separate degrees of membership.
Here, data will be associated with an appropriate membership value.
Example: Fuzzy C-Means
# 4. Probabilistic
This technique uses probability distributions to create the clusters.
Example: the following keywords
“man’s shoe.”
“women’s shoe.”
“women’s glove.”
“man’s glove.”
can be clustered into two categories “shoe” and “glove” or “man” and “women.”
# Clustering Types
Following are the clustering types of Machine Learning:
Hierarchical clustering
K-means clustering
K-NN (k nearest neighbors)
Principal Component Analysis
Singular value decomposition
Independent Component Analysis
# 1. Hierarchical Clustering
>Hierarchical clustering is an algorithm which builds a hierarchy of clusters. It begins with all the data assigned to a cluster of its own. Two close clusters are then merged into the same cluster. This algorithm ends when there is only one cluster left.
# 2. K-means Clustering
>K-means is an iterative clustering algorithm which refines the cluster assignments with every iteration. Initially, the desired number of clusters is selected. In this clustering method, you need to cluster the data points into k groups. A larger k means smaller groups with more granularity; a lower k means larger groups with less granularity.
>The output of the algorithm is a group of “labels.” It assigns data point to one of the k groups. In k-means clustering, each group is defined by creating a centroid for each group. The centroids are like the heart of the cluster, which captures the points closest to them and adds them to the cluster.
K-means clustering further defines two subgroups:
Agglomerative clustering
Dendrogram
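The assign-then-recompute loop described above can be sketched in a few lines of NumPy. This is a minimal sketch: the data and k are made up for illustration, and there is no convergence check or empty-cluster handling.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    # initialize centroids at k distinct data points
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # assign each point to its nearest centroid
        labels = np.argmin(((X[:, None] - centroids) ** 2).sum(axis=-1), axis=1)
        # move each centroid to the mean of its assigned points
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
labels, _ = kmeans(X, k=2)
print(labels[0] == labels[1], labels[2] == labels[3])  # True True
```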
#Agglomerative clustering
>This clustering method does not require the number of clusters K as an input. The agglomeration process starts by treating each data point as a single cluster.
>Using some distance measure, the method then reduces the number of clusters (one per iteration) by merging the two closest clusters. Lastly, we have one big cluster that contains all the objects.
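A short sketch with scikit-learn's `AgglomerativeClustering` (the 1-D toy data and the stopping point of two clusters are arbitrary choices for illustration):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Each point starts as its own cluster; the closest clusters are merged
# repeatedly until only the requested number of clusters remains.
X = np.array([[0.0], [0.1], [5.0], [5.1], [10.0]])
agg = AgglomerativeClustering(n_clusters=2).fit(X)
print(agg.labels_)
```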
#Dendrogram
>In the dendrogram clustering method, each level of the tree represents a possible cluster. The height of the dendrogram shows the level of similarity between two joined clusters: the closer to the bottom two clusters join, the more similar they are. Reading a grouping off a dendrogram is not always natural and is mostly subjective.
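To make this concrete, SciPy can compute the merge hierarchy and draw the dendrogram (the toy 1-D points are invented, and single linkage is just one of several linkage options):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

X = np.array([[0.0], [0.3], [4.0], [4.2]])
Z = linkage(X, method='single')  # each row: clusters merged and merge height
dendrogram(Z)                    # merge height = (dis)similarity level
plt.show()
print(Z)
```

Early merges happen at small heights (similar points); the final merge joins the two distant groups at a much larger height.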
#K- Nearest neighbors
>K-nearest neighbours is the simplest of all machine learning classifiers. It differs from other machine learning techniques in that it doesn't produce a model. It is a simple algorithm which stores all available cases and classifies new instances based on a similarity measure.
It works very well when there is clear distance separation between examples. The learning speed is slow when the training set is large and the distance calculation is nontrivial.
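As a sketch (the toy 1-D data and k = 3 are made up for illustration):

```python
from sklearn.neighbors import KNeighborsClassifier

# k-NN builds no model: it stores the labeled cases and classifies a new
# instance by the majority label among its k nearest stored neighbours.
X = [[0.0], [1.0], [2.0], [10.0], [11.0], [12.0]]
y = [0, 0, 0, 1, 1, 1]
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict([[1.5], [10.5]]))
```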
#4.Principal Components Analysis
>Suppose your data lives in a high-dimensional space. You select a basis for that space and keep only the most important components of that basis (say, the top 200). These basis directions are known as principal components. The subset you select constitutes a new space which is small in size compared to the original space, yet maintains as much of the variability of the data as possible.
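A sketch of this: the toy data below is 3-D but lies near a 1-D line, so a single principal component captures almost all of the variability (the data is synthetic, invented for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
t = rng.randn(100, 1)
X = np.hstack([t, 2 * t, -t]) + 0.01 * rng.randn(100, 3)  # ~1-D data in 3-D

pca = PCA(n_components=1)        # keep only the most important component
X_small = pca.fit_transform(X)   # the new, smaller space
print(X_small.shape)
print(pca.explained_variance_ratio_)  # fraction of variability retained
```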
#5.Singular value decomposition
>The singular value decomposition of a matrix is usually referred to as the SVD.
This is the final and best factorization of a matrix:
A = UΣV^T
where U is orthogonal, Σ is diagonal, and V is orthogonal.
In the decomposition A = UΣV^T, A can be any matrix. We know that if A
is symmetric positive definite its eigenvectors are orthogonal and we can write
A = QΛQ^T. This is a special case of a SVD, with U = V = Q. For more general
A, the SVD requires two different matrices U and V.
We’ve also learned how to write A = SΛS^−1, where S is the matrix of n
distinct eigenvectors of A. However, S may not be orthogonal; the matrices U
and V in the SVD will be.
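The factorization can be checked numerically with NumPy (the matrix A below is an arbitrary example):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])          # any matrix works
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Sigma = np.diag(s)                  # Σ is diagonal

print(np.allclose(A, U @ Sigma @ Vt))     # A = UΣV^T
print(np.allclose(U.T @ U, np.eye(2)))    # U has orthonormal columns
print(np.allclose(Vt @ Vt.T, np.eye(2)))  # V is orthogonal
```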
#6.Independent Component Analysis
>Independent Component Analysis (ICA) is a machine learning technique to separate independent sources from a mixed signal. Unlike principal component analysis which focuses on maximizing the variance of the data points, the independent component analysis focuses on independence, i.e. independent components.
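A sketch of source separation with scikit-learn's `FastICA` (the two synthetic sources and the mixing matrix are invented for illustration):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.RandomState(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                      # independent source 1
s2 = np.sign(np.sin(3 * t))             # independent source 2
S = np.c_[s1, s2]
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])              # mixing matrix
X = S @ A.T                             # observed mixed signals

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)            # recovered sources (up to sign/order)
print(S_est.shape)
```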
#b)Association
>Association rules allow you to establish associations amongst data objects inside large databases. This unsupervised technique is about discovering interesting relationships between variables in large databases. For example, people who buy a new home are likely to also buy new furniture.
>Other Examples:
>A subgroup of cancer patients grouped by their gene expression measurements
>Groups of shoppers based on their browsing and purchasing histories
>Movies grouped by the ratings given by viewers
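Support and confidence, the two basic association-rule measures, can be computed by hand on a toy basket dataset (the transactions below are invented):

```python
from collections import Counter
from itertools import combinations

transactions = [
    {"home", "furniture"},
    {"home", "furniture", "lamp"},
    {"home", "lamp"},
    {"bread", "milk"},
]

# Count how often each pair of items occurs together
pair_counts = Counter()
for t in transactions:
    for pair in combinations(sorted(t), 2):
        pair_counts[pair] += 1

# Support of {home, furniture} and confidence of the rule home -> furniture
support = pair_counts[("furniture", "home")] / len(transactions)
confidence = pair_counts[("furniture", "home")] / sum("home" in t for t in transactions)
print(support, confidence)
```

Here the rule "home → furniture" holds in 2 of the 3 home-containing baskets, echoing the new-home/new-furniture example above.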
#Applications of Unsupervised Machine Learning
Some applications of unsupervised learning techniques are:
1.Clustering automatically splits the dataset into groups based on similarity
2.Anomaly detection can discover unusual data points in your dataset. It is useful for finding fraudulent transactions
3.Association mining identifies sets of items which often occur together in your dataset
4.Latent variable models are widely used for data preprocessing, such as reducing the number of features in a dataset or decomposing the dataset into multiple components
#Real-life Applications Of Unsupervised Learning
Unlike humans, machines need a lot of resources to learn patterns from data. Below are a few notable real-life applications of unsupervised learning.
1.Anomaly detection – The growth of technology and the internet has produced an enormous and still-growing volume of anomalous behaviour, and unsupervised learning has huge scope when it comes to anomaly detection.
2.Segmentation – Unsupervised learning can be used to segment customers based on certain patterns. Each cluster of customers is different, whereas customers within a cluster share common properties. Customer segmentation is a widely adopted approach when devising marketing plans.
#Advantages of unsupervised learning
1.It can find structure that human minds cannot easily visualize.
2.It is used to dig out hidden patterns, which is of great importance in industry and has widespread real-time applications.
3.The outcome of an unsupervised task can yield an entirely new business vertical or venture.
4.There is less complexity compared to a supervised learning task, since no one is required to interpret the associated labels.
5.It is considerably easier to obtain unlabeled data.
#Disadvantages of Unsupervised Learning
1.You cannot get precise information regarding data sorting or the output, because the data used in unsupervised learning is not labeled and the classes are not known in advance.
2.The results are less accurate because the input data is not labeled by people in advance; the machine has to do this itself.
3.The spectral classes do not always correspond to informational classes.
The user needs to spend time interpreting and labeling the classes that follow from the classification.
4.Spectral properties of classes can also change over time, so you can't carry the same class information from one image to another.
#How to use Unsupervised learning to find patterns in data
#CODE:
```
from sklearn import datasets
import matplotlib.pyplot as plt

iris_df = datasets.load_iris()
print(dir(iris_df))
print(iris_df.feature_names)
print(iris_df.target)
print(iris_df.target_names)
label = {0: 'red', 1: 'blue', 2: 'green'}
x_axis = iris_df.data[:, 0]  # sepal length
y_axis = iris_df.data[:, 2]  # petal length
plt.scatter(x_axis, y_axis, c=iris_df.target)
plt.show()
```
#Explanation:
As the above code shows, we use the Iris dataset. The dataset contains records under four attributes: petal length, petal width, sepal length, and sepal width. It also contains three iris classes: setosa, virginica, and versicolor. We feed the four features of each flower to the unsupervised algorithm, and it predicts which class the iris belongs to. We use the scikit-learn library in Python to load the Iris dataset and matplotlib for data visualisation.
#OUTPUT:
As we can see here, the violet colour represents setosa, green represents versicolor, and yellow represents virginica.

# Perceptron
__The Perceptron is a linear machine learning algorithm for binary classification tasks.__
It may be considered one of the first and one of the simplest types of artificial neural networks. It is definitely not “deep” learning but is an important building block.
Like logistic regression, it can quickly learn a linear separation in feature space for two-class classification tasks, although unlike logistic regression, it learns using the stochastic gradient descent optimization algorithm and does not predict calibrated probabilities.
# Perceptron Algorithm
The Perceptron algorithm is a two-class (binary) classification machine learning algorithm.
It is a type of neural network model, perhaps the simplest type of neural network model.
It consists of a single node or neuron that takes a row of data as input and predicts a class label. This is achieved by calculating the weighted sum of the inputs and a bias (set to 1). The weighted sum of the input of the model is called the activation.
- Activation = Weights * Inputs + Bias<br><br>
__If the activation is above 0.0, the model will output 1.0; otherwise, it will output 0.0__<br><br>
- Predict 1: If Activation > 0.0
- Predict 0: If Activation <= 0.0
Model weights are updated with a small proportion of the error each batch, and the proportion is controlled by a hyperparameter called the learning rate, typically set to a small value. This is to ensure learning does not occur too quickly, resulting in a possibly lower skill model, referred to as premature convergence of the optimization (search) procedure for the model weights.
__weights(t + 1) = weights(t) + learning_rate * (expected_i – predicted_i) * input_i__<br>
Training is stopped when the error made by the model falls to a low level or no longer improves, or a maximum number of epochs is performed.
The learning rate and number of training epochs are hyperparameters of the algorithm that can be set using heuristics or hyperparameter tuning.
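The update rule above can be sketched from scratch; the AND-style toy dataset, the learning rate, and the epoch count below are arbitrary illustrative choices:

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])      # a linearly separable labeling

weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1

for epoch in range(20):
    for x_i, expected in zip(X, y):
        activation = weights @ x_i + bias   # Activation = Weights * Inputs + Bias
        predicted = 1.0 if activation > 0.0 else 0.0
        error = expected - predicted
        # weights(t + 1) = weights(t) + learning_rate * error * input
        weights += learning_rate * error * x_i
        bias += learning_rate * error

preds = [1 if weights @ x_i + bias > 0.0 else 0 for x_i in X]
print(preds)
```

Because the data is linearly separable, the updates stop once every point is classified correctly.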
```
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold
from sklearn.linear_model import Perceptron
if __name__ == '__main__':
    # Create a synthetic binary classification dataset
    X, y = make_classification(n_samples=1000,
                               n_features=10, n_informative=10,
                               n_redundant=0, random_state=1)
    model = Perceptron(eta0=0.0001)
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
    grid = dict()
    grid['max_iter'] = [1, 10, 100, 1000, 10000]
    search = GridSearchCV(model, grid, scoring='accuracy', cv=cv, n_jobs=-1)
    result = search.fit(X, y)
    # Summarize the best configuration:
    print('Mean Accuracy : {}'.format(round(result.best_score_, 4)))
    print('-' * 50)
    print('Config : {}'.format(result.best_params_))
    print('-' * 50)
    # Summarize all configurations:
    means = result.cv_results_['mean_test_score']
    params = result.cv_results_['params']
    for mean, param in zip(means, params):
        print('{} with : {}'.format(round(mean, 3), param))
```
```
%load_ext rpy2.ipython
%matplotlib inline
from fbprophet import Prophet
import pandas as pd
from matplotlib import pyplot as plt
import logging
logging.getLogger('fbprophet').setLevel(logging.ERROR)
import warnings
warnings.filterwarnings("ignore")
df = pd.read_csv('../examples/example_wp_log_peyton_manning.csv')
m = Prophet()
m.fit(df)
future = m.make_future_dataframe(periods=366)
%%R
library(prophet)
df <- read.csv('../examples/example_wp_log_peyton_manning.csv')
m <- prophet(df)
future <- make_future_dataframe(m, periods=366)
```
### Modeling Holidays and Special Events
If you have holidays or other recurring events that you'd like to model, you must create a dataframe for them. It has two columns (`holiday` and `ds`) and a row for each occurrence of the holiday. It must include all occurrences of the holiday, both in the past (back as far as the historical data go) and in the future (out as far as the forecast is being made). If they won't repeat in the future, Prophet will model them and then not include them in the forecast.
You can also include columns `lower_window` and `upper_window` which extend the holiday out to `[lower_window, upper_window]` days around the date. For instance, if you wanted to include Christmas Eve in addition to Christmas you'd include `lower_window=-1,upper_window=0`. If you wanted to use Black Friday in addition to Thanksgiving, you'd include `lower_window=0,upper_window=1`. You can also include a column `prior_scale` to set the prior scale separately for each holiday, as described below.
Here we create a dataframe that includes the dates of all of Peyton Manning's playoff appearances:
```
%%R
library(dplyr)
playoffs <- data_frame(
holiday = 'playoff',
ds = as.Date(c('2008-01-13', '2009-01-03', '2010-01-16',
'2010-01-24', '2010-02-07', '2011-01-08',
'2013-01-12', '2014-01-12', '2014-01-19',
'2014-02-02', '2015-01-11', '2016-01-17',
'2016-01-24', '2016-02-07')),
lower_window = 0,
upper_window = 1
)
superbowls <- data_frame(
holiday = 'superbowl',
ds = as.Date(c('2010-02-07', '2014-02-02', '2016-02-07')),
lower_window = 0,
upper_window = 1
)
holidays <- bind_rows(playoffs, superbowls)
playoffs = pd.DataFrame({
'holiday': 'playoff',
'ds': pd.to_datetime(['2008-01-13', '2009-01-03', '2010-01-16',
'2010-01-24', '2010-02-07', '2011-01-08',
'2013-01-12', '2014-01-12', '2014-01-19',
'2014-02-02', '2015-01-11', '2016-01-17',
'2016-01-24', '2016-02-07']),
'lower_window': 0,
'upper_window': 1,
})
superbowls = pd.DataFrame({
'holiday': 'superbowl',
'ds': pd.to_datetime(['2010-02-07', '2014-02-02', '2016-02-07']),
'lower_window': 0,
'upper_window': 1,
})
holidays = pd.concat((playoffs, superbowls))
```
Above we have included the superbowl days as both playoff games and superbowl games. This means that the superbowl effect will be an additional additive bonus on top of the playoff effect.
Once the table is created, holiday effects are included in the forecast by passing them in with the `holidays` argument. Here we do it with the Peyton Manning data from the [Quickstart](https://facebook.github.io/prophet/docs/quick_start.html):
```
%%R
m <- prophet(df, holidays = holidays)
forecast <- predict(m, future)
m = Prophet(holidays=holidays)
forecast = m.fit(df).predict(future)
```
The holiday effect can be seen in the `forecast` dataframe:
```
%%R
forecast %>%
select(ds, playoff, superbowl) %>%
filter(abs(playoff + superbowl) > 0) %>%
tail(10)
forecast[(forecast['playoff'] + forecast['superbowl']).abs() > 0][
['ds', 'playoff', 'superbowl']][-10:]
```
The holiday effects will also show up in the components plot, where we see that there is a spike on the days around playoff appearances, with an especially large spike for the superbowl:
```
%%R -w 9 -h 12 -u in
prophet_plot_components(m, forecast)
fig = m.plot_components(forecast)
```
Individual holidays can be plotted using the `plot_forecast_component` function (imported from `fbprophet.plot` in Python) like `plot_forecast_component(m, forecast, 'superbowl')` to plot just the superbowl holiday component.
### Built-in Country Holidays
You can use a built-in collection of country-specific holidays using the `add_country_holidays` method (Python) or function (R). The name of the country is specified, and then major holidays for that country will be included in addition to any holidays that are specified via the `holidays` argument described above:
```
%%R
m <- prophet(holidays = holidays)
m <- add_country_holidays(m, country_name = 'US')
m <- fit.prophet(m, df)
m = Prophet(holidays=holidays)
m.add_country_holidays(country_name='US')
m.fit(df)
```
You can see which holidays were included by looking at the `train_holiday_names` (Python) or `train.holiday.names` (R) attribute of the model:
```
%%R
m$train.holiday.names
m.train_holiday_names
```
The holidays for each country are provided by the `holidays` package in Python. A list of available countries, and the country name to use, is available on their page: https://github.com/dr-prodigy/python-holidays. In addition to those countries, Prophet includes holidays for these countries: Brazil (BR), Indonesia (ID), India (IN), Malaysia (MY), Vietnam (VN), Thailand (TH), Philippines (PH), Turkey (TU), Pakistan (PK), Bangladesh (BD), Egypt (EG), China (CN), Russia (RU), Korea (KR), Belarus (BY), and United Arab Emirates (AE).
In Python, most holidays are computed deterministically and so are available for any date range; a warning will be raised if dates fall outside the range supported by that country. In R, holiday dates are computed for 1995 through 2044 and stored in the package as `data-raw/generated_holidays.csv`. If a wider date range is needed, this script can be used to replace that file with a different date range: https://github.com/facebook/prophet/blob/master/python/scripts/generate_holidays_file.py.
As above, the country-level holidays will then show up in the components plot:
```
%%R -w 9 -h 12 -u in
forecast <- predict(m, future)
prophet_plot_components(m, forecast)
forecast = m.predict(future)
fig = m.plot_components(forecast)
```
### Fourier Order for Seasonalities
Seasonalities are estimated using a partial Fourier sum. See [the paper](https://peerj.com/preprints/3190/) for complete details, and [this figure on Wikipedia](https://en.wikipedia.org/wiki/Fourier_series#/media/File:Fourier_Series.svg) for an illustration of how a partial Fourier sum can approximate an arbitrary periodic signal. The number of terms in the partial sum (the order) is a parameter that determines how quickly the seasonality can change. To illustrate this, consider the Peyton Manning data from the [Quickstart](https://facebook.github.io/prophet/docs/quick_start.html). The default Fourier order for yearly seasonality is 10, which produces this fit:
```
%%R -w 9 -h 3 -u in
m <- prophet(df)
prophet:::plot_yearly(m)
from fbprophet.plot import plot_yearly
m = Prophet().fit(df)
a = plot_yearly(m)
```
The default values are often appropriate, but they can be increased when the seasonality needs to fit higher-frequency changes, and generally be less smooth. The Fourier order can be specified for each built-in seasonality when instantiating the model, here it is increased to 20:
```
%%R -w 9 -h 3 -u in
m <- prophet(df, yearly.seasonality = 20)
prophet:::plot_yearly(m)
from fbprophet.plot import plot_yearly
m = Prophet(yearly_seasonality=20).fit(df)
a = plot_yearly(m)
```
Increasing the number of Fourier terms allows the seasonality to fit faster changing cycles, but can also lead to overfitting: N Fourier terms corresponds to 2N variables used for modeling the cycle.
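The 2N-variables point can be seen directly by building the seasonal design matrix by hand (a sketch of the standard construction, not Prophet's internal code):

```python
import numpy as np

def fourier_features(t, period, order):
    # Columns sin(2*pi*n*t/P), cos(2*pi*n*t/P) for n = 1..order:
    # 2 * order variables in total.
    cols = []
    for n in range(1, order + 1):
        cols.append(np.sin(2 * np.pi * n * t / period))
        cols.append(np.cos(2 * np.pi * n * t / period))
    return np.column_stack(cols)

t = np.arange(365.25)                            # one year of daily steps
X = fourier_features(t, period=365.25, order=10)
print(X.shape)                                   # N = 10 terms -> 20 columns
```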
### Specifying Custom Seasonalities
Prophet will by default fit weekly and yearly seasonalities, if the time series is more than two cycles long. It will also fit daily seasonality for a sub-daily time series. You can add other seasonalities (monthly, quarterly, hourly) using the `add_seasonality` method (Python) or function (R).
The inputs to this function are a name, the period of the seasonality in days, and the Fourier order for the seasonality. For reference, by default Prophet uses a Fourier order of 3 for weekly seasonality and 10 for yearly seasonality. An optional input to `add_seasonality` is the prior scale for that seasonal component - this is discussed below.
As an example, here we fit the Peyton Manning data from the [Quickstart](https://facebook.github.io/prophet/docs/quick_start.html), but replace the weekly seasonality with monthly seasonality. The monthly seasonality then will appear in the components plot:
```
%%R -w 9 -h 9 -u in
m <- prophet(weekly.seasonality=FALSE)
m <- add_seasonality(m, name='monthly', period=30.5, fourier.order=5)
m <- fit.prophet(m, df)
forecast <- predict(m, future)
prophet_plot_components(m, forecast)
m = Prophet(weekly_seasonality=False)
m.add_seasonality(name='monthly', period=30.5, fourier_order=5)
forecast = m.fit(df).predict(future)
fig = m.plot_components(forecast)
```
### Seasonalities that depend on other factors
In some instances the seasonality may depend on other factors, such as a weekly seasonal pattern that is different during the summer than it is during the rest of the year, or a daily seasonal pattern that is different on weekends vs. on weekdays. These types of seasonalities can be modeled using conditional seasonalities.
Consider the Peyton Manning example from the [Quickstart](https://facebook.github.io/prophet/docs/quick_start.html). The default weekly seasonality assumes that the pattern of weekly seasonality is the same throughout the year, but we'd expect the pattern of weekly seasonality to be different during the on-season (when there are games every Sunday) and the off-season. We can use conditional seasonalities to construct separate on-season and off-season weekly seasonalities.
First we add a boolean column to the dataframe that indicates whether each date is during the on-season or the off-season:
```
%%R
is_nfl_season <- function(ds) {
dates <- as.Date(ds)
month <- as.numeric(format(dates, '%m'))
return(month > 8 | month < 2)
}
df$on_season <- is_nfl_season(df$ds)
df$off_season <- !is_nfl_season(df$ds)
def is_nfl_season(ds):
    date = pd.to_datetime(ds)
    return (date.month > 8 or date.month < 2)
df['on_season'] = df['ds'].apply(is_nfl_season)
df['off_season'] = ~df['ds'].apply(is_nfl_season)
```
Then we disable the built-in weekly seasonality, and replace it with two weekly seasonalities that have these columns specified as a condition. This means that the seasonality will only be applied to dates where the `condition_name` column is `True`. We must also add the column to the `future` dataframe for which we are making predictions.
```
%%R -w 9 -h 12 -u in
m <- prophet(weekly.seasonality=FALSE)
m <- add_seasonality(m, name='weekly_on_season', period=7, fourier.order=3, condition.name='on_season')
m <- add_seasonality(m, name='weekly_off_season', period=7, fourier.order=3, condition.name='off_season')
m <- fit.prophet(m, df)
future$on_season <- is_nfl_season(future$ds)
future$off_season <- !is_nfl_season(future$ds)
forecast <- predict(m, future)
prophet_plot_components(m, forecast)
m = Prophet(weekly_seasonality=False)
m.add_seasonality(name='weekly_on_season', period=7, fourier_order=3, condition_name='on_season')
m.add_seasonality(name='weekly_off_season', period=7, fourier_order=3, condition_name='off_season')
future['on_season'] = future['ds'].apply(is_nfl_season)
future['off_season'] = ~future['ds'].apply(is_nfl_season)
forecast = m.fit(df).predict(future)
fig = m.plot_components(forecast)
```
Both of the seasonalities now show up in the components plots above. We can see that during the on-season when games are played every Sunday, there are large increases on Sunday and Monday that are completely absent during the off-season.
### Prior scale for holidays and seasonality
If you find that the holidays are overfitting, you can adjust their prior scale to smooth them using the parameter `holidays_prior_scale`. By default this parameter is 10, which provides very little regularization. Reducing this parameter dampens holiday effects:
```
%%R
m <- prophet(df, holidays = holidays, holidays.prior.scale = 0.05)
forecast <- predict(m, future)
forecast %>%
select(ds, playoff, superbowl) %>%
filter(abs(playoff + superbowl) > 0) %>%
tail(10)
m = Prophet(holidays=holidays, holidays_prior_scale=0.05).fit(df)
forecast = m.predict(future)
forecast[(forecast['playoff'] + forecast['superbowl']).abs() > 0][
['ds', 'playoff', 'superbowl']][-10:]
```
The magnitude of the holiday effect has been reduced compared to before, especially for superbowls, which had the fewest observations. There is a parameter `seasonality_prior_scale` which similarly adjusts the extent to which the seasonality model will fit the data.
Prior scales can be set separately for individual holidays by including a column `prior_scale` in the holidays dataframe. Prior scales for individual seasonalities can be passed as an argument to `add_seasonality`. For instance, the prior scale for just weekly seasonality can be set using:
```
%%R
m <- prophet()
m <- add_seasonality(
m, name='weekly', period=7, fourier.order=3, prior.scale=0.1)
m = Prophet()
m.add_seasonality(
name='weekly', period=7, fourier_order=3, prior_scale=0.1)
```
### Additional regressors
Additional regressors can be added to the linear part of the model using the `add_regressor` method or function. A column with the regressor value will need to be present in both the fitting and prediction dataframes. For example, we can add an additional effect on Sundays during the NFL season. On the components plot, this effect will show up in the 'extra_regressors' plot:
```
%%R -w 9 -h 12 -u in
nfl_sunday <- function(ds) {
dates <- as.Date(ds)
month <- as.numeric(format(dates, '%m'))
as.numeric((weekdays(dates) == "Sunday") & (month > 8 | month < 2))
}
df$nfl_sunday <- nfl_sunday(df$ds)
m <- prophet()
m <- add_regressor(m, 'nfl_sunday')
m <- fit.prophet(m, df)
future$nfl_sunday <- nfl_sunday(future$ds)
forecast <- predict(m, future)
prophet_plot_components(m, forecast)
def nfl_sunday(ds):
    date = pd.to_datetime(ds)
    if date.weekday() == 6 and (date.month > 8 or date.month < 2):
        return 1
    else:
        return 0
df['nfl_sunday'] = df['ds'].apply(nfl_sunday)
m = Prophet()
m.add_regressor('nfl_sunday')
m.fit(df)
future['nfl_sunday'] = future['ds'].apply(nfl_sunday)
forecast = m.predict(future)
fig = m.plot_components(forecast)
```
NFL Sundays could also have been handled using the "holidays" interface described above, by creating a list of past and future NFL Sundays. The `add_regressor` function provides a more general interface for defining extra linear regressors, and in particular does not require that the regressor be a binary indicator. Another time series could be used as a regressor, although its future values would have to be known.
[This notebook](https://nbviewer.jupyter.org/github/nicolasfauchereau/Auckland_Cycling/blob/master/notebooks/Auckland_cycling_and_weather.ipynb) shows an example of using weather factors as extra regressors in a forecast of bicycle usage, and provides an excellent illustration of how other time series can be included as extra regressors.
The `add_regressor` function has optional arguments for specifying the prior scale (holiday prior scale is used by default) and whether or not the regressor is standardized - see the docstring with `help(Prophet.add_regressor)` in Python and `?add_regressor` in R. Note that regressors must be added prior to model fitting.
The extra regressor must be known for both the history and for future dates. It thus must either be something that has known future values (such as `nfl_sunday`), or something that has separately been forecasted elsewhere. Prophet will also raise an error if the regressor is constant throughout the history, since there is nothing to fit from it.
Extra regressors are put in the linear component of the model, so the underlying model is that the time series depends on the extra regressor as either an additive or multiplicative factor (see the next section for multiplicativity).
### Demonstration of `flopy.utils.get_transmissivities` method
for computing open interval transmissivities (for weighted averages of heads or fluxes)
In practice this method might be used to:
* compute vertically-averaged head target values representative of observation wells of varying open intervals (including variability in saturated thickness due to the position of the water table)
* apportion boundary fluxes (e.g. from an analytic element model) among model layers based on transmissivity.
* any other analysis where a distribution of transmissivity by layer is needed for a specified vertical interval of the model.
```
import os
import sys
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# run installed version of flopy or add local path
try:
    import flopy
except ImportError:
    fpth = os.path.abspath(os.path.join('..', '..'))
    sys.path.append(fpth)
    import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
```
### Make up some open interval tops and bottoms and some heads
* (these could be lists of observation well screen tops and bottoms)
* the heads array contains the simulated head in each model layer,
at the location of each observation well (for example, what you would get back from HYDMOD if you had an entry for each layer at the location of each head target).
* make up a model grid with uniform horizontal k of 2.
```
sctop = [-.25, .5, 1.7, 1.5, 3., 2.5] # screen tops
scbot = [-1., -.5, 1.2, 0.5, 1.5, -.2] # screen bottoms
# head in each layer, for 6 head target locations
heads = np.array([[1., 2.0, 2.05, 3., 4., 2.5],
[1.1, 2.1, 2.2, 2., 3.5, 3.],
[1.2, 2.3, 2.4, 0.6, 3.4, 3.2]
])
nl, nr = heads.shape
nc = nr
botm = np.ones((nl, nr, nc), dtype=float)
top = np.ones((nr, nc), dtype=float) * 2.1
hk = np.ones((nl, nr, nc), dtype=float) * 2.
for i in range(nl):
    botm[nl-i-1, :, :] = i
botm
```
### Make a flopy modflow model
```
m = flopy.modflow.Modflow('junk', version='mfnwt', model_ws='data')
dis = flopy.modflow.ModflowDis(m, nlay=nl, nrow=nr, ncol=nc, botm=botm, top=top)
upw = flopy.modflow.ModflowUpw(m, hk=hk)
```
### Get transmissivities along the diagonal cells
* alternatively, if a model's coordinate information has been set up, the real-world x and y coordinates could be supplied with the `x` and `y` arguments
* if `sctop` and `scbot` arguments are given, the transmissivities are computed for the open intervals only
(cells that are partially within the open interval have reduced thickness, cells outside of the open interval have transmissivities of 0). If no `sctop` or `scbot` arguments are supplied, transmissivities reflect the full saturated thickness in each column of cells (see plot below, which shows different open intervals relative to the model layering)
```
r, c = np.arange(nr), np.arange(nc)
T = flopy.utils.get_transmissivities(heads, m, r=r, c=c, sctop=sctop, scbot=scbot)
np.round(T, 2)
m.dis.botm.array[:, r, c]
```
### Plot the model top and layer bottoms (colors)
open intervals are shown as boxes
* well 0 has zero transmissivities for each layer, as it is below the model bottom
* well 1 has T values of 0 for layers 1 and 2, and 1 for layer 3 (K=2 x 0.5 thickness)
```
fig, ax = plt.subplots()
plt.plot(m.dis.top.array[r, c], label='model top')
for i, l in enumerate(m.dis.botm.array[:, r, c]):
    label = 'layer {} bot'.format(i+1)
    if i == m.nlay - 1:
        label = 'model bot'
    plt.plot(l, label=label)
plt.plot(heads[0], label='piezometric surface', color='b', linestyle=':')
for iw in range(len(sctop)):
    ax.fill_between([iw-.25, iw+.25], scbot[iw], sctop[iw],
                    facecolor='None', edgecolor='k')
ax.legend(loc=2)
```
### example of transmissivities without `sctop` and `scbot`
* well zero has T=0 in layer 1 because it is dry; T=0.2 in layer 2 because the sat. thickness there is only 0.1
```
T = flopy.utils.get_transmissivities(heads, m, r=r, c=c)
np.round(T, 2)
```
```
import pandas as pd
import warnings
warnings.filterwarnings('ignore')
# Dependencies for interaction with database:
from sqlalchemy import create_engine
from sqlalchemy.orm import Session
# Machine Learning dependencies:
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
# Validation libraries
from sklearn import metrics
from sklearn.metrics import accuracy_score, mean_squared_error, precision_recall_curve
from sklearn.model_selection import cross_val_score
from collections import Counter
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from imblearn.metrics import classification_report_imbalanced
# Create engine and link to local postgres database:
engine = create_engine('postgresql://postgres:spring01@mht.ciic7sa0kxc0.us-west-2.rds.amazonaws.com:5432/postgres')
connect = engine.connect()
# Create session:
session = Session(engine)
# Import clean_dataset_2016 table:
clean_2016_df = pd.read_sql(sql = 'SELECT * FROM "survey_2016"',con=connect)
clean_2016_df.head()
# Check data for insights:
print(clean_2016_df.shape)
print(clean_2016_df.columns.tolist())
print(clean_2016_df.value_counts)
##Test:
#Dataset filtered on tech_company = "yes"
#Target:
#Features: company_size, age, gender, country_live, identified_with_mh, mh_employer, employer_discus_mh, employer_provide_mh_coverage,treatment_mh_from_professional, employers_options_help, protected_anonymity_mh
# Filter tech_or_not columns:
clean_2016_df["tech_company"].head()
tech_df = pd.read_sql('SELECT * FROM "survey_2016" WHERE "tech_company" = 1', connect)
tech_df.shape
ml_df = tech_df[["mh_sought_pro_tx","mh_dx_pro","company_size","mh_discussion_coworkers", "mh_discussion_supervisors","mh_employer_discussion","prev_mh_discussion_coworkers","prev_mh_discussion_supervisors","mh_sharing_friends_family"]]
ml_df
# Encode dataset:
# Create label encoder instance:
le = LabelEncoder()
# Make a copy of desire data:
encoded_df = ml_df.copy()
# Apply the label encoder to every feature column:
features = encoded_df.columns.tolist()
for feature in features:
encoded_df[feature] = le.fit_transform(encoded_df[feature])
# Check:
encoded_df.head()
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder(sparse = False)
encoded_df1 = ml_df.copy()
# Apply encoder:
encoded_df1 = encoder.fit_transform(encoded_df1)
encoded_df1
# Create our target:
y = encoded_df["mh_sought_pro_tx"]
# Create our features:
X = encoded_df.drop(columns = "mh_sought_pro_tx", axis =1)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,
y,
random_state=40,
stratify=y)
X_train.shape
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(solver='lbfgs',
max_iter=200,
random_state=1)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
results = pd.DataFrame({"Prediction": y_pred, "Actual": y_test}).reset_index(drop=True)
results.head(20)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_pred))
from sklearn.metrics import confusion_matrix, classification_report
matrix = confusion_matrix(y_test,y_pred)
print(matrix)
report = classification_report(y_test, y_pred)
print(report)
```
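The imports at the top also pull in `cross_val_score`, which goes unused above. A minimal sketch of how it could validate the same kind of classifier (using synthetic stand-in data here, since the survey database is not public):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the encoded survey features
X, y = make_classification(n_samples=500, n_features=8, random_state=1)

clf = LogisticRegression(solver='lbfgs', max_iter=200, random_state=1)

# 5-fold cross-validated accuracy gives a more stable estimate
# than a single train/test split
scores = cross_val_score(clf, X, y, cv=5, scoring='accuracy')
print(scores.mean())
```

Cross-validation is a useful complement to the single `accuracy_score` above, especially on a modest-sized survey sample where one split can be unrepresentative.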
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/keras"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/keras.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/keras.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Keras is a high-level API to build and train deep learning models. It's used for
fast prototyping, advanced research, and production, with three key advantages:
- *User friendly*<br>
Keras has a simple, consistent interface optimized for common use cases. It
provides clear and actionable feedback for user errors.
- *Modular and composable*<br>
Keras models are made by connecting configurable building blocks together,
with few restrictions.
- *Easy to extend*<br> Write custom building blocks to express new ideas for
research. Create new layers, loss functions, and develop state-of-the-art
models.
## Import tf.keras
`tf.keras` is TensorFlow's implementation of the
[Keras API specification](https://keras.io). This is a high-level
API to build and train models that includes first-class support for
TensorFlow-specific functionality, such as [eager execution](#eager_execution),
`tf.data` pipelines, and [Estimators](./estimators.md).
`tf.keras` makes TensorFlow easier to use without sacrificing flexibility and
performance.
To get started, import `tf.keras` as part of your TensorFlow program setup:
```
!pip install -q pyyaml # Required to save models in YAML format
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow.keras import layers
print(tf.VERSION)
print(tf.keras.__version__)
```
`tf.keras` can run any Keras-compatible code, but keep in mind:
* The `tf.keras` version in the latest TensorFlow release might not be the same
as the latest `keras` version from PyPI. Check `tf.keras.__version__`.
* When [saving a model's weights](#weights_only), `tf.keras` defaults to the
[checkpoint format](./checkpoints.md). Pass `save_format='h5'` to
use HDF5.
## Build a simple model
### Sequential model
In Keras, you assemble *layers* to build *models*. A model is (usually) a graph
of layers. The most common type of model is a stack of layers: the
`tf.keras.Sequential` model.
To build a simple, fully-connected network (i.e. multi-layer perceptron):
```
model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense(64, activation='relu'))
# Add another:
model.add(layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(layers.Dense(10, activation='softmax'))
```
### Configure the layers
There are many `tf.keras.layers` available with some common constructor
parameters:
* `activation`: Set the activation function for the layer. This parameter is
specified by the name of a built-in function or as a callable object. By
default, no activation is applied.
* `kernel_initializer` and `bias_initializer`: The initialization schemes
that create the layer's weights (kernel and bias). This parameter is a name or
a callable object. This defaults to the `"Glorot uniform"` initializer.
* `kernel_regularizer` and `bias_regularizer`: The regularization schemes
that apply to the layer's weights (kernel and bias), such as L1 or L2
regularization. By default, no regularization is applied.
The following instantiates `tf.keras.layers.Dense` layers using constructor
arguments:
```
# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# Or:
layers.Dense(64, activation=tf.sigmoid)
# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))
# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))
# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')
# A linear layer with a bias vector initialized to 2.0s:
layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))
```
## Train and evaluate
### Set up training
After the model is constructed, configure its learning process by calling the
`compile` method:
```
model = tf.keras.Sequential([
# Adds a densely-connected layer with 64 units to the model:
layers.Dense(64, activation='relu', input_shape=(32,)),
# Add another:
layers.Dense(64, activation='relu'),
# Add a softmax layer with 10 output units:
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
```
`tf.keras.Model.compile` takes three important arguments:
* `optimizer`: This object specifies the training procedure. Pass it optimizer
instances from the `tf.train` module, such as
`tf.train.AdamOptimizer`, `tf.train.RMSPropOptimizer`, or
`tf.train.GradientDescentOptimizer`.
* `loss`: The function to minimize during optimization. Common choices include
mean square error (`mse`), `categorical_crossentropy`, and
`binary_crossentropy`. Loss functions are specified by name or by
passing a callable object from the `tf.keras.losses` module.
* `metrics`: Used to monitor training. These are string names or callables from
the `tf.keras.metrics` module.
The following shows a few examples of configuring a model for training:
```
# Configure a model for mean-squared error regression.
model.compile(optimizer=tf.train.AdamOptimizer(0.01),
loss='mse', # mean squared error
metrics=['mae']) # mean absolute error
# Configure a model for categorical classification.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
loss=tf.keras.losses.categorical_crossentropy,
metrics=[tf.keras.metrics.categorical_accuracy])
```
### Input NumPy data
For small datasets, use in-memory [NumPy](https://www.numpy.org/)
arrays to train and evaluate a model. The model is "fit" to the training data
using the `fit` method:
```
import numpy as np
def random_one_hot_labels(shape):
n, n_class = shape
classes = np.random.randint(0, n_class, n)
labels = np.zeros((n, n_class))
labels[np.arange(n), classes] = 1
return labels
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.fit(data, labels, epochs=10, batch_size=32)
```
`tf.keras.Model.fit` takes three important arguments:
* `epochs`: Training is structured into *epochs*. An epoch is one iteration over
the entire input data (this is done in smaller batches).
* `batch_size`: When passed NumPy data, the model slices the data into smaller
batches and iterates over these batches during training. This integer
specifies the size of each batch. Be aware that the last batch may be smaller
if the total number of samples is not divisible by the batch size.
* `validation_data`: When prototyping a model, you want to easily monitor its
performance on some validation data. Passing this argument—a tuple of inputs
and labels—allows the model to display the loss and metrics in inference mode
for the passed data, at the end of each epoch.
Here's an example using `validation_data`:
```
import numpy as np
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
val_data = np.random.random((100, 32))
val_labels = random_one_hot_labels((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
validation_data=(val_data, val_labels))
```
### Input tf.data datasets
Use the [Datasets API](./datasets.md) to scale to large datasets
or multi-device training. Pass a `tf.data.Dataset` instance to the `fit`
method:
```
# Instantiates a toy dataset instance:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
dataset = dataset.repeat()
# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(dataset, epochs=10, steps_per_epoch=30)
```
Here, the `fit` method uses the `steps_per_epoch` argument—this is the number of
training steps the model runs before it moves to the next epoch. Since the
`Dataset` yields batches of data, this snippet does not require a `batch_size`.
Datasets can also be used for validation:
```
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32).repeat()
model.fit(dataset, epochs=10, steps_per_epoch=30,
validation_data=val_dataset,
validation_steps=3)
```
### Evaluate and predict
The `tf.keras.Model.evaluate` and `tf.keras.Model.predict` methods can use NumPy
data and a `tf.data.Dataset`.
To *evaluate* the inference-mode loss and metrics for the data provided:
```
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.evaluate(data, labels, batch_size=32)
model.evaluate(dataset, steps=30)
```
And to *predict* the output of the last layer in inference for the data provided,
as a NumPy array:
```
result = model.predict(data, batch_size=32)
print(result.shape)
```
## Build advanced models
### Functional API
The `tf.keras.Sequential` model is a simple stack of layers that cannot
represent arbitrary models. Use the
[Keras functional API](https://keras.io/getting-started/functional-api-guide/)
to build complex model topologies such as:
* Multi-input models,
* Multi-output models,
* Models with shared layers (the same layer called several times),
* Models with non-sequential data flows (e.g. residual connections).
Building a model with the functional API works like this:
1. A layer instance is callable and returns a tensor.
2. Input tensors and output tensors are used to define a `tf.keras.Model`
instance.
3. This model is trained just like the `Sequential` model.
The following example uses the functional API to build a simple, fully-connected
network:
```
inputs = tf.keras.Input(shape=(32,)) # Returns a placeholder tensor
# A layer instance is callable on a tensor, and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)
```
Instantiate the model given inputs and outputs.
```
model = tf.keras.Model(inputs=inputs, outputs=predictions)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)
```
### Model subclassing
Build a fully-customizable model by subclassing `tf.keras.Model` and defining
your own forward pass. Create layers in the `__init__` method and set them as
attributes of the class instance. Define the forward pass in the `call` method.
Model subclassing is particularly useful when
[eager execution](./eager.md) is enabled since the forward pass
can be written imperatively.
Key Point: Use the right API for the job. While model subclassing offers
flexibility, it comes at a cost of greater complexity and more opportunities for
user errors. If possible, prefer the functional API.
The following example shows a subclassed `tf.keras.Model` using a custom forward
pass:
```
class MyModel(tf.keras.Model):
def __init__(self, num_classes=10):
super(MyModel, self).__init__(name='my_model')
self.num_classes = num_classes
# Define your layers here.
self.dense_1 = layers.Dense(32, activation='relu')
self.dense_2 = layers.Dense(num_classes, activation='sigmoid')
def call(self, inputs):
# Define your forward pass here,
# using layers you previously defined (in `__init__`).
x = self.dense_1(inputs)
return self.dense_2(x)
def compute_output_shape(self, input_shape):
# You need to override this function if you want to use the subclassed model
# as part of a functional-style model.
# Otherwise, this method is optional.
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.num_classes
return tf.TensorShape(shape)
```
Instantiate the new model class:
```
model = MyModel(num_classes=10)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
```
### Custom layers
Create a custom layer by subclassing `tf.keras.layers.Layer` and implementing
the following methods:
* `build`: Create the weights of the layer. Add weights with the `add_weight`
method.
* `call`: Define the forward pass.
* `compute_output_shape`: Specify how to compute the output shape of the layer
given the input shape.
* Optionally, a layer can be serialized by implementing the `get_config` method
and the `from_config` class method.
Here's an example of a custom layer that implements a `matmul` of an input with
a kernel matrix:
```
class MyLayer(layers.Layer):
def __init__(self, output_dim, **kwargs):
self.output_dim = output_dim
super(MyLayer, self).__init__(**kwargs)
def build(self, input_shape):
shape = tf.TensorShape((input_shape[1], self.output_dim))
# Create a trainable weight variable for this layer.
self.kernel = self.add_weight(name='kernel',
shape=shape,
initializer='uniform',
trainable=True)
# Make sure to call the `build` method at the end
super(MyLayer, self).build(input_shape)
def call(self, inputs):
return tf.matmul(inputs, self.kernel)
def compute_output_shape(self, input_shape):
shape = tf.TensorShape(input_shape).as_list()
shape[-1] = self.output_dim
return tf.TensorShape(shape)
def get_config(self):
base_config = super(MyLayer, self).get_config()
base_config['output_dim'] = self.output_dim
return base_config
@classmethod
def from_config(cls, config):
return cls(**config)
```
Create a model using your custom layer:
```
model = tf.keras.Sequential([
MyLayer(10),
layers.Activation('softmax')])
# The compile step specifies the training configuration
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
```
## Callbacks
A callback is an object passed to a model to customize and extend its behavior
during training. You can write your own custom callback, or use the built-in
`tf.keras.callbacks` that include:
* `tf.keras.callbacks.ModelCheckpoint`: Save checkpoints of your model at
regular intervals.
* `tf.keras.callbacks.LearningRateScheduler`: Dynamically change the learning
rate.
* `tf.keras.callbacks.EarlyStopping`: Interrupt training when validation
performance has stopped improving.
* `tf.keras.callbacks.TensorBoard`: Monitor the model's behavior using
[TensorBoard](./summaries_and_tensorboard.md).
To use a `tf.keras.callbacks.Callback`, pass it to the model's `fit` method:
```
callbacks = [
# Interrupt training if `val_loss` stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./logs` directory
tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(val_data, val_labels))
```
<a id='weights_only'></a>
## Save and restore
### Weights only
Save and load the weights of a model using `tf.keras.Model.save_weights`:
```
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')])
model.compile(optimizer=tf.train.AdamOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')
# Restore the model's state,
# this requires a model with the same architecture.
model.load_weights('./weights/my_model')
```
By default, this saves the model's weights in the
[TensorFlow checkpoint](./checkpoints.md) file format. Weights can
also be saved to the Keras HDF5 format (the default for the multi-backend
implementation of Keras):
```
# Save weights to a HDF5 file
model.save_weights('my_model.h5', save_format='h5')
# Restore the model's state
model.load_weights('my_model.h5')
```
### Configuration only
A model's configuration can be saved—this serializes the model architecture
without any weights. A saved configuration can recreate and initialize the same
model, even without the code that defined the original model. Keras supports
JSON and YAML serialization formats:
```
# Serialize a model to JSON format
json_string = model.to_json()
json_string
import json
import pprint
pprint.pprint(json.loads(json_string))
```
Recreate the model (newly initialized) from the JSON:
```
fresh_model = tf.keras.models.model_from_json(json_string)
```
Serializing a model to YAML format requires that you install `pyyaml` *before you import TensorFlow*:
```
yaml_string = model.to_yaml()
print(yaml_string)
```
Recreate the model from the YAML:
```
fresh_model = tf.keras.models.model_from_yaml(yaml_string)
```
Caution: Subclassed models are not serializable because their architecture is
defined by the Python code in the body of the `call` method.
### Entire model
The entire model can be saved to a file that contains the weight values, the
model's configuration, and even the optimizer's configuration. This allows you
to checkpoint a model and resume training later—from the exact same
state—without access to the original code.
```
# Create a trivial model
model = tf.keras.Sequential([
layers.Dense(10, activation='softmax', input_shape=(32,)),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
# Save entire model to a HDF5 file
model.save('my_model.h5')
# Recreate the exact same model, including weights and optimizer.
model = tf.keras.models.load_model('my_model.h5')
```
## Eager execution
[Eager execution](./eager.md) is an imperative programming
environment that evaluates operations immediately. This is not required for
Keras, but is supported by `tf.keras` and useful for inspecting your program and
debugging.
All of the `tf.keras` model-building APIs are compatible with eager execution.
And while the `Sequential` and functional APIs can be used, eager execution
especially benefits *model subclassing* and building *custom layers*—the APIs
that require you to write the forward pass as code (instead of the APIs that
create models by assembling existing layers).
See the [eager execution guide](./eager.md#build_a_model) for
examples of using Keras models with custom training loops and `tf.GradientTape`.
## Distribution
### Estimators
The [Estimators](./estimators.md) API is used for training models
for distributed environments. This targets industry use cases such as
distributed training on large datasets that can export a model for production.
A `tf.keras.Model` can be trained with the `tf.estimator` API by converting the
model to an `tf.estimator.Estimator` object with
`tf.keras.estimator.model_to_estimator`. See
[Creating Estimators from Keras models](./estimators.md#creating_estimators_from_keras_models).
```
model = tf.keras.Sequential([layers.Dense(10,activation='softmax'),
layers.Dense(10,activation='softmax')])
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
estimator = tf.keras.estimator.model_to_estimator(model)
```
Note: Enable [eager execution](./eager.md) for debugging
[Estimator input functions](./premade_estimators.md#create_input_functions)
and inspecting data.
### Multiple GPUs
`tf.keras` models can run on multiple GPUs using
`tf.contrib.distribute.DistributionStrategy`. This API provides distributed
training on multiple GPUs with almost no changes to existing code.
Currently, `tf.contrib.distribute.MirroredStrategy` is the only supported
distribution strategy. `MirroredStrategy` does in-graph replication with
synchronous training using all-reduce on a single machine. To use
`DistributionStrategy` with Keras, convert the `tf.keras.Model` to a
`tf.estimator.Estimator` with `tf.keras.estimator.model_to_estimator`, then
train the estimator.
The following example distributes a `tf.keras.Model` across multiple GPUs on a
single machine.
First, define a simple model:
```
model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
model.add(layers.Dense(1, activation='sigmoid'))
optimizer = tf.train.GradientDescentOptimizer(0.2)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()
```
Define an *input pipeline*. The `input_fn` returns a `tf.data.Dataset` object
used to distribute the data across multiple devices—with each device processing
a slice of the input batch.
```
def input_fn():
x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.repeat(10)
dataset = dataset.batch(32)
return dataset
```
Next, create a `tf.estimator.RunConfig` and set the `train_distribute` argument
to the `tf.contrib.distribute.MirroredStrategy` instance. When creating
`MirroredStrategy`, you can specify a list of devices or set the `num_gpus`
argument. The default uses all available GPUs, like the following:
```
strategy = tf.contrib.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)
```
Convert the Keras model to a `tf.estimator.Estimator` instance:
```
keras_estimator = tf.keras.estimator.model_to_estimator(
keras_model=model,
config=config,
model_dir='/tmp/model_dir')
```
Finally, train the `Estimator` instance by providing the `input_fn` and `steps`
arguments:
```
keras_estimator.train(input_fn=input_fn, steps=10)
```
```
import math
import numpy as np
import pandas as pd
```
### Initial conditions
```
initial_rating = 400
k = 100
things = ['Malted Milk','Rich Tea','Hobnob','Digestive']
```
### Elo Algos
```
def expected_win(r1, r2):
"""
Expected probability of player 1 beating player 2
given that player 1 has rating r1 and player 2 has rating r2
"""
return 1.0 / (1 + math.pow(10, (r2-r1)/400))
def update_rating(R, k, P, d):
"""
d = 1 = WIN
d = 0 = LOSS
"""
return R + k*(d-P)
def elo(Ra, Rb, k, d):
"""
d = 1 when player A wins
d = 0 when player B wins
"""
Pa = expected_win(Ra, Rb)
Pb = expected_win(Rb, Ra)
# update if A wins
if d == 1:
Ra = update_rating(Ra, k, Pa, d)
Rb = update_rating(Rb, k, Pb, d-1)
# update if B wins
elif d == 0:
Ra = update_rating(Ra, k, Pa, d)
Rb = update_rating(Rb, k, Pb, d+1)
return Pa, Pb, Ra, Rb
def elo_sequence(things, initial_rating, k, results):
"""
Initialises score dictionary, and runs through sequence of pairwise results, returning final dictionary of Elo rankings
"""
dic_scores = {i:initial_rating for i in things}
for result in results:
winner, loser = result
Ra, Rb = dic_scores[winner], dic_scores[loser]
_, _, newRa, newRb = elo(Ra, Rb, k, 1)
dic_scores[winner], dic_scores[loser] = newRa, newRb
return dic_scores
```
### Mean Elo
```
def mElo(things, initial_rating, k, results, numEpochs):
"""
Randomises the sequence of the pairwise comparisons, running the Elo sequence in a random
sequence for a number of epochs
Returns the mean Elo ratings over the randomised epoch sequences
"""
lst_outcomes = []
for i in np.arange(numEpochs):
np.random.shuffle(results)
lst_outcomes.append(elo_sequence(things, initial_rating, k, results))
return pd.DataFrame(lst_outcomes).mean().sort_values(ascending=False)
```
### Pairwise Outcomes from Christian's Taste Test
> **Format** (Winner, Loser)
```
results = np.array([('Malted Milk','Rich Tea'),('Malted Milk','Digestive'),('Malted Milk','Hobnob')\
,('Hobnob','Rich Tea'),('Hobnob','Digestive'),('Digestive','Rich Tea')])
jenResults = np.array([('Rich Tea','Malted Milk'),('Digestive','Malted Milk'),('Hobnob','Malted Milk')\
,('Hobnob','Rich Tea'),('Hobnob','Digestive'),('Digestive','Rich Tea')])
mElo(things, initial_rating, k, jenResults, 1000)
```
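As a quick sanity check on the update rule defined above: two players at the initial rating of 400 each have a 50% expected win probability, so with k = 100 the winner should gain exactly 50 points. A self-contained recap mirroring the functions above:

```python
import math

def expected_win(r1, r2):
    # Logistic expectation, identical to the function defined above
    return 1.0 / (1 + math.pow(10, (r2 - r1) / 400))

def update_rating(R, k, P, d):
    # d = 1 for a win, d = 0 for a loss
    return R + k * (d - P)

Pa = expected_win(400, 400)                   # equal ratings -> 0.5
new_winner = update_rating(400, 100, Pa, 1)   # 400 + 100 * (1 - 0.5) = 450
new_loser = update_rating(400, 100, Pa, 0)    # 400 + 100 * (0 - 0.5) = 350
print(Pa, new_winner, new_loser)
```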
##### 1. Dates (incident timestamp)
: timestamp of the crime incident
##### 2. Category (crime type - the target variable)
: category of the crime incident (only in train.csv). This is the target variable you are going to predict.
##### 3. Descript (crime details)
: detailed description of the crime incident (only in train.csv)
##### 4. DayOfWeek (day of the week)
: the day of the week
##### 5. PdDistrict (police district)
: name of the Police Department District
##### 6. Resolution (punishment outcome)
: how the crime incident was resolved (only in train.csv)
##### 7. Address (approximate address of the incident)
: the approximate street address of the crime incident
##### 8. X (longitude)
: Longitude
##### 9. Y (latitude)
: Latitude
```
df = pd.read_csv("train.csv", parse_dates=['Dates'])
# Extract year, month, week, and hour (1-hour resolution)
df['year'] = df['Dates'].map(lambda x: x.year)
df['month'] = df['Dates'].map(lambda x: x.month)
df['week'] = df['Dates'].map(lambda x: x.week)
df['hour'] = df['Dates'].map(lambda x: x.hour)
df.tail()
for i in df.columns:
print(i, "\n", df[i].unique(), len(df[i].unique()))
df['event'] = 1
df.year.value_counts()
```
There is noticeably less data for 2015.
## 1. Analysis by time
##### 1) Number of crimes per year
```
event_by_year = df[['year','event']].groupby('year').count().reset_index()
event_by_year.plot(kind='bar',x='year', y='event')
```
##### 1-2) Number of crimes per year by category
```
event_by_year_category = df[['year','Category','event']].groupby(['year','Category']).count().reset_index()
event_by_year_category_pivot = event_by_year_category.pivot(index="year", columns='Category', values='event').fillna(0)
f, ax = plt.subplots(figsize=(11, 9))
cmap = sns.diverging_palette(220, 10, as_cmap=True)
sns.heatmap(event_by_year_category_pivot, cmap=cmap, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
```
Theft increases year over year.
Does NON-CRIMINAL mean no criminal liability was pursued? (Mostly psychiatric treatment.)
##### 2) Number of incidents by hour
```
event_by_hour = df[['hour','event']].groupby(['hour']).sum().reset_index()
event_by_hour.plot(kind='bar', title="events by hour")
```
##### 2-1) Number of incidents by year and hour
```
event_by_year_hour = df[['year','hour','event']].groupby(['year','hour']).count().reset_index()
event_by_year_hour_pivot = event_by_year_hour.pivot(index='hour', columns='year', values='event')
event_by_year_hour_pivot
event_by_year_hour_pivot.plot(figsize=(10,7))
```
Excluding 2015, the hourly incident patterns from 2003 to 2014 are similar.
##### 2-2) Number of incidents by year and month
```
event_by_year_month = df[['year','month','event']].groupby(['year','month']).count().reset_index()
event_by_year_month_pivot = event_by_year_month.pivot(index='month', columns='year', values='event').fillna(method='ffill')
event_by_year_month_pivot.plot(title="events by year and month")
```
##### 2-3) Number of incidents by year and week
```
event_by_year_week = df[['year','week','event']].groupby(['year','week']).count().reset_index()
event_by_year_week_pivot = event_by_year_week.pivot(index='week', columns='year', values='event').fillna(method='ffill')
event_by_year_week_pivot.plot(title="events by year and week")
```
##### 3. Crime counts by hour for each category
```
crime = df.Category.unique()
crime
count = 1
n_categories = len(df.Category.unique())
plt.figure(figsize=(8, 3 * n_categories))
for i in df.Category.unique():
    crime = df[df.Category == i]
    crime = crime[['year','hour','event']].groupby(['year','hour']).count().reset_index()
    crime_pivot = crime.pivot(index='hour', columns='year', values='event')
    # Pass the subplot axes explicitly; otherwise pandas opens a new figure per plot
    ax = plt.subplot(n_categories, 1, count)
    crime_pivot.plot(ax=ax, title=i)
    count += 1
plt.show()
crime = df[df.Category == "WARRANTS"]
crime = crime[['year','hour','event']].groupby(['year','hour']).count().reset_index()
crime_pivot = crime.pivot(index='hour', columns='year', values='event')
crime_pivot.plot()
```
##### 4. Number of crimes by day of the week
```
dayofweek_event = df[['DayOfWeek','event']].groupby(['DayOfWeek']).count().reset_index()
dayofweek_event.plot(x='DayOfWeek', y='event',kind='bar', color='r')
```
## 2. Analysis by location
##### 1) Number of incidents by PdDistrict
```
event_by_PdDistrict = df[['PdDistrict','event']].groupby('PdDistrict').count().reset_index()
event_by_PdDistrict.sort_values(by='event', ascending=False, inplace=True)
event_by_PdDistrict.plot(kind='bar', x='PdDistrict')
```
##### 2) Number of incidents by Address
```
event_by_address_pd = df[['PdDistrict','Address','event']].groupby(['PdDistrict','Address']).count().reset_index()
event_by_address_pd.sort_values(by='event', ascending=False)
most_crime = df[['Category','event']].groupby("Category").sum().reset_index().sort_values(by="event", ascending=False)
most_crime.plot(kind='bar', x='Category', title="Which crime is most common?")
```
##### Where does each type of crime occur, and where does it occur most often?
```
crime_address = df[['Category','Address','event']].groupby(['Category','Address']).sum().reset_index()
crime_address.sort_values(by=['Category','event'], ascending=False, inplace=True)
len(crime_address[crime_address.Category == "LARCENY/THEFT"]), crime_address[crime_address.Category == "LARCENY/THEFT"]
```
#### Let's pull the top five locations for each type of crime
```
for crime in crime_address.Category.unique():
result = crime_address[crime_address.Category == crime]
print("Crime: ",crime,"\t", "Address count: ",len(result),"\n" ,result.head(),"\n")
```
1. 800 Block of BRYANT ST: almost every type of crime occurs here... a real crime den.
2. The top-ranked addresses for each crime often contain "Block". Binarize on whether the address contains "Block"?
#### Binarize Address on the presence of "Block"
```
a = np.array([1 if 'Block' in i else 0 for i in df.Address.values])
a
```
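The same flag can be computed without a Python loop using pandas string methods; a sketch on a toy frame, since `.str.contains` is vectorized over the whole column:

```python
import pandas as pd

# Toy stand-in for the Address column
toy = pd.DataFrame({'Address': ['800 Block of BRYANT ST',
                                'OAK ST / LAGUNA ST',
                                '100 Block of MARKET ST']})

# Vectorized version of the list comprehension above
toy['is_block'] = toy['Address'].str.contains('Block').astype(int)
print(toy['is_block'].tolist())
```

On the full dataset this would be `df['is_block'] = df.Address.str.contains('Block').astype(int)`, giving the same 0/1 feature as the loop.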
### Crime Visualization
```
event_by_crimeAddress = df[['Category','Address','X','Y','event']].groupby(['Category','Address','X','Y']).sum().reset_index()
event_by_crimeAddress
df[df.Y == event_by_crimeAddress.Y.max()]
```
Where does a latitude of 90 suddenly come from? Google Maps places it in the middle of the ocean, so it looks like a placeholder value.
# Low level API
## Prerequisites
- Understanding the gammapy data workflow, in particular what are DL3 events and instrument response functions (IRF).
- Understanding of the data reduction and modeling fitting process as shown in the [analysis with the high level interface tutorial](analysis_1.ipynb)
## Context
This notebook is an introduction to gammapy analysis, this time using the lower-level classes and functions of the library.
This helps us understand what happens during the two main gammapy analysis steps: data reduction and modeling/fitting.
**Objective: Create a 3D dataset of the Crab using the H.E.S.S. DL3 data release 1 and perform a simple model fitting of the Crab nebula using the lower level gammapy API.**
## Proposed approach
Here, we have to interact with the data archive (with the `~gammapy.data.DataStore`) to retrieve a list of selected observations (`~gammapy.data.Observations`). Then, we define the geometry of the `~gammapy.datasets.MapDataset` object we want to produce and the maker object that reduces an observation
to a dataset.
We can then proceed with data reduction with a loop over all selected observations to produce datasets in the relevant geometry and stack them together (i.e. sum them all).
In practice, we have to:
- Create a `~gammapy.data.DataStore` pointing to the relevant data
- Apply an observation selection to produce a list of observations, a `~gammapy.data.Observations` object.
- Define a geometry of the Map we want to produce, with a sky projection and an energy range.
- Create a `~gammapy.maps.MapAxis` for the energy
- Create a `~gammapy.maps.WcsGeom` for the geometry
- Create the necessary makers :
- the map dataset maker : `~gammapy.makers.MapDatasetMaker`
- the background normalization maker, here a `~gammapy.makers.FoVBackgroundMaker`
- and usually the safe mask maker: `~gammapy.makers.SafeMaskMaker`
- Perform the data reduction loop. And for every observation:
- Apply the makers sequentially to produce the current `~gammapy.datasets.MapDataset`
- Stack it on the target one.
- Define the `~gammapy.modeling.models.SkyModel` to apply to the dataset.
- Create a `~gammapy.modeling.Fit` object and run it to fit the model parameters
- Apply a `~gammapy.estimators.FluxPointsEstimator` to compute flux points for the spectral part of the fit.
## Setup
First, we setup the analysis by performing required imports.
```
%matplotlib inline
import matplotlib.pyplot as plt
from pathlib import Path
from astropy import units as u
from astropy.coordinates import SkyCoord
from regions import CircleSkyRegion
from gammapy.data import DataStore
from gammapy.datasets import MapDataset
from gammapy.maps import WcsGeom, MapAxis
from gammapy.makers import MapDatasetMaker, SafeMaskMaker, FoVBackgroundMaker
from gammapy.modeling.models import (
SkyModel,
PowerLawSpectralModel,
PointSpatialModel,
FoVBackgroundModel,
)
from gammapy.modeling import Fit
from gammapy.estimators import FluxPointsEstimator
```
## Defining the datastore and selecting observations
We first use the `~gammapy.data.DataStore` object to access the observations we want to analyse. Here the H.E.S.S. DL3 DR1.
```
data_store = DataStore.from_dir("$GAMMAPY_DATA/hess-dl3-dr1")
```
We can now define an observation filter to select only the relevant observations.
Here we use a cone search which we define with a python dict.
We then filter the `ObservationTable` with `~gammapy.data.ObservationTable.select_observations()`.
```
selection = dict(
type="sky_circle",
frame="icrs",
lon="83.633 deg",
lat="22.014 deg",
radius="5 deg",
)
selected_obs_table = data_store.obs_table.select_observations(selection)
```
We can now retrieve the relevant observations by passing their `obs_id` to the`~gammapy.data.DataStore.get_observations()` method.
```
observations = data_store.get_observations(selected_obs_table["OBS_ID"])
```
## Preparing reduced datasets geometry
Now we define a reference geometry for our analysis. We choose a WCS-based geometry with a bin size of 0.02 deg and also define an energy axis:
```
energy_axis = MapAxis.from_energy_bounds(1.0, 10.0, 4, unit="TeV")
geom = WcsGeom.create(
skydir=(83.633, 22.014),
binsz=0.02,
width=(2, 2),
frame="icrs",
proj="CAR",
axes=[energy_axis],
)
# Reduced IRFs are defined in true energy (i.e. not measured energy).
energy_axis_true = MapAxis.from_energy_bounds(
0.5, 20, 10, unit="TeV", name="energy_true"
)
```
Now we can define the target dataset with this geometry.
```
stacked = MapDataset.create(
geom=geom, energy_axis_true=energy_axis_true, name="crab-stacked"
)
```
## Data reduction
### Create the maker classes to be used
The `~gammapy.datasets.MapDatasetMaker` object is initialized as well as the `~gammapy.makers.SafeMaskMaker` that carries here a maximum offset selection.
```
offset_max = 2.5 * u.deg
maker = MapDatasetMaker()
maker_safe_mask = SafeMaskMaker(
methods=["offset-max", "aeff-max"], offset_max=offset_max
)
circle = CircleSkyRegion(
center=SkyCoord("83.63 deg", "22.14 deg"), radius=0.2 * u.deg
)
exclusion_mask = ~geom.region_mask(regions=[circle])
maker_fov = FoVBackgroundMaker(method="fit", exclusion_mask=exclusion_mask)
```
### Perform the data reduction loop
```
%%time
for obs in observations:
# First a cutout of the target map is produced
cutout = stacked.cutout(
obs.pointing_radec, width=2 * offset_max, name=f"obs-{obs.obs_id}"
)
# A MapDataset is filled in this cutout geometry
dataset = maker.run(cutout, obs)
# The data quality cut is applied
dataset = maker_safe_mask.run(dataset, obs)
# fit background model
dataset = maker_fov.run(dataset)
print(
f"Background norm obs {obs.obs_id}: {dataset.background_model.spectral_model.norm.value:.2f}"
)
# The resulting dataset cutout is stacked onto the final one
stacked.stack(dataset)
print(stacked)
```
### Inspect the reduced dataset
```
stacked.counts.sum_over_axes().smooth(0.05 * u.deg).plot(
stretch="sqrt", add_cbar=True
);
```
## Save dataset to disk
It is common to run the preparation step independent of the likelihood fit, because often the preparation of maps, PSF and energy dispersion is slow if you have a lot of data. We first create a folder:
```
path = Path("analysis_2")
path.mkdir(exist_ok=True)
```
And then write the maps and IRFs to disk by calling the dedicated `~gammapy.datasets.MapDataset.write()` method:
```
filename = path / "crab-stacked-dataset.fits.gz"
stacked.write(filename, overwrite=True)
```
## Define the model
We first define the model, a `SkyModel`, as the combination of a point source `SpatialModel` with a powerlaw `SpectralModel`:
```
target_position = SkyCoord(ra=83.63308, dec=22.01450, unit="deg")
spatial_model = PointSpatialModel(
lon_0=target_position.ra, lat_0=target_position.dec, frame="icrs"
)
spectral_model = PowerLawSpectralModel(
index=2.702,
amplitude=4.712e-11 * u.Unit("1 / (cm2 s TeV)"),
reference=1 * u.TeV,
)
sky_model = SkyModel(
spatial_model=spatial_model, spectral_model=spectral_model, name="crab"
)
bkg_model = FoVBackgroundModel(dataset_name="crab-stacked")
```
Now we assign this model to our reduced dataset:
```
stacked.models = [sky_model, bkg_model]
```
## Fit the model
The `~gammapy.modeling.Fit` class is orchestrating the fit, connecting the `stats` method of the dataset to the minimizer. By default, it uses `iminuit`.
Its constructor takes a list of dataset as argument.
```
%%time
fit = Fit(optimize_opts={"print_level": 1})
result = fit.run([stacked])
```
The `FitResult` contains information about the optimization and parameter error calculation.
```
print(result)
```
The fitted parameters are visible from the `~gammapy.modeling.models.Models` object.
```
stacked.models.to_parameters_table()
```
### Inspecting residuals
For any fit it is useful to inspect the residual images. We have a few options on the dataset object to handle this. First we can use `.plot_residuals_spatial()` to plot a residual image, summed over all energies:
```
stacked.plot_residuals_spatial(method="diff/sqrt(model)", vmin=-0.5, vmax=0.5);
```
In addition, we can also specify a region in the map to show the spectral residuals:
```
region = CircleSkyRegion(
center=SkyCoord("83.63 deg", "22.14 deg"), radius=0.5 * u.deg
)
stacked.plot_residuals(
kwargs_spatial=dict(method="diff/sqrt(model)", vmin=-0.5, vmax=0.5),
kwargs_spectral=dict(region=region),
);
```
We can also directly access the `.residuals()` to get a map, that we can plot interactively:
```
residuals = stacked.residuals(method="diff")
residuals.smooth("0.08 deg").plot_interactive(
cmap="coolwarm", vmin=-0.2, vmax=0.2, stretch="linear", add_cbar=True
);
```
## Plot the fitted spectrum
### Making a butterfly plot
The `SpectralModel` component can be used to produce a so-called butterfly plot, showing the envelope of the model taking into account parameter uncertainties:
```
spec = sky_model.spectral_model
```
Now we can actually do the plot using the `plot_error` method:
```
energy_bounds = [1, 10] * u.TeV
spec.plot(energy_bounds=energy_bounds, energy_power=2)
ax = spec.plot_error(energy_bounds=energy_bounds, energy_power=2)
```
### Computing flux points
We can now compute some flux points using the `~gammapy.estimators.FluxPointsEstimator`.
Besides the list of datasets to use, we must provide it the energy intervals on which to compute flux points as well as the model component name.
```
energy_edges = [1, 2, 4, 10] * u.TeV
fpe = FluxPointsEstimator(energy_edges=energy_edges, source="crab")
%%time
flux_points = fpe.run(datasets=[stacked])
ax = spec.plot_error(energy_bounds=energy_bounds, energy_power=2)
flux_points.plot(ax=ax, energy_power=2)
```
# Benchmark FRESA.CAD BSWIMS final Script
This algorithm implementation uses R code and the Python library rpy2 to connect to it. To run the following, it is necessary to have both installed on your computer:
- R (you can download it from https://www.r-project.org/) <br>
- rpy2, installed via <code> pip install rpy2 </code>
```
import numpy as np
import pandas as pd
import sys
from pathlib import Path
sys.path.append("../tadpole-algorithms")
import tadpole_algorithms
from tadpole_algorithms.models import Benchmark_FRESACAD_R
from tadpole_algorithms.preprocessing.split import split_test_train_tadpole
#rpy2 libs and funcs
import rpy2.robjects.packages as rpackages
from rpy2.robjects.vectors import StrVector
from rpy2.robjects import r, pandas2ri
from rpy2 import robjects
from rpy2.robjects.conversion import localconverter
# Load D1_D2 train and possible test data set
data_path_train_test = Path("data/TADPOLE_D1_D2.csv")
data_df_train_test = pd.read_csv(data_path_train_test)
# Load data Dictionary
data_path_Dictionaty = Path("data/TADPOLE_D1_D2_Dict.csv")
data_Dictionaty = pd.read_csv(data_path_Dictionaty)
# Load D3 possible test set
data_path_test = Path("data/TADPOLE_D3.csv")
data_D3 = pd.read_csv(data_path_test)
# Load D4 evaluation data set
data_path_eval = Path("data/TADPOLE_D4_corr.csv")
data_df_eval = pd.read_csv(data_path_eval)
# Split data in test, train and evaluation data
train_df, test_df, eval_df = split_test_train_tadpole(data_df_train_test, data_df_eval)
#instantiate the model to get the functions
model = Benchmark_FRESACAD_R()
#set the flag to true to use a preprocessed data
USE_PREPROC = False
#preprocess the data
D1Train,D2Test,D3Train,D3Test = model.extractTrainTestDataSets_R("data/TADPOLE_D1_D2.csv","data/TADPOLE_D3.csv")
# AdjustedTrainFrame,testingFrame,Train_Imputed,Test_Imputed = model.preproc_tadpole_D1_D2(data_df_train_test,USE_PREPROC)
AdjustedTrainFrame,testingFrame,Train_Imputed,Test_Imputed = model.preproc_with_R(D1Train,D2Test,data_Dictionaty,usePreProc=USE_PREPROC)
#Train cognitive models (the library method keeps its original spelling, Train_Congitive)
modelfilename = model.Train_Congitive(AdjustedTrainFrame,usePreProc=USE_PREPROC)
#Train ADAS/Ventricles regression models
regresionModelfilename = model.Train_Regression(AdjustedTrainFrame,Train_Imputed,usePreProc=USE_PREPROC)
print(regresionModelfilename)
print(type(regresionModelfilename))
#Predict
Forecast_D2 = model.Forecast_All(modelfilename,
regresionModelfilename,
testingFrame,
Test_Imputed,
submissionTemplateFileName="data/TADPOLE_Simple_Submission_TeamName.xlsx",
usePreProc=USE_PREPROC)
#data_forecast_test = Path("data/_ForecastFRESACAD.csv")
#Forecast_D2 = pd.read_csv(data_forecast_test)
from tadpole_algorithms.evaluation import evaluate_forecast
from tadpole_algorithms.evaluation import print_metrics
# Evaluate the model
dictionary = evaluate_forecast(eval_df, Forecast_D2)
# Print metrics
print_metrics(dictionary)
# AdjustedTrainFrame,testingFrame,Train_Imputed,Test_Imputed = model.preproc_tadpole_D1_D2(data_df_train_test,USE_PREPROC)
D3AdjustedTrainFrame,D3testingFrame,D3Train_Imputed,D3Test_Imputed = model.preproc_with_R(D3Train,
D3Test,
data_Dictionaty,
MinVisit=18,
colImputeThreshold=0.15,
rowImputeThreshold=0.10,
includeID=False,
usePreProc=USE_PREPROC)
#Train cognitive models
D3modelfilename = model.Train_Congitive(D3AdjustedTrainFrame,usePreProc=USE_PREPROC)
#Train ADAS/Ventricles Models
D3regresionModelfilename = model.Train_Regression(D3AdjustedTrainFrame,D3Train_Imputed,usePreProc=USE_PREPROC)
#Predict
Forecast_D3 = model.Forecast_All(D3modelfilename,
D3regresionModelfilename,
D3testingFrame,
D3Test_Imputed,
submissionTemplateFileName="data/TADPOLE_Simple_Submission_TeamName.xlsx",
usePreProc=USE_PREPROC)
from tadpole_algorithms.evaluation import evaluate_forecast
from tadpole_algorithms.evaluation import print_metrics
# Evaluate the D3 model
dictionary = evaluate_forecast(eval_df,Forecast_D3)
# Print metrics
print_metrics(dictionary)
```
This notebook contains an implementation of the third place result in the Rossman Kaggle competition as detailed in Guo/Berkhahn's [Entity Embeddings of Categorical Variables](https://arxiv.org/abs/1604.06737).
The motivation for exploring this architecture is its relevance to real-world applications. Much of our focus has been on computer vision and NLP tasks, which largely deal with unstructured data.
However, most of the data informing KPIs in industry is structured, time-series data. Here we explore the end-to-end process of applying neural networks to practical structured data problems.
```
%matplotlib inline
import math, keras, datetime, pandas as pd, numpy as np, keras.backend as K
import matplotlib.pyplot as plt, xgboost, operator, random, pickle
from utils2 import *
np.set_printoptions(threshold=50, edgeitems=20)
limit_mem()
from isoweek import Week
from pandas_summary import DataFrameSummary
%cd data/rossman/
```
## Create datasets
In addition to the provided data, we will be using external datasets put together by participants in the Kaggle competition. You can download all of them [here](http://files.fast.ai/part2/lesson14/rossmann.tgz).
For completeness, the implementation used to put them together is included below.
```
def concat_csvs(dirname):
    os.chdir(dirname)
    filenames=glob.glob("*.csv")
    wrote_header = False
    with open("../"+dirname+".csv","w") as outputfile:
        for filename in filenames:
            name = filename.split(".")[0]
            with open(filename) as f:
                line = f.readline()
                if not wrote_header:
                    wrote_header = True
                    outputfile.write("file,"+line)
                for line in f:
                    outputfile.write(name + "," + line)
                outputfile.write("\n")
    os.chdir("..")
# concat_csvs('googletrend')
# concat_csvs('weather')
```
Feature Space:
* train: Training set provided by competition
* store: List of stores
* store_states: mapping of store to the German state they are in
* List of German state names
* googletrend: trend of certain google keywords over time, found by users to correlate well w/ given data
* weather: weather
* test: testing set
```
table_names = ['train', 'store', 'store_states', 'state_names',
'googletrend', 'weather', 'test']
```
We'll be using the popular data manipulation framework pandas.
Among other things, pandas allows you to manipulate tables/data frames in python as one would in a database.
We're going to go ahead and load all of our csv's as dataframes into a list `tables`.
```
tables = [pd.read_csv(fname+'.csv', low_memory=False) for fname in table_names]
from IPython.display import HTML
```
We can use `head()` to get a quick look at the contents of each table:
* train: Contains store information on a daily basis, tracks things like sales, customers, whether that day was a holiday, etc.
* store: general info about the store including competition, etc.
* store_states: maps store to state it is in
* state_names: Maps state abbreviations to names
* googletrend: trend data for particular week/state
* weather: weather conditions for each state
* test: Same as training table, w/o sales and customers
```
for t in tables: display(t.head())
```
This is very representative of a typical industry dataset.
The following returns summarized aggregate information for each table across each field.
```
for t in tables: display(DataFrameSummary(t).summary())
```
## Data Cleaning / Feature Engineering
As a structured data problem, we necessarily have to go through all the cleaning and feature engineering, even though we're using a neural network.
```
train, store, store_states, state_names, googletrend, weather, test = tables
len(train),len(test)
```
Turn state Holidays to Bool
```
train.StateHoliday = train.StateHoliday!='0'
test.StateHoliday = test.StateHoliday!='0'
```
Define function for joining tables on specific fields.
By default, we'll be doing a left outer join of `right` on the `left` argument using the given fields for each table.
Pandas does joins using the `merge` method. The `suffixes` argument describes the naming convention for duplicate fields. We've elected to leave the duplicate field names on the left untouched, and append a "_y" to those on the right.
```
def join_df(left, right, left_on, right_on=None):
    if right_on is None: right_on = left_on
    return left.merge(right, how='left', left_on=left_on, right_on=right_on,
                      suffixes=("", "_y"))
```
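As a quick check of the suffix behavior, here is `join_df` run on two toy frames that share a non-key column `v` (the toy data is illustrative): the left copy keeps its name and the right copy gets `_y`.

```python
import pandas as pd

def join_df(left, right, left_on, right_on=None):
    if right_on is None: right_on = left_on
    return left.merge(right, how='left', left_on=left_on, right_on=right_on,
                      suffixes=("", "_y"))

left = pd.DataFrame({'k': [1, 2], 'v': ['a', 'b']})
right = pd.DataFrame({'k': [1, 2], 'v': ['x', 'y']})
print(list(join_df(left, right, 'k').columns))  # ['k', 'v', 'v_y']
```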
Join weather/state names.
```
weather = join_df(weather, state_names, "file", "StateName")
```
In pandas you can add new columns to a dataframe by simply defining it. We'll do this for googletrends by extracting dates and state names from the given data and adding those columns.
We're also going to replace all instances of state name 'NI' with the usage in the rest of the table, 'HB,NI'. This is a good opportunity to highlight pandas indexing. We can use `.loc[rows, cols]` to select a list of rows and a list of columns from the dataframe. In this case, we're selecting rows w/ statename 'NI' by using a boolean list `googletrend.State=='NI'` and selecting "State".
```
googletrend['Date'] = googletrend.week.str.split(' - ', expand=True)[0]
googletrend['State'] = googletrend.file.str.split('_', expand=True)[2]
googletrend.loc[googletrend.State=='NI', "State"] = 'HB,NI'
```
The following extracts particular date fields from a complete datetime for the purpose of constructing categoricals.
You should always consider this feature extraction step when working with date-time. Without expanding your date-time into these additional fields, you can't capture any trend/cyclical behavior as a function of time at any of these granularities.
```
def add_datepart(df):
    df.Date = pd.to_datetime(df.Date)
    df["Year"] = df.Date.dt.year
    df["Month"] = df.Date.dt.month
    df["Week"] = df.Date.dt.week
    df["Day"] = df.Date.dt.day
```
We'll add to every table w/ a date field.
```
add_datepart(weather)
add_datepart(googletrend)
add_datepart(train)
add_datepart(test)
trend_de = googletrend[googletrend.file == 'Rossmann_DE']
```
Now we can outer join all of our data into a single dataframe.
Recall that in outer joins, every time a value in the joining field on the left table does not have a corresponding value on the right table, the corresponding row in the new table has Null values for all right table fields.
One way to check that all records are consistent and complete is to check for Null values post-join, as we do here.
*Aside*: Why not just do an inner join?
If you are assuming that all records are complete and match on the field you desire, an inner join will do the same thing as an outer join. However, in the event you are wrong or a mistake is made, an outer join followed by a null-check will catch it. (Comparing before/after # of rows for inner join is equivalent, but requires keeping track of before/after row #'s. Outer join is easier.)
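A tiny illustration of this safety net on toy frames: the left join surfaces an unmatched key as a null, which the null-check catches, while an inner join would silently drop the row.

```python
import pandas as pd

stores = pd.DataFrame({'Store': [1, 2, 3]})
states = pd.DataFrame({'Store': [1, 2], 'State': ['HE', 'BY']})

# Left outer join: Store 3 survives with a null State, so the check flags it.
joined_toy = stores.merge(states, how='left', on='Store')
print(len(joined_toy[joined_toy.State.isnull()]))  # 1

# Inner join: the incomplete row disappears without a trace.
print(len(stores.merge(states, how='inner', on='Store')))  # 2
```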
```
store = join_df(store, store_states, "Store")
len(store[store.State.isnull()])
joined = join_df(train, store, "Store")
len(joined[joined.StoreType.isnull()])
joined = join_df(joined, googletrend, ["State","Year", "Week"])
len(joined[joined.trend.isnull()])
joined = joined.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
len(joined[joined.trend_DE.isnull()])
joined = join_df(joined, weather, ["State","Date"])
len(joined[joined.Mean_TemperatureC.isnull()])
joined_test = test.merge(store, how='left', left_on='Store', right_index=True)
len(joined_test[joined_test.StoreType.isnull()])
```
Next we'll fill in missing values to avoid complications w/ na's.
```
joined.CompetitionOpenSinceYear = joined.CompetitionOpenSinceYear.fillna(1900).astype(np.int32)
joined.CompetitionOpenSinceMonth = joined.CompetitionOpenSinceMonth.fillna(1).astype(np.int32)
joined.Promo2SinceYear = joined.Promo2SinceYear.fillna(1900).astype(np.int32)
joined.Promo2SinceWeek = joined.Promo2SinceWeek.fillna(1).astype(np.int32)
```
Next we'll extract features "CompetitionOpenSince" and "CompetitionDaysOpen". Note the use of `apply()` in mapping a function across dataframe values.
```
joined["CompetitionOpenSince"] = pd.to_datetime(joined.apply(lambda x: datetime.datetime(
    x.CompetitionOpenSinceYear, x.CompetitionOpenSinceMonth, 15), axis=1).astype(pd.datetime))
joined["CompetitionDaysOpen"] = joined.Date.subtract(joined["CompetitionOpenSince"]).dt.days
```
We'll replace some erroneous / outlying data.
```
joined.loc[joined.CompetitionDaysOpen<0, "CompetitionDaysOpen"] = 0
joined.loc[joined.CompetitionOpenSinceYear<1990, "CompetitionDaysOpen"] = 0
```
Add a "CompetitionMonthsOpen" field, capping it at 2 years to limit the number of unique embedding values.
```
joined["CompetitionMonthsOpen"] = joined["CompetitionDaysOpen"]//30
joined.loc[joined.CompetitionMonthsOpen>24, "CompetitionMonthsOpen"] = 24
joined.CompetitionMonthsOpen.unique()
```
Same process for Promo dates.
```
joined["Promo2Since"] = pd.to_datetime(joined.apply(lambda x: Week(
    x.Promo2SinceYear, x.Promo2SinceWeek).monday(), axis=1).astype(pd.datetime))
joined["Promo2Days"] = joined.Date.subtract(joined["Promo2Since"]).dt.days
joined.loc[joined.Promo2Days<0, "Promo2Days"] = 0
joined.loc[joined.Promo2SinceYear<1990, "Promo2Days"] = 0
joined["Promo2Weeks"] = joined["Promo2Days"]//7
joined.loc[joined.Promo2Weeks<0, "Promo2Weeks"] = 0
joined.loc[joined.Promo2Weeks>25, "Promo2Weeks"] = 25
joined.Promo2Weeks.unique()
```
## Durations
It is common when working with time series data to extract data that explains relationships across rows as opposed to columns, e.g.:
* Running averages
* Time until next event
* Time since last event
This is often difficult to do with most table manipulation frameworks, since they are designed to work with relationships across columns. As such, we've created a class to handle this type of data.
```
columns = ["Date", "Store", "Promo", "StateHoliday", "SchoolHoliday"]
```
We've defined a class `elapsed` for cumulative counting across a sorted dataframe.
Given a particular field `fld` to monitor, this object will start tracking time since the last occurrence of that field. When the field is seen again, the counter is set to zero.
Upon initialization, this will result in datetime na's until the field is encountered. This is reset every time a new store is seen.
We'll see how to use this shortly.
```
class elapsed(object):
    def __init__(self, fld):
        self.fld = fld
        self.last = pd.to_datetime(np.nan)
        self.last_store = 0

    def get(self, row):
        if row.Store != self.last_store:
            self.last = pd.to_datetime(np.nan)
            self.last_store = row.Store
        if (row[self.fld]): self.last = row.Date
        return row.Date-self.last
df = train[columns]
```
And a function for applying said class across dataframe rows and adding values to a new column.
```
def add_elapsed(fld, prefix):
    sh_el = elapsed(fld)
    df[prefix+fld] = df.apply(sh_el.get, axis=1)
```
Let's walk through an example.
Say we're looking at School Holiday. We'll first sort by Store, then Date, and then call `add_elapsed('SchoolHoliday', 'After')`:
This will generate an instance of the `elapsed` class for School Holiday:
* Instance applied to every row of the dataframe in order of store and date
* Will add to the dataframe the days since seeing a School Holiday
* If we sort in the other direction, this will count the days until the next school holiday.
```
fld = 'SchoolHoliday'
df = df.sort_values(['Store', 'Date'])
add_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
add_elapsed(fld, 'Before')
```
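To see the mechanics on data small enough to trace by hand, here is a self-contained toy run of the same pattern (the class is repeated so the snippet runs on its own; the toy frame and its `Hol` column are illustrative):

```python
import numpy as np
import pandas as pd

class elapsed(object):
    def __init__(self, fld):
        self.fld = fld
        self.last = pd.to_datetime(np.nan)  # NaT until the field is first seen
        self.last_store = 0

    def get(self, row):
        if row.Store != self.last_store:    # reset counter on a new store
            self.last = pd.to_datetime(np.nan)
            self.last_store = row.Store
        if row[self.fld]: self.last = row.Date
        return row.Date - self.last

toy = pd.DataFrame({
    'Store': [1, 1, 1],
    'Date': pd.to_datetime(['2015-01-01', '2015-01-02', '2015-01-03']),
    'Hol': [True, False, False],
})
# Days since the last holiday: 0 on the holiday itself, then counting up.
print(list(toy.apply(elapsed('Hol').get, axis=1).dt.days))  # [0, 1, 2]
```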
We'll do this for two more fields.
```
fld = 'StateHoliday'
df = df.sort_values(['Store', 'Date'])
add_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
add_elapsed(fld, 'Before')
fld = 'Promo'
df = df.sort_values(['Store', 'Date'])
add_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
add_elapsed(fld, 'Before')
display(df.head())
```
We're going to set the active index to Date.
```
df = df.set_index("Date")
```
Then set null values from elapsed field calculations to 0.
```
columns = ['SchoolHoliday', 'StateHoliday', 'Promo']
for o in ['Before', 'After']:
    for p in columns:
        a = o+p
        df[a] = df[a].fillna(pd.Timedelta(0)).dt.days
```
Next we'll demonstrate window functions in pandas to calculate rolling quantities.
Here we're sorting by date (`sort_index()`) and counting the number of events of interest (`sum()`) defined in `columns` in the following week (`rolling()`), grouped by Store (`groupby()`). We do the same in the opposite direction.
```
bwd = df[['Store']+columns].sort_index().groupby("Store").rolling(7, min_periods=1).sum()
fwd = df[['Store']+columns].sort_index(ascending=False
).groupby("Store").rolling(7, min_periods=1).sum()
```
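On a toy frame the pattern reads like this (illustrative data, one store, a 3-day window instead of 7):

```python
import pandas as pd

toy = pd.DataFrame({
    'Store': [1, 1, 1, 1],
    'Promo': [1, 0, 1, 1],
}, index=pd.date_range('2015-01-01', periods=4))

# Per-store rolling 3-day sum of promo days, oldest dates first.
rolled = toy.groupby('Store').Promo.rolling(3, min_periods=1).sum()
print(rolled.values)  # [1. 1. 2. 2.]
```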
Next we want to drop the Store indices grouped together in the window function.
Often in pandas, there is an option to do this in place. This is time and memory efficient when working with large datasets.
```
bwd.drop('Store',1,inplace=True)
bwd.reset_index(inplace=True)
fwd.drop('Store',1,inplace=True)
fwd.reset_index(inplace=True)
df.reset_index(inplace=True)
```
Now we'll merge these values onto the df.
```
df = df.merge(bwd, 'left', ['Date', 'Store'], suffixes=['', '_bw'])
df = df.merge(fwd, 'left', ['Date', 'Store'], suffixes=['', '_fw'])
df.drop(columns,1,inplace=True)
df.head()
```
It's usually a good idea to back up large tables of extracted / wrangled features before you join them onto another one, that way you can go back to it easily if you need to make changes to it.
```
df.to_csv('df.csv')
df = pd.read_csv('df.csv', index_col=0)
df["Date"] = pd.to_datetime(df.Date)
df.columns
joined = join_df(joined, df, ['Store', 'Date'])
```
We'll back this up as well.
```
joined.to_csv('joined.csv')
```
We now have our final set of engineered features.
```
joined = pd.read_csv('joined.csv', index_col=0)
joined["Date"] = pd.to_datetime(joined.Date)
joined.columns
```
While these steps were explicitly outlined in the paper, these are all fairly typical feature engineering steps for dealing with time series data and are practical in any similar setting.
## Create features
Now that we've engineered all our features, we need to convert to input compatible with a neural network.
This includes converting categorical variables into contiguous integers or one-hot encodings, normalizing continuous features to standard normal, etc...
```
from sklearn_pandas import DataFrameMapper
from sklearn.preprocessing import LabelEncoder, Imputer, StandardScaler
```
This dictionary maps categories to embedding dimensionality. In general, categories we might expect to be conceptually more complex get a larger dimension.
```
cat_var_dict = {'Store': 50, 'DayOfWeek': 6, 'Year': 2, 'Month': 6,
'Day': 10, 'StateHoliday': 3, 'CompetitionMonthsOpen': 2,
'Promo2Weeks': 1, 'StoreType': 2, 'Assortment': 3, 'PromoInterval': 3,
'CompetitionOpenSinceYear': 4, 'Promo2SinceYear': 4, 'State': 6,
'Week': 2, 'Events': 4, 'Promo_fw': 1,
'Promo_bw': 1, 'StateHoliday_fw': 1,
'StateHoliday_bw': 1, 'SchoolHoliday_fw': 1,
'SchoolHoliday_bw': 1}
```
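Conceptually, each embedding is just a trainable lookup table with one row per category value and `cat_var_dict[var]` columns. A numpy sketch of the lookup that an embedding layer performs (the random table and the chosen sizes are illustrative, using 'DayOfWeek': 6 from the dict above):

```python
import numpy as np

rng = np.random.default_rng(0)
emb_table = rng.normal(size=(7, 6))  # 7 days of week, 6-dim vectors
day_codes = np.array([0, 3, 6])      # label-encoded category values
vectors = emb_table[day_codes]       # embedding lookup = row indexing
print(vectors.shape)  # (3, 6)
```

During training, backpropagation updates the selected rows of the table, so similar categories drift toward similar vectors.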
Name categorical variables
```
cat_vars = [o[0] for o in
sorted(cat_var_dict.items(), key=operator.itemgetter(1), reverse=True)]
"""cat_vars = ['Store', 'DayOfWeek', 'Year', 'Month', 'Day', 'StateHoliday',
'StoreType', 'Assortment', 'Week', 'Events', 'Promo2SinceYear',
'CompetitionOpenSinceYear', 'PromoInterval', 'Promo', 'SchoolHoliday', 'State']"""
```
Likewise for continuous
```
# mean/max wind; min temp; cloud; min/mean humid;
contin_vars = ['CompetitionDistance',
'Max_TemperatureC', 'Mean_TemperatureC', 'Min_TemperatureC',
'Max_Humidity', 'Mean_Humidity', 'Min_Humidity', 'Max_Wind_SpeedKm_h',
'Mean_Wind_SpeedKm_h', 'CloudCover', 'trend', 'trend_DE',
'AfterStateHoliday', 'BeforeStateHoliday', 'Promo', 'SchoolHoliday']
"""contin_vars = ['CompetitionDistance', 'Max_TemperatureC', 'Mean_TemperatureC',
'Max_Humidity', 'trend', 'trend_DE', 'AfterStateHoliday', 'BeforeStateHoliday']"""
```
Replace nulls w/ 0 for continuous, "" for categorical.
```
for v in contin_vars: joined.loc[joined[v].isnull(), v] = 0
for v in cat_vars: joined.loc[joined[v].isnull(), v] = ""
```
Here we create a list of tuples, each containing a variable and an instance of a transformer for that variable.
For categoricals, we use a label encoder that maps categories to continuous integers. For continuous variables, we standardize them.
```
cat_maps = [(o, LabelEncoder()) for o in cat_vars]
contin_maps = [([o], StandardScaler()) for o in contin_vars]
```
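For intuition, the two transforms can be hand-rolled in plain pandas; the notebook itself relies on sklearn's `LabelEncoder` and `StandardScaler`, and this sketch only mirrors their behavior on toy values:

```python
import pandas as pd

# Label encoding: categories sorted, then mapped to 0..n-1.
s = pd.Series(['a', 'c', 'a', 'b'])
codes = s.astype('category').cat.codes
print(list(codes))  # [0, 2, 0, 1]

# Standardization: subtract the mean, divide by the population std (ddof=0).
x = pd.Series([10.0, 20.0, 30.0])
z = (x - x.mean()) / x.std(ddof=0)
print(round(z.iloc[0], 3))  # -1.225
```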
The same instances need to be used for the test set as well, so values are mapped/standardized appropriately.
DataFrame mapper will keep track of these variable-instance mappings.
```
cat_mapper = DataFrameMapper(cat_maps)
cat_map_fit = cat_mapper.fit(joined)
cat_cols = len(cat_map_fit.features)
cat_cols
contin_mapper = DataFrameMapper(contin_maps)
contin_map_fit = contin_mapper.fit(joined)
contin_cols = len(contin_map_fit.features)
contin_cols
```
Example of first five rows of zeroth column being transformed appropriately.
```
cat_map_fit.transform(joined)[0,:5], contin_map_fit.transform(joined)[0,:5]
```
We can also pickle these mappings, which is great for portability!
```
pickle.dump(contin_map_fit, open('contin_maps.pickle', 'wb'))
pickle.dump(cat_map_fit, open('cat_maps.pickle', 'wb'))
[len(o[1].classes_) for o in cat_map_fit.features]
```
## Sample data
Next, the authors removed all instances where the store had zero sales / was closed.
```
joined_sales = joined[joined.Sales!=0]
n = len(joined_sales)
```
We speculate that this may have cost them a higher standing in the competition. One reason this may be the case is that a little EDA reveals that there are often periods where stores are closed, typically for refurbishment. Before and after these periods, there are naturally spikes in sales that one might expect. By omitting this data from their training, the authors gave up the ability to leverage information about these periods to predict this otherwise volatile behavior.
```
n
```
We're going to run on a sample.
```
samp_size = 100000
np.random.seed(42)
idxs = sorted(np.random.choice(n, samp_size, replace=False))
joined_samp = joined_sales.iloc[idxs].set_index("Date")
samp_size = n
joined_samp = joined_sales.set_index("Date")
```
In time-series data, cross-validation is not random. Instead, our holdout data is always the most recent data, as it would be in a real application.
We've taken the last 10% as our validation set.
```
train_ratio = 0.9
train_size = int(samp_size * train_ratio)
train_size
joined_valid = joined_samp[train_size:]
joined_train = joined_samp[:train_size]
len(joined_valid), len(joined_train)
```
Here's a preprocessor for our categoricals using our instance mapper.
```
def cat_preproc(dat):
    return cat_map_fit.transform(dat).astype(np.int64)
cat_map_train = cat_preproc(joined_train)
cat_map_valid = cat_preproc(joined_valid)
```
Same for continuous.
```
def contin_preproc(dat):
    return contin_map_fit.transform(dat).astype(np.float32)
contin_map_train = contin_preproc(joined_train)
contin_map_valid = contin_preproc(joined_valid)
```
Grab our targets.
```
y_train_orig = joined_train.Sales
y_valid_orig = joined_valid.Sales
```
Finally, the authors modified the target values by applying a logarithmic transformation and normalizing to unit scale by dividing by the maximum log value.
Log transformations are frequently used on this kind of data to attain a nicer shape.
Further, by scaling to the unit interval we can use a sigmoid output in our neural network; to invert, we multiply by the maximum log value and exponentiate to get back the original scale.
```
max_log_y = np.max(np.log(joined_samp.Sales))
y_train = np.log(y_train_orig)/max_log_y
y_valid = np.log(y_valid_orig)/max_log_y
```
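The forward and inverse target transforms can be checked round-trip on toy numbers. This is a self-contained sketch with made-up sales values, independent of the notebook's variables:

```python
import numpy as np

sales = np.array([5263.0, 6064.0, 8314.0])   # toy sales figures
max_log = np.max(np.log(sales))

# forward: log then scale to (0, 1] so a sigmoid output can cover the range
y = np.log(sales) / max_log
assert y.max() == 1.0 and (y > 0).all()

# inverse: multiply by the max log value, then exponentiate
recovered = np.exp(y * max_log)
assert np.allclose(recovered, sales)
```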
Note: Some testing shows this doesn't make a big difference.
```
"""#y_train = np.log(y_train)
ymean=y_train_orig.mean()
ystd=y_train_orig.std()
y_train = (y_train_orig-ymean)/ystd
#y_valid = np.log(y_valid)
y_valid = (y_valid_orig-ymean)/ystd"""
```
Root-mean-squared percent error is the metric Kaggle used for this competition.
```
def rmspe(y_pred, targ = y_valid_orig):
    pct_var = (targ - y_pred)/targ
    return math.sqrt(np.square(pct_var).mean())
```
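On toy numbers the metric behaves as expected; here is the same function exercised standalone (the target and prediction values below are made up):

```python
import math
import numpy as np

def rmspe(y_pred, targ):
    pct_var = (targ - y_pred) / targ
    return math.sqrt(np.square(pct_var).mean())

targ = np.array([100.0, 200.0])
pred = np.array([110.0, 180.0])   # +10% and -10% errors
print(rmspe(pred, targ))          # ≈ 0.1, i.e. 10% root-mean-squared percent error
```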
These undo the target transformations.
```
def log_max_inv(preds, mx = max_log_y):
    return np.exp(preds * mx)
# - This can be used if ymean and ystd are calculated above (they are currently commented out)
def normalize_inv(preds):
    return preds * ystd + ymean
```
## Create models
Now we're ready to put together our models.
Much of the following code has commented out portions / alternate implementations.
```
"""
1 97s - loss: 0.0104 - val_loss: 0.0083
2 93s - loss: 0.0076 - val_loss: 0.0076
3 90s - loss: 0.0071 - val_loss: 0.0076
4 90s - loss: 0.0068 - val_loss: 0.0075
5 93s - loss: 0.0066 - val_loss: 0.0075
6 95s - loss: 0.0064 - val_loss: 0.0076
7 98s - loss: 0.0063 - val_loss: 0.0077
8 97s - loss: 0.0062 - val_loss: 0.0075
9 95s - loss: 0.0061 - val_loss: 0.0073
0 101s - loss: 0.0061 - val_loss: 0.0074
"""
def split_cols(arr):
    return np.hsplit(arr, arr.shape[1])
# - This gives the correct list length for the model
# - (list of 23 elements: 22 embeddings + 1 array of 16-dim elements)
map_train = split_cols(cat_map_train) + [contin_map_train]
map_valid = split_cols(cat_map_valid) + [contin_map_valid]
len(map_train)
# map_train = split_cols(cat_map_train) + split_cols(contin_map_train)
# map_valid = split_cols(cat_map_valid) + split_cols(contin_map_valid)
```
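`np.hsplit` turns an (n, k) array into k arrays of shape (n, 1), one input per embedding. A standalone check on a toy array:

```python
import numpy as np

def split_cols(arr):
    return np.hsplit(arr, arr.shape[1])

cat = np.arange(12).reshape(4, 3)     # 4 rows, 3 categorical columns
cols = split_cols(cat)
assert len(cols) == 3                 # one array per column
assert all(c.shape == (4, 1) for c in cols)
```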
Helper function for getting categorical name and dim.
```
def cat_map_info(feat): return feat[0], len(feat[1].classes_)
cat_map_info(cat_map_fit.features[1])
# - In Keras 2 the "initializations" module is not available.
# - To keep the custom initializer, the code from the Keras 1 "uniform" initializer is reused.
#   Note: it must return a function of (shape, name), not reference shape/name directly.
def my_init(scale):
    # return lambda shape, name=None: initializations.uniform(shape, scale=scale, name=name)
    return lambda shape, name=None: K.variable(np.random.uniform(low=-scale, high=scale, size=shape), name=name)
# - In Keras 2 the "initializations" module is not available.
# - To keep the custom initializer, the code from the Keras 1 "uniform" initializer is reused.
def emb_init(shape, name=None):
# return initializations.uniform(shape, scale=2/(shape[1]+1), name=name)
return K.variable(np.random.uniform(low=-2/(shape[1]+1), high=2/(shape[1]+1), size=shape),
name=name)
```
Helper function for constructing embeddings. Notice the commented-out code; several different ways to compute embeddings are at play.
Also note we're flattening the embedding: embeddings in Keras come out as a length-1 sequence (as they would for a sequence of words), and since we just want to concatenate them, we flatten that sequence into a plain vector.
```
def get_emb(feat):
    name, c = cat_map_info(feat)
    #c2 = cat_var_dict[name]
    c2 = (c+1)//2
    if c2>50: c2=50
    inp = Input((1,), dtype='int64', name=name+'_in')
    # , kernel_regularizer=l2(1e-6) # Keras 2
    u = Flatten(name=name+'_flt')(Embedding(c, c2, input_length=1, embeddings_initializer=emb_init)(inp)) # Keras 2
    # u = Flatten(name=name+'_flt')(Embedding(c, c2, input_length=1)(inp))
    return inp,u
```
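What the embedding-plus-flatten pair does can be mimicked in plain NumPy: look up a row of a (c, c2) table for each integer id, which Keras returns with an extra sequence axis of length 1 that `Flatten` then removes. A sketch with toy sizes (not the Keras layers themselves):

```python
import numpy as np

c, c2 = 7, 4                          # vocabulary size and embedding dim
table = np.random.uniform(-0.25, 0.25, size=(c, c2))

ids = np.array([[3], [0], [6]])       # batch of 3, input_length=1
seq = table[ids]                      # shape (3, 1, 4): the "sequence" the Embedding layer emits
flat = seq.reshape(seq.shape[0], -1)  # what Flatten does: (3, 4)
assert seq.shape == (3, 1, 4) and flat.shape == (3, 4)
```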
Helper function for continuous inputs.
```
def get_contin(feat):
    name = feat[0][0]
    inp = Input((1,), name=name+'_in')
    return inp, Dense(1, name=name+'_d', kernel_initializer=my_init(1.))(inp) # Keras 2
```
Let's build them.
```
contin_inp = Input((contin_cols,), name='contin')
contin_out = Dense(contin_cols*10, activation='relu', name='contin_d')(contin_inp)
#contin_out = BatchNormalization()(contin_out)
```
Now we can put them together. Given the inputs, continuous and categorical embeddings, we're going to concatenate all of them.
Next, we're going to pass through some dropout, then two dense layers w/ ReLU activations, then dropout again, then the sigmoid activation we mentioned earlier.
```
embs = [get_emb(feat) for feat in cat_map_fit.features]
#conts = [get_contin(feat) for feat in contin_map_fit.features]
#contin_d = [d for inp,d in conts]
x = concatenate([emb for inp,emb in embs] + [contin_out]) # Keras 2
#x = concatenate([emb for inp,emb in embs] + contin_d) # Keras 2
x = Dropout(0.02)(x)
x = Dense(1000, activation='relu', kernel_initializer='uniform')(x)
x = Dense(500, activation='relu', kernel_initializer='uniform')(x)
x = Dropout(0.2)(x)
x = Dense(1, activation='sigmoid')(x)
model = Model([inp for inp,emb in embs] + [contin_inp], x)
#model = Model([inp for inp,emb in embs] + [inp for inp,d in conts], x)
model.compile('adam', 'mean_absolute_error')
#model.compile(Adam(), 'mse')
```
### Start training
```
%%time
hist = model.fit(map_train, y_train, batch_size=128, epochs=25,
verbose=1, validation_data=(map_valid, y_valid))
hist.history
plot_train(hist)
preds = np.squeeze(model.predict(map_valid, 1024))
```
Result on validation data: 0.1678 (samp 150k, 0.75 trn)
```
log_max_inv(preds)
# - This will work if ymean and ystd are calculated in the "Data" section above (in this case uncomment)
# normalize_inv(preds)
```
## Using 3rd place data
```
pkl_path = '/data/jhoward/github/entity-embedding-rossmann/'
def load_pickle(fname):
return pickle.load(open(pkl_path+fname + '.pickle', 'rb'))
[x_pkl_orig, y_pkl_orig] = load_pickle('feature_train_data')
max_log_y_pkl = np.max(np.log(y_pkl_orig))
y_pkl = np.log(y_pkl_orig)/max_log_y_pkl
pkl_vars = ['Open', 'Store', 'DayOfWeek', 'Promo', 'Year', 'Month', 'Day',
'StateHoliday', 'SchoolHoliday', 'CompetitionMonthsOpen', 'Promo2Weeks',
'Promo2Weeks_L', 'CompetitionDistance',
'StoreType', 'Assortment', 'PromoInterval', 'CompetitionOpenSinceYear',
'Promo2SinceYear', 'State', 'Week', 'Max_TemperatureC', 'Mean_TemperatureC',
'Min_TemperatureC', 'Max_Humidity', 'Mean_Humidity', 'Min_Humidity', 'Max_Wind_SpeedKm_h',
'Mean_Wind_SpeedKm_h', 'CloudCover','Events', 'Promo_fw', 'Promo_bw',
'StateHoliday_fw', 'StateHoliday_bw', 'AfterStateHoliday', 'BeforeStateHoliday',
'SchoolHoliday_fw', 'SchoolHoliday_bw', 'trend_DE', 'trend']
x_pkl = np.array(x_pkl_orig)
gt_enc = StandardScaler()
gt_enc.fit(x_pkl[:,-2:])
x_pkl[:,-2:] = gt_enc.transform(x_pkl[:,-2:])
x_pkl.shape
x_pkl = x_pkl[idxs]
y_pkl = y_pkl[idxs]
x_pkl_trn, x_pkl_val = x_pkl[:train_size], x_pkl[train_size:]
y_pkl_trn, y_pkl_val = y_pkl[:train_size], y_pkl[train_size:]
x_pkl_trn.shape
xgb_parms = {'learning_rate': 0.1, 'subsample': 0.6,
'colsample_bylevel': 0.6, 'silent': True, 'objective': 'reg:linear'}
xdata_pkl = xgboost.DMatrix(x_pkl_trn, y_pkl_trn, feature_names=pkl_vars)
xdata_val_pkl = xgboost.DMatrix(x_pkl_val, y_pkl_val, feature_names=pkl_vars)
xgb_parms['seed'] = random.randint(0,1e9)
model_pkl = xgboost.train(xgb_parms, xdata_pkl)
model_pkl.eval(xdata_val_pkl)
#0.117473
importance = model_pkl.get_fscore()
importance = sorted(importance.items(), key=operator.itemgetter(1))
df = pd.DataFrame(importance, columns=['feature', 'fscore'])
df['fscore'] = df['fscore'] / df['fscore'].sum()
df.plot(kind='barh', x='feature', y='fscore', legend=False, figsize=(6, 10))
plt.title('XGBoost Feature Importance')
plt.xlabel('relative importance');
```
### Neural net
```
#np.savez_compressed('vars.npz', pkl_cats, pkl_contins)
#np.savez_compressed('deps.npz', y_pkl)
pkl_cats = np.stack([x_pkl[:,pkl_vars.index(f)] for f in cat_vars], 1)
pkl_contins = np.stack([x_pkl[:,pkl_vars.index(f)] for f in contin_vars], 1)
co_enc = StandardScaler().fit(pkl_contins)
pkl_contins = co_enc.transform(pkl_contins)
pkl_contins_trn, pkl_contins_val = pkl_contins[:train_size], pkl_contins[train_size:]
pkl_cats_trn, pkl_cats_val = pkl_cats[:train_size], pkl_cats[train_size:]
y_pkl_trn, y_pkl_val = y_pkl[:train_size], y_pkl[train_size:]
def get_emb_pkl(feat):
    name, c = cat_map_info(feat)
    c2 = (c+2)//3
    if c2>50: c2=50
    inp = Input((1,), dtype='int64', name=name+'_in')
    u = Flatten(name=name+'_flt')(Embedding(c, c2, input_length=1, embeddings_initializer=emb_init)(inp)) # Keras 2
    return inp,u
n_pkl_contin = pkl_contins_trn.shape[1]
contin_inp = Input((n_pkl_contin,), name='contin')
contin_out = BatchNormalization()(contin_inp)
map_train_pkl = split_cols(pkl_cats_trn) + [pkl_contins_trn]
map_valid_pkl = split_cols(pkl_cats_val) + [pkl_contins_val]
def train_pkl(bs=128, ne=10):
    return model_pkl.fit(map_train_pkl, y_pkl_trn, batch_size=bs, epochs=ne, # Keras 2: epochs, not nb_epoch
                         verbose=0, validation_data=(map_valid_pkl, y_pkl_val))
def get_model_pkl():
    #conts = [get_contin_pkl(feat) for feat in contin_map_fit.features] # get_contin_pkl is never defined; unused
    embs = [get_emb_pkl(feat) for feat in cat_map_fit.features]
    x = concatenate([emb for inp,emb in embs] + [contin_out]) # Keras 2
    x = Dropout(0.02)(x)
    x = Dense(1000, activation='relu', kernel_initializer='uniform')(x) # Keras 2
    x = Dense(500, activation='relu', kernel_initializer='uniform')(x) # Keras 2
    x = Dense(1, activation='sigmoid')(x)
    model_pkl = Model([inp for inp,emb in embs] + [contin_inp], x)
    model_pkl.compile('adam', 'mean_absolute_error')
    #model.compile(Adam(), 'mse')
    return model_pkl
model_pkl = get_model_pkl()
train_pkl(128, 10).history['val_loss']
K.set_value(model_pkl.optimizer.lr, 1e-4)
train_pkl(128, 5).history['val_loss']
"""
1 97s - loss: 0.0104 - val_loss: 0.0083
2 93s - loss: 0.0076 - val_loss: 0.0076
3 90s - loss: 0.0071 - val_loss: 0.0076
4 90s - loss: 0.0068 - val_loss: 0.0075
5 93s - loss: 0.0066 - val_loss: 0.0075
6 95s - loss: 0.0064 - val_loss: 0.0076
7 98s - loss: 0.0063 - val_loss: 0.0077
8 97s - loss: 0.0062 - val_loss: 0.0075
9 95s - loss: 0.0061 - val_loss: 0.0073
0 101s - loss: 0.0061 - val_loss: 0.0074
"""
plot_train(hist)
preds = np.squeeze(model_pkl.predict(map_valid_pkl, 1024))
y_orig_pkl_val = log_max_inv(y_pkl_val, max_log_y_pkl)
rmspe(log_max_inv(preds, max_log_y_pkl), y_orig_pkl_val)
```
## XGBoost
XGBoost is extremely quick and easy to use. Aside from being a powerful predictive model, it gives us information about feature importance.
```
X_train = np.concatenate([cat_map_train, contin_map_train], axis=1)
X_valid = np.concatenate([cat_map_valid, contin_map_valid], axis=1)
all_vars = cat_vars + contin_vars
xgb_parms = {'learning_rate': 0.1, 'subsample': 0.6,
'colsample_bylevel': 0.6, 'silent': True, 'objective': 'reg:linear'}
xdata = xgboost.DMatrix(X_train, y_train, feature_names=all_vars)
xdata_val = xgboost.DMatrix(X_valid, y_valid, feature_names=all_vars)
xgb_parms['seed'] = random.randint(0,1e9)
model = xgboost.train(xgb_parms, xdata)
model.eval(xdata_val)
```
Easily, competition distance is the most important feature, while events barely matter at all.
In real applications, putting together a feature-importance plot is often a first step. Oftentimes, importance plots let us remove hundreds or even thousands of features from consideration.
```
importance = model.get_fscore()
importance = sorted(importance.items(), key=operator.itemgetter(1))
df = pd.DataFrame(importance, columns=['feature', 'fscore'])
df['fscore'] = df['fscore'] / df['fscore'].sum()
df.plot(kind='barh', x='feature', y='fscore', legend=False, figsize=(6, 10))
plt.title('XGBoost Feature Importance')
plt.xlabel('relative importance');
```
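Pruning by importance can be sketched as: normalize the raw scores, then keep only features above a threshold. The scores below are made up for illustration; with real data they would come from `model.get_fscore()`.

```python
# hypothetical raw importance scores (feature -> split count)
fscore = {'CompetitionDistance': 410, 'Promo': 220, 'Events': 3, 'CloudCover': 7}
total = sum(fscore.values())
rel = {f: v / total for f, v in fscore.items()}   # normalize to relative importance

# keep features contributing at least 5% of total importance, most important first
keep = [f for f, v in sorted(rel.items(), key=lambda kv: -kv[1]) if v >= 0.05]
print(keep)   # drops 'Events' and 'CloudCover'
```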
## End
```
from math import inf
from collections import Counter
from collections import OrderedDict
```
# 1. Norvig's code
```
"""
Spelling Corrector in Python 3; see http://norvig.com/spell-correct.html
Copyright (c) 2007-2016 Peter Norvig
MIT license: www.opensource.org/licenses/mit-license.php
"""
################ Spelling Corrector ################
####################################################
import re
from collections import Counter
def words(text): return re.findall(r'\w+', text.lower())
WORDS = Counter(words(open('big.txt').read()))
def P(word, N=sum(WORDS.values())):
    "Probability of `word`."
    return WORDS[word] / N
def correction(word):
    "Most probable spelling correction for word."
    return max(candidates(word), key=P)
def candidates(word):
    "Generate possible spelling corrections for word."
    return (known([word]) or known(edits1(word)) or known(edits2(word)) or [word])
def known(words):
    "The subset of `words` that appear in the dictionary of WORDS."
    return set(w for w in words if w in WORDS)
def edits1(word):
    "All edits that are one edit away from `word`."
    letters    = 'abcdefghijklmnopqrstuvwxyz'
    splits     = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes    = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R)>1]
    replaces   = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts    = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)
def edits2(word):
    "All edits that are two edits away from `word`."
    return (e2 for e1 in edits1(word) for e2 in edits1(e1))
################ Test Code
def unit_tests():
    assert correction('speling') == 'spelling', 'Err: insert'  # insert
    assert correction('korrectud') == 'corrected'              # replace 2
    assert correction('bycycle') == 'bicycle'                  # replace
    assert correction('inconvient') == 'inconvenient'          # insert 2
    assert correction('arrainged') == 'arranged'               # delete
    assert correction('peotry') == 'poetry'                    # transpose
    assert correction('peotryy') == 'poetry'                   # transpose + delete
    assert correction('word') == 'word'                        # known
    assert correction('quintessential') == 'quintessential'    # unknown
    assert words('This is a TEST.') == ['this', 'is', 'a', 'test']
    assert Counter(words('This is a test. 123; A TEST this is.')) == (
           Counter({'123': 1, 'a': 2, 'is': 2, 'test': 2, 'this': 2}))
    assert len(WORDS) == 32198
    assert sum(WORDS.values()) == 1115585
    assert WORDS.most_common(10) == [
        ('the', 79809),
        ('of', 40024),
        ('and', 38312),
        ('to', 28765),
        ('in', 22023),
        ('a', 21124),
        ('that', 12512),
        ('he', 12401),
        ('was', 11410),
        ('it', 10681)]
    assert WORDS['the'] == 79809
    assert P('quintessential') == 0
    assert 0.07 < P('the') < 0.08
    return 'unit_tests pass'
print(unit_tests())
print(correction('speling'))
print(correction('korrectud'))
print(correction('thu'))
```
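For a word of length n, `edits1` builds n deletes, n-1 transposes, 26n replaces, and 26(n+1) inserts: 54n + 25 candidate strings before deduplication. A standalone check of that count, with the comprehensions copied from the cell above but returning a list (keeping duplicates) instead of a set:

```python
def edits1_with_dups(word):
    letters    = 'abcdefghijklmnopqrstuvwxyz'
    splits     = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes    = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces   = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts    = [L + c + R for L, R in splits for c in letters]
    return deletes + transposes + replaces + inserts  # list, so duplicates are counted

n = len('something')
assert len(edits1_with_dups('something')) == 54 * n + 25
```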
# 2. The most probable next word
Using _big.txt_, create a function that estimates the most probable next word given the previous one. The function must compute
$$w_{i+1} = \text{argmax}_{w_{i+1}}P(w_{i+1}|w_i)$$
For this task:
1. We can assume both words will always exist in the collection
2. We need a function similar to $P$ that computes $P(w_1|w_2)$
```
################################
###    Helper functions      ###
################################
def words_from_file( fileName ):
    """ Get the words from a file. """
    file = open(fileName).read()
    return re.findall(r'\w+', file.lower())
def create_dict(texto):
    """ Build the auxiliary dictionary used to
        compute the required probabilities.
    """
    ret = {}
    for i in range(1,len(texto)):
        if texto[i] not in ret:
            ret[texto[i]] = {}
        if texto[i-1] not in ret[texto[i]]:
            (ret[texto[i]])[texto[i-1]] = 0
        (ret[texto[i]])[texto[i-1]] += 1
    # Pre-sorted
    for word in ret:
        ret[word] = OrderedDict(sorted(ret[word].items(),
                                       key=lambda x:
                                       prob_cond(x[0], word, ret),
                                       reverse=True))
    return ret
def prob_cond(a, b, dic):
    """ Probability of A given B according to dic """
    try:
        return ((dic[a])[b])/sum(dic[b].values())
    except KeyError:
        return -1
def next_word(word, dic):
    """ Get the most probable next word according to the
        dictionary and its probabilities. """
    try:
        return next(iter(dic[word]))
    except:
        return word
dic = create_dict(words_from_file('big.txt'))
word = 'new'
print( word +' '+next_word( word, dic) )
print( prob_cond('york','new', dic) )
```
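The conditional probability above is just a bigram maximum-likelihood estimate: count(w_i, w_{i+1}) / count(w_i). A tiny self-contained version on a made-up corpus:

```python
from collections import Counter

tokens = 'new york is in new york state'.split()
bigrams = Counter(zip(tokens, tokens[1:]))
unigrams = Counter(tokens[:-1])   # only words that have a successor

def p_next(nxt, prev):
    return bigrams[(prev, nxt)] / unigrams[prev]

assert p_next('york', 'new') == 1.0    # 'new' is always followed by 'york'
assert p_next('is', 'york') == 0.5     # 'york' -> 'is' once, 'state' once
```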
## 2.1. Here the machine plays hangman
It is recommended to extend and somehow improve the function proposed by __Norvig__.
```
def under(word):
    word = word.split('_')
    if len(word) > 5:
        print('Too many unknown letters')
        return None
    return word
def candidatos(word):
    ''' Receives word with the 'split' already applied
        and returns the possible words
    '''
    letters = 'abcdefghijklmnopqrstuvwxyz'
    n_letters = len(letters)
    flag = word[-1] if word[-1] != '' else 'BrendA'
    # Build the possible 'pieces' of the word
    words = [ele + letter
             for ele in word[:len(word)-1]
             for letter in letters]
    # Auxiliary variables
    options = words[:n_letters]
    options_t = []
    # Concatenate the possible 'pieces'
    for k in range( 1, len(words)//n_letters ):
        for option in options:
            for i in range(n_letters):
                options_t.append(option + words[n_letters*k + i])
        options = options_t; options_t = []
    if flag != 'BrendA': # Check whether the word ends with '_' or a letter
        for i in range(len(options)):
            options[i] = options[i] + flag
    # Return only the words that are in the dictionary
    return set(opt for opt in options if opt in WORDS)
def dist_lev(source, target):
    if source == target: return 0
    # Build the matrix
    n_s, n_t = len(source), len(target)
    dist = [[0 for i in range(n_t+1)] for x in range(n_s+1)]
    for i in range(n_s+1): dist[i][0] = i
    for j in range(n_t+1): dist[0][j] = j
    # Compute the distance
    for i in range(n_s):
        for j in range(n_t):
            cost = 0 if source[i] == target[j] else 1
            dist[i+1][j+1] = min(
                dist[i][j+1] + 1, # deletion
                dist[i+1][j] + 1, # insertion
                dist[i][j] + cost # substitution
            )
    return dist[-1][-1]
def closest(word, options):
    ret = 'BrendA', inf
    for opt in options:
        dist = dist_lev(word, opt)
        ret = (opt, dist) if dist < ret[1] else ret
    return ret
def hangman(word):
    options = candidatos( under(word) )
    return closest(word, options)
print(hangman('s_e_l_c_')[0]) #sherlock
print(hangman('no_eb_o_')[0]) #notebook
print(hangman('he__o')[0]) #hello
print(hangman('pe_p_e')[0]) #people
print(hangman('phi__sop_y')[0]) #philosophy
print(hangman('si_nif_c_nc_')[0]) #significance
print(hangman('kn__l_d_e')[0]) #knowledge
```
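The nested-loop candidate generation above can be written more compactly with `itertools.product`, trying every letter in every `_` position at once. This sketch assumes a small stand-in dictionary rather than the full `WORDS` counter:

```python
from itertools import product

WORDS = {'hello', 'helms', 'hollow'}          # stand-in dictionary for illustration
letters = 'abcdefghijklmnopqrstuvwxyz'

def candidates_for(pattern):
    pieces = pattern.split('_')
    blanks = len(pieces) - 1
    opts = set()
    for fill in product(letters, repeat=blanks):
        # interleave the fixed pieces with one candidate letter per blank
        word = ''.join(p + c for p, c in zip(pieces, fill)) + pieces[-1]
        if word in WORDS:
            opts.add(word)
    return opts

print(candidates_for('he__o'))   # {'hello'}
```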
## 2.2. Extreme hangman
Combine the functions from _2_ and _2.1_ to complete words with higher precision, using a context word.
```
def super_under(word):
    ct = Counter(word)
    if len(word) - ct['_'] < 1:
        print('Too many unknown letters')
        return None
    word = word.split('_')
    return word
def super_closest( context, options):
    ret = 'BrendA', -inf
    for opt in options: # Look for the best candidate
        # Same probability function as in the previous exercise
        prob = prob_cond(opt, context, dic)
        # If the probabilities tie, use the distance
        # between the words to decide.
        ret = ((opt, prob) if dist_lev(context, opt) < dist_lev(context, ret[0]) else ret) if prob == ret[1] else ret
        ret = (opt, prob) if prob > ret[1] else ret
    return ret
def super_hangman(context, word):
    options = candidatos( super_under(word) )
    return super_closest(context, options)
print(super_hangman('sherlock', '_____s')) #holmes
print(super_hangman('united', '_t_t__')) #states
print(super_hangman('white', '___s_')) #house
print(super_hangman('new', 'y___')) #york
print(super_hangman('abraham', 'l_____n')) #lincoln
```
# 3. Simple spelling correction
### Helper functions
```
import os, re
# simple extraction of words
def words (text) :
    return re.findall(r'\w+', text.lower())
# simple loading of the documents
from keras.preprocessing.text import Tokenizer
def get_texts_from_catdir( catdir ):
    texts = [ ]
    TARGET_DIR = catdir # "./target"
    for f_name in sorted( os.listdir( TARGET_DIR )) :
        if f_name.endswith('.txt'):
            f_path = os.path.join( TARGET_DIR, f_name )
            #print(f_name)
            #print(f_path)
            f = open( f_path , 'r', encoding='utf8' )
            #print( f_name )
            texts += [ f.read( ) ]
            f.close( )
    print( '%d files loaded . ' %len(texts) )
    return texts
# Load the RAW text
target_txt = get_texts_from_catdir( './target' )
# Print first 10 words in document0
print( words(target_txt[0])[:10] )
```
### Merge the dictionaries
```
import json
WORDS = Counter(words(open('big.txt').read()))
with open('WORDS_IN_NEWS.txt', 'r') as infile: # Loading WORDS_IN_NEWS
    WORDS_IN_NEWS = json.load( infile )
WORDS_IN_NEWS = Counter(WORDS_IN_NEWS)
WORDS = WORDS + WORDS_IN_NEWS
print(WORDS['the'])
print(WORDS.most_common(5))
```
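`Counter + Counter` adds counts key-wise, which is exactly how the two vocabularies are merged above. A quick check on made-up counts:

```python
from collections import Counter

big = Counter({'the': 5, 'cat': 2})
news = Counter({'the': 3, 'senate': 4})
merged = big + news   # key-wise sum of counts
assert merged['the'] == 8 and merged['senate'] == 4 and merged['cat'] == 2
```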
### Detect misspelled words
```
def mispelled_and_candidates( target_words ):
    mispelled_candidates = []
    for word in target_words:
        temp = list(candidates(word)) # Norvig's candidates
        if len(temp) > 1:
            temp.sort(key=lambda x: dist_lev(word, x))
            mispelled_candidates.append((word, temp[:10])) # Keep the first 10
    return mispelled_candidates
# Refined version (overrides the definition above)
def mispelled_and_candidates( target_words ):
    mispelled_candidates = []
    for word in target_words:
        candidatos = list(candidates(word))
        candidatos.sort(key=lambda x: dist_lev(word, x))
        if len(candidatos) > 1:
            # There is more than one option
            mispelled_candidates.append((word, candidatos[:10]))
        elif len(candidatos) == 1 and word not in candidatos:
            # The only option differs from the word itself
            mispelled_candidates.append((word, candidatos))
    return mispelled_candidates
#print ( mispelled_and_candidates( words( target_txt[0] )))
# Print misspelled words and candidates for each document in
# the target_txt list
for text in target_txt:
    print ( mispelled_and_candidates ( words ( text ) ) )
    pass
```
### Full correction
For this exercise we assume the first word is correctly spelled and makes sense.
The `spell_correction` function has one behavior that may or may not help, depending on the case. In general, we first run the function from the previous section over the text to identify all the misspelled words; then, prioritizing the probability given by the preceding word, we pick the best option among those generated by Norvig's `candidates`.
The main drawback of this approach is that it will not catch problems like the last two test examples: words that are correctly spelled but not necessarily the intended ones. To handle those, we can take a more aggressive approach: whenever the word that should probabilistically follow (according to the corpus) is close enough to the current one, we substitute it without asking. This fixes more of the example cases, but it also breaks other parts (as can be seen in the news tests).
In general, this is a point where we could let a human pick the word that fits best. To go further we could enlarge the corpus, or also consider the word that best complements the following one. If we start down that road, it would probably be better to first fix all the clearly misspelled words and then make a second pass using probabilities.
#### Note
Since _ham_ does not seem to be in the corpus, it causes problems.
```
# Build an extended dictionary
# (although it does not help much)
nbig = open('big.txt').read()
for text in target_txt:
    nbig += text
dic = create_dict(words(nbig))
def spell_correction( input_text, max_dist=2, profundo=False):
    """ `profundo` gives the machine more freedom to change the text. """
    corrected_text = input_text
    mispeled = dict(mispelled_and_candidates(input_text))
    for iw in range(1, len(input_text)):
        pword = corrected_text[iw-1]
        word = input_text[iw]
        nword = next_word(pword, dic)
        # If the word is misspelled, pick among its candidates by probability
        if word in mispeled:
            corrected_text[iw] = max(mispeled[word],
                                     key=lambda x: prob_cond(x, pword, dic))
        # If they look alike, substitute without asking
        if profundo and dist_lev(nword, word) <= max_dist:
            corrected_text[iw] = nword
    return corrected_text
tests = [['i', 'hav', 'a', 'ham'],
         ['my', 'countr', 'is', 'biig'],
         ['i', 'want', 't00', 'eat'],
         ['the', 'science', '0ff', 'computer'],
         ['the', 'science', 'off', 'computer'],
         [ 'i', 'want' , 'too' , 'eat']
        ]
for s in tests:
    #print(mispelled_and_candidates(s))
    print(s)
    print( spell_correction( s, profundo=True ))
    print()
```
#### Check against the golden text
```
golden_txt = get_texts_from_catdir( './golden' )
golden_words = words(" ".join(golden_txt))
target_words = words(" ".join(target_txt))
i = 0
for gword, tword in zip(golden_words, target_words):
    if gword != tword:
        print(f"{i} => {gword} != {tword}")
        i+=1
new_text = spell_correction(target_words)
new_words = words(" ".join(new_text))
i = 0
for gword, nword in zip(golden_words, new_words):
    if gword != nword:
        print(f"{i} => {gword} != {nword}")
        i+=1
else:
    if i==0:
        print("<-----|!!! No errors =D !!!|----->")
new_text = spell_correction(target_words, profundo=True)
new_words = words(" ".join(new_text))
i = 0
for gword, nword in zip(golden_words, new_words):
    if gword != nword:
        print(f"{i} => {gword} != {nword}")
        i+=1
else:
    if i==0:
        print("<-----|!!! No errors =D !!!|----->")
    else:
        print(" ='( Now there are errors )'=")
```
<a href="https://colab.research.google.com/github/sokrypton/ColabFold/blob/main/verbose/alphafold_noTemplates_noMD.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# AlphaFold
```
#################
# WARNING
#################
# - This notebook is intended as a "quick" demo, it disables many aspects of the full alphafold2 pipeline
# (input MSA/templates, amber MD refinement, and number of models). For best results, we recommend using the
# full pipeline: https://github.com/deepmind/alphafold
# - That being said, it was found that input templates and amber-relax did not help much.
# The key input features are the MSA (Multiple Sequence Alignment) of related proteins. Where you see a
# significant drop in predicted accuracy when MSA < 30, but only minor improvements > 100.
# - This notebook does NOT include the alphafold2 MSA generation pipeline, and is designed to work with a
# single sequence, custom MSA input (that you can upload) or MMseqs2 webserver.
# - Single sequence mode is particularly useful for de novo designed proteins (where there are no sequence
# homologs by definition). For natural proteins, an MSA input will make a huge difference.
#################
# EXTRA
#################
# For other related notebooks see: https://github.com/sokrypton/ColabFold
# install/import alphafold (and required libs)
import os, sys
if not os.path.isdir("/content/alphafold"):
    %shell git clone -q https://github.com/deepmind/alphafold.git; cd alphafold; git checkout -q 1d43aaff941c84dc56311076b58795797e49107b
    %shell mkdir --parents params; curl -fsSL https://storage.googleapis.com/alphafold/alphafold_params_2021-07-14.tar | tar x -C params
    %shell pip -q install biopython dm-haiku==0.0.5 ml-collections py3Dmol
if '/content/alphafold' not in sys.path:
    sys.path.append('/content/alphafold')
import numpy as np
import py3Dmol
import matplotlib.pyplot as plt
from alphafold.common import protein
from alphafold.data import pipeline
from alphafold.data import templates
from alphafold.model import data
from alphafold.model import config
from alphafold.model import model
# setup which model params to use
# note: for this demo we only use model 1; for all five models, uncomment the others!
model_runners = {}
models = ["model_1"] #,"model_2","model_3","model_4","model_5"]
for model_name in models:
    model_config = config.model_config(model_name)
    model_config.data.eval.num_ensemble = 1
    model_params = data.get_model_haiku_params(model_name=model_name, data_dir=".")
    model_runner = model.RunModel(model_config, model_params)
    model_runners[model_name] = model_runner
def mk_mock_template(query_sequence):
    '''create blank template'''
    ln = len(query_sequence)
    output_templates_sequence = "-"*ln
    templates_all_atom_positions = np.zeros((ln, templates.residue_constants.atom_type_num, 3))
    templates_all_atom_masks = np.zeros((ln, templates.residue_constants.atom_type_num))
    templates_aatype = templates.residue_constants.sequence_to_onehot(output_templates_sequence,templates.residue_constants.HHBLITS_AA_TO_ID)
    template_features = {'template_all_atom_positions': templates_all_atom_positions[None],
                         'template_all_atom_masks': templates_all_atom_masks[None],
                         'template_aatype': np.array(templates_aatype)[None],
                         'template_domain_names': [f'none'.encode()]}
    return template_features
def predict_structure(prefix, feature_dict, model_runners, random_seed=0):
    """Predicts structure using AlphaFold for the given sequence."""
    # Run the models.
    plddts = {}
    for model_name, model_runner in model_runners.items():
        processed_feature_dict = model_runner.process_features(feature_dict, random_seed=random_seed)
        prediction_result = model_runner.predict(processed_feature_dict)
        unrelaxed_protein = protein.from_prediction(processed_feature_dict,prediction_result)
        unrelaxed_pdb_path = f'{prefix}_unrelaxed_{model_name}.pdb'
        plddts[model_name] = prediction_result['plddt']
        print(f"{model_name} {plddts[model_name].mean()}")
        with open(unrelaxed_pdb_path, 'w') as f:
            f.write(protein.to_pdb(unrelaxed_protein))
    return plddts
```
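The blank template built above is just zero-filled arrays with a leading batch axis; its shapes can be checked in plain NumPy. This sketch assumes AlphaFold's atom37 representation, i.e. that `templates.residue_constants.atom_type_num == 37`:

```python
import numpy as np

ATOM_TYPE_NUM = 37   # assumed value of templates.residue_constants.atom_type_num
ln = 68              # query length

positions = np.zeros((ln, ATOM_TYPE_NUM, 3))[None]   # [None] adds the batch axis
masks = np.zeros((ln, ATOM_TYPE_NUM))[None]
assert positions.shape == (1, 68, 37, 3)
assert masks.shape == (1, 68, 37)
```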
# Single sequence input (no MSA)
```
# Change this line to sequence you want to predict
query_sequence = "GWSTELEKHREELKEFLKKEGITNVEIRIDNGRLEVRVEGGTERLKRFLEELRQKLEKKGYTVDIKIE"
%%time
feature_dict = {
**pipeline.make_sequence_features(sequence=query_sequence,
description="none",
num_res=len(query_sequence)),
**pipeline.make_msa_features(msas=[[query_sequence]],
deletion_matrices=[[[0]*len(query_sequence)]]),
**mk_mock_template(query_sequence)
}
plddts = predict_structure("test",feature_dict,model_runners)
# confidence per position
plt.figure(dpi=100)
for model,value in plddts.items():
    plt.plot(value,label=model)
plt.legend()
plt.ylim(0,100)
plt.ylabel("predicted LDDT")
plt.xlabel("positions")
plt.show()
p = py3Dmol.view(js='https://3dmol.org/build/3Dmol.js')
p.addModel(open("test_unrelaxed_model_1.pdb",'r').read(),'pdb')
p.setStyle({'cartoon': {'color':'spectrum'}})
p.zoomTo()
p.show()
p = py3Dmol.view(js='https://3dmol.org/build/3Dmol.js')
p.addModel(open("test_unrelaxed_model_1.pdb",'r').read(),'pdb')
p.setStyle({'cartoon': {'color':'spectrum'},'stick':{}})
p.zoomTo()
p.show()
```
# Custom MSA input
```
%%bash
# for this demo we will download a premade MSA input
wget -qnc --no-check-certificate https://gremlin2.bakerlab.org/db/ECOLI/fasta/P0A8I3.fas
msa, deletion_matrix = pipeline.parsers.parse_a3m("".join(open("P0A8I3.fas","r").readlines()))
query_sequence = msa[0]
%%time
feature_dict = {
**pipeline.make_sequence_features(sequence=query_sequence,
description="none",
num_res=len(query_sequence)),
**pipeline.make_msa_features(msas=[msa],deletion_matrices=[deletion_matrix]),
**mk_mock_template(query_sequence)
}
plddts = predict_structure("yaaa",feature_dict,model_runners)
# confidence per position
plt.figure(dpi=100)
for model,value in plddts.items():
    plt.plot(value,label=model)
plt.legend()
plt.ylim(0,100)
plt.ylabel("predicted LDDT")
plt.xlabel("positions")
plt.show()
p = py3Dmol.view(js='https://3dmol.org/build/3Dmol.js')
p.addModel(open("yaaa_unrelaxed_model_1.pdb",'r').read(),'pdb')
p.setStyle({'cartoon': {'color':'spectrum'}})
p.zoomTo()
p.show()
```
# MSA from MMseqs2
```
##############################
# Where do I get an MSA?
##############################
# For any "serious" use, I would recommend using the alphafold2 pipeline to make the MSAs,
# since this is what it was trained on.
# That being said, part of the MSA generation pipeline (specifically searching against uniprot database using hhblits)
# can be done here: https://toolkit.tuebingen.mpg.de/tools/hhblits
# Alternatively, using the SUPER FAST MMseqs2 pipeline below
# for a HUMAN FRIENDLY version see:
# https://colab.research.google.com/drive/1LVPSOf4L502F21RWBmYJJYYLDlOU2NTL
%%bash
apt-get -qq -y update > /dev/null 2>&1
apt-get -qq -y install jq curl zlib1g gawk > /dev/null 2>&1
# save query sequence to file
name = "YAII"
query_sequence = "MTIWVDADACPNVIKEILYRAAERMQMPLVLVANQSLRVPPSRFIRTLRVAAGFDVADNEIVRQCEAGDLVITADIPLAAEAIEKGAAALNPRGERYTPATIRERLTMRDFMDTLRASGIQTGGPDSLSQRDRQAFAAELEKWWLEVQRSRG"
with open(f"{name}.fasta","w") as out: out.write(f">{name}\n{query_sequence}\n")
%%bash -s "$name"
# build msa using the MMseqs2 search server
ID=$(curl -s -F q=@$1.fasta -F mode=all https://a3m.mmseqs.com/ticket/msa | jq -r '.id')
STATUS=$(curl -s https://a3m.mmseqs.com/ticket/${ID} | jq -r '.status')
while [ "${STATUS}" == "RUNNING" ]; do
STATUS=$(curl -s https://a3m.mmseqs.com/ticket/${ID} | jq -r '.status')
sleep 1
done
if [ "${STATUS}" == "COMPLETE" ]; then
curl -s https://a3m.mmseqs.com/result/download/${ID} > $1.tar.gz
tar xzf $1.tar.gz
tr -d '\000' < uniref.a3m > $1.a3m
else
echo "MMseqs2 server did not return a valid result."
exit 1
fi
echo "Found $(grep -c ">" $1.a3m) sequences (after redundancy filtering)"
msa, deletion_matrix = pipeline.parsers.parse_a3m("".join(open(f"{name}.a3m","r").readlines()))
query_sequence = msa[0]
%%time
feature_dict = {
**pipeline.make_sequence_features(sequence=query_sequence,
description="none",
num_res=len(query_sequence)),
**pipeline.make_msa_features(msas=[msa],deletion_matrices=[deletion_matrix]),
**mk_mock_template(query_sequence)
}
plddts = predict_structure(name,feature_dict,model_runners)
plt.figure(dpi=100)
plt.plot((feature_dict["msa"] != 21).sum(0))
plt.plot([0,len(query_sequence)],[30,30])
plt.xlabel("positions")
plt.ylabel("number of sequences")
plt.show()
# confidence per position
plt.figure(dpi=100)
for model,value in plddts.items():
plt.plot(value,label=model)
plt.legend()
plt.ylim(0,100)
plt.ylabel("predicted LDDT")
plt.xlabel("positions")
plt.show()
p = py3Dmol.view(js='https://3dmol.org/build/3Dmol.js')
p.addModel(open(f"{name}_unrelaxed_model_1.pdb",'r').read(),'pdb')
p.setStyle({'cartoon': {'color':'spectrum'}})
p.zoomTo()
p.show()
```
| github_jupyter |
```
# default_exp first_steps
```
# First Steps
> API details.
```
#hide
from nbdev.showdoc import *
import os
os.listdir("/home/alois")
os.walk("$HOME")
for item in os.walk(("$HOME")):
print(item)
```
Apparently this does not work (Python does not expand shell variables like `"$HOME"`), so try the next approach:
```
from os import listdir
from os.path import isfile, join
mypath = "/home/alois"
onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]
onlyfiles
```
And shorter
```
from os import walk
f = []
for (dirpath, dirnames, filenames) in walk(mypath):
f.extend(filenames)
break
print("filenames are", filenames)
print(f)
```
More concise:
```
from os import walk
f = []
for (dirpath, dirnames, filenames) in walk("/home/alois"):
print(dirpath)
print(dirnames)
print(filenames)
break
print(os.listdir("/media/mycloud/My Pictures"))
from os import walk
for (dirpath, dirname, filename) in walk("/media/mycloud/My Pictures"):
print(dirpath)
print(dirname)
print(filename)
break
def fast_scandir(dirname):
    subfolders = [f.path for f in os.scandir(dirname) if f.is_dir()]
    for dirname in subfolders:
        subfolders.extend(fast_scandir(dirname))
    return subfolders
all_folders = fast_scandir("/media/mycloud/My Pictures")
print(len(all_folders))
# -*- coding: utf-8 -*-
# Python 3
import time
import os
from glob import glob
from pathlib import Path
directory = r"/media/mycloud/My Pictures"
RUNS = 1
def run_os_walk():
a = time.time_ns()
for i in range(RUNS):
fu = [x[0] for x in os.walk(directory)]
print(f"os.walk\t\t\ttook {(time.time_ns() - a) / 1000 / 1000 / RUNS:.0f} ms. Found dirs: {len(fu)}")
return fu
def run_glob():
a = time.time_ns()
for i in range(RUNS):
fu = glob(directory + "/*/")
print(f"glob.glob\t\ttook {(time.time_ns() - a) / 1000 / 1000 / RUNS:.0f} ms. Found dirs: {len(fu)}")
def run_pathlib_iterdir():
a = time.time_ns()
for i in range(RUNS):
dirname = Path(directory)
fu = [f for f in dirname.iterdir() if f.is_dir()]
print(f"pathlib.iterdir\ttook {(time.time_ns() - a) / 1000 / 1000 / RUNS:.0f} ms. Found dirs: {len(fu)}")
return fu
def run_os_listdir():
a = time.time_ns()
for i in range(RUNS):
dirname = Path(directory)
fu = [os.path.join(directory, o) for o in os.listdir(directory) if os.path.isdir(os.path.join(directory, o))]
print(f"os.listdir\t\ttook {(time.time_ns() - a) / 1000 / 1000 / RUNS:.0f} ms. Found dirs: {len(fu)}")
return fu
def run_os_scandir():
a = time.time_ns()
for i in range(RUNS):
fu = [f.path for f in os.scandir(directory) if f.is_dir()]
print(f"os.scandir\t\ttook {(time.time_ns() - a) / 1000 / 1000 / RUNS:.0f} ms.\tFound dirs: {len(fu)}")
return fu
if __name__ == '__main__':
run_os_scandir()
print(run_os_walk())
run_glob()
print(run_pathlib_iterdir())
run_os_listdir()
```
The difference between `walk` and the others is that `walk` also descends into subfolders, so it recognizes the subfolders of the iTunes library as well.
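This can be checked on a tiny synthetic tree (the temporary directory below is made up for illustration): `os.listdir` only sees the top level, while `os.walk` descends into subfolders.

```python
import os
import tempfile

# Build a tiny tree: root/top.txt and root/sub/inner.txt
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub"))
open(os.path.join(root, "top.txt"), "w").close()
open(os.path.join(root, "sub", "inner.txt"), "w").close()

top_level = os.listdir(root)          # only the top level
walked = []
for dirpath, dirnames, filenames in os.walk(root):
    walked.extend(filenames)          # files from every level

print(sorted(top_level))  # ['sub', 'top.txt']
print(sorted(walked))     # ['inner.txt', 'top.txt']
```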
```
with os.scandir() as dir_contents:
for entry in dir_contents:
info = entry.__dir__
more_info = entry.stat()
another_info = entry.inode()
print(info)
print(more_info, another_info)
import scandir
all_content = scandir.walk("/media/mycloud/My Pictures")
n = 0
f = []
print(all_content)
for i in all_content:
    print(i)
    n += 1
    f.append(i)
print(n)
print(len(f))
for dirpath, dirnames, filenames in f:
print(filenames)
print(len(filenames))
from os import listdir
from os.path import isfile, join, isdir
mypath = "/media/mycloud/My Pictures"
onlydirs = [f for f in listdir(mypath) if isdir(join(mypath, f))]
print(len(onlydirs), onlydirs)
absolute_path = "/media/mycloud/"
import glob
files = []
myDir = mypath
for root, dirnames, filenames in os.walk(myDir):
files.extend(glob.glob(root + "/*"))
print(len(files))
print(files[20:30])
from os import listdir
from os.path import isfile, join
mypath = "/media/mycloud/My Pictures"
onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]
len(onlyfiles)
print(onlyfiles[0:5])
import pandas as pd
df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
print(df)
df2 = df.copy()
df2 = df2.rename(columns={"A": "a", "B": "c"})  # rename returns a new DataFrame unless inplace=True
df2
```
# Data Structures
## Pandas
```
import pandas as pd
```
### Preparation
```
# create dataframe with specific column names
corpus_df = pd.DataFrame(columns=['id', 'userurl', 'source', 'title', 'description', 'content', 'keywords'])
# append one row, fill by column name
corpus_df = corpus_df.append(
{'id': 0, 'userurl': 'url', 'source': 'source', 'title': 'title',
'description': 'description', 'content': 'content',
'keywords': 'keywords'}, ignore_index=True)
```
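One caveat about the cell above: `DataFrame.append` was deprecated in pandas 1.4 and removed in 2.0. A sketch of the same row-append that works on current pandas builds a one-row frame and uses `pd.concat` (column names trimmed here for brevity):

```python
import pandas as pd

corpus_df = pd.DataFrame(columns=['id', 'userurl', 'title'])

# Equivalent of the removed corpus_df.append({...}, ignore_index=True)
new_row = pd.DataFrame([{'id': 0, 'userurl': 'url', 'title': 'title'}])
corpus_df = pd.concat([corpus_df, new_row], ignore_index=True)
print(corpus_df.shape)  # (1, 3)
```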
### Transformation
```
# define new column, being the concatenation of other columns
corpus_df["text"] = corpus_df["title"].map(str) + ' ' + corpus_df["description"] + ' ' + corpus_df[
"content"].map(str) + ' ' + corpus_df["keywords"]
# drop columns
corpus_df = corpus_df.drop(['source', 'userurl', 'title', 'description', 'keywords', 'content'], axis=1)
# concatenate dfs: https://stackoverflow.com/questions/32444138/combine-a-list-of-pandas-dataframes-to-one-pandas-dataframe
df = pd.concat(list_of_dataframes)
```
### Storing/ Reading
```
# write and read csv
file_name = 'test.csv'
corpus_df.to_csv(file_name, sep='\t')
# diff encoding
corpus_df.to_csv(file_name, sep='\t', encoding='utf-8')
# without index values
corpus_df.to_csv(file_name, encoding='utf-8', index=False)
# append
with open(file_name, 'a') as f:
df.to_csv(f, header=False)
# load
df_corpus = pd.read_csv(file_name, delimiter='\t')
# merging
comparison = pd.merge(results_sap, results_doc2vec, how='inner', on=['doc_id'])  # if `on` is omitted, merge joins on the columns common to both frames, not the index
```
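For reference, when `on` is omitted, `pd.merge` joins on the columns common to both frames, not on the index (index joins need `left_index=True`/`right_index=True`). A minimal sketch with illustrative column names:

```python
import pandas as pd

left = pd.DataFrame({'doc_id': [1, 2], 'score_a': [0.1, 0.2]})
right = pd.DataFrame({'doc_id': [2, 3], 'score_b': [0.3, 0.4]})

# No `on` given: the shared column 'doc_id' becomes the join key
merged = pd.merge(left, right, how='inner')
print(merged)  # one row, doc_id == 2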
## Dictionaries
```
# https://stackoverflow.com/questions/4530611/saving-and-loading-objects-and-using-pickle
import pickle  # Python 3: the old cPickle module is simply pickle
with open(r"someobject.pickle", "wb") as output_file:
    pickle.dump(d, output_file)
with open(r"someobject.pickle", "rb") as input_file:
    e = pickle.load(input_file)
#https://stackoverflow.com/questions/3108042/get-max-key-in-
max(dict_m, key=int)
# get value by key, where the values are tuples!
def get_url_key(url_to_find):
url_key = -1
for key, value in docs_dict.items():
if value[0][0] == url_to_find:
url_key = key
return url_key
#https://stackoverflow.com/questions/32957708/python-pickle-error-unicodedecodeerror
#https://stackoverflow.com/questions/9415785/merging-several-python-dictionaries
```
# Network Communication
- Interacting with the outside world
## DB
```
import sys
sys.executable
!{sys.executable} -m pip install pyhdb
connection = pyhdb.connect(host="<mo-6770....>", port=<port>, user="<user>", password="<pass>") # dummy
cursor = connection.cursor()
# sample query construction
query = ''' SELECT "USERURL","TITLE", "CONTENT", "DESCRIPTION", "SOURCE","KEYWORDS"
FROM REPO_."T.CONTENT"
'''
# count how many rows match the query
N = cursor.execute("SELECT COUNT(*) FROM (" + query + ")").fetchone()[0]
print('Fetching ', N, ' documents...')
cursor.execute(query)
# work row by row
for i in range(N):
try:
row = cursor.fetchone()
if i % 10000 == 0:
print('Processing document ', i)
if row[0] is not None:
userurl = row[0]
else:
userurl = ""
except UnicodeDecodeError:
continue
except UnicodeEncodeError:
continue
# fetch all rows
results = cursor.fetchall()
#http://thepythonguru.com/fetching-records-using-fetchone-and-fetchmany/
```
## HTTP
```
import requests
import json
user = 'client'
passw = 'dummypass'
url = 'https://<search>.com/api/v1/search'
headers = {'Content-Type': 'application/json'}
request_body = get_api_body(query)
resp = requests.post(url, data=request_body, auth=(user, passw), headers=headers)
resp_json = resp.json()
results = resp_json['result']
def get_api_body(self, query):
data = ''' {
"repository": "srh",
"type": "content",
"filters": [{
"field": "CONTENT",
"type": "fuzzy",
"values": ["''' + query + '''"],
"fuzzy": {
"level": 0.9,
"weight": 0.2,
"parameters": "def"
},
"logicalOperator": "or"
},
{
"field": "TITLE",
"type": "fuzzy",
"values": ["''' + query + '''"],
"fuzzy": {
"level": 0.9,
"weight": 1,
"parameters": "def"
}
}
]
}'''
return data
# results being an array of separate JSON objects
def get_results_as_df(self, results):
results_df = pd.DataFrame(columns=['doc_id', 'userurl', 'title'])
# results[i] to access each subsequent/ separate JSON element
for i in range(len(results)):
results_df = results_df.append(
{'doc_id': doc_id, 'userurl': results[i]['USERURL'], 'title': results[i]['TITLE'],
'keywords': results[i]['KEYWORDS']}, ignore_index=True)
return results_df
```
# Hardware Utilization
- `htop` is useful for checking the effects; see e.g. https://peteris.rocks/blog/htop/
```
# results being a list of tuple(any) elements
import multiprocessing as mp

def get_pool_data(results):
    pool = mp.Pool()
    deserialized_results_list = list(map(deserialize, results))
    results_mp = pool.map(preprocess_row, deserialized_results_list)
    df_global = pd.concat(results_mp)
    return df_global
```
# Web Deployment with Flask
- small search & table view example
```
import logging
logging.basicConfig(
format='%(asctime)s %(levelname)-8s %(message)s',
level=logging.INFO,
datefmt='%Y-%m-%d %H:%M:%S')
logger = logging.getLogger(__name__)
import os
import json
from os.path import dirname, realpath
from doc2vec_search import Doc2VecSearch
from time import time
from flask import Flask, Response, request
PATH_TO_MODEL = "data/models/model_dbow_100_10_no_ppl.d2v"
PATH_TO_DICT = "data/dict/docs_dict.pickle"
if 'is_docker' in os.environ:
CWD = "/app/data"
else:
CWD = dirname(dirname(realpath(__file__))) + "/data"
app = Flask(__name__, template_folder=dirname(realpath(__file__)))
doc2vec = Doc2VecSearch(PATH_TO_MODEL, PATH_TO_DICT)
url = "/unique/search"
@app.route(url, methods=['GET'])
def doc2vec_search():
q = request.args.get('q')
exec_time = 0
response = dict()
if q:
q = q.lower()
start = time()
results = doc2vec.search(q, 10)
exec_time = int((time() - start) * 1000)
if results:
for i in range(len(results)):
response['data'+str(i)] = [{'userurl': results[i][0], 'title': results[i][1], 'keywords': results[i][2],
'source': results[i][3]}]
response['metadata'+str(i)] = {'executionTime': exec_time, 'status': 200, 'itemCount': 1}
return Response(json.dumps(response, indent=2), status=200, mimetype='application/json')
response['data'] = []
response['metadata'] = {'executionTime': exec_time, 'status': 200, 'itemCount': 0}
return Response(json.dumps(response, indent=2), status=200, mimetype='application/json')
@app.route('/healthcheck')
def healthcheck():
return "All is well"
@app.route('/' + 'pointer/testing')
def testing():
page = """
<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" href="https://www.w3schools.com/w3css/4/w3.css">
</head>
<body>
<form>
<input name="query" autofocus>
<input type="submit">
Number of results to show:
<input name="n">
</form>
<br>
"""
q = request.args.get('query')
n = request.args.get('n')
if q:
q = q.lower().strip()
page += "Current query: <b><i>" + q + "</i></b><br>\n"
if n:
n = int(n)
results = doc2vec.search(q, n)
else:
results = doc2vec.search(q, 10)
page += "<h2>Doc2Vec Search Results</h2>"
page += """<table class="w3-table-all"><tr><th></th><th>Userurl</th><th>Title</th><th>Keywords</th><th>Source</th><th>Similarity Score</th></tr>"""
if results:
for i in range(len(results)):
page += "<tr><td>" + str(i) + "</td><td>" + results[i][0] + "</td><td>" + results[i][1] + "</td><td>" + results[i][2] + "</td><td>" + results[i][3]+ "</td><td>" + "{0:.2f}".format(results[i][4]) + "</td><tr>"
page += "</table>"
return page + "</body></html>"
if __name__ == "__main__":
# This is only called when starting file directly. Not in Docker container.
logger.info("Api is ready. Try: http://localhost:5021/test/doc2vec?q=mster%20data%20mannagemnt")
app.run(port=5021)
```
# Useful things
- https://jakevdp.github.io/blog/2017/12/05/installing-python-packages-from-jupyter/
- https://jakevdp.github.io/PythonDataScienceHandbook/01.05-ipython-and-shell-commands.html
```
for i in range(n):
if i < 136000:
print('skipping ', i)
continue
import time
start = time.time()
print("hello")
end = time.time()
print(end - start)
import glob, os
count = 1
# Read in all files from the current directory, that match the prefix.
for file in glob.glob("archive_sitemap_*"):
print(file)
#https://stackoverflow.com/questions/53513/how-do-i-check-if-a-list-is-empty
if not a:
print("List is empty")
# Screen and Sessions
# https://stackoverflow.com/questions/1509677/kill-detached-screen-session
# Remote connections
#https://superuser.com/questions/23911/running-commands-on-putty-without-fear-of-losing-connection
```
# Fancy Indexing
In the previous sections, we saw how to access and modify portions of arrays using simple indices (`arr[0]`), slices (`arr[:5]`), and Boolean masks (`arr[arr > 0]`). In this section, we'll look at another style of array indexing, known as *fancy indexing*. Fancy indexing is like the simple indexing we've already seen, but we pass arrays of indices in place of single scalars. This allows us to very quickly access and modify complicated subsets of an array's values.
## Exploring Fancy Indexing
Fancy indexing is conceptually simple: it means passing an array of indices to access multiple array elements at once. For example, consider the following array:
```
import numpy as np
rand = np.random.RandomState(42)
x = rand.randint(100, size=10)
print(x)
```
Suppose we want to access three different elements. We could do it like this:
```
[x[3], x[7], x[2]]
```
When using fancy indexing, the shape of the result reflects the shape of the index arrays rather than the shape of the array being indexed:
```
ind = np.array([[3, 7], [4, 5]])
x[ind]
```
Fancy indexing also works in multiple dimensions. Consider the following array:
```
X = np.arange(12).reshape((3, 4))
X
```
Like with standard indexing, the first index refers to the row, and the second to the column:
```
row = np.array([0, 1, 2])
col = np.array([2, 1, 3])
X[row, col]
```
Notice that the first value in the result is `X[0, 2]`, the second is `X[1, 1]`, and the third is `X[2, 3]`. The pairing of the indices in fancy indexing follows all the broadcasting rules that were mentioned. So, for example, if we combine a column vector and a row vector within the indices, we get a two-dimensional result:
```
X[row[:, np.newaxis], col]
```
Here, each row value is matched with each column vector, exactly as we saw in broadcasting of arithmetic operations. For example:
```
row[:, np.newaxis] * col
```
It is always important to remember with fancy indexing that the return value reflects the *broadcasted shape of the indices*, rather than the shape of the array being indexed.
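A minimal sketch of this shape rule (array contents chosen for illustration):

```python
import numpy as np

X = np.arange(12).reshape((3, 4))
row = np.array([0, 2])[:, np.newaxis]  # shape (2, 1)
col = np.array([1, 3])                 # shape (2,)

# (2, 1) and (2,) broadcast to (2, 2): that shape, not X's own,
# determines the shape of the result.
result = X[row, col]
print(result.shape)  # (2, 2)
```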
## Combined Indexing
For even more powerful operations, fancy indexing can be combined with the other indexing schemes we've seen:
```
print(X)
```
We can also combine fancy indexing with slicing:
```
X[1:, [2, 0, 1]]
```
And we can combine fancy indexing with masking:
```
mask = np.array([1, 0, 1, 0], dtype=bool)
X[row[:, np.newaxis], mask]
```
All of these indexing options combined lead to a very flexible set of operations for accessing and modifying array values.
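As one more small sketch (the array here is made up), combined indexing also works on the left-hand side of an assignment:

```python
import numpy as np

X = np.zeros((3, 4), dtype=int)

# Slice on the rows, fancy index on the columns:
# increment columns 0 and 2 of every row except the first.
X[1:, [0, 2]] += 1
print(X)
```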
## Example: Selecting Random Points
One common use of fancy indexing is the selection of subsets of rows from a matrix.
For example, we might have an **N** by **D** matrix representing **N** points in **D** dimensions, such as the following points
drawn from a two-dimensional normal distribution:
```
mean = [0, 0]
cov = [[1, 2], [2, 5]]
X = rand.multivariate_normal(mean, cov, 100)
X.shape
```
Using the plotting tools, we can visualize these points as a scatter-plot:
```
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn; seaborn.set() # for plot styling
plt.scatter(X[:, 0], X[:, 1])
```
Let's use fancy indexing to select 20 random points. We'll do this by first choosing 20 random indices with no repeats, and then using these indices to select a portion of the original array:
```
indices = np.random.choice(X.shape[0], 20, replace=False)
indices
selection = X[indices] # fancy indexing here
selection.shape
```
Now to see which points were selected, let's over-plot large circles at the locations of the selected points:
```
plt.scatter(X[:, 0], X[:, 1], alpha=0.3)
plt.scatter(selection[:, 0], selection[:, 1], facecolor='none', s=200);
```
## Modifying Values with Fancy Indexing
Just as fancy indexing can be used to access parts of an array, it can also be used to modify parts of an array. For example, imagine we have an array of indices and we'd like to set the corresponding items in an array to some value:
```
x = np.arange(10)
i = np.array([2, 1, 8, 4])
x[i] = 99
print(x)
```
We can use any assignment-type operator for this. For example:
```
x[i] -= 10
print(x)
```
Notice, though, that repeated indices with these operations can cause some potentially unexpected results. Consider the following:
```
x = np.zeros(10)
x[[0, 0]] = [4, 6]
print(x)
```
Where did the 4 go? The result of this operation is to first assign `x[0] = 4`, followed by `x[0] = 6`. The result, of course, is that `x[0]` contains the value 6.
Fair enough, but consider this operation:
```
i = [2, 3, 3, 4, 4, 4]
x[i] += 1
x
```
You might expect that `x[3]` would contain the value 2, and `x[4]` would contain the value 3, as this is how many times each index is repeated. Why is this not the case? Conceptually, it is because `x[i] += 1` is meant as a shorthand for `x[i] = x[i] + 1`. `x[i] + 1` is evaluated, and then the result is assigned to the indices in `x`. With this in mind, it is not the augmentation that happens multiple times, but the assignment, which leads to the rather nonintuitive result.
So what if you want the other behavior where the operation is repeated? For this you can use the `at()` method of ufuncs, and do the following:
```
x = np.zeros(10)
np.add.at(x, i, 1)
print(x)
```
The `at()` method does an in-place application of the given operator at the specified indices (here, `i`) with the specified value (here, 1). Another method that is similar in spirit is `reduceat()`.
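Since `reduceat()` is only mentioned in passing, here is a minimal sketch: `np.add.reduceat` sums the segments of an array between successive start indices.

```python
import numpy as np

x = np.arange(8)           # [0, 1, 2, 3, 4, 5, 6, 7]
idx = np.array([0, 3, 6])  # segment start positions

# Sums of x[0:3], x[3:6], and x[6:]
print(np.add.reduceat(x, idx))  # [ 3 12 13]
```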
## Binning Data
You can use these ideas to efficiently bin data to create a histogram by hand. For example, imagine we have a set of values and would like to quickly find where they fall within an array of bins. We could compute it using `ufunc.at` like this:
```
np.random.seed(42)
x = np.random.randn(100)
# compute a histogram by hand
bins = np.linspace(-5, 5, 20)
counts = np.zeros_like(bins)
# find the appropriate bin for each x
i = np.searchsorted(bins, x)
# add 1 to each of these bins
np.add.at(counts, i, 1)
```
The counts now reflect the number of points within each bin; in other words, we have a histogram:
```
# plot the results
plt.plot(bins, counts, drawstyle='steps');
plt.hist(x, bins, histtype='step');
print("NumPy routine:")
%timeit counts, edges = np.histogram(x, bins)
print("Custom routine:")
%timeit np.add.at(counts, np.searchsorted(bins, x), 1)
```
The custom method is several times faster than the optimized algorithm in NumPy.
```
x = np.random.randn(1000000)
print("NumPy routine:")
%timeit counts, edges = np.histogram(x, bins)
print("Custom routine:")
%timeit np.add.at(counts, np.searchsorted(bins, x), 1)
```
But as you can see, the NumPy function is designed for better performance when the number of data points becomes large.
# mlrose Tutorial Examples - Genevieve Hayes
## Overview
mlrose is a Python package for applying some of the most common randomized optimization and search algorithms to a range of different optimization problems, over both discrete- and continuous-valued parameter spaces. This notebook contains the examples used in the mlrose tutorial.
### Import Libraries
```
import mlrose
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from sklearn.metrics import accuracy_score
```
### Example 1: 8-Queens Using Pre-Defined Fitness Function
```
# Initialize fitness function object using pre-defined class
fitness = mlrose.Queens()
# Define optimization problem object
problem = mlrose.DiscreteOpt(length = 8, fitness_fn = fitness, maximize=False, max_val=8)
# Define decay schedule
schedule = mlrose.ExpDecay()
# Solve using simulated annealing - attempt 1
np.random.seed(1)
init_state = np.array([0, 1, 2, 3, 4, 5, 6, 7])
best_state, best_fitness = mlrose.simulated_annealing(problem, schedule = schedule, max_attempts = 10,
max_iters = 1000, init_state = init_state)
print(best_state)
print(best_fitness)
# Solve using simulated annealing - attempt 2
np.random.seed(1)
best_state, best_fitness = mlrose.simulated_annealing(problem, schedule = schedule, max_attempts = 100,
max_iters = 1000, init_state = init_state)
print(best_state)
print(best_fitness)
```
### Example 2: 8-Queens Using Custom Fitness Function
```
# Define alternative N-Queens fitness function for maximization problem
def queens_max(state):
# Initialize counter
fitness = 0
# For all pairs of queens
for i in range(len(state) - 1):
for j in range(i + 1, len(state)):
# Check for horizontal, diagonal-up and diagonal-down attacks
if (state[j] != state[i]) \
and (state[j] != state[i] + (j - i)) \
and (state[j] != state[i] - (j - i)):
# If no attacks, then increment counter
fitness += 1
return fitness
# Check function is working correctly
state = np.array([1, 4, 1, 3, 5, 5, 2, 7])
# The fitness of this state should be 22
queens_max(state)
# Initialize custom fitness function object
fitness_cust = mlrose.CustomFitness(queens_max)
# Define optimization problem object
problem_cust = mlrose.DiscreteOpt(length = 8, fitness_fn = fitness_cust, maximize = True, max_val = 8)
# Solve using simulated annealing - attempt 1
np.random.seed(1)
best_state, best_fitness = mlrose.simulated_annealing(problem_cust, schedule = schedule,
max_attempts = 10, max_iters = 1000,
init_state = init_state)
print(best_state)
print(best_fitness)
# Solve using simulated annealing - attempt 2
np.random.seed(1)
best_state, best_fitness = mlrose.simulated_annealing(problem_cust, schedule = schedule,
max_attempts = 100, max_iters = 1000,
init_state = init_state)
print(best_state)
print(best_fitness)
```
### Example 3: Travelling Salesperson Using Coordinate-Defined Fitness Function
```
# Create list of city coordinates
coords_list = [(1, 1), (4, 2), (5, 2), (6, 4), (4, 4), (3, 6), (1, 5), (2, 3)]
# Initialize fitness function object using coords_list
fitness_coords = mlrose.TravellingSales(coords = coords_list)
# Define optimization problem object
problem_fit = mlrose.TSPOpt(length = 8, fitness_fn = fitness_coords, maximize = False)
# Solve using genetic algorithm - attempt 1
np.random.seed(2)
best_state, best_fitness = mlrose.genetic_alg(problem_fit)
print(best_state)
print(best_fitness)
# Solve using genetic algorithm - attempt 2
np.random.seed(2)
best_state, best_fitness = mlrose.genetic_alg(problem_fit, mutation_prob = 0.2, max_attempts = 100)
print(best_state)
print(best_fitness)
```
### Example 4: Travelling Salesperson Using Distance-Defined Fitness Function
```
# Create list of distances between pairs of cities
dist_list = [(0, 1, 3.1623), (0, 2, 4.1231), (0, 3, 5.8310), (0, 4, 4.2426), (0, 5, 5.3852), \
(0, 6, 4.0000), (0, 7, 2.2361), (1, 2, 1.0000), (1, 3, 2.8284), (1, 4, 2.0000), \
(1, 5, 4.1231), (1, 6, 4.2426), (1, 7, 2.2361), (2, 3, 2.2361), (2, 4, 2.2361), \
(2, 5, 4.4721), (2, 6, 5.0000), (2, 7, 3.1623), (3, 4, 2.0000), (3, 5, 3.6056), \
(3, 6, 5.0990), (3, 7, 4.1231), (4, 5, 2.2361), (4, 6, 3.1623), (4, 7, 2.2361), \
(5, 6, 2.2361), (5, 7, 3.1623), (6, 7, 2.2361)]
# Initialize fitness function object using dist_list
fitness_dists = mlrose.TravellingSales(distances = dist_list)
# Define optimization problem object
problem_fit2 = mlrose.TSPOpt(length = 8, fitness_fn = fitness_dists, maximize = False)
# Solve using genetic algorithm
np.random.seed(2)
best_state, best_fitness = mlrose.genetic_alg(problem_fit2, mutation_prob = 0.2, max_attempts = 100)
print(best_state)
print(best_fitness)
```
### Example 5: Travelling Salesperson Defining Fitness Function as Part of Optimization Problem Definition Step
```
# Create list of city coordinates
coords_list = [(1, 1), (4, 2), (5, 2), (6, 4), (4, 4), (3, 6), (1, 5), (2, 3)]
# Define optimization problem object
problem_no_fit = mlrose.TSPOpt(length = 8, coords = coords_list, maximize = False)
# Solve using genetic algorithm
np.random.seed(2)
best_state, best_fitness = mlrose.genetic_alg(problem_no_fit, mutation_prob = 0.2, max_attempts = 100)
print(best_state)
print(best_fitness)
```
### Example 6: Fitting a Neural Network to the Iris Dataset
```
# Load the Iris dataset
data = load_iris()
# Get feature values of first observation
print(data.data[0])
# Get feature names
print(data.feature_names)
# Get target value of first observation
print(data.target[0])
# Get target name of first observation
print(data.target_names[data.target[0]])
# Get minimum feature values
print(np.min(data.data, axis = 0))
# Get maximum feature values
print(np.max(data.data, axis = 0))
# Get unique target values
print(np.unique(data.target))
# Split data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, test_size = 0.2,
random_state = 3)
# Normalize feature data
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
# One hot encode target values
one_hot = OneHotEncoder()
y_train_hot = one_hot.fit_transform(y_train.reshape(-1, 1)).todense()
y_test_hot = one_hot.transform(y_test.reshape(-1, 1)).todense()
# Initialize neural network object and fit object - attempt 1
np.random.seed(3)
nn_model1 = mlrose.NeuralNetwork(hidden_nodes = [2], activation ='relu',
algorithm ='random_hill_climb',
max_iters = 1000, bias = True, is_classifier = True,
learning_rate = 0.0001, early_stopping = True,
clip_max = 5, max_attempts = 100)
nn_model1.fit(X_train_scaled, y_train_hot)
# Predict labels for train set and assess accuracy
y_train_pred = nn_model1.predict(X_train_scaled)
y_train_accuracy = accuracy_score(y_train_hot, y_train_pred)
print(y_train_accuracy)
# Predict labels for test set and assess accuracy
y_test_pred = nn_model1.predict(X_test_scaled)
y_test_accuracy = accuracy_score(y_test_hot, y_test_pred)
print(y_test_accuracy)
# Initialize neural network object and fit object - attempt 2
np.random.seed(3)
nn_model2 = mlrose.NeuralNetwork(hidden_nodes = [2], activation = 'relu',
algorithm = 'gradient_descent',
max_iters = 1000, bias = True, is_classifier = True,
learning_rate = 0.0001, early_stopping = True,
clip_max = 5, max_attempts = 100)
nn_model2.fit(X_train_scaled, y_train_hot)
# Predict labels for train set and assess accuracy
y_train_pred = nn_model2.predict(X_train_scaled)
y_train_accuracy = accuracy_score(y_train_hot, y_train_pred)
print(y_train_accuracy)
# Predict labels for test set and assess accuracy
y_test_pred = nn_model2.predict(X_test_scaled)
y_test_accuracy = accuracy_score(y_test_hot, y_test_pred)
print(y_test_accuracy)
```
### Example 7: Fitting a Logistic Regression to the Iris Data
```
# Initialize logistic regression object and fit object - attempt 1
np.random.seed(3)
lr_model1 = mlrose.LogisticRegression(algorithm = 'random_hill_climb', max_iters = 1000,
bias = True, learning_rate = 0.0001,
early_stopping = True, clip_max = 5, max_attempts = 100)
lr_model1.fit(X_train_scaled, y_train_hot)
# Predict labels for train set and assess accuracy
y_train_pred = lr_model1.predict(X_train_scaled)
y_train_accuracy = accuracy_score(y_train_hot, y_train_pred)
print(y_train_accuracy)
# Predict labels for test set and assess accuracy
y_test_pred = lr_model1.predict(X_test_scaled)
y_test_accuracy = accuracy_score(y_test_hot, y_test_pred)
print(y_test_accuracy)
# Initialize logistic regression object and fit object - attempt 2
np.random.seed(3)
lr_model2 = mlrose.LogisticRegression(algorithm = 'random_hill_climb', max_iters = 1000,
bias = True, learning_rate = 0.01,
early_stopping = True, clip_max = 5, max_attempts = 100)
lr_model2.fit(X_train_scaled, y_train_hot)
# Predict labels for train set and assess accuracy
y_train_pred = lr_model2.predict(X_train_scaled)
y_train_accuracy = accuracy_score(y_train_hot, y_train_pred)
print(y_train_accuracy)
# Predict labels for test set and assess accuracy
y_test_pred = lr_model2.predict(X_test_scaled)
y_test_accuracy = accuracy_score(y_test_hot, y_test_pred)
print(y_test_accuracy)
```
### Example 8: Fitting a Logistic Regression to the Iris Data using the NeuralNetwork() class
```
# Initialize neural network object and fit object - attempt 1
np.random.seed(3)
lr_nn_model1 = mlrose.NeuralNetwork(hidden_nodes = [], activation = 'sigmoid',
algorithm = 'random_hill_climb',
max_iters = 1000, bias = True, is_classifier = True,
learning_rate = 0.0001, early_stopping = True,
clip_max = 5, max_attempts = 100)
lr_nn_model1.fit(X_train_scaled, y_train_hot)
# Predict labels for train set and assess accuracy
y_train_pred = lr_nn_model1.predict(X_train_scaled)
y_train_accuracy = accuracy_score(y_train_hot, y_train_pred)
print(y_train_accuracy)
# Predict labels for test set and assess accuracy
y_test_pred = lr_nn_model1.predict(X_test_scaled)
y_test_accuracy = accuracy_score(y_test_hot, y_test_pred)
print(y_test_accuracy)
# Initialize neural network object and fit object - attempt 2
np.random.seed(3)
lr_nn_model2 = mlrose.NeuralNetwork(hidden_nodes = [], activation = 'sigmoid',
algorithm = 'random_hill_climb',
max_iters = 1000, bias = True, is_classifier = True,
learning_rate = 0.01, early_stopping = True,
clip_max = 5, max_attempts = 100)
lr_nn_model2.fit(X_train_scaled, y_train_hot)
# Predict labels for train set and assess accuracy
y_train_pred = lr_nn_model2.predict(X_train_scaled)
y_train_accuracy = accuracy_score(y_train_hot, y_train_pred)
print(y_train_accuracy)
# Predict labels for test set and assess accuracy
y_test_pred = lr_nn_model2.predict(X_test_scaled)
y_test_accuracy = accuracy_score(y_test_hot, y_test_pred)
print(y_test_accuracy)
```
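The predict-and-score boilerplate repeated in both attempts of each example can be factored into a small helper. A sketch — the `report_accuracy` name and the toy majority-class model are illustrative, not part of mlrose; any fitted model exposing `.predict()` works:

```python
import numpy as np

def report_accuracy(model, X_train, y_train, X_test, y_test):
    """Return (train_accuracy, test_accuracy) for any model exposing .predict()."""
    train_acc = np.mean(model.predict(X_train) == y_train)
    test_acc = np.mean(model.predict(X_test) == y_test)
    return train_acc, test_acc

# Toy demonstration with a trivial majority-class "model"
class MajorityClass:
    def fit(self, X, y):
        vals, counts = np.unique(y, return_counts=True)
        self.majority_ = vals[np.argmax(counts)]
        return self
    def predict(self, X):
        return np.full(len(X), self.majority_)

X = np.zeros((6, 2))
y = np.array([0, 0, 0, 0, 1, 1])
clf = MajorityClass().fit(X, y)
train_acc, test_acc = report_accuracy(clf, X, y, X, y)
print(train_acc, test_acc)
```

Note that for one-hot label matrices (as used above), a stricter exact-match accuracy such as `accuracy_score` may be preferable to element-wise agreement.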
```
import os
import numpy as np
import tensorflow.keras as keras
from sklearn.metrics import roc_auc_score
import matplotlib.pyplot as plt
import pickle
from tqdm.notebook import tqdm
import pandas as pd
import seaborn as sns
from src.models.train_model import MonteCarloDropout, MCLSTM
model_name = r"merged-ce-mc.h5"
model_path = r"C:\Users\Noaja\Downloads\msci_project\tth-ML-project\models\wandb_models"
load_path = r"C:\Users\Noaja\Downloads\msci_project\tth-ML-project\data\processed"
model_path = os.path.join(model_path, model_name)
model = keras.models.load_model(model_path, custom_objects={"MCLSTM": MCLSTM, "MonteCarloDropout": MonteCarloDropout})
with open(os.path.join(load_path, "processed_data.pickle"), "rb") as handle:
    combined_data = pickle.load(handle)
y_test = combined_data['y_test']
y_train = combined_data['y_train']
event_X_train = combined_data['event_X_train']
object_X_train = combined_data['object_X_train']
event_X_test = combined_data['event_X_test']
object_X_test = combined_data['object_X_test']
from src.features.build_features import scale_event_data, scale_object_data
event_X_train, event_X_test = scale_event_data(event_X_train, event_X_test)
object_X_train, object_X_test = scale_object_data(object_X_train, object_X_test)
def add_noise(data, percentage):
    """Add zero-mean Gaussian noise scaled to a percentage of each column's std."""
    varied_data = data.copy()
    for col in range(data.shape[1]):
        std = np.std(data[:, col]) * percentage
        noise = np.random.normal(0, std, data[:, col].shape)
        varied_data[:, col] += noise
    return varied_data
def get_test_predictions(n_bins, variation, max_percent=20, scale_range=0.2):
    x_values = []
    scores = []
    if n_bins % 2 == 0:
        n_bins += 1  # keep an odd bin count so the midpoint matches the unvaried setting
    for i in range(1, n_bins+1):
        percentage = np.round((i / n_bins) * (max_percent/100), 4)
        scale_factor = (1 - scale_range) + (2 * scale_range * ((i-1) / (n_bins-1)))
        scale_factor = np.round(scale_factor, 4)
        varied_event_data = event_X_test.copy()
        varied_object_data = object_X_test.copy()
        if variation == "noise":
            varied_event_data[:] = add_noise(event_X_test.values, percentage)
            varied_object_data = add_noise(object_X_test, percentage)
            x_values.append(percentage)
        elif variation == "scale":
            varied_event_data['HT'] *= scale_factor
            x_values.append(scale_factor)
        preds = model.predict([varied_event_data, varied_object_data]).ravel()
        auc = roc_auc_score(y_test, preds)
        scores.append(auc)
    return (x_values, scores)
num_samples = 10
n_bins = 9
max_percent = 20
scale_range = 0.2
def noise_impact(num_samples, n_bins, max_percent):
    rows = []
    for _ in tqdm(range(num_samples)):
        percentages, scores = get_test_predictions(n_bins, "noise", max_percent=max_percent)
        for percentage, score in zip(percentages, scores):
            rows.append({'noise percentage': percentage*100, 'AUC': score})
    robust_df = pd.DataFrame(rows)  # DataFrame.append was removed in pandas 2.0
    baseline_preds = model.predict([event_X_test, object_X_test]).ravel()
    baseline_auc = roc_auc_score(y_test, baseline_preds)
    robust_df['Δ ROC AUC'] = robust_df['AUC'] - baseline_auc
    return robust_df
def scale_impact(num_samples, n_bins, scale_range):
    rows = []
    for _ in tqdm(range(num_samples)):
        factors, scores = get_test_predictions(n_bins, "scale", scale_range=scale_range)
        for factor, score in zip(factors, scores):
            rows.append({'HT scale factor': factor, 'AUC': score})
    return pd.DataFrame(rows)  # DataFrame.append was removed in pandas 2.0
robust_df = noise_impact(num_samples, n_bins, max_percent)
with plt.style.context(['science', 'grid', 'notebook', 'high-contrast']):
    plt.figure(figsize=(10, 10))
    sns.boxplot(x="noise percentage", y="Δ ROC AUC", data=robust_df)
    plt.axhline(y=0.0, color='r', linestyle='-')
    plt.show()
robust_df = scale_impact(num_samples=10, n_bins=9, scale_range=scale_range)
robust_df['Δ ROC AUC'] = robust_df['AUC'] - np.mean(robust_df.loc[robust_df['HT scale factor'] == 1, 'AUC'])
with plt.style.context(['science', 'grid', 'notebook', 'high-contrast']):
    plt.figure(figsize=(10, 10))
    sns.boxplot(x="HT scale factor", y="Δ ROC AUC", data=robust_df)
    plt.axhline(y=0.0, color='r', linestyle='-')
    plt.show()
```
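As a sanity check on the noise-injection idea above, the per-column scaling can be verified standalone. This is a re-implementation for illustration — it mirrors `add_noise` but is not the notebook's exact code:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(data, percentage):
    """Add zero-mean Gaussian noise scaled to `percentage` of each column's std."""
    varied = data.copy()
    for col in range(data.shape[1]):
        std = np.std(data[:, col]) * percentage
        varied[:, col] += rng.normal(0, std, size=data.shape[0])
    return varied

data = rng.normal(loc=5.0, scale=2.0, size=(10_000, 3))
noisy = add_noise(data, 0.10)

# Column means barely move, and stds grow only by ~sqrt(1 + 0.1**2) - 1 ≈ 0.5%
print(np.abs(noisy.mean(axis=0) - data.mean(axis=0)).max())
print((noisy.std(axis=0) / data.std(axis=0)).round(3))
```

This is why small noise percentages should barely move the AUC: the marginal distributions of the features are almost unchanged.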
#### Copyright 2018 Google LLC.
```
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Cat vs. Dog Image Classification
## Exercise 3: Feature Extraction and Fine-Tuning
**_Estimated completion time: 30 minutes_**
In Exercise 1, we built a convnet from scratch, and were able to achieve an accuracy of about 70%. With the addition of data augmentation and dropout in Exercise 2, we were able to increase accuracy to about 80%. That seems decent, but 20% is still too high of an error rate. Maybe we just don't have enough training data available to properly solve the problem. What other approaches can we try?
In this exercise, we'll look at two techniques for repurposing feature data generated from image models that have already been trained on large sets of data, **feature extraction** and **fine tuning**, and use them to improve the accuracy of our cat vs. dog classification model.
## Feature Extraction Using a Pretrained Model
One thing that is commonly done in computer vision is to take a model trained on a very large dataset, run it on your own, smaller dataset, and extract the intermediate representations (features) that the model generates. These representations are frequently informative for your own computer vision task, even though the task may be quite different from the problem that the original model was trained on. This versatility and repurposability of convnets is one of the most interesting aspects of deep learning.
In our case, we will use the [Inception V3 model](https://arxiv.org/abs/1512.00567) developed at Google, and pre-trained on [ImageNet](http://image-net.org/), a large dataset of web images (1.4M images and 1000 classes). This is a powerful model; let's see what the features that it has learned can do for our cat vs. dog problem.
First, we need to pick which intermediate layer of Inception V3 we will use for feature extraction. A common practice is to use the output of the very last layer before the `Flatten` operation, the so-called "bottleneck layer." The reasoning here is that the following fully connected layers will be too specialized for the task the network was trained on, and thus the features learned by these layers won't be very useful for a new task. The bottleneck features, however, retain much generality.
Let's instantiate an Inception V3 model preloaded with weights trained on ImageNet:
```
import os
import tensorflow as tf
from keras.applications.inception_v3 import InceptionV3
from keras import layers
from keras.models import Model
from keras.optimizers import RMSprop
from keras import backend as K
# Configure the TF backend session
tf_config = tf.ConfigProto(
gpu_options=tf.GPUOptions(allow_growth=True))
K.set_session(tf.Session(config=tf_config))
```
Now let's download the weights:
```
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5 \
-O /tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5
local_weights_file = '/tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5'
pre_trained_model = InceptionV3(
input_shape=(150, 150, 3), include_top=False, weights=None)
pre_trained_model.load_weights(local_weights_file)
```
By specifying the `include_top=False` argument, we load a network that doesn't include the classification layers at the top—ideal for feature extraction.
Let's make the model non-trainable, since we will only use it for feature extraction; we won't update the weights of the pretrained model during training.
```
for layer in pre_trained_model.layers:
    layer.trainable = False
```
The layer we will use for feature extraction in Inception V3 is called `mixed7`. It is not the bottleneck of the network, but we are using it to keep a sufficiently large feature map (7x7 in this case). (Using the bottleneck layer would have resulted in a 3x3 feature map, which is a bit small.) Let's get the output from `mixed7`:
```
last_layer = pre_trained_model.get_layer('mixed7')
print('last layer output shape:', last_layer.output_shape)
last_output = last_layer.output
```
Now let's stick a fully connected classifier on top of `last_output`:
```
# Flatten the output layer to 1 dimension
x = layers.Flatten()(last_output)
# Add a fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dense(1024, activation='relu')(x)
# Add a dropout rate of 0.2
x = layers.Dropout(0.2)(x)
# Add a final sigmoid layer for classification
x = layers.Dense(1, activation='sigmoid')(x)
# Configure and compile the model
model = Model(pre_trained_model.input, x)
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=0.0001),
metrics=['acc'])
```
For examples and data preprocessing, let's use the same files and `train_generator` as we did in Exercise 2.
**NOTE:** The 2,000 images used in this exercise are excerpted from the ["Dogs vs. Cats" dataset](https://www.kaggle.com/c/dogs-vs-cats/data) available on Kaggle, which contains 25,000 images. Here, we use a subset of the full dataset to decrease training time for educational purposes.
```
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip -O \
/tmp/cats_and_dogs_filtered.zip
import os
import zipfile
local_zip = '/tmp/cats_and_dogs_filtered.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp')
zip_ref.close()
# Define our example directories and files
base_dir = '/tmp/cats_and_dogs_filtered'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
train_cat_fnames = os.listdir(train_cats_dir)
train_dog_fnames = os.listdir(train_dogs_dir)
from keras.preprocessing.image import ImageDataGenerator
# Add our data-augmentation parameters to ImageDataGenerator
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
train_dir, # This is the source directory for training images
target_size=(150, 150), # All images will be resized to 150x150
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# Flow validation images in batches of 20 using test_datagen generator
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
```
Finally, let's train the model using the features we extracted. We'll train on all 2,000 images available, for 2 epochs, and validate on all 1,000 test images.
```
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=2,
validation_data=validation_generator,
validation_steps=50,
verbose=2)
```
You can see that we reach a validation accuracy of 88–90% very quickly. This is much better than the small model we trained from scratch.
## Further Improving Accuracy with Fine-Tuning
In our feature-extraction experiment, we only tried adding two classification layers on top of an Inception V3 layer. The weights of the pretrained network were not updated during training. One way to increase performance even further is to "fine-tune" the weights of the top layers of the pretrained model alongside the training of the top-level classifier. A couple of important notes on fine-tuning:
- **Fine-tuning should only be attempted *after* you have trained the top-level classifier with the pretrained model set to non-trainable**. If you add a randomly initialized classifier on top of a pretrained model and attempt to train all layers jointly, the magnitude of the gradient updates will be too large (due to the random weights from the classifier), and your pretrained model will just forget everything it has learned.
- Additionally, we **fine-tune only the *top layers* of the pre-trained model** rather than all layers of the pretrained model because, in a convnet, the higher up a layer is, the more specialized it is. The first few layers in a convnet learn very simple and generic features, which generalize to almost all types of images. But as you go higher up, the features are increasingly specific to the dataset that the model is trained on. The goal of fine-tuning is to adapt these specialized features to work with the new dataset.
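The unfreeze-above-a-named-layer pattern used below can be exercised without TensorFlow. A toy sketch with stand-in layer objects (the layer names are illustrative) — note that the named layer itself stays frozen, only layers strictly after it become trainable:

```python
class Layer:
    def __init__(self, name):
        self.name = name
        self.trainable = False

layers = [Layer(n) for n in ("conv1", "mixed5", "mixed6", "mixed7", "classifier")]

unfreeze = False
for layer in layers:
    if unfreeze:
        layer.trainable = True
    if layer.name == "mixed6":
        unfreeze = True

print([l.name for l in layers if l.trainable])  # → ['mixed7', 'classifier']
```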
All we need to do to implement fine-tuning is to set the top layers of Inception V3 to be trainable, recompile the model (necessary for these changes to take effect), and resume training. Let's unfreeze all layers belonging to the `mixed7` module—i.e., all layers found after `mixed6`—and recompile the model:
```
unfreeze = False
# Unfreeze all layers after "mixed6"
for layer in pre_trained_model.layers:
    if unfreeze:
        layer.trainable = True
    if layer.name == 'mixed6':
        unfreeze = True
from keras.optimizers import SGD
# As an optimizer, here we will use SGD
# with a very low learning rate (0.00001)
model.compile(loss='binary_crossentropy',
optimizer=SGD(lr=0.00001, momentum=0.9),
metrics=['acc'])
```
Now let's retrain the model. We'll train on all 2,000 images available, for 50 epochs, and validate on all 1,000 test images. (This may take 15-20 minutes to run.)
```
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=50,
validation_data=validation_generator,
validation_steps=50,
verbose=2)
```
We are seeing a nice improvement, with the validation loss going from ~1.7 down to ~1.2, and accuracy going from 88% to 92%. That's a 4.5% relative improvement in accuracy.
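The relative-improvement figure quoted above is just the accuracy delta divided by the baseline; a quick check:

```python
baseline_acc = 0.88
finetuned_acc = 0.92
relative_improvement = (finetuned_acc - baseline_acc) / baseline_acc
print(f"{relative_improvement:.1%}")  # prints "4.5%"
```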
Let's plot the training and validation loss and accuracy to show it conclusively:
```
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# Retrieve a list of accuracy results on training and test data
# sets for each training epoch
acc = history.history['acc']
val_acc = history.history['val_acc']
# Retrieve a list of loss results on training and test data
# sets for each training epoch
loss = history.history['loss']
val_loss = history.history['val_loss']
# Get number of epochs
epochs = range(len(acc))
# Plot training and validation accuracy per epoch
plt.plot(epochs, acc)
plt.plot(epochs, val_acc)
plt.title('Training and validation accuracy')
plt.figure()
# Plot training and validation loss per epoch
plt.plot(epochs, loss)
plt.plot(epochs, val_loss)
plt.title('Training and validation loss')
```
Congratulations! Using feature extraction and fine-tuning, you've built an image classification model that can identify cats vs. dogs in images with over 90% accuracy.
## Clean Up
Run the following cell to terminate the kernel and free memory resources:
```
import os, signal
os.kill(os.getpid(), signal.SIGKILL)
```
<!--NAVIGATION-->
_______________
This document can be used interactively on the following platforms:
- [Google Colab](https://colab.research.google.com/github/masdeseiscaracteres/ml_course/blob/master/material/05_random_forests.ipynb)
- [MyBinder](https://mybinder.org/v2/gh/masdeseiscaracteres/ml_course/master)
- [Deepnote](https://deepnote.com/launch?template=python_3.6&url=https%3A%2F%2Fgithub.com%2Fmasdeseiscaracteres%2Fml_course%2Fblob%2Fmaster%2Fmaterial%2F05_random_forests.ipynb)
_______________
# Bagging: Random Forests
We will analyze how the *bagging* technique works when applied to trees. Specifically, we will look at *random forests* for [classification](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier) and [regression](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html) through illustrative examples.
This notebook is structured as follows:
1. Classification example
2. Regression example
## 0. Environment setup
```
# clone the rest of the repository if it is not available
import os
curr_dir = os.getcwd()
if not os.path.exists(os.path.join(curr_dir, '../.ROOT_DIR')):
    !git clone https://github.com/masdeseiscaracteres/ml_course.git ml_course
    os.chdir(os.path.join(curr_dir, 'ml_course/material'))
```
First, we prepare the environment with the required libraries and data:
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
%matplotlib inline
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
import warnings
warnings.filterwarnings('ignore')
```
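Before the worked examples, the core bagging recipe — fit each learner on a bootstrap resample, then aggregate by majority vote — can be sketched from scratch. This is a toy illustration with hand-rolled 1-D decision stumps, not the scikit-learn implementation used below:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: the true rule is "class 1 when x > 0.5"
X = rng.uniform(0, 1, size=200)
y = (X > 0.5).astype(int)

def fit_stump(X, y):
    """Choose the threshold t minimizing training error of the rule (x > t)."""
    thresholds = np.unique(X)
    errors = [np.mean((X > t).astype(int) != y) for t in thresholds]
    return thresholds[int(np.argmin(errors))]

# Bagging: fit each stump on a bootstrap resample, then majority-vote
thresholds = []
for _ in range(25):
    idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
    thresholds.append(fit_stump(X[idx], y[idx]))

votes = np.stack([(X > t).astype(int) for t in thresholds])
y_pred = (votes.mean(axis=0) > 0.5).astype(int)
print("bagged training accuracy:", np.mean(y_pred == y))
```

A random forest adds one more ingredient on top of this: each tree also sees only a random subset of the features at each split.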
## 1. Classification example
In this first example we will explore the breast cancer detection dataset ([Breast Cancer](https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29)). This dataset is also included in the `sklearn.datasets` module.
The goal is to detect whether a tumor is benign or malignant (B/M) from numeric attributes characterizing the cell nuclei in digitized biopsy images taken from different patients.
The target variable is binary.
```
from sklearn.datasets import load_breast_cancer
bunch = load_breast_cancer()
X = bunch['data']
y = bunch['target']
target_lut = {k:v for k,v in zip([0,1], bunch['target_names'])} # map target values to strings
feature_names = bunch['feature_names']
# check that the data shapes are consistent with each other
print(X.shape)
print(y.shape)
```
First, let's see how the target variable is distributed:
```
# Compute the distinct values of y and how many times each one appears
np.unique(y, return_counts=True)
```
It is worth taking a look at the data. Since all the features are numeric, histograms are a good option:
```
# Plot histograms for each class
plt.figure(figsize=(20,20))
idx_0 = (y==0)
idx_1 = (y==1)
for i, feature in enumerate(feature_names):
    plt.subplot(6, 5, i+1)
    plt.hist(X[idx_0, i], density=1, alpha=0.6, label='y=0')
    plt.hist(X[idx_1, i], density=1, facecolor='red', alpha=0.6, label='y=1')
    plt.legend()
    plt.title(feature)
plt.show()
```
We set aside part of our data for the final evaluation and train on the rest, as in a real-world situation: we use the collected data to build a model that will later be applied to new patients.
```
from sklearn.model_selection import train_test_split
# split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.2, random_state=0)
print('Train data: ', X_train.shape)
print('Test data: ', X_test.shape)
```
## 1.1. Conventional decision tree
First, we train a conventional decision tree to get an idea of the performance we can reach.
```
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
max_depth_arr = range(1, 10+1)
param_grid = {'max_depth': max_depth_arr}
n_folds = 10
clf = DecisionTreeClassifier(random_state=0)
grid = GridSearchCV(clf, param_grid=param_grid, cv=n_folds, return_train_score=True)
grid.fit(X_train, y_train)
print("best mean cross-validation score: {:.3f}".format(grid.best_score_))
print("best parameters: {}".format(grid.best_params_))
scores_test = np.array(grid.cv_results_['mean_test_score'])
scores_train = np.array(grid.cv_results_['mean_train_score'])
plt.plot(max_depth_arr, scores_test, '-o', label='Validation')
plt.plot(max_depth_arr, scores_train, '-o', label='Training')
plt.xlabel('max_depth', fontsize=16)
plt.ylabel('{}-Fold accuracy'.format(n_folds))
plt.ylim((0.8, 1))
plt.legend()
plt.show()
best_max_depth = grid.best_params_['max_depth']
tree_model = DecisionTreeClassifier(max_depth=best_max_depth)
tree_model.fit(X_train, y_train)
print("Train: ", tree_model.score(X_train, y_train))
print("Test: ", tree_model.score(X_test, y_test))
from sklearn.tree import export_graphviz
import graphviz
tree_dot = export_graphviz(tree_model, out_file=None, feature_names=feature_names, class_names=['M', 'B'],
filled=True, rounded=True, special_characters=True)
graph = graphviz.Source(tree_dot)
# Render the graph as SVG
# graph
# Render the graph as PNG
from IPython.display import Image
Image(graph.pipe(format='png'))
importances = tree_model.feature_importances_
importances = importances / np.max(importances)
indices = np.argsort(importances)[::-1]
plt.figure(figsize=(10, 10))
plt.barh(range(X_train.shape[1]), importances[indices])
plt.yticks(range(X_train.shape[1]), feature_names[indices])
plt.show()
```
## 1.2. Random forest
A *random forest* introduces new free parameters:
- The number of trees built: here we must make sure the cost function is stable for the chosen number of trees.
- The maximum number of features randomly selected when fitting each tree.
In addition to those of the decision trees themselves:
- Their complexity (typically `max_depth` or `min_samples_leaf`)
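The stability point in the first bullet can be illustrated numerically: averaging more (roughly independent) estimates shrinks the variance of the ensemble output approximately as 1/sqrt(n). A numpy-only toy, not the notebook's code:

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 "trees" whose predictions are pure noise around the true value 0
tree_preds = rng.normal(0.0, 1.0, size=(5000, 200))

for n_trees in (1, 10, 50, 200):
    ensemble = tree_preds[:, :n_trees].mean(axis=1)
    print(n_trees, round(float(ensemble.std()), 3))
```

Once the curve of ensemble variance flattens out, adding more trees only costs compute; that is the stability we look for when choosing the number of trees.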
```
from sklearn.ensemble import RandomForestClassifier
# grid search
max_depth_arr = range(1, 15)
params = {'max_depth': max_depth_arr}
n_folds = 10
clf = RandomForestClassifier(random_state=0, n_estimators=200, max_features='sqrt')
grid = GridSearchCV(clf, param_grid=params, cv=n_folds, return_train_score=True)
grid.fit(X_train, y_train)
print("best mean cross-validation score: {:.3f}".format(grid.best_score_))
print("best parameters: {}".format(grid.best_params_))
scores_test = np.array(grid.cv_results_['mean_test_score'])
scores_train = np.array(grid.cv_results_['mean_train_score'])
plt.plot(max_depth_arr, scores_test, '-o', label='Validation')
plt.plot(max_depth_arr, scores_train, '-o', label='Training')
plt.xlabel('max_depth', fontsize=16)
plt.ylabel('{}-Fold accuracy'.format(n_folds))
plt.ylim((0.8, 1))
plt.legend()
plt.show()
best_max_depth = grid.best_params_['max_depth']
bag_model = RandomForestClassifier(max_depth=best_max_depth, n_estimators=200, max_features='sqrt')
bag_model.fit(X_train, y_train)
print("Train: ", bag_model.score(X_train, y_train))
print("Test: ", bag_model.score(X_test, y_test))
```
## 1.3. Feature importance
A very useful property of tree-based algorithms is that we can measure the importance of each feature.
```
importances = bag_model.feature_importances_
importances = importances / np.max(importances)
indices = np.argsort(importances)[::-1]
plt.figure(figsize=(10,10))
plt.barh(range(X_train.shape[1]), importances[indices])
plt.yticks(range(X_train.shape[1]), feature_names[indices])
plt.show()
```
Using this ranking, we can perform feature selection:
```
from sklearn.model_selection import KFold
N, N_features = X_train.shape
rf = RandomForestClassifier(max_depth=best_max_depth, n_estimators=200, max_features='sqrt')
n_folds = 10
kf = KFold(n_splits=n_folds, shuffle=True, random_state=1)
cv_error = []
cv_std = []
for nfeatures in range(N_features, 0, -1):
    error_i = []
    for idxTrain, idxVal in kf.split(X_train):
        Xt = X_train[idxTrain,:]
        yt = y_train[idxTrain]
        Xv = X_train[idxVal,:]
        yv = y_train[idxVal]
        rf.fit(Xt, yt)
        ranking = rf.feature_importances_
        indices = np.argsort(ranking)[::-1]
        selected = indices[0:(N_features-nfeatures+1)]
        Xs = Xt[:, selected]
        rf.fit(Xs, yt)
        error = (1.0 - rf.score(Xv[:, selected], yv))
        error_i.append(error)
    cv_error.append(np.mean(error_i))
    cv_std.append(np.std(error_i))
    print('# features: ' + str(len(selected)) + ', error: ' + str(np.mean(error_i)) + ' +/- ' + str(np.std(error_i)))
plt.plot(range(1, N_features+1,1), cv_error, '-o')
plt.errorbar(range(1, N_features+1,1), cv_error, yerr=cv_std, fmt='o')
plt.xlabel('# features')
plt.ylabel('CV error')
plt.show()
```
As we can see, selecting the first 7 or 8 features already gives very good results. This reduces the complexity of the algorithm and makes it easier to explain.
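The manual top-k selection inside the loop above can be captured in a small reusable helper. A numpy-only sketch — the `select_top_k` name is illustrative, and `importances` would come from a fitted forest's `feature_importances_`:

```python
import numpy as np

def select_top_k(X, importances, k):
    """Return the k columns of X with the largest importances, most important first."""
    top = np.argsort(importances)[::-1][:k]
    return X[:, top], top

X = np.arange(12).reshape(4, 3)
importances = np.array([0.1, 0.7, 0.2])
Xs, cols = select_top_k(X, importances, 2)
print(cols)  # → [1 2]
```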
## 2. Regression example
```
# load the data
house_data = pd.read_csv("./data/kc_house_data.csv") # read the CSV file
# Drop the id and date columns
house_data = house_data.drop(['id','date'], axis=1)
# convert the square-feet variables to square meters
feetFeatures = ['sqft_living', 'sqft_lot', 'sqft_above', 'sqft_basement', 'sqft_living15', 'sqft_lot15']
house_data[feetFeatures] = house_data[feetFeatures].apply(lambda x: x * 0.3048 * 0.3048)
# rename the columns
house_data.columns = ['price','bedrooms','bathrooms','sqm_living','sqm_lot','floors','waterfront','view','condition',
'grade','sqm_above','sqm_basement','yr_built','yr_renovated','zip_code','lat','long',
'sqm_living15','sqm_lot15']
# add the new variables
house_data['years'] = pd.Timestamp('today').year - house_data['yr_built']
#house_data['bedrooms_squared'] = house_data['bedrooms'].apply(lambda x: x**2)
#house_data['bed_bath_rooms'] = house_data['bedrooms']*house_data['bathrooms']
house_data['sqm_living'] = house_data['sqm_living'].apply(lambda x: np.log(x))
house_data['price'] = house_data['price'].apply(lambda x: np.log(x))
#house_data['lat_plus_long'] = house_data['lat']*house_data['long']
house_data.head()
# convert the DataFrame to the format scikit-learn expects
data = house_data.values
y = data[:, 0:1] # keep the 1st column, price
X = data[:, 1:] # keep the rest
feature_names = house_data.columns[1:]
# Split the data into training and test sets (75% training, 25% test)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.25, random_state = 2)
print('Training data: ', X_train.shape)
print('Test data: ', X_test.shape)
```
### 2.1 Decision tree
#### Exercise
Train a decision tree and report its performance on the test set.
#### Solution
```
from sklearn.tree import DecisionTreeRegressor
# parameters for GridSearchCV
max_depth_arr = range(1, 15)
tuned_parameters = {'max_depth': max_depth_arr}
n_folds = 5
clf = DecisionTreeRegressor(random_state=0)
grid = GridSearchCV(clf, param_grid=tuned_parameters, cv=n_folds, return_train_score=True)
grid.fit(X_train, y_train)
print("best mean cross-validation score: {:.3f}".format(grid.best_score_))
print("best parameters: {}".format(grid.best_params_))
scores_test = np.array(grid.cv_results_['mean_test_score'])
scores_train = np.array(grid.cv_results_['mean_train_score'])
plt.plot(max_depth_arr, scores_test, '-o', label='Validation')
plt.plot(max_depth_arr, scores_train, '-o', label='Training')
plt.xlabel('max_depth', fontsize=16)
plt.ylabel('{}-fold R-squared'.format(n_folds))
plt.ylim((0.5, 1))
plt.legend()
plt.show()
best_max_depth = grid.best_params_['max_depth']
dt = DecisionTreeRegressor(max_depth=best_max_depth)
dt.fit(X_train, y_train)
print("Train: ", dt.score(X_train, y_train))
print("Test: ", dt.score(X_test, y_test))
```
### 2.2. Random forest
#### Exercise
Train a random forest and report its performance on the test set.
#### Solution
```
from sklearn.ensemble import RandomForestRegressor
# parameters for GridSearchCV
max_depth_arr = range(1, 20+1)
tuned_parameters = {'max_depth': max_depth_arr}
n_folds = 3 # kept low so the search does not take too long
clf = RandomForestRegressor(random_state=0, n_estimators=100, max_features='sqrt')
grid = GridSearchCV(clf, param_grid=tuned_parameters, cv=n_folds, return_train_score=True)
grid.fit(X_train, y_train)
print("best mean cross-validation score: {:.3f}".format(grid.best_score_))
print("best parameters: {}".format(grid.best_params_))
scores_test = np.array(grid.cv_results_['mean_test_score'])
scores_train = np.array(grid.cv_results_['mean_train_score'])
plt.plot(max_depth_arr, scores_test, '-o', label='Validation')
plt.plot(max_depth_arr, scores_train, '-o', label='Training')
plt.xlabel('max_depth', fontsize=16)
plt.ylabel('{}-fold R-squared'.format(n_folds))
plt.ylim((0.5, 1))
plt.legend()
plt.show()
best_max_depth = grid.best_params_['max_depth']
bag_model = RandomForestRegressor(max_depth=best_max_depth)
bag_model.fit(X_train, y_train)
print("Train: ", bag_model.score(X_train, y_train))
print("Test: ", bag_model.score(X_test, y_test))
```
#### Exercise
Which features are the most relevant? Does this feature ranking match the important variables selected by the Lasso algorithm?
#### Solution
```
importances = bag_model.feature_importances_
importances = importances / np.max(importances)
indices = np.argsort(importances)[::-1]
plt.figure(figsize=(10,10))
plt.barh(range(X_train.shape[1]), importances[indices])
plt.yticks(range(X_train.shape[1]), feature_names[indices])
plt.show()
from sklearn.linear_model import Lasso
# Note: `normalize=True` was removed in scikit-learn 1.2; on newer versions,
# standardize the features beforehand (e.g. with StandardScaler) instead.
lasso = Lasso(alpha=1e-4, normalize=True)
lasso.fit(X_train, y_train)
print("Train: ", lasso.score(X_train, y_train))
print("Test: ", lasso.score(X_test, y_test))
importances = lasso.coef_
importances = importances / np.max(importances)
indices = np.argsort(importances)[::-1]
plt.figure(figsize=(10, 10))
plt.barh(range(X_train.shape[1]), importances[indices])
plt.yticks(range(X_train.shape[1]), feature_names[indices])
plt.show()
```
As we can see, some variables match and others do not. This is to be expected, since each model explains the relationship between the features and the target in a different way. Tree-based models can capture nonlinear relationships, whereas Lasso can only capture linear ones.
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
%matplotlib inline
```
### Loading Training Transactions Data
```
tr_tr = pd.read_csv('data/train_transaction.csv', index_col='TransactionID')
print('Rows :', tr_tr.shape[0],' Columns : ',tr_tr.shape[1] )
tr_tr.tail()
print('Memory Usage (KB): ', (tr_tr.memory_usage(deep=True).sum()/1024).round(0))
tr_tr.tail()
tr_tr.isFraud.describe()
tr_tr.isFraud.value_counts().plot(kind='bar')
tr_tr.isFraud.value_counts()
f, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(12,4))
ax1.hist(tr_tr.TransactionAmt[tr_tr.isFraud == 1], bins = 10)
ax1.set_title('Fraud Transactions ='+str(tr_tr.isFraud.value_counts()[1]))
ax2.hist(tr_tr.TransactionAmt[tr_tr.isFraud == 0], bins = 10)
ax2.set_title('Normal Transactions ='+str(tr_tr.isFraud.value_counts()[0]))
plt.xlabel('Amount ($)')
plt.ylabel('Number of Transactions')
plt.yscale('log')
plt.show()
```
### Exploratory Analysis of category Items in Training Transactions data
```
for i in tr_tr.columns:
    if tr_tr[i].dtypes == 'object':
        print('Column Name :', i)
        print('Unique Items :', tr_tr[i].unique())
        print('Number of NaNs :', tr_tr[i].isna().sum())
        print('Number of Frauds :','\n', tr_tr[tr_tr.isFraud==1][i].value_counts(dropna=False))
        print('*'*50)
```
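The per-dtype loops above can be written more compactly with `select_dtypes`; a sketch on a tiny toy frame (the column names and values are illustrative stand-ins, not the full dataset):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'ProductCD': ['W', 'C', None, 'W'],
    'TransactionAmt': [10.0, np.nan, 30.0, 40.0],
})
# category (object) columns: unique items and NaN counts
for col in df.select_dtypes(include='object'):
    print(col, ':', df[col].nunique(dropna=True), 'unique,', df[col].isna().sum(), 'NaN')
# float columns: NaN counts
for col in df.select_dtypes(include='float64'):
    print(col, ':', df[col].isna().sum(), 'NaN')
```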
### Exploratory Analysis of Float Items in Training Transactions data
```
for i in tr_tr.columns:
if tr_tr[i].dtypes == 'float64':
print('Column Name :', i)
print('Number of NaNs :', tr_tr[i].isna().sum())
print('*'*50)
```
### Exploratory Analysis of Int Items in Training Transactions data
```
for i in tr_tr.columns:
if tr_tr[i].dtypes == 'int64':
print('Column Name :', i)
print('Number of NaNs :', tr_tr[i].isna().sum())
print('*'*50)
```
### Loading Test Transactions Data
```
te_tr = pd.read_csv('data/test_transaction.csv', index_col='TransactionID')
print(te_tr.shape)
te_tr.tail()
```
### Exploratory Analysis of category Items in Test Transactions data
```
for i in te_tr.columns:
if te_tr[i].dtypes == str('object'):
print('Column Name :', i)
print('Unique Items :', te_tr[i].unique())
print('Number of NaNs :', te_tr[i].isna().sum())
print('*'*50)
```
### Exploratory Analysis of Float Items in Test Transactions data
```
for i in te_tr.columns:
if te_tr[i].dtypes == 'float64':
print('Column Name :', i)
print('Number of NaNs :', te_tr[i].isna().sum())
print('*'*50)
```
### Check for any missing column in Test transaction data for integrity
```
for i in tr_tr.columns:
if i in te_tr.columns:
pass
elif i == str('isFraud'):
print('All columns are present in Test Transactions data')
else:
print(i)
```
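The integrity check above can also be sketched with set arithmetic (toy column lists here; the real frames have hundreds of columns, and `isFraud` is the target, so it is expected to be absent from the test data):

```python
train_cols = {'TransactionAmt', 'ProductCD', 'card4', 'isFraud'}
test_cols = {'TransactionAmt', 'ProductCD', 'card4'}

# columns in training but not in test, ignoring the target
missing = train_cols - test_cols - {'isFraud'}
print(missing if missing else 'All columns are present in Test Transactions data')
```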
### Check for any mismatching category items between Training and Test transaction data
```
for i in te_tr.columns:
if te_tr[i].dtypes == str('object'):
for j in te_tr[i].unique():
if j in tr_tr[i].unique():
pass
else:
print(j,': item is in test but not in training for category : ',i)
```
### Loading Training Identity Data
```
tr_id = pd.read_csv('data/train_identity.csv', index_col='TransactionID')
print(tr_id.shape)
tr_id.tail()
```
### Exploratory Analysis of category Items in Training Identity data
```
for i in tr_id.columns:
if tr_id[i].dtypes == str('object'):
print('Column Name :', i)
print('Unique Items :', tr_id[i].unique())
print('Number of NaNs :', tr_id[i].isna().sum())
print('*'*50)
```
### Exploratory Analysis of Float Items in Training Identity data
```
for i in tr_id.columns:
if tr_id[i].dtypes == 'float64':
print('Column Name :', i)
print('Number of NaNs :', tr_id[i].isna().sum())
print('*'*50)
```
### Combining training transactions and identity data
```
tr = tr_tr.join(tr_id)
print(tr.shape)
tr.head()
print('fraction of columns containing NaN : ',tr.isna().any().mean())
print('Top 10 columns with NaN data :','\n',tr.isna().mean().sort_values(ascending=False).head(10))
```
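One caveat about the NaN statistic above: `isna().any().mean()` is the fraction of *columns* that contain at least one NaN, not the fraction of NaN cells. A tiny demo of the difference:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.nan], 'b': [1.0, 2.0]})
print(df.isna().any().mean())   # fraction of columns with any NaN: 0.5
print(df.isna().mean().mean())  # fraction of NaN cells overall:    0.25
```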
### Fraud Counts by Category Items for Training Data
```
for i in tr.columns:
if tr[i].dtypes == str('object'):
print('Fraud Counts for : ',i)
print('-'*30)
print(tr[tr.isFraud==1][i].value_counts(dropna=False))
```
### Create categories for items with more than 100 counts of Fraud
```
def map_categories(value, target):
    """1 if the cell equals the target category, else 0."""
    return 1 if value == target else 0

new_tr_categories = []
for i in list(tr.columns):  # iterate over a snapshot, since we add/drop columns below
    if tr[i].dtypes == 'object':
        fraud_count = tr[tr.isFraud==1][i].value_counts(dropna=False)
        for index, value in fraud_count.items():
            if value>100:
                new_col = str(i)+'_'+str(index)
                tr[new_col] = [map_categories(v, index) for v in tr[i]]
                new_tr_categories.append(new_col)
#             else:
#                 new_col = str(i)+'_other'
#                 tr[new_col] = [map_categories(v, index) for v in tr[i]]
#                 new_tr_categories.append(new_col)
        tr.drop([i], axis=1, inplace=True)
print(new_tr_categories)
```
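The hand-rolled one-hot loop above is essentially `pd.get_dummies` restricted to frequent categories; a sketch on a toy column (the column name and the count threshold are illustrative):

```python
import pandas as pd

s = pd.Series(['visa', 'visa', 'mc', 'mc', 'amex'], name='card_type')
counts = s.value_counts()
keep = counts[counts >= 2].index          # keep categories above a count threshold
onehot = pd.get_dummies(s)[sorted(keep)]  # one indicator column per kept category
print(onehot)
```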
### Replace NaN with zero for combined training data
```
tr.fillna(0, inplace=True)
tr.head()
```
### Loading Test Transactions Data
```
te_tr = pd.read_csv('data/test_transaction.csv', index_col='TransactionID')
print(te_tr.shape)
te_tr.tail()
```
### Loading Test Identity Data
```
te_id = pd.read_csv('data/test_identity.csv', index_col='TransactionID')
print(te_id.shape)
te_id.tail()
```
### Exploratory Analysis of category Items in Test Identity data
```
for i in te_id.columns:
if te_id[i].dtypes == str('object'):
print('Column Name :', i)
print('Unique Items :', te_id[i].unique())
print('Number of NaNs :', te_id[i].isna().sum())
print('*'*50)
```
### Exploratory Analysis of Float Items in Test Identity data
```
for i in te_id.columns:
if te_id[i].dtypes == 'float64':
print('Column Name :', i)
print('Number of NaNs :', te_id[i].isna().sum())
print('*'*50)
```
### Check for any missing column in Test Identity data for integrity
```
for i in tr_id.columns:
if i in te_id.columns:
pass
else:
print(i)
```
### Check for any mismatching category items between Training and Test identity data
```
for i in te_id.columns:
if te_id[i].dtypes == str('object'):
for j in te_id[i].unique():
if j in tr_id[i].unique():
pass
else:
print(j,': item is in test but not in training for category : ',i)
```
### Combining Test transactions and identity data
```
te = te_tr.join(te_id)
print(te.shape)
te.head()
print('fraction of columns containing NaN : ',te.isna().any().mean())
print('Top 10 columns with NaN data :','\n',te.isna().mean().sort_values(ascending=False).head(10))
```
```
midifile = 'data/chopin-fantaisie.mid'
import time
import copy
import subprocess
from abc import abstractmethod
import numpy as np
import midi # Midi file parser
from midipattern import MidiPattern
from distorter import *
from align import align_frame_to_frame, read_align, write_align
MidiPattern.MIDI_DEVICE = 2
```
Init Pygame and Audio
--------
Midi Pattern
--------
```
pattern = MidiPattern(midi.read_midifile(midifile))
simple = pattern.simplified(bpm=160)
simple.stamp_time('t0')
midi.write_midifile("generated/simple.mid", simple)
print(simple.attributes[0][-40:])
pattern[0]
pattern.play(180)
simple.play()
```
Distorter
--------
```
distorter = VelocityNoiseDistorter(sigma=20.)
distorter.randomize()
print(distorter)
dist_pattern = distorter.distort(simple)
midi.write_midifile('generated/velocity-noise.mid', dist_pattern)
dist_pattern.play(bpm=180)
print(dist_pattern.attributes[0][-4:])
distorter = VelocityWalkDistorter(sigma=0.1)
distorter.randomize()
print(distorter)
dist_pattern = distorter.distort(simple)
midi.write_midifile('generated/velocity-walk.mid', dist_pattern)
dist_pattern.play(bpm=180)
distorter = ProgramDistorter()
distorter.randomize()
# for some reason GM programs 1-3 make no sound in pygame?
print(distorter)
dist_pattern = distorter.distort(simple)
midi.write_midifile('generated/program.mid', dist_pattern)
dist_pattern.play(bpm=180)
distorter = TempoDistorter(sigma=0, min=0.5, max=2.)
distorter.randomize()
print(distorter)
dist_pattern = distorter.distort(simple)
print('time warp', dist_pattern.attributes[0][-4:])
midi.write_midifile('generated/tempo.mid', dist_pattern)
dist_pattern.play(bpm=180)
distorter = TimeNoiseDistorter()
distorter.randomize()
print(distorter)
dist_pattern = distorter.distort(simple)
print('time warp', dist_pattern.attributes[0][-4:])
midi.write_midifile('generated/time.mid', dist_pattern)
dist_pattern.play(bpm=180)
```
Individual Note Times to Global Alignment
-------
```
stride = 1.
align = align_frame_to_frame(dist_pattern, stride)
align
write_align('generated/align.txt', align, stride)
align2, stride2 = read_align('generated/align.txt')
print(align2 == align)
print(int(stride2) == int(stride), stride2, stride)
```
Actual Generation
----
```
dist_pattern = random_distort(simple)
align = align_frame_to_frame(dist_pattern, stride=1.)
print(align)
dist_pattern.play()
num_samples = 10
stride = 0.1
for i in range(num_samples):
base_name = 'generated/sample-{}'.format(i)
align_name = '{}.txt'.format(base_name)
midi_name = '{}.mid'.format(base_name)
wav_name = '{}.wav'.format(base_name)
distorted = random_distort(simple)
align = align_frame_to_frame(distorted, stride)
write_align(align_name, align, stride)
midi.write_midifile(midi_name, distorted)
# Convert to wav using timidity
print(wav_name)
subprocess.check_call(['timidity', '-Ow', midi_name, '-o', wav_name])
print('Done generating {}'.format(base_name))
```
# Optimization with SciPy
```
      fun: 2.0
 hess_inv: array([[ 0.5]])
      jac: array([ 0.])
  message: 'Optimization terminated successfully.'
     nfev: 9  # SciPy is not SymPy: it cannot use symbolic math, so it differentiates numerically; the function is evaluated 3 times at each location, and with nit: 2 (two steps taken) the function was run 9 times across 3 locations
      nit: 2
     njev: 3
   status: 0
  success: True
        x: array([ 1.99999999])
def f1p(x):
    return 2 * (x - 2)
result = sp.optimize.minimize(f1, x0, jac=f1p)  # supplying f1p as jac avoids the 3 numerical evaluations per location, so it runs faster
print(result)
      fun: 2.0
 hess_inv: array([[ 0.5]])
      jac: array([ 0.])
  message: 'Optimization terminated successfully.'
     nfev: 3
      nit: 2
     njev: 3
   status: 0
  success: True
        x: array([ 2.])
```
```
# Exercise 1
# For the 2-D Rosenbrock function:
# 1) change the initial point so that it converges to the optimal solution.
# 2) implement the gradient vector function and pass it via the jac argument to speed up the computation.
# 1) change the initial point so that it converges to the optimal solution.
x0 = 1  # initial value
result = sp.optimize.minimize(f1, x0)
print(result)
%matplotlib inline
def f1(x):
return (x - 2) ** 2 + 2
xx = np.linspace(-1, 4, 100)
plt.plot(xx, f1(xx))
plt.plot(2, 2, 'ro', markersize=20)
plt.ylim(0, 10)
plt.show()
def f2(x, y):
return (1 - x)**2 + 100.0 * (y - x**2)**2
xx = np.linspace(-4, 4, 800)
yy = np.linspace(-3, 3, 600)
X, Y = np.meshgrid(xx, yy)
Z = f2(X, Y)
plt.contour(X, Y, Z, colors="gray", levels=[0.4, 3, 15, 50, 150, 500, 1500, 5000])
plt.plot(1, 1, 'ro', markersize=20)
plt.xlim(-4, 4)
plt.ylim(-3, 3)
plt.xticks(np.linspace(-4, 4, 9))
plt.yticks(np.linspace(-3, 3, 7))
plt.show()
def f1d(x):
"""derivative of f1(x)"""
return 2 * (x - 2.0)
xx = np.linspace(-1, 4, 100)
plt.plot(xx, f1(xx), 'k-')
# step size
mu = 0.4
# k = 0
x = 0
plt.plot(x, f1(x), 'go', markersize=10)
plt.plot(xx, f1d(x) * (xx - x) + f1(x), 'b--')
print("x = {}, g = {}".format(x, f1d(x)))
# k = 1
x = x - mu * f1d(x)
plt.plot(x, f1(x), 'go', markersize=10)
plt.plot(xx, f1d(x) * (xx - x) + f1(x), 'b--')
print("x = {}, g = {}".format(x, f1d(x)))
# k = 2
x = x - mu * f1d(x)
plt.plot(x, f1(x), 'go', markersize=10)
plt.plot(xx, f1d(x) * (xx - x) + f1(x), 'b--')
print("x = {}, g = {}".format(x, f1d(x)))
plt.ylim(0, 10)
plt.show()
# 1)
def f2g(x, y):
"""gradient of f2(x)"""
return np.array((2.0 * (x - 1) - 400.0 * x * (y - x**2), 200.0 * (y - x**2)))
xx = np.linspace(-4, 4, 800)
yy = np.linspace(-3, 3, 600)
X, Y = np.meshgrid(xx, yy)
Z = f2(X, Y)
levels=np.logspace(-1, 3, 10)
plt.contourf(X, Y, Z, alpha=0.2, levels=levels)
plt.contour(X, Y, Z, colors="green", levels=levels, zorder=0)
plt.plot(1, 1, 'ro', markersize=10)
mu = 8e-4 # step size
s = 0.95  # for arrow head drawing
x, y = 0, 0  # starting point
for i in range(5):
g = f2g(x, y)
plt.arrow(x, y, -s * mu * g[0], -s * mu * g[1], head_width=0.04, head_length=0.04, fc='k', ec='k', lw=2)
x = x - mu * g[0]
y = y - mu * g[1]
plt.xlim(-3, 3)
plt.ylim(-2, 2)
plt.xticks(np.linspace(-3, 3, 7))
plt.yticks(np.linspace(-2, 2, 5))
plt.show()
x0 = -0.5  # initial value
result = sp.optimize.minimize(f1, x0)
print(result)
def f1p(x):
return 2 * (x - 2)
result = sp.optimize.minimize(f1, x0, jac=f1p)
print(result)
def f2(x):
return (1 - x[0])**2 + 400.0 * (x[1] - x[0]**2)**2
x0 = (0.7, 0.7)
result = sp.optimize.minimize(f2, x0)
print(result)
def f2p(x):
return np.array([2*x[0]-2-1600*x[0]*x[1]+1600*x[0]**3, 800*x[1]-800*x[0]**2])
result = sp.optimize.minimize(f2, x0, jac=f2p)
print(result)
```
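The gradient-descent steps plotted in the cell above (step size `mu = 0.4` on `f1`) can be run to convergence in a few lines of pure Python, no SciPy needed:

```python
def f1d(x):
    """Derivative of f1(x) = (x - 2)**2 + 2."""
    return 2 * (x - 2.0)

mu, x = 0.4, 0.0          # step size and starting point, as in the plot
for _ in range(20):       # x_{k+1} = x_k - mu * f1d(x_k)
    x = x - mu * f1d(x)
print(round(x, 6))        # prints 2.0, the minimum of f1
```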
[Table of Contents](./table_of_contents.ipynb)
# Probabilities, Gaussians, and Bayes' Theorem
```
from __future__ import division, print_function
%matplotlib inline
#format the book
import book_format
book_format.set_style()
```
## Introduction
The last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates make navigating impossible.
We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features.
## Mean, Variance, and Standard Deviations
Most of you will have had exposure to statistics, but allow me to cover this material anyway. I ask that you read the material even if you are sure you know it well. I ask for two reasons. First, I want to be sure that we are using terms in the same way. Second, I strive to form an intuitive understanding of statistics that will serve you well in later chapters. It's easy to go through a stats course and only remember the formulas and calculations, and perhaps be fuzzy on the implications of what you have learned.
### Random Variables
Each time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a one 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6.
This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information about the outcome. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.
While we are defining terms, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.
Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.
Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.
Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable.
In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. I always use bold symbols for vectors and matrices, which helps distinguish between the two.
## Probability Distribution
The [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:
|Value|Probability|
|-----|-----------|
|1|1/6|
|2|1/6|
|3|1/6|
|4|1/6|
|5|1/6|
|6|1/6|
We denote this distribution with a lower case p: $p(x)$. Using ordinary function notation, we would write:
$$P(X{=}4) = p(4) = \frac{1}{6}$$
This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$". Note the subtle notational difference. The capital $P$ denotes the probability of a single event, and the lower case $p$ is the probability distribution function. This can lead you astray if you are not observant. Some texts use $Pr$ instead of $P$ to ameliorate this.
Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as
$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$
Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.
The probabilities for all values of a *discrete random value* is known as the *discrete probability distribution* and the probabilities for all values of a *continuous random value* is known as the *continuous probability distribution*.
To be a probability distribution the probability of each value $x_i$ must satisfy $p(x_i) \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formalize this requirement as
$$\sum\limits_u P(X{=}u)= 1$$
for discrete distributions, and as
$$\int\limits_u P(X{=}u) \,du= 1$$
for continuous distributions.
In the previous chapter we used probability distributions to estimate the position of a dog in a hallway. For example:
```
import numpy as np
import kf_book.book_plots as book_plots
belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2])
belief = belief / np.sum(belief)
with book_plots.figsize(y=2):
book_plots.bar_plot(belief)
print('sum = ', np.sum(belief))
```
Each position has a probability between 0 and 1, and the sum of all equals one, so this makes it a probability distribution. Each probability is discrete, so we can more precisely call this a discrete probability distribution. In practice we leave out the terms discrete and continuous unless we have a particular reason to make that distinction.
### The Mean, Median, and Mode of a Random Variable
Given a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average of a set of data, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is
$$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$
we compute the mean as
$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$
It is traditional to use the symbol $\mu$ (mu) to denote the mean.
We can formalize this computation with the equation
$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$
NumPy provides `numpy.mean()` for computing the mean.
```
x = [1.8, 2.0, 1.7, 1.9, 1.6]
np.mean(x)
```
As a convenience NumPy arrays provide the method `mean()`.
```
x = np.array([1.8, 2.0, 1.7, 1.9, 1.6])
x.mean()
```
The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.
Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.
Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. In this case the median equals the mean, but that is not generally true.
```
np.median(x)
```
## Expected Value of a Random Variable
The [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?
It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.
Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has a 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute
$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$
Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.
We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us
$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$
A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:
$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$
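The weighted sum above is a dot product, so it can be computed directly with NumPy (a quick check of the 1.5 result):

```python
import numpy as np

x = np.array([1, 3, 5])
p = np.array([0.80, 0.15, 0.05])
print(np.dot(p, x))  # expected value: 1.5
```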
If $x$ is continuous we substitute the sum for an integral, like so
$$\mathbb E[X] = \int_{a}^b\, xf(x) \,dx$$
where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter.
We can write a bit of Python to simulate this. Here I take 1,000,000 samples and compute the expected value of the distribution we just computed analytically.
```
total = 0
N = 1000000
for r in np.random.rand(N):
if r <= .80: total += 1
elif r < .95: total += 3
else: total += 5
total / N
```
You can see that the computed value is close to the analytically derived value. It is not exact because getting an exact value requires an infinite sample size.
### Exercise
What is the expected value of a die roll?
### Solution
Each side is equally likely, so each has a probability of 1/6. Hence
$$\begin{aligned}
\mathbb E[X] &= 1/6\times1 + 1/6\times 2 + 1/6\times 3 + 1/6\times 4 + 1/6\times 5 + 1/6\times6 \\
&= 1/6(1+2+3+4+5+6)\\&= 3.5\end{aligned}$$
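A one-line check of the 3.5 result:

```python
print(sum(range(1, 7)) / 6)  # prints 3.5
```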
### Exercise
Given the uniform continuous distribution
$$f(x) = \frac{1}{b - a}$$
compute the expected value for $a=0$ and $b=20$.
### Solution
$$\begin{aligned}
\mathbb E[X] &= \int_0^{20}\, x\frac{1}{20} \,dx \\
&= \bigg[\frac{x^2}{40}\bigg]_0^{20} \\
&= 10 - 0 \\
&= 10
\end{aligned}$$
### Variance of a Random Variable
The computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights:
```
X = [1.8, 2.0, 1.7, 1.9, 1.6]
Y = [2.2, 1.5, 2.3, 1.7, 1.3]
Z = [1.8, 1.8, 1.8, 1.8, 1.8]
```
Using NumPy we see that the mean height of each class is the same.
```
print(np.mean(X), np.mean(Y), np.mean(Z))
```
The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.
The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students.
Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is
$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$
Ignoring the square for a moment, you can see that the variance is the *expected value* for how much the sample space $X$ varies from the mean $\mu$: $(X-\mu)$. I will explain the purpose of the squared term later. The formula for the expected value is $\mathbb E[X] = \sum\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get
$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$
Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.
The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute
$$
\begin{aligned}
\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\
&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\
\mathit{VAR}(X)&= 0.02 \, m^2
\end{aligned}$$
NumPy provides the function `var()` to compute the variance:
```
print("{:.2f} meters squared".format(np.var(X)))
```
This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:
$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$
It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.
For the first class we compute the standard deviation with
$$
\begin{aligned}
\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\
&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\
\sigma_x&= 0.1414
\end{aligned}$$
We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation.
```
print('std {:.4f}'.format(np.std(X)))
print('var {:.4f}'.format(np.std(X)**2))
```
And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.
What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters.
We can view this in a plot:
```
from kf_book.gaussian_internal import plot_height_std
import matplotlib.pyplot as plt
plot_height_std(X)
```
For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. Let's look at the results for a class with 100 students.
> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on.
```
from numpy.random import randn
data = 1.8 + randn(100)*.1414
mean, std = data.mean(), data.std()
plot_height_std(data, lw=2)
print('mean = {:.3f}'.format(mean))
print('std = {:.3f}'.format(std))
```
By eye roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8, but we can verify this with code.
```
np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100.
```
We'll discuss this in greater depth soon. For now let's compute the standard deviation for
$$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$
The mean of $Y$ is $\mu=1.8$ m, so
$$
\begin{aligned}
\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\
&= \sqrt{0.152} = 0.39 \ m
\end{aligned}$$
We will verify that with NumPy with
```
print('std of Y is {:.2f} m'.format(np.std(Y)))
```
This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.
Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero.
$$
\begin{aligned}
\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\
&= \sqrt{\frac{0+0+0+0+0}{5}} \\
\sigma_z&= 0.0 \ m
\end{aligned}$$
```
print(np.std(Z))
```
Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account.
I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males nor the females in the school!
We will not consider these issues in this book. Consult any standard probability text if you need to learn techniques to deal with these issues.
### Why the Square of the Differences
Why are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$
```
X = [3, -3, 3, -3]
mean = np.average(X)
for i in range(len(X)):
plt.plot([i ,i], [mean, X[i]], color='k')
plt.axhline(mean)
plt.xlim(-1, len(X))
plt.tick_params(axis='x', labelbottom=False)
```
If we didn't take the square of the differences the signs would cancel everything out:
$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$
This is clearly incorrect, as there is more than 0 variance in the data.
Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same value. If we use the squared-difference formula we get a standard deviation of 3.5 for $Y$, which reflects its larger variation.
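A quick numerical check of the two spread measures for these sets:

```python
import numpy as np

X = np.array([3., -3., 3., -3.])
Y = np.array([6., -2., -3., 1.])
for data in (X, Y):
    mad = np.mean(np.abs(data - data.mean()))  # mean absolute deviation
    print('MAD = {:.1f}, std = {:.1f}'.format(mad, data.std()))
# X: MAD 3.0 and std 3.0 agree; Y: the same MAD of 3.0, but std 3.5 exposes the larger spread
```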
This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have:
```
X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100]
print('Variance of X with outlier = {:6.2f}'.format(np.var(X)))
print('Variance of X without outlier = {:6.2f}'.format(np.var(X[:-1])))
```
Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.03$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the variance computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? Again, you tell me. Obviously it depends on your problem.
I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. In this book we will always use variance and standard deviation as defined by Gauss.
The point to gather from this is that these *summary* statistics always tell an incomplete story about our data. In this example variance as defined by Gauss does not tell us we have a single large outlier. However, it is a powerful tool, as we can concisely describe a large data set with a few numbers. If we had 1 billion data points we would not want to inspect plots by eye or look at lists of numbers; summary statistics give us a way to describe the shape of the data in a useful way.
## Gaussians
We are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.
> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.
Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about.
```
from filterpy.stats import plot_gaussian_pdf
plot_gaussian_pdf(mean=1.8, variance=0.1414**2,
xlabel='Student Height', ylabel='pdf');
```
This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. We can tell from the chart that a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m than 1.4 m. Put another way, many students will have a height near 1.8 m, and very few students will have a height of 1.4 m or 2.2 m. Finally, notice that the curve is centered over the mean of 1.8 m.
> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the
Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].
This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.
This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.
To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter:
```
import kf_book.book_plots as book_plots
belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0]
book_plots.bar_plot(belief)
```
They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter!
## Nomenclature
A bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value in $(-\infty, \infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this:
```
plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)');
```
The y-axis depicts the *probability density* — the relative number of cars traveling at the speed shown on the x-axis. I will explain this further in the next section.
The Gaussian model is imperfect. Though these charts do not show it, the *tails* of the distribution extend out to infinity. *Tails* are the far ends of the curve where the values are the lowest. Of course human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives.
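We can put a number on how little probability the model assigns to impossible speeds. This sketch uses `scipy.stats.norm` (covered at the end of this chapter) with the $\mathcal N(120, 17^2)$ model from above:

```python
from scipy.stats import norm

# speeds modeled as N(120, 17^2); norm() takes the standard deviation
p_negative = norm(120, 17).cdf(0)

# the model assigns a vanishingly small, but nonzero, probability
# to the physically impossible event of a negative speed
print('P(speed < 0) = {:.2e}'.format(p_negative))
```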
You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is common to shorten the name and talk about a *Gaussian* or a *normal* — both are shorthand for the *Gaussian distribution*.
## Gaussian Distributions
Let's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:
$$
f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]
$$
$\exp[x]$ is notation for $e^x$.
Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is implemented in FilterPy's `stats.py` as `gaussian(x, mean, var, normed=True)`.
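If you want to see the equation in code before looking at the FilterPy source, here is a direct, hand-rolled sketch. Note that it takes $\sigma$, unlike FilterPy's `gaussian()`, which takes the variance:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Direct implementation of the Gaussian pdf equation above."""
    coef = 1.0 / (sigma * math.sqrt(2 * math.pi))
    return coef * math.exp(-(x - mu)**2 / (2 * sigma**2))

# the curve peaks at the mean and is symmetric about it
print(gaussian_pdf(22, 22, 2))                           # peak value
print(gaussian_pdf(20, 22, 2), gaussian_pdf(24, 22, 2))  # equal by symmetry
```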
Shorn of the constants, you can see it is a simple exponential:
$$f(x)\propto e^{-x^2}$$
which has the familiar bell curve shape
```
x = np.arange(-3, 3, .01)
plt.plot(x, np.exp(-x**2));
```
Let's remind ourselves how to look at the code for a function. In a cell, type the function name followed by two question marks and press CTRL+ENTER. This will open a popup window displaying the source. Uncomment the next cell and try it now.
```
from filterpy.stats import gaussian
#gaussian??
```
Let's plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$.
```
plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$');
```
What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, the [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) implies that if we make many measurements, the measurements will be normally distributed. When we look at this chart we can see it is proportional to the probability of the thermometer reading a particular value given the actual temperature of 22°C.
Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a randomly picked point is exactly at 2? Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of the reading being *exactly* 22°C is 0% because there are an infinite number of values the reading can take.
What is this curve? It is something we call the *probability density function.* The area under the curve over any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures.
Here is another way to understand it. What is the *density* of a rock, or a sponge? It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume.
$$M = \iiint_R p(x,y,z)\, dV$$
We do the same with *probability density*. If you want to know the probability of the temperature being between 20°C and 21°C you would integrate the curve above from 20 to 21. As you know the integral of a curve gives you the area under the curve. Since this is a curve of the probability density, the integral of the density is the probability.
What is the probability of the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of 22°C vs, say, 22.00000000000017°C is infinitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero.
Thinking back to the rock, what is the weight of a single point on the rock? An infinitesimal point must have no weight. It makes no sense to ask the weight of a single point, and it makes no sense to ask about the probability of a continuous distribution having a single value. The answer for both is obviously zero.
In practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1.
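For instance, a quick sketch with `scipy.stats.norm` (covered at the end of this chapter) integrating the $\mathcal N(22, 4)$ curve over the 22 $\pm$ 0.1°C range:

```python
from scipy.stats import norm

reading = norm(22, 2)  # scipy's norm() takes the standard deviation, not the variance

# area under the pdf between 21.9 and 22.1
p = reading.cdf(22.1) - reading.cdf(21.9)
print('P(21.9 <= temp <= 22.1) = {:.4f}'.format(p))

# integrating from 22 to 22 yields zero probability
print(reading.cdf(22) - reading.cdf(22))
```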
We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve.
How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian
$$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$
This is computed with the *cumulative distribution function*, commonly abbreviated *cdf*.
I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute
```
from filterpy.stats import norm_cdf
print('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format(
      norm_cdf((21.5, 22.5), 22, 4)*100))
print('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format(
      norm_cdf((23.5, 24.5), 22, 4)*100))
```
The mean ($\mu$) is what it sounds like — the average of all possible values, weighted by their probability. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean.
The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as
$$\text{temp} \sim \mathcal{N}(22,4)$$
This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements over any range.
Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example.
## The Variance and Belief
Since this is a probability density function it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must integrate to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$)
```
print(norm_cdf((-1e8, 1e8), mu=0, var=4))
```
This leads to an important insight. If the variance is small the curve will be narrow. This is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.
Let's look at that graphically. We will use the aforementioned `filterpy.stats.gaussian` which can take either a single value or array of values.
```
from filterpy.stats import gaussian
print(gaussian(x=3.0, mean=2.0, var=1))
print(gaussian(x=[3.0, 2.0], mean=2.0, var=1))
```
By default `gaussian` normalizes the output, which turns the output back into a probability distribution. Use the argument `normed` to control this.
```
print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False))
```
If the Gaussian is not normalized it is called a *Gaussian function* instead of *Gaussian distribution*.
```
xs = np.arange(15, 30, 0.05)
plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$')
plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':')
plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--')
plt.legend();
```
What is this telling us? The Gaussian with $\sigma^2=0.2^2$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that: within $\pm 0.2$ std. In contrast, the Gaussian with $\sigma^2=1^2$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.2^2$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=1^2$ considers them nearly as likely as $23$.
If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.2^2$ represents a very accurate thermometer, and curve for $\sigma^2=1^2$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.
An equivalent formulation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.
I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using.
## The 68-95-99.7 Rule
It is worth spending a few words on standard deviation now. The standard deviation is a measure of how much the data deviates from the mean. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$).
Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$. As you saw in the last section, writing $\sigma^2 = 0.2^2$ can make this somewhat more meaningful, since the 0.2 is in the same units as the data.
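The rule itself is easy to verify numerically. A sketch with `scipy.stats.norm`, including the test score example from above:

```python
from scipy.stats import norm

std = norm(0, 1)  # the standard normal distribution
for k in (1, 2, 3):
    p = std.cdf(k) - std.cdf(-k)
    print('within +/-{} std: {:.1f}%'.format(k, 100 * p))

# test scores: 95% of scores fall within 71 +/- 2 * 9.4
mu, sigma = 71, 9.4
print('{:.1f} to {:.1f}'.format(mu - 2*sigma, mu + 2*sigma))
```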
The following graph depicts the relationship between the standard deviation and the normal distribution.
```
from kf_book.gaussian_internal import display_stddev_plot
display_stddev_plot()
```
## Interactive Gaussians
For those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner.
```
import math
from ipywidgets import interact, FloatSlider
def plt_g(mu, variance):
    plt.figure()
    xs = np.arange(2, 8, 0.01)
    ys = gaussian(xs, mu, variance)
    plt.plot(xs, ys)
    plt.ylim(0, 0.04)

interact(plt_g, mu=FloatSlider(value=5, min=3, max=7),
         variance=FloatSlider(value=.03, min=.01, max=1.));
```
Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified.
<img src='animations/04_gaussian_animate.gif'>
## Computational Properties of Gaussians
The discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians.
A remarkable property of Gaussian distributions is that the sum of two independent Gaussians is another Gaussian! The product is not Gaussian, but proportional to a Gaussian. Thus we can say that the result of multiplying two Gaussian distributions is a Gaussian *function* (recall that for a function, unlike a distribution, the values are not guaranteed to sum to one).
Before we do the math, let's test this visually.
```
x = np.arange(-1, 3, 0.01)
g1 = gaussian(x, mean=0.8, var=.1)
g2 = gaussian(x, mean=1.3, var=.2)
plt.plot(x, g1, x, g2)
g = g1 * g2 # element-wise multiplication
g = g / sum(g) # normalize
plt.plot(x, g, ls='-.');
```
Here I created two Gaussians, g1=$\mathcal N(0.8, 0.1)$ and g2=$\mathcal N(1.3, 0.2)$ and plotted them. Then I multiplied them together and normalized the result. As you can see the result *looks* like a Gaussian distribution.
Gaussians are nonlinear functions. Typically, if you multiply nonlinear functions you end up with a different type of function. For example, the result of multiplying two sines looks very different from `sin(x)`.
```
x = np.arange(0, 4*np.pi, 0.01)
plt.plot(np.sin(1.2*x))
plt.plot(np.sin(1.2*x) * np.sin(2*x));
```
But the result of multiplying two Gaussian distributions is a Gaussian function. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice.
The product of two independent Gaussians is given by:
$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\
\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}
\end{aligned}$$
The sum of two Gaussians is given by
$$\begin{gathered}\mu = \mu_1 + \mu_2 \\
\sigma^2 = \sigma^2_1 + \sigma^2_2
\end{gathered}$$
At the end of the chapter I derive these equations. However, understanding the derivation is not very important.
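To make the equations concrete, here is a minimal sketch implementing both. The `Gaussian` namedtuple and the function names are my own for illustration, not FilterPy's API:

```python
from collections import namedtuple

Gaussian = namedtuple('Gaussian', ['mean', 'var'])

def multiply(g1, g2):
    """Product of two Gaussians, per the equations above."""
    mean = (g1.var * g2.mean + g2.var * g1.mean) / (g1.var + g2.var)
    var = (g1.var * g2.var) / (g1.var + g2.var)
    return Gaussian(mean, var)

def add(g1, g2):
    """Sum of two independent Gaussians."""
    return Gaussian(g1.mean + g2.mean, g1.var + g2.var)

# same Gaussians as the multiplication plot earlier in this section
g1 = Gaussian(0.8, 0.1)
g2 = Gaussian(1.3, 0.2)
print(multiply(g1, g2))  # mean lies between 0.8 and 1.3; variance shrinks
print(add(g1, g2))       # means and variances simply add
```

Notice that multiplying always decreases the variance (combining two estimates yields a more certain result), while adding increases it.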
## Putting it all Together
Now we are ready to talk about how Gaussians can be used in filtering. In the next chapter we will implement a filter using Gaussians. Here I will explain why we would want to do so.
In the previous chapter we represented probability distributions with an array. We performed the update computation by computing the element-wise product of that distribution with another distribution representing the likelihood of the measurement at each point, like so:
```
def normalize(p):
    return p / sum(p)

def update(likelihood, prior):
    return normalize(likelihood * prior)

prior = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2]))
likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16]))
posterior = update(likelihood, prior)
book_plots.bar_plot(posterior)
```
In other words, we have to compute 10 multiplications to get this result. For a real filter with large arrays in multiple dimensions we'd require billions of multiplications, and vast amounts of memory.
But this distribution looks like a Gaussian. What if we use a Gaussian instead of an array? I'll compute the mean and variance of the posterior and plot it against the bar chart.
```
xs = np.arange(0, 10, .01)

def mean_var(p):
    x = np.arange(len(p))
    mean = np.sum(p * x, dtype=float)
    var = np.sum((x - mean)**2 * p)
    return mean, var

mean, var = mean_var(posterior)
book_plots.bar_plot(posterior)
plt.plot(xs, gaussian(xs, mean, var, normed=False), c='r');
print('mean: %.2f' % mean, 'var: %.2f' % var)
```
This is impressive. We can describe an entire distribution of numbers with only two numbers. Perhaps this example is not persuasive, given there are only 10 numbers in the distribution. But a real problem could have millions of numbers, yet still only require two numbers to describe it.
Next, recall that our filter implements the update function with
```python
def update(likelihood, prior):
    return normalize(likelihood * prior)
```
If the arrays contain a million elements, that is one million multiplications. However, if we replace the arrays with a Gaussian then we would perform that calculation with
$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\
\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}
\end{aligned}$$
which is three multiplications and two divisions.
### Bayes Theorem
In the last chapter we developed an algorithm by reasoning about the information we have at each moment, which we expressed as discrete probability distributions. In the process we discovered [*Bayes' Theorem*](https://en.wikipedia.org/wiki/Bayes%27_theorem). Bayes theorem tells us how to compute the probability of an event given prior information.
We implemented the `update()` function with this probability calculation:
$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$
It turns out that this is Bayes' theorem. In a second I will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation. We read this as:
$$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$
where $\| \cdot\|$ expresses normalizing the term.
We came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see, the same equation applies to a universe of filtering problems. We will use this equation in every subsequent chapter.
To review, the *prior* is the probability of something happening before we include the probability of the measurement (the *likelihood*) and the *posterior* is the probability we compute after incorporating the information from the measurement.
Bayes theorem is
$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$
$P(A \mid B)$ is called a [*conditional probability*](https://en.wikipedia.org/wiki/Conditional_probability). That is, it represents the probability of $A$ happening *if* $B$ happened. For example, it is more likely to rain today compared to a typical day if it also rained yesterday because rain systems usually last more than one day. We'd write the probability of it raining today given that it rained yesterday as $P$(rain today $\mid$ rain yesterday).
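With single probabilities, Bayes' theorem is a one-line computation. Here is a toy version of the rain example; the numbers are invented for illustration, not real weather statistics:

```python
# invented numbers, purely illustrative
p_rain = 0.30             # P(A): prior probability of rain today
p_yest_given_rain = 0.60  # P(B|A): probability it rained yesterday given rain today
p_yest = 0.25             # P(B): overall probability it rained yesterday

# Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B)
p_rain_given_yest = p_yest_given_rain * p_rain / p_yest
print('{:.2f}'.format(p_rain_given_yest))
```

With these numbers, yesterday's rain raises the probability of rain today from the prior of 0.30 to 0.72.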
I've glossed over an important point. In our code above we are not working with single probabilities, but an array of probabilities - a *probability distribution*. The equation I just gave for Bayes uses probabilities, not probability distributions. However, it is equally valid with probability distributions. We use a lower case $p$ for probability distributions
$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}$$
In the equation above $B$ is the *evidence*, $p(A)$ is the *prior*, $p(B \mid A)$ is the *likelihood*, and $p(A \mid B)$ is the *posterior*. By substituting the mathematical terms with the corresponding words you can see that Bayes theorem matches our update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at *i*, and $z$ for the measurement. Hence, we want to know $P(x_i \mid z)$, that is, the probability of the dog being at $x_i$ given the measurement $z$.
So, let's plug that into the equation and solve it.
$$p(x_i \mid z) = \frac{p(z \mid x_i) p(x_i)}{p(z)}$$
That looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $p(z \mid x_i)$. This is the likelihood, or the probability for the measurement at every cell $x_i$. $p(x_i)$ is the *prior* - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the `update()` function:
```python
def update(likelihood, prior):
    posterior = prior * likelihood  # p(z|x) * p(x)
    return normalize(posterior)
```
The last term to consider is the denominator $p(z)$. This is the probability of getting the measurement $z$ without taking the location into account. It is often called the *evidence*. We compute that by taking the sum of $x$, or `sum(belief)` in the code. That is how we compute the normalization! So, the `update()` function is doing nothing more than computing Bayes' theorem.
The literature often gives you these equations in the form of integrals. After all, an integral is just a sum over a continuous function. So, you might see Bayes' theorem written as
$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{\int p(B \mid A_j) p(A_j) \,\, \mathtt{d}A_j}\cdot$$
This denominator is usually impossible to solve analytically; when it can be solved the math is fiendishly difficult. A recent [opinion piece](http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up) for the Royal Statistical Society called it a "dog's breakfast" [8]. Filtering textbooks that take a Bayesian approach are filled with integral-laden equations with no analytic solution. Do not be cowed by these equations, as we trivially handled this integral by normalizing our posterior. We will learn more techniques to handle this in the **Particle Filters** chapter. Until then, recognize that in practice the denominator is just a normalization term which we can compute by summing. What I'm trying to say is that when you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulties will fade. Ask yourself "why are we summing these values", and "why am I dividing by this term". Surprisingly often the answer is readily apparent. Surprisingly often the author neglects to mention this interpretation.
It's probable that the strength of Bayes' theorem is not yet fully apparent to you. We want to compute $p(x_i \mid Z)$. That is, at step $i$, what is the probability of our state given a measurement? That is an extraordinarily difficult problem in general. Bayes' theorem is general: we may want to know the probability that we have cancer given the results of a cancer test, or the probability of rain given various sensor readings. Stated like that the problems seem unsolvable.
But Bayes' Theorem lets us compute this by using the inverse $p(Z\mid x_i)$, which is often straightforward to compute
$$p(x_i \mid Z) \propto p(Z\mid x_i)\, p(x_i)$$
That is, to compute how likely it is to rain given specific sensor readings we only have to compute the likelihood of the sensor readings given that it is raining! That's a ***much*** easier problem! Well, weather prediction is still a difficult problem, but Bayes makes it tractable.
Likewise, as you saw in the Discrete Bayes chapter, we computed the likelihood that Simon was in any given part of the hallway by computing how likely a sensor reading is given that Simon is at position `x`. A hard problem becomes easy.
### Total Probability Theorem
We now know the formal mathematics behind the `update()` function; what about the `predict()` function? `predict()` implements the [*total probability theorem*](https://en.wikipedia.org/wiki/Law_of_total_probability). Let's recall what `predict()` computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$ $P(X_j^{t-1})$ multiplied by the probability of moving from cell $x_j$ to $x_i$. That is
$$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$
That equation is called the *total probability theorem*. Quoting from Wikipedia [6] "It expresses the total probability of an outcome which can be realized via several distinct events". I could have given you that equation and implemented `predict()`, but your chances of understanding why the equation works would be slim. As a reminder, here is the code that computes this equation
```python
for i in range(N):
    for k in range(kN):
        index = (i + (width-k) - offset) % N
        result[i] += prob_dist[index] * kernel[k]
```
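That snippet relies on variables defined in the *Discrete Bayes* chapter. Here is a self-contained sketch with an illustrative kernel, so you can run it and see the total probability theorem in action:

```python
import numpy as np

def predict(prob_dist, offset, kernel):
    """Sketch of the Discrete Bayes predict step: total probability
    theorem computed as a circular convolution over the cells."""
    N, kN = len(prob_dist), len(kernel)
    width = (kN - 1) // 2
    result = np.zeros(N)
    for i in range(N):
        for k in range(kN):
            index = (i + (width - k) - offset) % N
            result[i] += prob_dist[index] * kernel[k]
    return result

belief = np.array([0., 0., 1., 0., 0., 0., 0., 0.])
# move one cell to the right, with 80% confidence in the movement
print(predict(belief, offset=1, kernel=[0.1, 0.8, 0.1]))
```

The probability mass shifts from cell 2 to cell 3, with some spreading to the neighboring cells to account for movement uncertainty. Note that the result still sums to one.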
## Computing Probabilities with scipy.stats
In this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.
The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the `norm` variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy.
```
from scipy.stats import norm
import filterpy.stats
print(norm(2, 3).pdf(1.5))
print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3))
```
The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so:
```
n23 = norm(2, 3)
print('pdf of 1.5 is %.4f' % n23.pdf(1.5))
print('pdf of 2.5 is also %.4f' % n23.pdf(2.5))
print('pdf of 2 is %.4f' % n23.pdf(2))
```
The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function.
```
np.set_printoptions(precision=3, linewidth=50)
print(n23.rvs(size=15))
```
We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$.
```
# probability that a random value is less than the mean 2
print(n23.cdf(2))
```
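For a Gaussian, `cdf()` has a closed form in terms of the error function, so we can reproduce these values with the standard library alone. This is a sketch of what `norm.cdf` computes, not scipy's implementation:

```python
import math

def norm_cdf(x, mu=0.0, sigma=3.0):
    # CDF of N(mu, sigma^2) via the error function:
    # Phi(x) = 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# The mean splits the distribution in half, matching n23.cdf(2) above:
print(norm_cdf(2, mu=2, sigma=3))                  # 0.5
# Probability of a draw within one standard deviation of the mean:
print(norm_cdf(5, 2, 3) - norm_cdf(-1, 2, 3))      # ≈ 0.68
```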
We can get various properties of the distribution:
```
print('variance is', n23.var())
print('standard deviation is', n23.std())
print('mean is', n23.mean())
```
## Limitations of Using Gaussians to Model the World
Earlier I mentioned the *central limit theorem*, which states that under certain conditions the sum of a large number of independent random variables will be approximately normally distributed, regardless of how the individual variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions.
However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. For example, a kitchen scale cannot read below zero, but if we represent the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a very small chance of giving a negative reading.
This is a broad topic which I will not treat exhaustively.
Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor "grade on a curve" you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability density to *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.
But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution to see how poorly it represents real test score distributions.
```
xs = np.arange(10, 100, 0.05)
ys = [gaussian(x, 90, 13**2) for x in xs]
plt.plot(xs, ys, label='var=169')
plt.xlim(0, 120)
plt.ylim(-0.02, 0.09);
```
The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places.
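We can quantify how badly the Gaussian fits by asking how much probability mass the $N(90, 13^2)$ distribution from the example above assigns to impossible scores; a standard-library sketch:

```python
import math

def norm_cdf(x, mu, sigma):
    # Gaussian CDF via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

mu, sigma = 90, 13
mass_below_0 = norm_cdf(0, mu, sigma)             # ~2e-12: negligible
mass_above_100 = 1.0 - norm_cdf(100, mu, sigma)   # ≈ 0.22: impossible scores!
print(mass_below_0, mass_above_100)
```

Roughly 22% of the probability mass lies above the maximum possible score, which is why the curve over the valid range cannot integrate to one.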
Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution).
Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with:
```
from numpy.random import randn

def sense():
    return 10 + randn()*2
```
Let's plot that signal and see what it looks like.
```
zs = [sense() for i in range(5000)]
plt.plot(zs, lw=1);
```
That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99.7% will be within $\pm$ 6 of 10, and that looks like what is happening.
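We can verify those percentages empirically with a quick standard-library simulation:

```python
import random

random.seed(42)
# same simulated sensor: mean 10, standard deviation 2
zs = [10 + random.gauss(0, 1) * 2 for _ in range(5000)]
within_1 = sum(abs(z - 10) <= 2 for z in zs) / len(zs)   # expect ≈ 0.68
within_3 = sum(abs(z - 10) <= 6 for z in zs) / len(zs)   # expect ≈ 0.997
print(within_1, within_3)
```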
Now let's look at a distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it.
```
import random
import math

def rand_student_t(df, mu=0, std=1):
    """return random number distributed by Student's t
    distribution with `df` degrees of freedom with the
    specified mean and standard deviation.
    """
    x = random.gauss(0, std)
    y = 2.0*random.gammavariate(0.5*df, 2.0)
    return x / (math.sqrt(y / df)) + mu

def sense_t():
    return 10 + rand_student_t(7)*2

zs = [sense_t() for i in range(5000)]
plt.plot(zs, lw=1);
```
We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean (7 to 13).
It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests.
This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers of mission-critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory says they should use 3 standard deviations to distinguish noise from valid measurements, in practice they had to use 5 to 6 standard deviations. This was something they determined by experiment.
The code for `rand_student_t` is included in `filterpy.stats`. You may use it with
```python
from filterpy.stats import rand_student_t
```
While I'll not cover it here, statistics has defined ways of describing the shape of a probability distribution by how it differs from a normal distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. However, a probability distribution can be asymmetrical around the mean. The measure of this is called [*skew*](https://en.wikipedia.org/wiki/Skewness). The tails can be shortened, fatter, thinner, or otherwise shaped differently from a normal distribution. The measure of this is called [*kurtosis*](https://en.wikipedia.org/wiki/Kurtosis). The `scipy.stats` module contains the function `describe` which computes these statistics, among others.
```
import scipy.stats
scipy.stats.describe(zs)
```
Let's examine two normal populations, one small, one large:
```
print(scipy.stats.describe(np.random.randn(10)))
print()
print(scipy.stats.describe(np.random.randn(300000)))
```
The small sample has very non-zero skew and kurtosis because the small number of samples is not well distributed around the mean of 0. You can see this also by comparing the computed mean and variance with the theoretical mean of 0 and variance 1. In comparison the large sample's mean and variance are very close to the theoretical values, and both the skew and kurtosis are near zero.
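The statistics that `describe` reports can be computed from first principles. Here is a standard-library sketch of the sample moments (scipy additionally applies bias corrections, so its numbers will differ slightly):

```python
import random

def moments(xs):
    # sample mean, variance, skew, and excess kurtosis (no bias correction)
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    std = var ** 0.5
    skew = sum(((x - mean) / std) ** 3 for x in xs) / n
    kurt = sum(((x - mean) / std) ** 4 for x in xs) / n - 3.0
    return mean, var, skew, kurt

random.seed(1)
small = [random.gauss(0, 1) for _ in range(10)]
large = [random.gauss(0, 1) for _ in range(300_000)]
print(moments(small))   # noticeably non-zero skew and kurtosis
print(moments(large))   # all close to the theoretical 0, 1, 0, 0
```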
## Product of Gaussians (Optional)
It is not important to read this section. Here I derive the equations for the product of two Gaussians.
You can find this result by multiplying the equation for two Gaussians together and combining terms, but the algebra gets messy. Instead I will derive it using Bayes theorem. We can state the problem as: let the prior be $N(\bar\mu, \bar\sigma^2)$, and the measurement $z$ have likelihood $N(z, \sigma_z^2)$. What is the posterior $x$ given the measurement $z$?
Write the posterior as $p(x \mid z)$. Now we can use Bayes Theorem to state
$$p(x \mid z) = \frac{p(z \mid x)p(x)}{p(z)}$$
$p(z)$ is a normalizing constant, so we can create a proportionality
$$p(x \mid z) \propto p(z|x)p(x)$$
Now we substitute in the equations for the Gaussians, which are
$$p(z \mid x) = \frac{1}{\sqrt{2\pi\sigma_z^2}}\exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]$$
$$p(x) = \frac{1}{\sqrt{2\pi\bar\sigma^2}}\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]$$
We can drop the leading terms, as they are constants, giving us
$$\begin{aligned}
p(x \mid z) &\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]\\
&\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big] \\
&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z-x)^2+\sigma_z^2(x-\bar\mu)^2]\Big]
\end{aligned}$$
Now we multiply out the squared terms and group in terms of the posterior $x$.
$$\begin{aligned}
p(x \mid z) &\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z^2 -2xz + x^2) + \sigma_z^2(x^2 - 2x\bar\mu+\bar\mu^2)]\Big ] \\
&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z) + (\bar\sigma^2z^2+\sigma_z^2\bar\mu^2)]\Big ]
\end{aligned}$$
The last parentheses do not contain the posterior $x$, so they can be treated as a constant and discarded.
$$p(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z)}{\sigma_z^2\bar\sigma^2}\Big ]
$$
Divide numerator and denominator by $\bar\sigma^2+\sigma_z^2$ to get
$$p(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2-2x(\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]
$$
Proportionality allows us to create or delete constants at will, so we can factor this into
$$p(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{(x-\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})^2}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]
$$
A Gaussian is
$$N(\mu,\, \sigma^2) \propto \exp\Big [-\frac{1}{2}\frac{(x - \mu)^2}{\sigma^2}\Big ]$$
So we can see that $p(x \mid z)$ has a mean of
$$\mu_\mathtt{posterior} = \frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2}$$
and a variance of
$$
\sigma^2_\mathtt{posterior} = \frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}
$$
I've dropped the constants, and so the result is not a normal, but proportional to one. Bayes theorem normalizes with the $p(z)$ divisor, ensuring that the result is normal. We normalize in the update step of our filters, ensuring the filter estimate is Gaussian.
$$\mathcal N_1 = \| \mathcal N_2\cdot \mathcal N_3\|$$
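As a sanity check on this derivation, we can multiply two Gaussian densities numerically on a grid and confirm that the mean of the normalized product matches the closed-form posterior mean. The prior and measurement values below are arbitrary illustrations:

```python
import math

def gaussian(x, mu, var):
    # Gaussian probability density function
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

mu_bar, var_bar = 10.0, 4.0    # prior N(10, 4)
z, var_z = 12.0, 1.0           # measurement N(12, 1)

# closed-form result from the derivation above
mu_post = (var_z * mu_bar + var_bar * z) / (var_bar + var_z)   # 11.6
var_post = (var_bar * var_z) / (var_bar + var_z)               # 0.8

# numerical check: multiply the densities on a grid and normalize
dx = 0.001
xs = [i * dx for i in range(0, 25_000)]        # 0..25 covers many sigmas
prod = [gaussian(x, mu_bar, var_bar) * gaussian(x, z, var_z) for x in xs]
total = sum(prod) * dx
mean_num = sum(x * p for x, p in zip(xs, prod)) * dx / total
print(mu_post, var_post, mean_num)
```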
## Sum of Gaussians (Optional)
Likewise, this section is not important to read. Here I derive the equations for the sum of two Gaussians.
The sum of two Gaussians is given by
$$\begin{gathered}\mu = \mu_1 + \mu_2 \\
\sigma^2 = \sigma^2_1 + \sigma^2_2
\end{gathered}$$
There are several proofs for this. I will use convolution since we used convolution in the previous chapter for the histograms of probabilities.
To find the density function of the sum of two Gaussian random variables we convolve the density functions of the two variables. They are continuous functions, so we compute the convolution with an integral. If the random variables $p$ and $z$ (e.g. prior and measurement) are independent we can compute this with
$p(x) = \int\limits_{-\infty}^\infty f_p(x-z)f_z(z)\, dz$
This is the equation for a convolution. Now we just do some math:
$p(x) = \int\limits_{-\infty}^\infty f_z(x-x_1)f_p(x_1)\, dx_1$
$= \int\limits_{-\infty}^\infty
\frac{1}{\sqrt{2\pi}\sigma_z}\exp\left[-\frac{(x - x_1 - \mu_z)^2}{2\sigma^2_z}\right]
\frac{1}{\sqrt{2\pi}\sigma_p}\exp\left[-\frac{(x_1 - \mu_p)^2}{2\sigma^2_p}\right] \, dx_1$
$= \int\limits_{-\infty}^\infty
\frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right]
\frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{\left(x_1 - \frac{\sigma_p^2(x-\mu_z) + \sigma_z^2\mu_p}{\sigma_p^2 + \sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dx_1$
$= \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right] \int\limits_{-\infty}^\infty
\frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{\left(x_1 - \frac{\sigma_p^2(x-\mu_z) + \sigma_z^2\mu_p}{\sigma_p^2 + \sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dx_1$
The expression inside the integral is a normal distribution over $x_1$. The integral of a normal distribution is one, hence the integral evaluates to one. This gives us
$$p(x) = \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right]$$
This is in the form of a normal, where
$$\begin{gathered}\mu_x = \mu_p + \mu_z \\
\sigma_x^2 = \sigma_z^2+\sigma_p^2\, \square\end{gathered}$$
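A quick Monte Carlo check of this result, using only the standard library (the means and standard deviations below are arbitrary):

```python
import random

random.seed(7)
n = 200_000
# sum of draws from N(3, 2^2) and N(-1, 1.5^2)
sums = [random.gauss(3, 2) + random.gauss(-1, 1.5) for _ in range(n)]
mean = sum(sums) / n
var = sum((s - mean) ** 2 for s in sums) / n
print(mean, var)   # expect ≈ 3 + (-1) = 2 and 2^2 + 1.5^2 = 6.25
```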
## Summary and Key Points
This chapter is not a thorough introduction to statistics; I've only covered the concepts needed to use Gaussians in the remainder of the book, no more. What I've covered will not get you very far if you intend to read the Kalman filter literature. If this is a new topic to you I suggest reading a statistics textbook. I've always liked the Schaum series for self study, and Allen Downey's *Think Stats* [5] is also very good and freely available online.
The following points **must** be understood by you before we continue:
* Normals express a continuous probability distribution
* They are completely described by two parameters: the mean ($\mu$) and variance ($\sigma^2$)
* $\mu$ is the average of all possible values
* The variance $\sigma^2$ represents how much our measurements vary from the mean
* The standard deviation ($\sigma$) is the square root of the variance ($\sigma^2$)
* Many things in nature approximate a normal distribution, but the math is not perfect.
* In filtering problems computing $p(x\mid z)$ is nearly impossible, but computing $p(z\mid x)$ is straightforward. Bayes' theorem lets us compute the former from the latter.
The next several chapters will be using Gaussians with Bayes' theorem to help perform filtering. As noted in the last section, sometimes Gaussians do not describe the world very well. Later parts of the book are dedicated to filters which work even when the noise or the system's behavior is highly non-Gaussian.
## References
[1] https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb
[2] http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html
[3] http://docs.scipy.org/doc/scipy/reference/tutorial/stats.html
[4] Huber, Peter J. *Robust Statistical Procedures*, Second Edition. Society for Industrial and Applied Mathematics, 1996.
[5] Downey, Allen. *Think Stats*, Second Edition. O'Reilly Media.
https://github.com/AllenDowney/ThinkStats2
http://greenteapress.com/thinkstats/
[6] https://en.wikipedia.org/wiki/Law_of_total_probability
## Useful Wikipedia Links
https://en.wikipedia.org/wiki/Probability_distribution
https://en.wikipedia.org/wiki/Random_variable
https://en.wikipedia.org/wiki/Sample_space
https://en.wikipedia.org/wiki/Central_tendency
https://en.wikipedia.org/wiki/Expected_value
https://en.wikipedia.org/wiki/Standard_deviation
https://en.wikipedia.org/wiki/Variance
https://en.wikipedia.org/wiki/Probability_density_function
https://en.wikipedia.org/wiki/Central_limit_theorem
https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule
https://en.wikipedia.org/wiki/Cumulative_distribution_function
https://en.wikipedia.org/wiki/Skewness
https://en.wikipedia.org/wiki/Kurtosis
```
%load_ext autotime
import os
from bs4 import BeautifulSoup
import urllib
import urllib.request
import requests
folder_path = os.getcwd()
# data-tags are the tags used for explaining the conditions of using the
# content in the weblink
file_name = '\\data_tags.txt'
file_path = folder_path + file_name
url_link = input('Please enter the url of the website: ')
with open(file_path, encoding="utf-8") as data_file:
    content = data_file.readlines()
data_tags = [l.strip() for l in content]
data_tags
def check_for_data_tags(url):
    '''return the links on the page that match the data tags,
    or an empty list if no match for the data tags is found'''
    with open(file_path, encoding="utf-8") as data_file:
        content = data_file.readlines()
    data_tags = [l.strip() for l in content]
    page = requests.get(url)
    soup = BeautifulSoup(page.content, 'html.parser')
    imp_links = []
    for link in soup.find_all('a', href=True):
        ref_link = link.get('href')
        print(ref_link)
        print(set(ref_link.split('/')))
        num_tag = len(set(ref_link.split('/')).intersection(set(data_tags)))
        if num_tag > 0:
            imp_links.append(ref_link)
    return imp_links
links = check_for_data_tags(url_link)
links
def urls2text(links):
    '''extract text from multiple links into one string'''
    all_text = ''
    for link in links:
        print(link)
        page = requests.get(link)
        soup = BeautifulSoup(page.content, 'html.parser')
        paras = soup.find_all('p')
        print(paras)
        for para in paras:
            all_text = all_text + ' ' + para.getText()
    return all_text
l = 'https://www.wsj.com/policy/data-policy'
page = requests.get(l)
soup = BeautifulSoup(page.content, 'html.parser')
container = soup.find_all("div")
print(container)
# find_all returns a ResultSet, so iterate over each div and
# extract its paragraphs individually
for div in container:
    for para in div.find_all('p'):
        print(para.getText())
text = urls2text(links)
text
def clean_text(text):
    import re
    from nltk.corpus import stopwords
    text = text.lower()
    punc = '!"#$%&\'()*+,-/:;<=>?@[\\]^_`{|}~'
    exclud = set(punc)  # set of punctuation characters
    text = re.sub(r'[0-9]+', r' ', text)
    text = re.sub(r' +', ' ', text)
    text = re.sub(r'\. +', '. ', text)
    text = ''.join(ch for ch in text if ch not in exclud)
    text = re.sub(r'( \. )+', ' ', text)
    text = re.sub(r'\.+ ', '. ', text)
    text = re.sub(r' \. ', ' ', text)
    return text
full_text = clean_text(text)
full_text
def text2binary_for_scrape(text):
    '''extract the sentence that contains important information about data usage'''
    full_text = clean_text(text)
    full_text_sent = full_text.split('. ')
    full_tok_sent = []
    for sent in full_text_sent:
        full_tok_sent.append(sent.split(' '))
    file_path = folder_path + '\\data_usage_terms_tags.txt'
    with open(file_path, encoding="utf-8") as data_file:
        content = data_file.readlines()
    data_usage_tags = [l.strip() for l in content]
    max_num_common_terms = 0
    imp_tok_sent = ''
    num_no_term = 0
    msg = 'Please check the terms manually'
    for tok_sent in full_tok_sent:
        num_common_terms = len(set(tok_sent).intersection(set(data_usage_tags)))
        num_no_term = len(set(tok_sent).intersection(set(['no', 'not'])))
        print(num_common_terms, num_no_term)
        if num_common_terms > max_num_common_terms:
            max_num_common_terms = num_common_terms
            imp_tok_sent = tok_sent
    if num_no_term >= 1:
        can_scrape = 'no'
        sent = ' '.join(imp_tok_sent)
        msg = 'You cannot scrape text from the website. Here are more details: ' + ' "' + sent + '"'
    else:
        can_scrape = 'not sure'
        msg = 'Please check the terms as they are complicated to understand by our system'
    return msg
text2binary_for_scrape(text)
```
# 1- Business Understanding
The aim of this notebook is to create a model that can identify which users are likely to buy a product after seeing a promotion, so that the marketing campaign can be targeted and thus make more profit.
We must work with the features V1 through V7 without knowing what each feature represents.
We will explore 2 approaches with different sampling techniques:
1. Logistic Regression
2. Uplift modeling
The data used is provided by Starbucks, from one of their take-home assignments for job candidates.
## Assumptions
- Although a single individual could be represented by multiple data points, for simplicity, we will assume that each data point represents a single individual.
## Loading the data and packages
```
# load the packages
from itertools import combinations
from driver.get_results import test_results, score
from driver.utils import promotion_strategy
import numpy as np
import pandas as pd
import scipy as sp
import sklearn as sk
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, precision_score, recall_score, accuracy_score
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize
from imblearn.over_sampling import SMOTE
import xgboost as xgb
import matplotlib.pyplot as plt
import seaborn as sb
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
```
The train and test data are provided by Starbucks, split 2:1.
# 2- Data Understanding
```
# load the train data
train_data = pd.read_csv('./data/Training.csv')
train_data.head()
# load the test data
test_data = pd.read_csv('./data/Test.csv')
test_data.head()
```
---
Get Statistics and types of the data
```
train_data.shape
#test_data.shape
train_data.describe()
# test_data.describe()
train_data.dtypes
# test_data.dtypes
```
---
Check for missing values
```
train_data.isnull().sum()
# test_data.isnull().sum()
```
Checking the promotion values
```
train_promotion = train_data['Promotion'].value_counts()
train_promotion_rate = train_promotion/train_data.shape[0]
train_promotion, train_promotion_rate
```
Checking the purchase values
```
train_purchase = train_data['purchase'].value_counts()
train_purchase_rate = train_purchase/train_data.shape[0]
train_purchase, train_purchase_rate, train_purchase[0]/train_purchase[1]
```
**It is important to note that there is a huge imbalance between the number of data points that did not purchase and the number that did purchase (80.3 times as many).**
If we do not account for this imbalance, the model is most likely to predict that no data point makes a purchase.
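To see why this matters, consider the accuracy that a trivial always-predict-no-purchase model would score at roughly this 80:1 ratio. The counts below are illustrative, not the actual dataset sizes:

```python
# hypothetical counts at an ~80:1 no-purchase : purchase ratio
n_no, n_yes = 83200, 1040

# a "model" that always predicts "no purchase" is right on every
# no-purchase row and wrong on every purchase row
accuracy_of_always_no = n_no / (n_no + n_yes)
print(round(accuracy_of_always_no, 4))   # 0.9877 -- high accuracy, useless model
```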
---
### Exploring the features V1 --> V7
```
# put the features in a list to draw their plots
features = ['V1', 'V2', 'V3', 'V4', 'V5', 'V6', 'V7']
train_data[(train_data['Promotion']=='Yes') & (train_data['purchase']==1)][features].hist(figsize=(15,15))
plt.suptitle('Train Data - Features of Data Points that got the promotion and did purchase', fontsize=16);
# test_data[(test_data['Promotion']=='Yes') & (test_data['purchase']==1)][features].hist(figsize=(15,15))
# plt.suptitle('Test Data - Features of Data Points that got the promotion and did purchase', fontsize=16);
train_data[(train_data['Promotion']=='Yes') & (train_data['purchase']==0)][features].hist(figsize=(15,15))
plt.suptitle('Train Data - Features of Data Points that got the promotion but did not purchase', fontsize=16);
# test_data[(test_data['Promotion']=='Yes') & (test_data['purchase']==0)][features].hist(figsize=(15,15))
# plt.suptitle('Test Data - Features of Data Points that got the promotion but did not purchase', fontsize=16);
train_data[(train_data['Promotion']=='No') & (train_data['purchase']==1)][features].hist(figsize=(15,15))
plt.suptitle('Train Data - Features of Data Points that did not get the promotion but did purchase', fontsize=16);
# test_data[(test_data['Promotion']=='No') & (test_data['purchase']==1)][features].hist(figsize=(15,15))
# plt.suptitle('Test Data - Features of Data Points that did not get the promotion but did purchase', fontsize=16);
train_data[(train_data['Promotion']=='No') & (train_data['purchase']==0)][features].hist(figsize=(15,15))
plt.suptitle('Train Data - Features of Data Points that did not get the promotion and did not purchase', fontsize=16);
# test_data[(test_data['Promotion']=='No') & (test_data['purchase']==0)][features].hist(figsize=(15,15))
# plt.suptitle('Test Data - Features of Data Points that did not get the promotion and did not purchase', fontsize=16);
```
### What if we were to send the promotion to everyone?
```
test_results(promotion_strategy, 'all_purchase')
```
By sending everyone a promotion, the IRR = 0.96% and the NIR = -$1,132.20.
These numbers clearly show that an optimization strategy is needed: it would earn the company money instead of losing it on this promotion campaign, and produce better IRR results.
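For reference, the two metrics can be sketched as follows. The $10 product price and $0.15 promotion cost are assumptions about the assignment's setup, not values given in this notebook:

```python
def irr_nir(purch_treat, cust_treat, purch_ctrl, cust_ctrl,
            price=10.0, promo_cost=0.15):
    """IRR: incremental response rate -- the extra purchase rate among
    promoted customers relative to the control group.
    NIR: net incremental revenue -- extra revenue from promoted purchases
    minus the cost of sending the promotions.
    (Sketch only; price and promotion cost are assumed values.)"""
    irr = purch_treat / cust_treat - purch_ctrl / cust_ctrl
    nir = (price * purch_treat - promo_cost * cust_treat) - price * purch_ctrl
    return irr, nir

# illustrative counts: 100 customers in each group
print(irr_nir(purch_treat=20, cust_treat=100, purch_ctrl=10, cust_ctrl=100))
# -> (0.1, 85.0)
```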
# 3- Data Preparation
```
train_no_purch = train_data.loc[train_data['purchase'] == 0]
train_no_purch.shape
train_purch = train_data.loc[train_data['purchase'] == 1]
train_purch.shape
# Randomly sample 1040 rows from not purchased dataset
train_no_purch_sampled = train_no_purch.sample(n=1040)
train_no_purch_sampled.shape
# new training dataset with half purchased and half not purchased
df_train = pd.concat([train_no_purch_sampled, train_purch], axis=0)
df_train.head()
```
---
**We can see from the charts above that columns V1, V4, V5, V6, and V7 contain categorical variables.**
Since splitting these variables will not add a huge number of columns, we do not have a scaling problem here and can split each category into a separate column.
```
# split categorical variables into dummy columns
df_train = pd.get_dummies(data=df_train, columns=['V1','V4', 'V5','V6','V7'])
df_train.head()
# create training and testing vars
x = df_train.loc[:,'V2':]
y = df_train['purchase']
X_train, X_valid, Y_train, Y_valid = train_test_split(x, y, test_size=0.3, random_state=42)
print(X_train.shape, Y_train.shape)
print(X_valid.shape, Y_valid.shape)
```
# 4- Modeling
# Approach 1 - Logistic Regression
```
# logistic regression modeling on new training data df_train
logistic_model = LogisticRegression()
logistic_model.fit(X_train, Y_train)
preds = logistic_model.predict(X_valid)
confusion_matrix(Y_valid, preds)
# precision_score(y_valid, preds)
# accuracy_score(y_valid, preds)
recall_score(Y_valid, preds)
fig, ax= plt.subplots(figsize=(10,10))
sb.heatmap(confusion_matrix(Y_valid, preds), annot=True, fmt='g', cmap='Blues', ax = ax);
ax.set_xlabel('Predicted labels');
ax.set_ylabel('True labels');
ax.set_title('Confusion Matrix');
ax.xaxis.set_ticklabels(['No Purchase', 'Made Purchase']);
ax.yaxis.set_ticklabels(['No Purchase', 'Made Purchase']);
#transform 0/1 array to Yes/No array
my_map = {0: "No", 1: "Yes"}
promotion = np.vectorize(my_map.get)(preds)
promotion
test_results(promotion_strategy,'logistic_regression', model=logistic_model)
```
# Approach 2 - Uplift Modeling - XGBoost
We will train a predictive model on only the treatment group, in other words the data points that received the promotion.
We will split the data into data points that purchased or did not purchase, then use the SMOTE technique to upsample the minority (purchase) class on the training set only, which guarantees an equal number of data points for each class.
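SMOTE creates synthetic minority samples by interpolating between a minority point and one of its nearest minority neighbors. A minimal sketch of that core step (not imblearn's implementation, which also performs the neighbor search):

```python
import random

random.seed(0)

def synthetic_point(a, b):
    # place a new sample at a random position on the segment between
    # minority sample `a` and its neighbor `b`
    lam = random.random()
    return tuple(ai + lam * (bi - ai) for ai, bi in zip(a, b))

a, b = (1.0, 2.0), (3.0, 6.0)
p = synthetic_point(a, b)
print(p)   # lies somewhere on the segment from (1, 2) to (3, 6)
```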
```
# Data points that made a purchase after receiving a promotion will be assigned a label of 1,
# The other Data points will be given a label of 0
response = []
for index, row in train_data.iterrows():
    if (row['purchase'] == 1) and (row['Promotion'] == 'Yes'):
        response.append(1.0)
    else:
        response.append(0.0)
train_data['response'] = response
train, valid = sk.model_selection.train_test_split(train_data, test_size=0.2,random_state=42)
# generate features and labels
Y_train = train['response']
X_train = train[features] # features is a list containing the features from V1 up to V7
Y_valid = valid['response']
X_valid = valid[features]
# up sample only the train dataset with SMOTE
sm = SMOTE(random_state=42, sampling_strategy = 1.0)
X_train_upsamp, Y_train_upsamp = sm.fit_resample(X_train, Y_train)
X_train_upsamp = pd.DataFrame(X_train_upsamp, columns=features)
Y_train_upsamp = pd.Series(Y_train_upsamp)
# Train the xgboost model
eval_set = [(X_train_upsamp, Y_train_upsamp), (X_valid, Y_valid)]
uplift_model = xgb.XGBClassifier(learning_rate = 0.1, max_depth = 7, min_child_weight = 5, objective = 'binary:logistic', seed = 42, gamma = 0.1, silent = True)
uplift_model.fit(X_train_upsamp, Y_train_upsamp, eval_set=eval_set, eval_metric="auc", verbose=True, early_stopping_rounds=30)
# check which features are important
from xgboost import plot_importance
from matplotlib import pyplot
fig, ax = pyplot.subplots(figsize=(10, 10));
xgb.plot_importance(uplift_model, ax=ax);
# confusion matrix for the validation set
valid_pred = uplift_model.predict(X_valid, ntree_limit=uplift_model.best_ntree_limit)
cm = sk.metrics.confusion_matrix
cm(Y_valid, valid_pred)
# plot the confusion matrix
fig, ax= plt.subplots(figsize=(10,10))
sb.heatmap(cm(Y_valid, valid_pred), annot=True, fmt='g', ax = ax, cmap="Blues");
# labels, title and ticks
ax.set_xlabel('Predicted labels');
ax.set_ylabel('True labels');
ax.set_title('Confusion Matrix');
ax.xaxis.set_ticklabels(['No Purchase', 'Made Purchase']);
ax.yaxis.set_ticklabels(['No Purchase', 'Made Purchase']);
# This will test your results, and provide you back some information on how well your promotion_strategy will work in practice
test_results(promotion_strategy, tpe='uplift', model=uplift_model)
```
---
# Conclusion
Using the Logistic Regression (LR) approach, we were able to exceed Starbucks' expected net incremental revenue (\\$420.70), while with the Uplift model we got an NIR (\\$182) close to Starbucks' expected value. Moreover, both Incremental Response Rates were close (1.92% and 1.88% respectively). The LR model outperformed the Uplift model on both metrics (NIR and IRR).
While there was a big imbalance in the purchase data that needed to be handled, the results showed a clear benefit for Starbucks in using targeted promotions with either of these two models. Even though more experiments and sampling could be performed to produce better predictions, the results were satisfactory compared to Starbucks' goals.
# RadarCOVID-Report
## Data Extraction
```
import datetime
import json
import logging
import os
import shutil
import tempfile
import textwrap
import uuid
import matplotlib.pyplot as plt
import matplotlib.ticker
import numpy as np
import pandas as pd
import pycountry
import retry
import seaborn as sns
%matplotlib inline
current_working_directory = os.environ.get("PWD")
if current_working_directory:
    os.chdir(current_working_directory)
sns.set()
matplotlib.rcParams["figure.figsize"] = (15, 6)
extraction_datetime = datetime.datetime.utcnow()
extraction_date = extraction_datetime.strftime("%Y-%m-%d")
extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1)
extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d")
extraction_date_with_hour = datetime.datetime.utcnow().strftime("%Y-%m-%d@%H")
current_hour = datetime.datetime.utcnow().hour
are_today_results_partial = current_hour != 23
```
### Constants
```
from Modules.ExposureNotification import exposure_notification_io
spain_region_country_code = "ES"
germany_region_country_code = "DE"
default_backend_identifier = spain_region_country_code
backend_generation_days = 7 * 2
daily_summary_days = 7 * 4 * 3
daily_plot_days = 7 * 4
tek_dumps_load_limit = daily_summary_days + 1
```
### Parameters
```
environment_backend_identifier = os.environ.get("RADARCOVID_REPORT__BACKEND_IDENTIFIER")
if environment_backend_identifier:
report_backend_identifier = environment_backend_identifier
else:
report_backend_identifier = default_backend_identifier
report_backend_identifier
environment_enable_multi_backend_download = \
os.environ.get("RADARCOVID_REPORT__ENABLE_MULTI_BACKEND_DOWNLOAD")
if environment_enable_multi_backend_download:
report_backend_identifiers = None
else:
report_backend_identifiers = [report_backend_identifier]
report_backend_identifiers
environment_invalid_shared_diagnoses_dates = \
os.environ.get("RADARCOVID_REPORT__INVALID_SHARED_DIAGNOSES_DATES")
if environment_invalid_shared_diagnoses_dates:
invalid_shared_diagnoses_dates = environment_invalid_shared_diagnoses_dates.split(",")
else:
invalid_shared_diagnoses_dates = []
invalid_shared_diagnoses_dates
```
### COVID-19 Cases
```
report_backend_client = \
exposure_notification_io.get_backend_client_with_identifier(
backend_identifier=report_backend_identifier)
@retry.retry(tries=10, delay=10, backoff=1.1, jitter=(0, 10))
def download_cases_dataframe():
return pd.read_csv("https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv")
confirmed_df_ = download_cases_dataframe()
confirmed_df_.iloc[0]
confirmed_df = confirmed_df_.copy()
confirmed_df = confirmed_df[["date", "new_cases", "iso_code"]]
confirmed_df.rename(
columns={
"date": "sample_date",
"iso_code": "country_code",
},
inplace=True)
def convert_iso_alpha_3_to_alpha_2(x):
try:
return pycountry.countries.get(alpha_3=x).alpha_2
except Exception as e:
logging.info(f"Error converting country ISO Alpha 3 code '{x}': {repr(e)}")
return None
confirmed_df["country_code"] = confirmed_df.country_code.apply(convert_iso_alpha_3_to_alpha_2)
confirmed_df.dropna(inplace=True)
confirmed_df["sample_date"] = pd.to_datetime(confirmed_df.sample_date, dayfirst=True)
confirmed_df["sample_date"] = confirmed_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_df.sort_values("sample_date", inplace=True)
confirmed_df.tail()
confirmed_days = pd.date_range(
start=confirmed_df.iloc[0].sample_date,
end=extraction_datetime)
confirmed_days_df = pd.DataFrame(data=confirmed_days, columns=["sample_date"])
confirmed_days_df["sample_date_string"] = \
confirmed_days_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_days_df.tail()
def sort_source_regions_for_display(source_regions: list) -> list:
if report_backend_identifier in source_regions:
source_regions = [report_backend_identifier] + \
list(sorted(set(source_regions).difference([report_backend_identifier])))
else:
source_regions = list(sorted(source_regions))
return source_regions
report_source_regions = report_backend_client.source_regions_for_date(
date=extraction_datetime.date())
report_source_regions = sort_source_regions_for_display(
source_regions=report_source_regions)
report_source_regions
def get_cases_dataframe(source_regions_for_date_function, columns_suffix=None):
source_regions_at_date_df = confirmed_days_df.copy()
source_regions_at_date_df["source_regions_at_date"] = \
source_regions_at_date_df.sample_date.apply(
lambda x: source_regions_for_date_function(date=x))
source_regions_at_date_df.sort_values("sample_date", inplace=True)
source_regions_at_date_df["_source_regions_group"] = source_regions_at_date_df. \
source_regions_at_date.apply(lambda x: ",".join(sort_source_regions_for_display(x)))
source_regions_at_date_df.tail()
#%%
source_regions_for_summary_df_ = \
source_regions_at_date_df[["sample_date", "_source_regions_group"]].copy()
source_regions_for_summary_df_.rename(columns={"_source_regions_group": "source_regions"}, inplace=True)
source_regions_for_summary_df_.tail()
#%%
confirmed_output_columns = ["sample_date", "new_cases", "covid_cases"]
confirmed_output_df = pd.DataFrame(columns=confirmed_output_columns)
for source_regions_group, source_regions_group_series in \
source_regions_at_date_df.groupby("_source_regions_group"):
source_regions_set = set(source_regions_group.split(","))
confirmed_source_regions_set_df = \
confirmed_df[confirmed_df.country_code.isin(source_regions_set)].copy()
confirmed_source_regions_group_df = \
confirmed_source_regions_set_df.groupby("sample_date").new_cases.sum() \
.reset_index().sort_values("sample_date")
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df.merge(
confirmed_days_df[["sample_date_string"]].rename(
columns={"sample_date_string": "sample_date"}),
how="right")
confirmed_source_regions_group_df["covid_cases"] = \
confirmed_source_regions_group_df.new_cases.rolling(7, min_periods=0).mean().round()
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[confirmed_output_columns]
confirmed_source_regions_group_df = confirmed_source_regions_group_df.replace(0, np.nan)
confirmed_source_regions_group_df.fillna(method="ffill", inplace=True)
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[
confirmed_source_regions_group_df.sample_date.isin(
source_regions_group_series.sample_date_string)]
confirmed_output_df = confirmed_output_df.append(confirmed_source_regions_group_df)
result_df = confirmed_output_df.copy()
result_df.tail()
#%%
result_df.rename(columns={"sample_date": "sample_date_string"}, inplace=True)
result_df = confirmed_days_df[["sample_date_string"]].merge(result_df, how="left")
result_df.sort_values("sample_date_string", inplace=True)
result_df.fillna(method="ffill", inplace=True)
result_df.tail()
#%%
result_df[["new_cases", "covid_cases"]].plot()
if columns_suffix:
result_df.rename(
columns={
"new_cases": "new_cases_" + columns_suffix,
"covid_cases": "covid_cases_" + columns_suffix},
inplace=True)
return result_df, source_regions_for_summary_df_
confirmed_eu_df, source_regions_for_summary_df = get_cases_dataframe(
report_backend_client.source_regions_for_date)
confirmed_es_df, _ = get_cases_dataframe(
lambda date: [spain_region_country_code],
columns_suffix=spain_region_country_code.lower())
```
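The `covid_cases` column built above is a 7-day rolling mean of the summed daily case counts. A minimal standalone sketch of that smoothing step, using made-up case numbers:

```python
import pandas as pd

# Hypothetical daily new-case counts for a short date range (illustrative only).
cases = pd.DataFrame({
    "sample_date": pd.date_range("2021-01-01", periods=7).strftime("%Y-%m-%d"),
    "new_cases": [10, 20, 30, 40, 50, 60, 70],
})
# Same call as in the notebook: a 7-day rolling mean, allowing shorter windows
# at the start of the series (min_periods=0), rounded to whole cases.
cases["covid_cases"] = cases.new_cases.rolling(7, min_periods=0).mean().round()
```

With `min_periods=0` the first rows average over however many days exist so far, so the smoothed series starts at 10 and ramps up to 40 rather than beginning with NaNs.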
### Extract API TEKs
```
raw_zip_path_prefix = "Data/TEKs/Raw/"
base_backend_identifiers = [report_backend_identifier]
multi_backend_exposure_keys_df = \
exposure_notification_io.download_exposure_keys_from_backends(
backend_identifiers=report_backend_identifiers,
generation_days=backend_generation_days,
fail_on_error_backend_identifiers=base_backend_identifiers,
save_raw_zip_path_prefix=raw_zip_path_prefix)
multi_backend_exposure_keys_df["region"] = multi_backend_exposure_keys_df["backend_identifier"]
multi_backend_exposure_keys_df.rename(
columns={
"generation_datetime": "sample_datetime",
"generation_date_string": "sample_date_string",
},
inplace=True)
multi_backend_exposure_keys_df.head()
early_teks_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.rolling_period < 144].copy()
early_teks_df["rolling_period_in_hours"] = early_teks_df.rolling_period / 6
early_teks_df[early_teks_df.sample_date_string != extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
early_teks_df[early_teks_df.sample_date_string == extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
multi_backend_exposure_keys_df = multi_backend_exposure_keys_df[[
"sample_date_string", "region", "key_data"]]
multi_backend_exposure_keys_df.head()
active_regions = \
multi_backend_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
active_regions
multi_backend_summary_df = multi_backend_exposure_keys_df.groupby(
["sample_date_string", "region"]).key_data.nunique().reset_index() \
.pivot(index="sample_date_string", columns="region") \
.sort_index(ascending=False)
multi_backend_summary_df.rename(
columns={"key_data": "shared_teks_by_generation_date"},
inplace=True)
multi_backend_summary_df.rename_axis("sample_date", inplace=True)
multi_backend_summary_df = multi_backend_summary_df.fillna(0).astype(int)
multi_backend_summary_df = multi_backend_summary_df.head(backend_generation_days)
multi_backend_summary_df.head()
def compute_keys_cross_sharing(x):
teks_x = x.key_data_x.item()
common_teks = set(teks_x).intersection(x.key_data_y.item())
common_teks_fraction = len(common_teks) / len(teks_x)
return pd.Series(dict(
common_teks=common_teks,
common_teks_fraction=common_teks_fraction,
))
multi_backend_exposure_keys_by_region_df = \
multi_backend_exposure_keys_df.groupby("region").key_data.unique().reset_index()
multi_backend_exposure_keys_by_region_df["_merge"] = True
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_df.merge(
multi_backend_exposure_keys_by_region_df, on="_merge")
multi_backend_exposure_keys_by_region_combination_df.drop(
columns=["_merge"], inplace=True)
if multi_backend_exposure_keys_by_region_combination_df.region_x.nunique() > 1:
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_combination_df[
multi_backend_exposure_keys_by_region_combination_df.region_x !=
multi_backend_exposure_keys_by_region_combination_df.region_y]
multi_backend_exposure_keys_cross_sharing_df = \
multi_backend_exposure_keys_by_region_combination_df \
.groupby(["region_x", "region_y"]) \
.apply(compute_keys_cross_sharing) \
.reset_index()
multi_backend_cross_sharing_summary_df = \
multi_backend_exposure_keys_cross_sharing_df.pivot_table(
values=["common_teks_fraction"],
columns="region_x",
index="region_y",
aggfunc=lambda x: x.item())
multi_backend_cross_sharing_summary_df
multi_backend_without_active_region_exposure_keys_df = \
multi_backend_exposure_keys_df[multi_backend_exposure_keys_df.region != report_backend_identifier]
multi_backend_without_active_region = \
multi_backend_without_active_region_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
multi_backend_without_active_region
exposure_keys_summary_df = multi_backend_exposure_keys_df[
    multi_backend_exposure_keys_df.region == report_backend_identifier].copy()
exposure_keys_summary_df.drop(columns=["region"], inplace=True)
exposure_keys_summary_df = \
exposure_keys_summary_df.groupby(["sample_date_string"]).key_data.nunique().to_frame()
exposure_keys_summary_df = \
exposure_keys_summary_df.reset_index().set_index("sample_date_string")
exposure_keys_summary_df.sort_index(ascending=False, inplace=True)
exposure_keys_summary_df.rename(columns={"key_data": "shared_teks_by_generation_date"}, inplace=True)
exposure_keys_summary_df.head()
```
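`compute_keys_cross_sharing` above boils down to an intersection ratio between two backends' key sets: which fraction of backend (A)'s TEKs also appear in backend (B). A toy illustration with hypothetical key values:

```python
# Hypothetical TEK sets for two backends (A) and (B); key names are invented.
teks_a = {"k1", "k2", "k3", "k4"}
teks_b = {"k3", "k4", "k5"}

# Mirrors common_teks / common_teks_fraction in the cell above:
# the share of A's keys that are also available in B.
common_teks = teks_a & teks_b
common_teks_fraction = len(common_teks) / len(teks_a)  # 2 / 4 = 0.5
```

Note the ratio is asymmetric: computed from B's side it would be 2/3, which is why the summary table is indexed by both `region_x` and `region_y`.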
### Dump API TEKs
```
tek_list_df = multi_backend_exposure_keys_df[
["sample_date_string", "region", "key_data"]].copy()
tek_list_df["key_data"] = tek_list_df["key_data"].apply(str)
tek_list_df.rename(columns={
"sample_date_string": "sample_date",
"key_data": "tek_list"}, inplace=True)
tek_list_df = tek_list_df.groupby(
["sample_date", "region"]).tek_list.unique().reset_index()
tek_list_df["extraction_date"] = extraction_date
tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour
tek_list_path_prefix = "Data/TEKs/"
tek_list_current_path = tek_list_path_prefix + "Current/RadarCOVID-TEKs.json"
tek_list_daily_path = tek_list_path_prefix + f"Daily/RadarCOVID-TEKs-{extraction_date}.json"
tek_list_hourly_path = tek_list_path_prefix + f"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json"
for path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]:
os.makedirs(os.path.dirname(path), exist_ok=True)
tek_list_base_df = tek_list_df[tek_list_df.region == report_backend_identifier]
tek_list_base_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json(
tek_list_current_path,
lines=True, orient="records")
tek_list_base_df.drop(columns=["extraction_date_with_hour"]).to_json(
tek_list_daily_path,
lines=True, orient="records")
tek_list_base_df.to_json(
tek_list_hourly_path,
lines=True, orient="records")
tek_list_base_df.head()
```
### Load TEK Dumps
```
import glob
def load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame:
extracted_teks_df = pd.DataFrame(columns=["region"])
file_paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + "/RadarCOVID-TEKs-*.json"))))
if limit:
file_paths = file_paths[:limit]
for file_path in file_paths:
logging.info(f"Loading TEKs from '{file_path}'...")
iteration_extracted_teks_df = pd.read_json(file_path, lines=True)
extracted_teks_df = extracted_teks_df.append(
iteration_extracted_teks_df, sort=False)
extracted_teks_df["region"] = \
extracted_teks_df.region.fillna(spain_region_country_code).copy()
if region:
extracted_teks_df = \
extracted_teks_df[extracted_teks_df.region == region]
return extracted_teks_df
daily_extracted_teks_df = load_extracted_teks(
mode="Daily",
region=report_backend_identifier,
limit=tek_dumps_load_limit)
daily_extracted_teks_df.head()
exposure_keys_summary_df_ = daily_extracted_teks_df \
.sort_values("extraction_date", ascending=False) \
.groupby("sample_date").tek_list.first() \
.to_frame()
exposure_keys_summary_df_.index.name = "sample_date_string"
exposure_keys_summary_df_["tek_list"] = \
exposure_keys_summary_df_.tek_list.apply(len)
exposure_keys_summary_df_ = exposure_keys_summary_df_ \
.rename(columns={"tek_list": "shared_teks_by_generation_date"}) \
.sort_index(ascending=False)
exposure_keys_summary_df = exposure_keys_summary_df_
exposure_keys_summary_df.head()
```
### Daily New TEKs
```
tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply(
lambda x: set(sum(x, []))).reset_index()
tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True)
tek_list_df.head()
def compute_teks_by_generation_and_upload_date(date):
day_new_teks_set_df = tek_list_df.copy().diff()
try:
day_new_teks_set = day_new_teks_set_df[
day_new_teks_set_df.index == date].tek_list.item()
except ValueError:
day_new_teks_set = None
if pd.isna(day_new_teks_set):
day_new_teks_set = set()
day_new_teks_df = daily_extracted_teks_df[
daily_extracted_teks_df.extraction_date == date].copy()
day_new_teks_df["shared_teks"] = \
day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set))
day_new_teks_df["shared_teks"] = \
day_new_teks_df.shared_teks.apply(len)
day_new_teks_df["upload_date"] = date
day_new_teks_df.rename(columns={"sample_date": "generation_date"}, inplace=True)
day_new_teks_df = day_new_teks_df[
["upload_date", "generation_date", "shared_teks"]]
day_new_teks_df["generation_to_upload_days"] = \
(pd.to_datetime(day_new_teks_df.upload_date) -
pd.to_datetime(day_new_teks_df.generation_date)).dt.days
day_new_teks_df = day_new_teks_df[day_new_teks_df.shared_teks > 0]
return day_new_teks_df
shared_teks_generation_to_upload_df = pd.DataFrame()
for upload_date in daily_extracted_teks_df.extraction_date.unique():
shared_teks_generation_to_upload_df = \
shared_teks_generation_to_upload_df.append(
compute_teks_by_generation_and_upload_date(date=upload_date))
shared_teks_generation_to_upload_df \
.sort_values(["upload_date", "generation_date"], ascending=False, inplace=True)
shared_teks_generation_to_upload_df.tail()
today_new_teks_df = \
shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.upload_date == extraction_date].copy()
today_new_teks_df.tail()
if not today_new_teks_df.empty:
today_new_teks_df.set_index("generation_to_upload_days") \
.sort_index().shared_teks.plot.bar()
generation_to_upload_period_pivot_df = \
shared_teks_generation_to_upload_df[
["upload_date", "generation_to_upload_days", "shared_teks"]] \
.pivot(index="upload_date", columns="generation_to_upload_days") \
.sort_index(ascending=False).fillna(0).astype(int) \
.droplevel(level=0, axis=1)
generation_to_upload_period_pivot_df.head()
new_tek_df = tek_list_df.diff().tek_list.apply(
lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index()
new_tek_df.rename(columns={
"tek_list": "shared_teks_by_upload_date",
"extraction_date": "sample_date_string",}, inplace=True)
new_tek_df.tail()
shared_teks_uploaded_on_generation_date_df = shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.generation_to_upload_days == 0] \
[["upload_date", "shared_teks"]].rename(
columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_teks_uploaded_on_generation_date",
})
shared_teks_uploaded_on_generation_date_df.head()
estimated_shared_diagnoses_df = shared_teks_generation_to_upload_df \
.groupby(["upload_date"]).shared_teks.max().reset_index() \
.sort_values(["upload_date"], ascending=False) \
.rename(columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_diagnoses",
})
invalid_shared_diagnoses_dates_mask = \
    estimated_shared_diagnoses_df.sample_date_string.isin(invalid_shared_diagnoses_dates)
estimated_shared_diagnoses_df.loc[invalid_shared_diagnoses_dates_mask, "shared_diagnoses"] = 0
estimated_shared_diagnoses_df.head()
```
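The `tek_list_df.diff()` call in this section relies on pandas subtracting consecutive rows elementwise, which for set-valued cells yields a set difference: each row becomes the TEKs first seen on that extraction date. An equivalent explicit version of that step, with hypothetical keys:

```python
import pandas as pd

# Hypothetical cumulative TEK sets observed per extraction date.
daily_teks = pd.Series(
    [{"a", "b"}, {"a", "b", "c"}, {"a", "b", "c", "d", "e"}],
    index=["2021-01-01", "2021-01-02", "2021-01-03"],
)
# Equivalent to daily_teks.diff(): set difference against the previous day;
# the first day has no predecessor, so its "new" set is empty here.
previous = daily_teks.shift()
new_teks = pd.Series(
    [cur - prev if isinstance(prev, set) else set()
     for cur, prev in zip(daily_teks, previous)],
    index=daily_teks.index,
)
```

The daily count of newly shared TEKs is then just `new_teks.apply(len)`, which is what `shared_teks_by_upload_date` captures.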
### Hourly New TEKs
```
hourly_extracted_teks_df = load_extracted_teks(
mode="Hourly", region=report_backend_identifier, limit=25)
hourly_extracted_teks_df.head()
hourly_new_tek_count_df = hourly_extracted_teks_df \
.groupby("extraction_date_with_hour").tek_list. \
apply(lambda x: set(sum(x, []))).reset_index().copy()
hourly_new_tek_count_df = hourly_new_tek_count_df.set_index("extraction_date_with_hour") \
.sort_index(ascending=True)
hourly_new_tek_count_df["new_tek_list"] = hourly_new_tek_count_df.tek_list.diff()
hourly_new_tek_count_df["new_tek_count"] = hourly_new_tek_count_df.new_tek_list.apply(
lambda x: len(x) if not pd.isna(x) else 0)
hourly_new_tek_count_df.rename(columns={
"new_tek_count": "shared_teks_by_upload_date"}, inplace=True)
hourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[
"extraction_date_with_hour", "shared_teks_by_upload_date"]]
hourly_new_tek_count_df.head()
hourly_summary_df = hourly_new_tek_count_df.copy()
hourly_summary_df.set_index("extraction_date_with_hour", inplace=True)
hourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index()
hourly_summary_df["datetime_utc"] = pd.to_datetime(
hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H")
hourly_summary_df.set_index("datetime_utc", inplace=True)
hourly_summary_df = hourly_summary_df.tail(-1)
hourly_summary_df.head()
```
### Official Statistics
```
import requests
official_stats_response = requests.get("https://radarcovid.covid19.gob.es/kpi/statistics/basics")
official_stats_response.raise_for_status()
official_stats_df_ = pd.json_normalize(official_stats_response.json())
official_stats_df = official_stats_df_.copy()
official_stats_df["date"] = pd.to_datetime(official_stats_df["date"], dayfirst=True)
official_stats_df.head()
official_stats_column_map = {
"date": "sample_date",
"applicationsDownloads.totalAcummulated": "app_downloads_es_accumulated",
"communicatedContagions.totalAcummulated": "shared_diagnoses_es_accumulated",
}
accumulated_suffix = "_accumulated"
accumulated_values_columns = \
list(filter(lambda x: x.endswith(accumulated_suffix), official_stats_column_map.values()))
interpolated_values_columns = \
list(map(lambda x: x[:-len(accumulated_suffix)], accumulated_values_columns))
official_stats_df = \
official_stats_df[official_stats_column_map.keys()] \
.rename(columns=official_stats_column_map)
official_stats_df["extraction_date"] = extraction_date
official_stats_df.head()
official_stats_path = "Data/Statistics/Current/RadarCOVID-Statistics.json"
previous_official_stats_df = pd.read_json(official_stats_path, orient="records", lines=True)
previous_official_stats_df["sample_date"] = pd.to_datetime(previous_official_stats_df["sample_date"], dayfirst=True)
official_stats_df = official_stats_df.append(previous_official_stats_df)
official_stats_df.head()
official_stats_df = official_stats_df[~(official_stats_df.shared_diagnoses_es_accumulated == 0)]
official_stats_df.sort_values("extraction_date", ascending=False, inplace=True)
official_stats_df.drop_duplicates(subset=["sample_date"], keep="first", inplace=True)
official_stats_df.head()
official_stats_stored_df = official_stats_df.copy()
official_stats_stored_df["sample_date"] = official_stats_stored_df.sample_date.dt.strftime("%Y-%m-%d")
official_stats_stored_df.to_json(official_stats_path, orient="records", lines=True)
official_stats_df.drop(columns=["extraction_date"], inplace=True)
official_stats_df = confirmed_days_df.merge(official_stats_df, how="left")
official_stats_df.sort_values("sample_date", ascending=False, inplace=True)
official_stats_df.head()
official_stats_df[accumulated_values_columns] = \
official_stats_df[accumulated_values_columns] \
.astype(float).interpolate(limit_area="inside")
official_stats_df[interpolated_values_columns] = \
official_stats_df[accumulated_values_columns].diff(periods=-1)
official_stats_df.drop(columns="sample_date", inplace=True)
official_stats_df.head()
```
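The official statistics arrive as accumulated totals with gaps; the cell above interpolates the accumulated columns and then differentiates them into per-day values (with `diff(periods=-1)` because that frame is sorted descending by date). The same idea on an ascending toy series:

```python
import numpy as np
import pandas as pd

# Hypothetical accumulated download counts with one missing day, oldest first.
accumulated = pd.Series([10.0, np.nan, 30.0, 45.0])
# Fill interior gaps linearly, as in the notebook (limit_area="inside"),
# then turn accumulated totals into per-day increments.
filled = accumulated.interpolate(limit_area="inside")  # 10, 20, 30, 45
daily = filled.diff()                                  # NaN, 10, 10, 15
```

`limit_area="inside"` only fills gaps between known values, so leading or trailing missing days stay NaN instead of being extrapolated.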
### Data Merge
```
result_summary_df = exposure_keys_summary_df.merge(
new_tek_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
shared_teks_uploaded_on_generation_date_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
estimated_shared_diagnoses_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
official_stats_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = confirmed_eu_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df = confirmed_es_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string)
result_summary_df = result_summary_df.merge(source_regions_for_summary_df, how="left")
result_summary_df.set_index(["sample_date", "source_regions"], inplace=True)
result_summary_df.drop(columns=["sample_date_string"], inplace=True)
result_summary_df.sort_index(ascending=False, inplace=True)
result_summary_df.head()
with pd.option_context("mode.use_inf_as_na", True):
result_summary_df = result_summary_df.fillna(0).astype(int)
result_summary_df["teks_per_shared_diagnosis"] = \
(result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case"] = \
(result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(result_summary_df.shared_diagnoses_es / result_summary_df.covid_cases_es).fillna(0)
result_summary_df.head(daily_plot_days)
def compute_aggregated_results_summary(days) -> pd.DataFrame:
aggregated_result_summary_df = result_summary_df.copy()
aggregated_result_summary_df["covid_cases_for_ratio"] = \
aggregated_result_summary_df.covid_cases.mask(
aggregated_result_summary_df.shared_diagnoses == 0, 0)
aggregated_result_summary_df["covid_cases_for_ratio_es"] = \
aggregated_result_summary_df.covid_cases_es.mask(
aggregated_result_summary_df.shared_diagnoses_es == 0, 0)
aggregated_result_summary_df = aggregated_result_summary_df \
.sort_index(ascending=True).fillna(0).rolling(days).agg({
"covid_cases": "sum",
"covid_cases_es": "sum",
"covid_cases_for_ratio": "sum",
"covid_cases_for_ratio_es": "sum",
"shared_teks_by_generation_date": "sum",
"shared_teks_by_upload_date": "sum",
"shared_diagnoses": "sum",
"shared_diagnoses_es": "sum",
}).sort_index(ascending=False)
with pd.option_context("mode.use_inf_as_na", True):
aggregated_result_summary_df = aggregated_result_summary_df.fillna(0).astype(int)
aggregated_result_summary_df["teks_per_shared_diagnosis"] = \
(aggregated_result_summary_df.shared_teks_by_upload_date /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case"] = \
(aggregated_result_summary_df.shared_diagnoses /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(aggregated_result_summary_df.shared_diagnoses_es /
aggregated_result_summary_df.covid_cases_for_ratio_es).fillna(0)
return aggregated_result_summary_df
aggregated_result_with_7_days_window_summary_df = compute_aggregated_results_summary(days=7)
aggregated_result_with_7_days_window_summary_df.head()
last_7_days_summary = aggregated_result_with_7_days_window_summary_df.to_dict(orient="records")[1]
last_7_days_summary
aggregated_result_with_14_days_window_summary_df = compute_aggregated_results_summary(days=14)
last_14_days_summary = aggregated_result_with_14_days_window_summary_df.to_dict(orient="records")[1]
last_14_days_summary
```
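`compute_aggregated_results_summary` above masks out case counts on days with zero shared diagnoses, so the windowed usage ratio only counts days for which sharing data actually exists. A condensed sketch of that masking plus a rolling-sum ratio, with invented numbers and a 2-day window instead of 7:

```python
import pandas as pd

# Hypothetical daily figures: case counts and estimated shared diagnoses.
df = pd.DataFrame({
    "covid_cases":      [100, 100, 100, 100],
    "shared_diagnoses": [  5,   0,  10,   5],
})
# As in the notebook: zero out case counts on days without shared diagnoses
# so they don't inflate the ratio denominator, then aggregate over a window.
df["covid_cases_for_ratio"] = df.covid_cases.mask(df.shared_diagnoses == 0, 0)
windowed = df.rolling(2).sum()
ratio = windowed.shared_diagnoses / windowed.covid_cases_for_ratio
```

On day 2 (one reporting day plus one zero day) the ratio is 5/100 rather than 5/200; without the mask, gaps in diagnosis data would systematically drag the usage ratio down.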
## Report Results
```
display_column_name_mapping = {
"sample_date": "Sample\u00A0Date\u00A0(UTC)",
"source_regions": "Source Countries",
"datetime_utc": "Timestamp (UTC)",
"upload_date": "Upload Date (UTC)",
"generation_to_upload_days": "Generation to Upload Period in Days",
"region": "Backend",
"region_x": "Backend\u00A0(A)",
"region_y": "Backend\u00A0(B)",
"common_teks": "Common TEKs Shared Between Backends",
"common_teks_fraction": "Fraction of TEKs in Backend (A) Available in Backend (B)",
"covid_cases": "COVID-19 Cases (Source Countries)",
"shared_teks_by_generation_date": "Shared TEKs by Generation Date (Source Countries)",
"shared_teks_by_upload_date": "Shared TEKs by Upload Date (Source Countries)",
"shared_teks_uploaded_on_generation_date": "Shared TEKs Uploaded on Generation Date (Source Countries)",
"shared_diagnoses": "Shared Diagnoses (Source Countries – Estimation)",
"teks_per_shared_diagnosis": "TEKs Uploaded per Shared Diagnosis (Source Countries)",
"shared_diagnoses_per_covid_case": "Usage Ratio (Source Countries)",
"covid_cases_es": "COVID-19 Cases (Spain)",
"app_downloads_es": "App Downloads (Spain – Official)",
"shared_diagnoses_es": "Shared Diagnoses (Spain – Official)",
"shared_diagnoses_per_covid_case_es": "Usage Ratio (Spain)",
}
summary_columns = [
"covid_cases",
"shared_teks_by_generation_date",
"shared_teks_by_upload_date",
"shared_teks_uploaded_on_generation_date",
"shared_diagnoses",
"teks_per_shared_diagnosis",
"shared_diagnoses_per_covid_case",
"covid_cases_es",
"app_downloads_es",
"shared_diagnoses_es",
"shared_diagnoses_per_covid_case_es",
]
summary_percentage_columns = [
"shared_diagnoses_per_covid_case_es",
"shared_diagnoses_per_covid_case",
]
```
### Daily Summary Table
```
result_summary_df_ = result_summary_df.copy()
result_summary_df = result_summary_df[summary_columns]
result_summary_with_display_names_df = result_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
result_summary_with_display_names_df
```
### Daily Summary Plots
```
result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \
.droplevel(level=["source_regions"]) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
summary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar(
title="Daily Summary",
rot=45, subplots=True, figsize=(15, 30), legend=False)
ax_ = summary_ax_list[0]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.95)
_ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime("%Y-%m-%d").tolist()))
for percentage_column in summary_percentage_columns:
percentage_column_index = summary_columns.index(percentage_column)
summary_ax_list[percentage_column_index].yaxis \
.set_major_formatter(matplotlib.ticker.PercentFormatter(1.0))
```
### Daily Generation to Upload Period Table
```
display_generation_to_upload_period_pivot_df = \
generation_to_upload_period_pivot_df \
.head(backend_generation_days)
display_generation_to_upload_period_pivot_df \
.head(backend_generation_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping)
fig, generation_to_upload_period_pivot_table_ax = plt.subplots(
figsize=(12, 1 + 0.6 * len(display_generation_to_upload_period_pivot_df)))
generation_to_upload_period_pivot_table_ax.set_title(
"Shared TEKs Generation to Upload Period Table")
sns.heatmap(
data=display_generation_to_upload_period_pivot_df
.rename_axis(columns=display_column_name_mapping)
.rename_axis(index=display_column_name_mapping),
fmt=".0f",
annot=True,
ax=generation_to_upload_period_pivot_table_ax)
generation_to_upload_period_pivot_table_ax.get_figure().tight_layout()
```
### Hourly Summary Plots
```
hourly_summary_ax_list = hourly_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.plot.bar(
title="Last 24h Summary",
rot=45, subplots=True, legend=False)
ax_ = hourly_summary_ax_list[-1]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.9)
_ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime("%Y-%m-%d@%H").tolist()))
```
### Publish Results
```
github_repository = os.environ.get("GITHUB_REPOSITORY")
if github_repository is None:
github_repository = "pvieito/Radar-STATS"
github_project_base_url = "https://github.com/" + github_repository
display_formatters = {
display_column_name_mapping["teks_per_shared_diagnosis"]: lambda x: f"{x:.2f}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case"]: lambda x: f"{x:.2%}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case_es"]: lambda x: f"{x:.2%}" if x != 0 else "",
}
general_columns = \
list(filter(lambda x: x not in display_formatters, display_column_name_mapping.values()))
general_formatter = lambda x: f"{x}" if x != 0 else ""
display_formatters.update(dict(map(lambda x: (x, general_formatter), general_columns)))
daily_summary_table_html = result_summary_with_display_names_df \
.head(daily_plot_days) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.to_html(formatters=display_formatters)
multi_backend_summary_table_html = multi_backend_summary_df \
.head(daily_plot_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(formatters=display_formatters)
def format_multi_backend_cross_sharing_fraction(x):
if pd.isna(x):
return "-"
elif round(x * 100, 1) == 0:
return ""
else:
return f"{x:.1%}"
multi_backend_cross_sharing_summary_table_html = multi_backend_cross_sharing_summary_df \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(
classes="table-center",
formatters=display_formatters,
float_format=format_multi_backend_cross_sharing_fraction)
multi_backend_cross_sharing_summary_table_html = \
multi_backend_cross_sharing_summary_table_html \
.replace("<tr>","<tr style=\"text-align: center;\">")
extraction_date_result_summary_df = \
result_summary_df[result_summary_df.index.get_level_values("sample_date") == extraction_date]
extraction_date_result_hourly_summary_df = \
hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour]
covid_cases = \
extraction_date_result_summary_df.covid_cases.item()
shared_teks_by_generation_date = \
extraction_date_result_summary_df.shared_teks_by_generation_date.item()
shared_teks_by_upload_date = \
extraction_date_result_summary_df.shared_teks_by_upload_date.item()
shared_diagnoses = \
extraction_date_result_summary_df.shared_diagnoses.item()
teks_per_shared_diagnosis = \
extraction_date_result_summary_df.teks_per_shared_diagnosis.item()
shared_diagnoses_per_covid_case = \
extraction_date_result_summary_df.shared_diagnoses_per_covid_case.item()
shared_teks_by_upload_date_last_hour = \
extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int)
display_source_regions = ", ".join(report_source_regions)
if len(report_source_regions) == 1:
display_brief_source_regions = report_source_regions[0]
else:
display_brief_source_regions = f"{len(report_source_regions)} 🇪🇺"
def get_temporary_image_path() -> str:
return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png")
def save_temporary_plot_image(ax):
if isinstance(ax, np.ndarray):
ax = ax[0]
media_path = get_temporary_image_path()
ax.get_figure().savefig(media_path)
return media_path
def save_temporary_dataframe_image(df):
import dataframe_image as dfi
df = df.copy()
df_styler = df.style.format(display_formatters)
media_path = get_temporary_image_path()
dfi.export(df_styler, media_path)
return media_path
summary_plots_image_path = save_temporary_plot_image(
ax=summary_ax_list)
summary_table_image_path = save_temporary_dataframe_image(
df=result_summary_with_display_names_df)
hourly_summary_plots_image_path = save_temporary_plot_image(
ax=hourly_summary_ax_list)
multi_backend_summary_table_image_path = save_temporary_dataframe_image(
df=multi_backend_summary_df)
generation_to_upload_period_pivot_table_image_path = save_temporary_plot_image(
ax=generation_to_upload_period_pivot_table_ax)
```
### Save Results
```
report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-"
result_summary_df.to_csv(
report_resources_path_prefix + "Summary-Table.csv")
result_summary_df.to_html(
report_resources_path_prefix + "Summary-Table.html")
hourly_summary_df.to_csv(
report_resources_path_prefix + "Hourly-Summary-Table.csv")
multi_backend_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Summary-Table.csv")
multi_backend_cross_sharing_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Cross-Sharing-Summary-Table.csv")
generation_to_upload_period_pivot_df.to_csv(
report_resources_path_prefix + "Generation-Upload-Period-Table.csv")
_ = shutil.copyfile(
summary_plots_image_path,
report_resources_path_prefix + "Summary-Plots.png")
_ = shutil.copyfile(
summary_table_image_path,
report_resources_path_prefix + "Summary-Table.png")
_ = shutil.copyfile(
hourly_summary_plots_image_path,
report_resources_path_prefix + "Hourly-Summary-Plots.png")
_ = shutil.copyfile(
multi_backend_summary_table_image_path,
report_resources_path_prefix + "Multi-Backend-Summary-Table.png")
_ = shutil.copyfile(
generation_to_upload_period_pivot_table_image_path,
report_resources_path_prefix + "Generation-Upload-Period-Table.png")
```
### Publish Results as JSON
```
def generate_summary_api_results(df: pd.DataFrame) -> list:
api_df = df.reset_index().copy()
api_df["sample_date_string"] = \
api_df["sample_date"].dt.strftime("%Y-%m-%d")
api_df["source_regions"] = \
api_df["source_regions"].apply(lambda x: x.split(","))
return api_df.to_dict(orient="records")
summary_api_results = \
generate_summary_api_results(df=result_summary_df)
today_summary_api_results = \
generate_summary_api_results(df=extraction_date_result_summary_df)[0]
summary_results = dict(
backend_identifier=report_backend_identifier,
source_regions=report_source_regions,
extraction_datetime=extraction_datetime,
extraction_date=extraction_date,
extraction_date_with_hour=extraction_date_with_hour,
last_hour=dict(
shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour,
shared_diagnoses=0,
),
today=today_summary_api_results,
last_7_days=last_7_days_summary,
last_14_days=last_14_days_summary,
daily_results=summary_api_results)
summary_results = \
json.loads(pd.Series([summary_results]).to_json(orient="records"))[0]
with open(report_resources_path_prefix + "Summary-Results.json", "w") as f:
json.dump(summary_results, f, indent=4)
```
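For reference, a minimal, self-contained sketch (with a toy DataFrame standing in for `result_summary_df`; column names are illustrative) of how the records-oriented serialization above behaves:

```python
import pandas as pd

# Toy frame standing in for result_summary_df (illustrative columns only).
df = pd.DataFrame({
    "sample_date": pd.to_datetime(["2020-10-01", "2020-10-02"]),
    "covid_cases": [100, 120],
})
df["sample_date_string"] = df["sample_date"].dt.strftime("%Y-%m-%d")

# orient="records" yields one dict per row, ready for json.dump.
records = df.to_dict(orient="records")
print(records[0]["sample_date_string"])  # 2020-10-01
```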
### Publish on README
```
with open("Data/Templates/README.md", "r") as f:
readme_contents = f.read()
readme_contents = readme_contents.format(
extraction_date_with_hour=extraction_date_with_hour,
github_project_base_url=github_project_base_url,
daily_summary_table_html=daily_summary_table_html,
multi_backend_summary_table_html=multi_backend_summary_table_html,
multi_backend_cross_sharing_summary_table_html=multi_backend_cross_sharing_summary_table_html,
display_source_regions=display_source_regions)
with open("README.md", "w") as f:
f.write(readme_contents)
```
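The templating step above relies on `str.format` with named placeholders; a minimal sketch (placeholder names taken from the notebook, values invented for illustration):

```python
# A README template uses {named} placeholders that str.format fills in.
template = "Last updated: {extraction_date_with_hour} | Source regions: {display_source_regions}"
rendered = template.format(
    extraction_date_with_hour="2020/10/01@10",  # illustrative value
    display_source_regions="ES")                # illustrative value
print(rendered)  # Last updated: 2020/10/01@10 | Source regions: ES
```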
### Publish on Twitter
```
enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER")
github_event_name = os.environ.get("GITHUB_EVENT_NAME")
if enable_share_to_twitter and github_event_name == "schedule" and \
(shared_teks_by_upload_date_last_hour or not are_today_results_partial):
import tweepy
twitter_api_auth_keys = os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"]
twitter_api_auth_keys = twitter_api_auth_keys.split(":")
auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1])
auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3])
api = tweepy.API(auth)
summary_plots_media = api.media_upload(summary_plots_image_path)
summary_table_media = api.media_upload(summary_table_image_path)
generation_to_upload_period_pivot_table_image_media = api.media_upload(generation_to_upload_period_pivot_table_image_path)
media_ids = [
summary_plots_media.media_id,
summary_table_media.media_id,
generation_to_upload_period_pivot_table_image_media.media_id,
]
if are_today_results_partial:
today_addendum = " (Partial)"
else:
today_addendum = ""
def format_shared_diagnoses_per_covid_case(value) -> str:
if value == 0:
return "–"
return f"≤{value:.2%}"
display_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=shared_diagnoses_per_covid_case)
display_last_14_days_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case"])
display_last_14_days_shared_diagnoses_per_covid_case_es = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case_es"])
status = textwrap.dedent(f"""
#RadarCOVID – {extraction_date_with_hour}
Source Countries: {display_brief_source_regions}
Today{today_addendum}:
- Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour)
- Shared Diagnoses: ≤{shared_diagnoses:.0f}
- Usage Ratio: {display_shared_diagnoses_per_covid_case}
Last 14 Days:
- Usage Ratio: {display_last_14_days_shared_diagnoses_per_covid_case}
- Usage Ratio (Spain): {display_last_14_days_shared_diagnoses_per_covid_case_es}
Info: {github_project_base_url}#documentation
""")
status = status.encode(encoding="utf-8")
api.update_status(status=status, media_ids=media_ids)
```

```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import math as m
%matplotlib inline
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import random
from torch.utils.data import Dataset, DataLoader
from mpl_toolkits.mplot3d import Axes3D
# point = np.array([1, 2, 3])
# normal = np.array([1, 1, 2])
# point2 = np.array([10, 50, 50])
# # a plane is a*x+b*y+c*z+d=0
# # [a,b,c] is the normal. Thus, we have to calculate
# # d and we're set
# d = -point.dot(normal)
# # create x,y
# xx, yy = np.meshgrid(range(10), range(10))
# # calculate corresponding z
# z = (-normal[0] * xx - normal[1] * yy - d) * 1. /normal[2]
# # plot the surface
# plt3d = plt.figure().gca(projection='3d')
# plt3d.plot_surface(xx, yy, z, alpha=1)
# ax = plt.gca()
# #and i would like to plot this point :
# ax.scatter(point2[0] , point2[1] , point2[2], color='green')
# plt.show()
# # plot the surface
# plt3d = plt.figure().gca(projection='3d')
# plt3d.plot_surface(xx, yy, z, alpha=0.2)
# # Ensure that the next plot doesn't overwrite the first plot
# ax = plt.gca()
# # ax.hold(True)
# ax.scatter(points2[0], point2[1], point2[2], color='green')
y = np.random.randint(0,10,1000)
idx= []
for i in range(10):
print(i,sum(y==i))
idx.append(y==i)
x = np.zeros((1000,2))
x0=x[idx[0],:] = np.random.multivariate_normal(mean = [2,2],cov=[[0.01,0],[0,0.01]],size=sum(idx[0]))
x[idx[1],:] = np.random.multivariate_normal(mean = [0,-2],cov=[[0.01,0],[0,0.01]],size=sum(idx[1]))
x[idx[2],:] = np.random.multivariate_normal(mean = [-2,2],cov=[[0.01,0],[0,0.01]],size=sum(idx[2]))
x[idx[3],:] = np.random.multivariate_normal(mean = [0,0],cov=[[0.01,0],[0,0.01]],size=sum(idx[3]))
x[idx[4],:] = np.random.multivariate_normal(mean =[-2,-4] ,cov=[[0.01,0],[0,0.01]],size=sum(idx[4]))
x[idx[5],:] = np.random.multivariate_normal(mean = [2,-4],cov=[[0.01,0],[0,0.01]],size=sum(idx[5]))
x[idx[6],:] = np.random.multivariate_normal(mean = [-4,0],cov=[[0.01,0],[0,0.01]],size=sum(idx[6]))
x[idx[7],:] = np.random.multivariate_normal(mean = [-2,4],cov=[[0.01,0],[0,0.01]],size=sum(idx[7]))
x[idx[8],:] = np.random.multivariate_normal(mean = [2,4],cov=[[0.01,0],[0,0.01]],size=sum(idx[8]))
x[idx[9],:] = np.random.multivariate_normal(mean = [4,0],cov=[[0.01,0],[0,0.01]],size=sum(idx[9]))
idx= []
for i in range(10):
#print(i,sum(y==i))
idx.append(y==i)
for i in range(10):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
z = np.zeros((1000,1))
x = np.concatenate((x, z) , axis =1)
x
x.shape
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for i in range(10):
ax.scatter(x[idx[i],0],x[idx[i],1],x[idx[i],2],label="class_"+str(i))
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
normal = np.array([0,0,1])
# a plane is a*x+b*y+c*z+d=0
# [a,b,c] is the normal. Thus, we have to calculate
# d and we're set
d = 0
# create x,y
xx, yy = np.meshgrid(range(5), range(5))
# calculate corresponding z
z = (-normal[0] * xx - normal[1] * yy - d) * 1. /normal[2]
# plot the surface
plt3d = plt.figure().gca(projection='3d')
plt3d.plot_surface(xx, yy, z, alpha=0.5)
# fig = plt.figure()
ax = plt.gca()
for i in range(10):
ax.scatter(x[idx[i],0],x[idx[i],1],x[idx[i],2],label="class_"+str(i))
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
```
## angle = pi/2, p1: z = 0, p2: 2x + 3y = 0
## angle = pi/6, p1: z = 0, p2: 2x + 3y + sqrt(39)z = 0
## angle = pi/3, p1: z = 0, p2: 2x + 3y + sqrt(13/3)z = 0
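As a sanity check on the coefficients listed above (an added note, not part of the original notebook), the angle between `p1: z = 0` and `p2: a*x + b*y + c*z = 0` can be computed from their normals:

```python
import numpy as np

def plane_angle(a, b, c):
    """Angle (radians) between the planes z = 0 and a*x + b*y + c*z = 0."""
    n1 = np.array([0.0, 0.0, 1.0])         # normal of z = 0
    n2 = np.array([a, b, c], dtype=float)  # normal of the tilted plane
    cos_theta = abs(n1 @ n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.arccos(cos_theta)

assert np.isclose(plane_angle(2, 3, 0), np.pi / 2)            # 2x + 3y = 0
assert np.isclose(plane_angle(2, 3, np.sqrt(39)), np.pi / 6)  # 2x + 3y + sqrt(39)z = 0
assert np.isclose(plane_angle(2, 3, np.sqrt(13 / 3)), np.pi / 3)
```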
```
angle = np.pi/4
angle
a = 2
b = 3
if(angle == np.pi/2):
c=0
else:
c = np.sqrt(a*a + b*b )/m.tan(angle)
print(c)
x[idx[0],:]
x[idx[0],2]
if(angle == np.pi/2):
for i in range(3):
x[idx[i],2] = (i+2)*(x[idx[i],0] + x[idx[i],1])/(x[idx[i],0] + x[idx[i],1])
else:
for i in range(3):
x[idx[i],2] = (2*x[idx[i],0] + 3*x[idx[i],1])/c
x[idx[0],:]
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for i in range(10):
ax.scatter(x[idx[i],0],x[idx[i],1],x[idx[i],2],label="class_"+str(i))
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
normal = np.array([a,b,c])
# a plane is a*x+b*y+c*z+d=0
# [a,b,c] is the normal. Thus, we have to calculate
# d and we're set
d = 0
# create x,y
xx, yy = np.meshgrid(range(5), range(5))
# calculate corresponding z
z = -(-normal[0] * xx - normal[1] * yy - d) * 1. /normal[2]
# plot the surface
plt3d = plt.figure().gca(projection='3d')
plt3d.plot_surface(xx, yy, z, alpha=0.5)
# fig = plt.figure()
ax = plt.gca()
for i in range(10):
ax.scatter(x[idx[i],0],x[idx[i],1],x[idx[i],2],label="class_"+str(i))
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
# true_data = 30
# true_class_size = int(true_data/2)
# corruption_size = 240
# corrupted_class_size = int(corruption_size/8)
# x0 = np.random.uniform(low=[2,1.5], high = [2.5,2],size =(true_class_size,2) )
# x1 = np.random.uniform(low=[2.5,2], high = [3,2.5],size =(true_class_size,2) )
# x2 = np.random.uniform(low = [0,1.5] , high = [1,2.5],size=(corrupted_class_size,2))
# x3 = np.random.uniform(low = [0,3] , high = [1,4],size=(corrupted_class_size,2))
# x4 = np.random.uniform(low = [2,0] , high = [3,1],size=(corrupted_class_size,2))
# x5 = np.random.uniform(low = [0,0] , high = [1,1],size=(corrupted_class_size,2))
# x6 = np.random.uniform(low = [2,3] , high = [3,4],size=(corrupted_class_size,2))
# x7 = np.random.uniform(low = [4,0] , high = [5,1],size=(corrupted_class_size,2))
# x8 = np.random.uniform(low = [4,1.5] , high = [5,2.5],size=(corrupted_class_size,2))
# x9 = np.random.uniform(low = [4,3] , high = [5,4],size=(corrupted_class_size,2))
# z = np.zeros((true_class_size,1))
# x0 = np.concatenate((x0, z) , axis =1)
# x1 = np.concatenate((x1, z) , axis =1)
# z= np.zeros((corrupted_class_size,1))
# x2 = np.concatenate((x2, z) , axis =1)
# x3 = np.concatenate((x3, z) , axis =1)
# x4 = np.concatenate((x4, z) , axis =1)
# x5 = np.concatenate((x5, z) , axis =1)
# x6 = np.concatenate((x6, z) , axis =1)
# x7 = np.concatenate((x7, z) , axis =1)
# x8 = np.concatenate((x8, z) , axis =1)
# x9 = np.concatenate((x9, z) , axis =1)
# x0.shape , x1.shape , x2.shape, x3.shape
# fig = plt.figure()
# ax = fig.add_subplot(111, projection='3d')
# # ax = plt.gca()
# ax.scatter(x0[:,0], x0[:, 1], x0[:,2])
# ax.scatter(x1[:,0],x1[:,1], x1[:,2])
# ax.scatter(x2[:,0],x2[:,1], x2[:,2])
# ax.scatter(x3[:,0],x3[:,1], x3[:,2])
# ax.scatter(x4[:,0],x4[:,1], x4[:,2])
# ax.scatter(x5[:,0],x5[:,1], x5[:,2])
# ax.scatter(x6[:,0],x6[:,1], x6[:,2])
# ax.scatter(x7[:,0],x7[:,1], x7[:,2])
# ax.scatter(x8[:,0],x8[:,1], x8[:,2])
# ax.scatter(x9[:,0],x9[:,1], x9[:,2])
# import plotly.express as px
# fig = px.scatter_3d(x0, x='sepal_length', y='sepal_width', z='petal_width',
# color='petal_length', symbol='species')
# fig.show()
x.shape,y.shape
classes = ('0', '1', '2','3', '4', '5', '6', '7','8', '9')
foreground_classes = {'0', '1', '2'}
background_classes = {'3', '4', '5', '6', '7','8', '9'}
class sub_clust_data(Dataset):
def __init__(self,x, y):
self.x = torch.Tensor(x)
self.y = torch.Tensor(y).type(torch.LongTensor)
#self.fore_idx = fore_idx
def __len__(self):
return len(self.y)
def __getitem__(self, idx):
return self.x[idx] , self.y[idx] #, self.fore_idx[idx]
trainset = sub_clust_data(x,y)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle=True)
dataiter = iter(trainloader)
background_data=[]
background_label=[]
foreground_data=[]
foreground_label=[]
batch_size=10
for i in range(100): # 100 batches * batch_size 10 = 1000 data points
images, labels = next(dataiter) # dataiter.next() was removed in newer PyTorch versions
for j in range(batch_size):
if(classes[labels[j]] in background_classes):
img = images[j].tolist()
background_data.append(img)
background_label.append(labels[j])
else:
img = images[j].tolist()
foreground_data.append(img)
foreground_label.append(labels[j])
np.shape(foreground_data),np.shape(foreground_label)
np.shape(background_data),np.shape(background_label)
foreground_data = torch.tensor(foreground_data)
foreground_label = torch.tensor(foreground_label).type(torch.LongTensor)
background_data = torch.tensor(background_data)
background_label = torch.tensor(background_label).type(torch.LongTensor)
def create_mosaic_img(bg_idx,fg_idx,fg):
image_list=[]
j=0
for i in range(9):
if i != fg:
image_list.append(background_data[bg_idx[j]].type("torch.DoubleTensor"))
j+=1
else:
image_list.append(foreground_data[fg_idx].type("torch.DoubleTensor"))
label = foreground_label[fg_idx] # foreground classes here are already 0,1,2, so no label offset is needed
#image_list = np.concatenate(image_list ,axis=0)
image_list = torch.stack(image_list)
return image_list,label
desired_num = 1000
mosaic_list_of_images =[] # list of mosaic images, each mosaic image is saved as list of 9 images
fore_idx =[] # list of positions (0 to 8) at which the foreground image is placed in each mosaic image
mosaic_label=[] # label of mosaic image = foreground class present in that mosaic
for i in range(desired_num):
bg_idx = np.random.randint(0,702,8)
fg_idx = np.random.randint(0,298)
fg = np.random.randint(0,9)
fore_idx.append(fg)
image_list,label = create_mosaic_img(bg_idx,fg_idx,fg)
mosaic_list_of_images.append(image_list)
mosaic_label.append(label)
def create_avg_image_from_mosaic_dataset(mosaic_dataset,labels,foreground_index,dataset_number):
avg_image_dataset = []
for i in range(len(mosaic_dataset)):
img = torch.zeros([3], dtype=torch.float64)
for j in range(9):
if j == foreground_index[i]:
img = img + mosaic_dataset[i][j]*dataset_number/9
else :
img = img + mosaic_dataset[i][j]*(9-dataset_number)/(8*9)
avg_image_dataset.append(img)
return avg_image_dataset , labels , foreground_index
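# Added sanity check (not part of the original run): the averaging weights
# above always sum to 1 -- the foreground tile gets weight n/9 and each of
# the 8 background tiles gets (9-n)/(8*9), for dataset_number n in 1..9.
for n in range(1, 10):
    assert abs(n/9 + 8*(9-n)/(8*9) - 1.0) < 1e-12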
avg_image_dataset_1 , labels_1, fg_index_1 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 1)
avg_image_dataset_2 , labels_2, fg_index_2 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 2)
avg_image_dataset_3 , labels_3, fg_index_3 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 3)
avg_image_dataset_4 , labels_4, fg_index_4 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 4)
avg_image_dataset_5 , labels_5, fg_index_5 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 5)
avg_image_dataset_6 , labels_6, fg_index_6 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 6)
avg_image_dataset_7 , labels_7, fg_index_7 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 7)
avg_image_dataset_8 , labels_8, fg_index_8 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 8)
avg_image_dataset_9 , labels_9, fg_index_9 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 9)
#test_dataset_10 , labels_10 , fg_index_10 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[10000:20000], mosaic_label[10000:20000], fore_idx[10000:20000] , 9)
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list_of_images, mosaic_label):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list_of_images
self.label = mosaic_label
#self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] #, self.fore_idx[idx]
batch = 256
# training_data = avg_image_dataset_5 #just change this and training_label to desired dataset for training
# training_label = labels_5
traindata_1 = MosaicDataset(avg_image_dataset_1, labels_1 )
trainloader_1 = DataLoader( traindata_1 , batch_size= batch ,shuffle=True)
traindata_2 = MosaicDataset(avg_image_dataset_2, labels_2 )
trainloader_2 = DataLoader( traindata_2 , batch_size= batch ,shuffle=True)
traindata_3 = MosaicDataset(avg_image_dataset_3, labels_3 )
trainloader_3 = DataLoader( traindata_3 , batch_size= batch ,shuffle=True)
traindata_4 = MosaicDataset(avg_image_dataset_4, labels_4 )
trainloader_4 = DataLoader( traindata_4 , batch_size= batch ,shuffle=True)
traindata_5 = MosaicDataset(avg_image_dataset_5, labels_5 )
trainloader_5 = DataLoader( traindata_5 , batch_size= batch ,shuffle=True)
traindata_6 = MosaicDataset(avg_image_dataset_6, labels_6 )
trainloader_6 = DataLoader( traindata_6 , batch_size= batch ,shuffle=True)
traindata_7 = MosaicDataset(avg_image_dataset_7, labels_7 )
trainloader_7 = DataLoader( traindata_7 , batch_size= batch ,shuffle=True)
traindata_8 = MosaicDataset(avg_image_dataset_8, labels_8 )
trainloader_8 = DataLoader( traindata_8 , batch_size= batch ,shuffle=True)
traindata_9 = MosaicDataset(avg_image_dataset_9, labels_9 )
trainloader_9 = DataLoader( traindata_9 , batch_size= batch ,shuffle=True)
testdata_1 = MosaicDataset(avg_image_dataset_1, labels_1 )
testloader_1 = DataLoader( testdata_1 , batch_size= batch ,shuffle=False)
testdata_2 = MosaicDataset(avg_image_dataset_2, labels_2 )
testloader_2 = DataLoader( testdata_2 , batch_size= batch ,shuffle=False)
testdata_3 = MosaicDataset(avg_image_dataset_3, labels_3 )
testloader_3 = DataLoader( testdata_3 , batch_size= batch ,shuffle=False)
testdata_4 = MosaicDataset(avg_image_dataset_4, labels_4 )
testloader_4 = DataLoader( testdata_4 , batch_size= batch ,shuffle=False)
testdata_5 = MosaicDataset(avg_image_dataset_5, labels_5 )
testloader_5 = DataLoader( testdata_5 , batch_size= batch ,shuffle=False)
testdata_6 = MosaicDataset(avg_image_dataset_6, labels_6 )
testloader_6 = DataLoader( testdata_6 , batch_size= batch ,shuffle=False)
testdata_7 = MosaicDataset(avg_image_dataset_7, labels_7 )
testloader_7 = DataLoader( testdata_7 , batch_size= batch ,shuffle=False)
testdata_8 = MosaicDataset(avg_image_dataset_8, labels_8 )
testloader_8 = DataLoader( testdata_8 , batch_size= batch ,shuffle=False)
testdata_9 = MosaicDataset(avg_image_dataset_9, labels_9 )
testloader_9 = DataLoader( testdata_9 , batch_size= batch ,shuffle=False)
class Net(nn.Module):
def __init__(self):
super(Net,self).__init__()
self.linear1 = nn.Linear(3,64)
self.linear2 = nn.Linear(64,128)
self.linear3 = nn.Linear(128,3)
def forward(self,x):
x = F.relu(self.linear1(x))
x = F.relu(self.linear2(x))
x = self.linear3(x)
return x
def test_all(number, testloader,inc):
correct = 0
total = 0
out = []
pred = []
with torch.no_grad():
for data in testloader:
images, labels = data
images, labels = images.to("cuda"),labels.to("cuda")
out.append(labels.cpu().numpy())
outputs= inc(images)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on test dataset %d (1000 points): %d %%' % (number , 100 * correct / total))
def train_all(trainloader, ds_number, testloader_list):
print("--"*40)
print("training on data set ", ds_number)
inc = Net().double()
inc = inc.to("cuda")
criterion_inception = nn.CrossEntropyLoss()
optimizer_inception = optim.SGD(inc.parameters(), lr=0.01, momentum=0.9)
acti = []
loss_curi = []
epochs = 70
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_inception.zero_grad()
# forward + backward + optimize
outputs = inc(inputs)
loss = criterion_inception(outputs, labels)
loss.backward()
optimizer_inception.step()
# print statistics
running_loss += loss.item()
mini=4
if i % mini == mini-1: # print every `mini` mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / mini))
ep_lossi.append(running_loss/mini) # loss per minibatch
running_loss = 0.0
if(np.mean(ep_lossi)<=0.01):
break
loss_curi.append(np.mean(ep_lossi)) #loss per epoch
# if (epoch%5 == 0):
# _,actis= inc(inputs)
# acti.append(actis)
print('Finished Training')
# torch.save(inc.state_dict(),"/content/drive/My Drive/Research/Experiments on CIFAR mosaic/Exp_2_Attention_models_on_9_datasets_made_from_10k_mosaic/weights/train_dataset_"+str(ds_number)+"_"+str(epochs)+".pt")
correct = 0
total = 0
with torch.no_grad():
for data in trainloader:
images, labels = data
images, labels = images.to("cuda"), labels.to("cuda")
outputs = inc(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 1000 train images: %d %%' % ( 100 * correct / total))
for i, j in enumerate(testloader_list):
test_all(i+1, j,inc)
print("--"*40)
return loss_curi
train_loss_all=[]
testloader_list= [ testloader_1, testloader_2, testloader_3, testloader_4, testloader_5, testloader_6,
testloader_7, testloader_8, testloader_9]
train_loss_all.append(train_all(trainloader_1, 1, testloader_list))
train_loss_all.append(train_all(trainloader_2, 2, testloader_list))
train_loss_all.append(train_all(trainloader_3, 3, testloader_list))
train_loss_all.append(train_all(trainloader_4, 4, testloader_list))
train_loss_all.append(train_all(trainloader_5, 5, testloader_list))
train_loss_all.append(train_all(trainloader_6, 6, testloader_list))
train_loss_all.append(train_all(trainloader_7, 7, testloader_list))
train_loss_all.append(train_all(trainloader_8, 8, testloader_list))
train_loss_all.append(train_all(trainloader_9, 9, testloader_list))
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Reinforcement Learning in Azure Machine Learning - Training a Minecraft agent using custom environments
This tutorial will show how to set up a more complex reinforcement
learning (RL) training scenario. It demonstrates how to train an agent to
navigate through a lava maze in the Minecraft game using Azure Machine
Learning.
**Please note:** This notebook trains an agent on a randomly generated
Minecraft level. As a result, on rare occasions, a training run may fail
to produce a model that can solve the maze. If this happens, you can
re-run the training step as indicated below.
**Please note:** This notebook uses 1 NC6 type node and 8 D2 type nodes
for up to 5 hours of training, which corresponds to approximately $9.06 (USD)
as of May 2020.
Minecraft is currently one of the most popular video
games and as such has been a study object for RL. [Project
Malmo](https://www.microsoft.com/en-us/research/project/project-malmo/) is
a platform for artificial intelligence experimentation and research built on
top of Minecraft. We will use Minecraft [gym](https://gym.openai.com) environments from Project
Malmo's 2019 MineRL competition, which are part of the
[MineRL](http://minerl.io/docs/index.html) Python package.
Minecraft environments require a display to run, so we will demonstrate
how to set up a virtual display within the docker container used for training.
Learning will be based on the agent's visual observations. To
generate the necessary amount of sample data, we will run several
instances of the Minecraft game in parallel. Below, you can see a video of
a trained agent navigating a lava maze. Starting from the green position,
it moves to the blue position by moving forward, turning left or turning right:
<table style="width:50%">
<tr>
<th style="text-align: center;">
<img src="./images/lava_maze_minecraft.gif" alt="Minecraft lava maze" align="middle" margin-left="auto" margin-right="auto"/>
</th>
</tr>
<tr style="text-align: center;">
<th>Fig 1. Video of a trained Minecraft agent navigating a lava maze.</th>
</tr>
</table>
The tutorial will cover the following steps:
- Initializing Azure Machine Learning resources for training
- Training the RL agent with Azure Machine Learning service
- Monitoring training progress
- Reviewing training results
## Prerequisites
The user should have completed the Azure Machine Learning introductory tutorial.
You will need to make sure that you have a valid subscription id, a resource group and a
workspace. For detailed instructions see [Tutorial: Get started creating
your first ML experiment.](https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-1st-experiment-sdk-setup)
While this is a standalone notebook, we highly recommend going over the
introductory notebooks for RL first.
- Getting started:
- [RL using a compute instance with Azure Machine Learning service](../cartpole-on-compute-instance/cartpole_ci.ipynb)
- [Using Azure Machine Learning compute](../cartpole-on-single-compute/cartpole_sc.ipynb)
- [Scaling RL training runs with Azure Machine Learning service](../atari-on-distributed-compute/pong_rllib.ipynb)
## Initialize resources
All required Azure Machine Learning service resources for this tutorial can be set up from Jupyter.
This includes:
- Connecting to your existing Azure Machine Learning workspace.
- Creating an experiment to track runs.
- Setting up a virtual network
- Creating remote compute targets for [Ray](https://docs.ray.io/en/latest/index.html).
### Azure Machine Learning SDK
Display the Azure Machine Learning SDK version.
```
import azureml.core
print("Azure Machine Learning SDK Version: ", azureml.core.VERSION)
```
### Connect to workspace
Get a reference to an existing Azure Machine Learning workspace.
```
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.location, ws.resource_group, sep=' | ')
```
### Create an experiment
Create an experiment to track the runs in your workspace. A
workspace can have multiple experiments and each experiment
can be used to track multiple runs (see [documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.experiment.experiment?view=azure-ml-py)
for details).
```
from azureml.core import Experiment
exp = Experiment(workspace=ws, name='minecraft-maze')
```
### Create Virtual Network
If you are using separate compute targets for the Ray head and worker nodes, a virtual network must be created in the resource group. If you have already created a virtual network in the resource group, you can skip this step.
To do this, you must first install the Azure Networking API.
`pip install --upgrade azure-mgmt-network`
```
# If you need to install the Azure Networking SDK, uncomment the following line.
#!pip install --upgrade azure-mgmt-network
from azure.mgmt.network import NetworkManagementClient
# Virtual network name
vnet_name ="your_vnet"
# Default subnet
subnet_name ="default"
# The Azure subscription you are using
subscription_id=ws.subscription_id
# The resource group for the reinforcement learning cluster
resource_group=ws.resource_group
# Azure region of the resource group
location=ws.location
network_client = NetworkManagementClient(ws._auth_object, subscription_id)
async_vnet_creation = network_client.virtual_networks.create_or_update(
resource_group,
vnet_name,
{
'location': location,
'address_space': {
'address_prefixes': ['10.0.0.0/16']
}
}
)
async_vnet_creation.wait()
print("Virtual network created successfully: ", async_vnet_creation.result())
```
### Set up Network Security Group on Virtual Network
Depending on your Azure setup, you may need to open certain ports to make it possible for Azure to manage the compute targets that you create. The ports that need to be opened are described [here](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-enable-virtual-network).
A common situation is that ports `29876-29877` are closed. The following code will add a security rule to open these ports. Or you can do this manually in the [Azure portal](https://portal.azure.com).
You may need to modify the code below to match your scenario.
```
import azure.mgmt.network.models
security_group_name = vnet_name + '-' + "nsg"
security_rule_name = "AllowAML"
# Create a network security group
nsg_params = azure.mgmt.network.models.NetworkSecurityGroup(
location=location,
security_rules=[
azure.mgmt.network.models.SecurityRule(
name=security_rule_name,
access=azure.mgmt.network.models.SecurityRuleAccess.allow,
description='Reinforcement Learning in Azure Machine Learning rule',
destination_address_prefix='*',
destination_port_range='29876-29877',
direction=azure.mgmt.network.models.SecurityRuleDirection.inbound,
priority=400,
protocol=azure.mgmt.network.models.SecurityRuleProtocol.tcp,
source_address_prefix='BatchNodeManagement',
source_port_range='*'
),
],
)
async_nsg_creation = network_client.network_security_groups.create_or_update(
resource_group,
security_group_name,
nsg_params,
)
async_nsg_creation.wait()
print("Network security group created successfully:", async_nsg_creation.result())
network_security_group = network_client.network_security_groups.get(
resource_group,
security_group_name,
)
# Define a subnet to be created with network security group
subnet = azure.mgmt.network.models.Subnet(
id='default',
address_prefix='10.0.0.0/24',
network_security_group=network_security_group
)
# Create subnet on virtual network
async_subnet_creation = network_client.subnets.create_or_update(
resource_group_name=resource_group,
virtual_network_name=vnet_name,
subnet_name=subnet_name,
subnet_parameters=subnet
)
async_subnet_creation.wait()
print("Subnet created successfully:", async_subnet_creation.result())
```
### Review the virtual network security rules
Ensure that the virtual network is configured correctly with the required ports open. You may have configured rules with a broader port range that already covers ports 29876-29877; review your network security group rules to confirm.
```
from files.networkutils import *
check_vnet_security_rules(ws._auth_object, ws.subscription_id, ws.resource_group, vnet_name, True)
```
### Create or attach an existing compute resource
A compute target is a designated compute resource where you
run your training script. For more information, see [What
are compute targets in Azure Machine Learning service?](https://docs.microsoft.com/en-us/azure/machine-learning/concept-compute-target).
#### GPU target for Ray head
In the experiment setup for this tutorial, the Ray head node
will run on a GPU-enabled node. A maximum cluster size
of 1 node is therefore sufficient. If you wish to run
multiple experiments in parallel using the same GPU
cluster, you may elect to increase this number. The cluster
will automatically scale down to 0 nodes when no training jobs
are scheduled (see `min_nodes`).
The code below creates a compute cluster of GPU-enabled NC6
nodes. If the cluster with the specified name is already in
your workspace the code will skip the creation process.
Note that we must specify a Virtual Network during compute
creation to allow communication between the cluster running
the Ray head node and the additional Ray compute nodes. For
details on how to setup the Virtual Network, please follow the
instructions in the "Prerequisites" section above.
**Note: Creation of a compute resource can take several minutes**
```
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
gpu_cluster_name = 'gpu-cl-nc6-vnet'
try:
gpu_cluster = ComputeTarget(workspace=ws, name=gpu_cluster_name)
print('Found existing compute target')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(
vm_size='Standard_NC6',
min_nodes=0,
max_nodes=1,
vnet_resourcegroup_name=ws.resource_group,
vnet_name=vnet_name,
subnet_name=subnet_name)
gpu_cluster = ComputeTarget.create(ws, gpu_cluster_name, compute_config)
gpu_cluster.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
print('Cluster created.')
```
#### CPU target for additional Ray nodes
The code below creates a compute cluster of D2 nodes. If the cluster with the specified name is already in your workspace the code will skip the creation process.
This cluster will be used to start additional Ray nodes,
increasing the cluster's CPU resources.
**Note: Creation of a compute resource can take several minutes**
```
cpu_cluster_name = 'cpu-cl-d2-vnet'
try:
cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print('Found existing compute target')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(
vm_size='STANDARD_D2',
min_nodes=0,
max_nodes=10,
vnet_resourcegroup_name=ws.resource_group,
vnet_name=vnet_name,
subnet_name=subnet_name)
cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
cpu_cluster.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
print('Cluster created.')
```
## Training the agent
### Training environments
This tutorial uses custom docker images (CPU and GPU respectively)
with the necessary software installed. The
[Environment](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-environments)
class stores the configuration for the training environment. The docker
image is set via `env.docker.base_image` which can point to any
publicly available docker image. `user_managed_dependencies`
is set so that the preinstalled Python packages in the image are preserved.
Note that since Minecraft requires a display to start, we set the `interpreter_path`
such that the Python process is started via **xvfb-run**.
```
import os
from azureml.core import Environment
max_train_time = os.environ.get("AML_MAX_TRAIN_TIME_SECONDS", 5 * 60 * 60)
def create_env(env_type):
env = Environment(name='minecraft-{env_type}'.format(env_type=env_type))
env.docker.enabled = True
env.docker.base_image = 'akdmsft/minecraft-{env_type}'.format(env_type=env_type)
env.python.interpreter_path = "xvfb-run -s '-screen 0 640x480x16 -ac +extension GLX +render' python"
env.environment_variables["AML_MAX_TRAIN_TIME_SECONDS"] = str(max_train_time)
env.python.user_managed_dependencies = True
return env
cpu_minecraft_env = create_env('cpu')
gpu_minecraft_env = create_env('gpu')
```
### Training script
As described above, we use the MineRL Python package to launch
Minecraft game instances. MineRL provides several OpenAI gym
environments for different scenarios, such as chopping wood.
Besides predefined environments, MineRL lets its users create
custom Minecraft environments through
[minerl.env](http://minerl.io/docs/api/env.html). In the helper
file **minecraft_environment.py** provided with this tutorial, we use the
latter option to customize a Minecraft level with a lava maze
that the agent has to navigate. The agent receives a negative
reward of -1 for falling into the lava, a negative reward of
-0.02 for sending a command (i.e. navigating through the maze
with fewer actions yields a higher total reward) and a positive reward
of 1 for reaching the goal. To encourage the agent to explore
the maze, it also receives a positive reward of 0.1 for visiting
a tile for the first time.
The agent learns purely from visual observations and the image
is scaled to an 84x84 format, stacking four frames. For the
purposes of this example, we use a small action space of size
three: move forward, turn 90 degrees to the left, and turn 90
degrees to the right.
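The reward scheme above can be sketched as a small standalone function. This is purely illustrative: the actual logic lives in the helper file **minecraft_environment.py**, and the function and argument names here are hypothetical.

```python
# Illustrative sketch of the reward scheme described above (hypothetical
# names; the real implementation is in minecraft_environment.py).
def compute_reward(fell_in_lava, sent_command, reached_goal, first_visit):
    reward = 0.0
    if fell_in_lava:
        reward -= 1.0   # falling into the lava is heavily penalized
    if sent_command:
        reward -= 0.02  # small per-action cost favors short paths
    if reached_goal:
        reward += 1.0   # reaching the goal is the main objective
    if first_visit:
        reward += 0.1   # exploration bonus for visiting a new tile
    return reward

# A step that sends a command and lands on a new tile nets 0.1 - 0.02
print(compute_reward(False, True, False, True))
```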
The training script itself registers the function to create training
environments with the `tune.register_env` function and connects to
the Ray cluster Azure Machine Learning service started on the GPU
and CPU nodes. Lastly, it starts a RL training run with `tune.run()`.
We recommend setting the `local_dir` parameter to `./logs` as this
directory will automatically become available as part of the training
run's files in the Azure Portal. The Tensorboard integration
(see "View the Tensorboard" section below) also depends on the files'
availability. For a list of common parameter options, please refer
to the [Ray documentation](https://docs.ray.io/en/latest/rllib-training.html#common-parameters).
```python
# Taken from minecraft_environment.py and minecraft_train.py
# Define a function to create a MineRL environment
def create_env(config):
mission = config['mission']
port = 1000 * config.worker_index + config.vector_index
print('*********************************************')
print(f'* Worker {config.worker_index} creating from mission: {mission}, port {port}')
print('*********************************************')
if config.worker_index == 0:
# The first environment is only used for checking the action and observation space.
# By using a dummy environment, there's no need to spin up a Minecraft instance behind it
# saving some CPU resources on the head node.
return DummyEnv()
env = EnvWrapper(mission, port)
env = TrackingEnv(env)
env = FrameStack(env, 2)
return env
def stop(trial_id, result):
return result["episode_reward_mean"] >= 1 \
or result["time_total_s"] > 5 * 60 * 60
if __name__ == '__main__':
tune.register_env("Minecraft", create_env)
ray.init(address='auto')
tune.run(
run_or_experiment="IMPALA",
config={
"env": "Minecraft",
"env_config": {
"mission": "minecraft_missions/lava_maze-v0.xml"
},
"num_workers": 10,
"num_cpus_per_worker": 2,
"rollout_fragment_length": 50,
"train_batch_size": 1024,
"replay_buffer_num_slots": 4000,
"replay_proportion": 10,
"learner_queue_timeout": 900,
"num_sgd_iter": 2,
"num_data_loader_buffers": 2,
"exploration_config": {
"type": "EpsilonGreedy",
"initial_epsilon": 1.0,
"final_epsilon": 0.02,
"epsilon_timesteps": 500000
},
"callbacks": {"on_train_result": callbacks.on_train_result},
},
stop=stop,
checkpoint_at_end=True,
local_dir='./logs'
)
```
### Submitting a training run
Below, you create the training run using a `ReinforcementLearningEstimator`
object, which contains all the configuration parameters for this experiment:
- `source_directory`: Contains the training script and helper files to be
copied onto the node running the Ray head.
- `entry_script`: The training script, described in more detail above.
- `compute_target`: The compute target for the Ray head and training
script execution.
- `environment`: The Azure machine learning environment definition for
the node running the Ray head.
- `worker_configuration`: The configuration object for the additional
Ray nodes to be attached to the Ray cluster:
- `compute_target`: The compute target for the additional Ray nodes.
- `node_count`: The number of nodes to attach to the Ray cluster.
- `environment`: The environment definition for the additional Ray nodes.
- `max_run_duration_seconds`: The time after which to abort the run if it
is still running.
- `shm_size`: The size of docker container's shared memory block.
For more details, please take a look at the [online documentation](https://docs.microsoft.com/en-us/python/api/azureml-contrib-reinforcementlearning/?view=azure-ml-py)
for Azure Machine Learning service's reinforcement learning offering.
We configure 8 extra D2 (worker) nodes for the Ray cluster, giving us a total of
22 CPUs and 1 GPU. The GPU and one CPU are used by the IMPALA learner,
and each MineRL environment receives 2 CPUs allowing us to spawn a total
of 10 rollout workers (see `num_workers` parameter in the training script).
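As a quick sanity check on the numbers above (assuming 6 vCPUs for Standard_NC6 and 2 vCPUs for STANDARD_D2):

```python
# Sanity-check the CPU budget for the Ray cluster described above.
head_cpus = 6                # Standard_NC6 head node (6 vCPUs)
worker_cpus_per_node = 2     # STANDARD_D2 (2 vCPUs)
worker_nodes = 8
total_cpus = head_cpus + worker_cpus_per_node * worker_nodes
learner_cpus = 1             # the IMPALA learner uses one CPU (plus the GPU)
cpus_per_rollout_worker = 2  # num_cpus_per_worker in the training script
rollout_workers = (total_cpus - learner_cpus) // cpus_per_rollout_worker
print(total_cpus, rollout_workers)  # 22 10
```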
Lastly, the `RunDetails` widget displays information about the submitted
RL experiment, including a link to the Azure portal with more details.
```
from azureml.contrib.train.rl import ReinforcementLearningEstimator, WorkerConfiguration
from azureml.widgets import RunDetails
worker_config = WorkerConfiguration(
compute_target=cpu_cluster,
node_count=8,
environment=cpu_minecraft_env)
rl_est = ReinforcementLearningEstimator(
source_directory='files',
entry_script='minecraft_train.py',
compute_target=gpu_cluster,
environment=gpu_minecraft_env,
worker_configuration=worker_config,
max_run_duration_seconds=6 * 60 * 60,
shm_size=1024 * 1024 * 1024 * 30)
train_run = exp.submit(rl_est)
RunDetails(train_run).show()
# If you wish to cancel the run before it completes, uncomment and execute:
#train_run.cancel()
```
## Monitoring training progress
### View the Tensorboard
The Tensorboard can be displayed via the Azure Machine Learning service's
[Tensorboard API](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-monitor-tensorboard).
When running locally, please make sure to follow the instructions in the
link and install required packages. Running this cell will output a URL
for the Tensorboard.
Note that the training script sets the log directory when starting RLlib
via the `local_dir` parameter. `./logs` will automatically appear in
the downloadable files for a run. Since this script is executed in the
Ray head node's run, we need to get a reference to that run as shown below.
The Tensorboard API will continuously stream logs from the run.
**Note: It may take a couple of minutes after the run enters the "Running" state
before Tensorboard files are available; the board will refresh automatically.**
```
import time
from azureml.tensorboard import Tensorboard
head_run = None
timeout = 60
while timeout > 0 and head_run is None:
timeout -= 1
try:
head_run = next(r for r in train_run.get_children() if r.id.endswith('head'))
except StopIteration:
time.sleep(1)
tb = Tensorboard([head_run], port=6007)
tb.start()
```
## Review results
Please ensure that the training run has completed before continuing with this section.
```
train_run.wait_for_completion()
print('Training run completed.')
```
**Please note:** If the final "episode_reward_mean" metric from the training run is negative,
the produced model does not solve the problem of navigating the maze well. You can view
the metric on the Tensorboard or in "Metrics" section of the head run in the Azure Machine Learning
portal. We recommend training a new model by rerunning the notebook starting from "Submitting a training run".
### Export final model
The key result from the training run is the final checkpoint
containing the state of the IMPALA trainer (model) upon meeting the
stopping criteria specified in `minecraft_train.py`.
Azure Machine Learning service offers the [Model.register()](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.model?view=azure-ml-py)
API which allows you to persist the model files from the
training run. We identify the directory containing the
final model written during the training run and register
it with Azure Machine Learning service. We use a Dataset
object to filter out the correct files.
```
import re
import tempfile
from azureml.core import Dataset
path_prefix = os.path.join(tempfile.gettempdir(), 'tmp_training_artifacts')
run_artifacts_path = os.path.join('azureml', head_run.id)
datastore = ws.get_default_datastore()
run_artifacts_ds = Dataset.File.from_files(datastore.path(os.path.join(run_artifacts_path, '**')))
cp_pattern = re.compile('.*checkpoint-\\d+$')
checkpoint_files = [file for file in run_artifacts_ds.to_path() if cp_pattern.match(file)]
# There should only be one checkpoint with our training settings...
final_checkpoint = os.path.dirname(os.path.join(run_artifacts_path, os.path.normpath(checkpoint_files[-1][1:])))
datastore.download(target_path=path_prefix, prefix=final_checkpoint.replace('\\', '/'), show_progress=True)
print('Download complete.')
from azureml.core.model import Model
model_name = 'final_model_minecraft_maze'
model = Model.register(
workspace=ws,
model_path=os.path.join(path_prefix, final_checkpoint),
model_name=model_name,
description='Model of an agent trained to navigate a lava maze in Minecraft.')
```
Models can be used through a variety of APIs. Please see the
[documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-and-where)
for more details.
### Test agent performance in a rollout
To observe the trained agent's behavior, it is a common practice to
view its behavior in a rollout. The previous reinforcement learning
tutorials explain rollouts in more detail.
The provided `minecraft_rollout.py` script loads the final checkpoint
of the trained agent from the model registered with Azure Machine Learning
service. It then starts a rollout on 4 different lava maze layouts, all
larger and thus more difficult than the maze the agent was trained on.
The script further records videos by replaying the agent's decisions
in [Malmo](https://github.com/microsoft/malmo). Malmo supports multiple
agents in the same environment, thus allowing us to capture videos that
depict the agent from another agent's perspective. The provided
`malmo_video_recorder.py` file and the Malmo Github repository have more
details on the video recording setup.
You can view the rewards for each rollout episode in the logs for the 'head'
run submitted below. In some episodes, the agent may fail to reach the goal
due to the higher level of difficulty - in practice, we could continue
training the agent on harder tasks starting with the final checkpoint.
```
script_params = {
'--model_name': model_name
}
rollout_est = ReinforcementLearningEstimator(
source_directory='files',
entry_script='minecraft_rollout.py',
script_params=script_params,
compute_target=gpu_cluster,
environment=gpu_minecraft_env,
shm_size=1024 * 1024 * 1024 * 30)
rollout_run = exp.submit(rollout_est)
RunDetails(rollout_run).show()
```
### View videos captured during rollout
To inspect the trained agent's behavior you can view the videos captured
during the rollout episodes. First, ensure that the rollout run has
completed.
```
rollout_run.wait_for_completion()
head_run_rollout = next(r for r in rollout_run.get_children() if r.id.endswith('head'))
print('Rollout completed.')
```
Next, you need to download the video files from the rollout run. We use a
Dataset to filter out the video files, which are packaged in tgz archives.
```
rollout_run_artifacts_path = os.path.join('azureml', head_run_rollout.id)
datastore = ws.get_default_datastore()
rollout_run_artifacts_ds = Dataset.File.from_files(datastore.path(os.path.join(rollout_run_artifacts_path, '**')))
video_archives = [file for file in rollout_run_artifacts_ds.to_path() if file.endswith('.tgz')]
video_archives = [os.path.join(rollout_run_artifacts_path, os.path.normpath(file[1:])) for file in video_archives]
datastore.download(
target_path=path_prefix,
prefix=os.path.dirname(video_archives[0]).replace('\\', '/'),
show_progress=True)
print('Download complete.')
```
Next, unzip the video files and rename them by the Minecraft mission seed used
(see `minecraft_rollout.py` for more details on how the seed is used).
```
import tarfile
import shutil
training_artifacts_dir = './training_artifacts'
video_dir = os.path.join(training_artifacts_dir, 'videos')
video_files = []
for tar_file_path in video_archives:
seed = tar_file_path[tar_file_path.index('rollout_') + len('rollout_'): tar_file_path.index('.tgz')]
tar = tarfile.open(os.path.join(path_prefix, tar_file_path).replace('\\', '/'), 'r')
tar_info = next(t_info for t_info in tar.getmembers() if t_info.name.endswith('mp4'))
tar.extract(tar_info, video_dir)
tar.close()
unzipped_folder = os.path.join(video_dir, next(f_ for f_ in os.listdir(video_dir) if not f_.endswith('mp4')))
video_file = os.path.join(unzipped_folder,'video.mp4')
final_video_path = os.path.join(video_dir, '{seed}.mp4'.format(seed=seed))
shutil.move(video_file, final_video_path)
video_files.append(final_video_path)
shutil.rmtree(unzipped_folder)
# Clean up any downloaded 'tmp' files
shutil.rmtree(path_prefix)
print('Local video files:\n', video_files)
```
Finally, run the cell below to display the videos in-line. In some cases,
the agent may struggle to find the goal since the maze size was increased
compared to training.
```
from IPython.core.display import display, HTML
index = 0
while index < len(video_files) - 1:
display(
HTML('\
<video controls alt="cannot display video" autoplay loop width=49%> \
<source src="{f1}" type="video/mp4"> \
</video> \
<video controls alt="cannot display video" autoplay loop width=49%> \
<source src="{f2}" type="video/mp4"> \
</video>'.format(f1=video_files[index], f2=video_files[index + 1]))
)
index += 2
if index < len(video_files):
display(
HTML('\
<video controls alt="cannot display video" autoplay loop width=49%> \
<source src="{f1}" type="video/mp4"> \
</video>'.format(f1=video_files[index]))
)
```
## Cleaning up
Below, you can find code snippets for your convenience to clean up any resources created as part of this tutorial that you don't wish to retain.
```
# to stop the Tensorboard, uncomment and run
#tb.stop()
# to delete the gpu compute target, uncomment and run
#gpu_cluster.delete()
# to delete the cpu compute target, uncomment and run
#cpu_cluster.delete()
# to delete the registered model, uncomment and run
#model.delete()
# to delete the local video files, uncomment and run
#shutil.rmtree(training_artifacts_dir)
```
## Next steps
This is currently the last introductory tutorial for Azure Machine Learning
service's Reinforcement
Learning offering. We would love to hear your feedback to build the features
you need!
```
import csv
import tensorflow as tf
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/bbc-text.csv \
-O /tmp/bbc-text.csv
vocab_size = 1000
embedding_dim = 16
max_length = 120
trunc_type='post'
padding_type='post'
oov_tok = "<OOV>"
training_portion = .8
sentences = []
labels = []
stopwords = [ "a", "about", "above", "after", "again", "against", "all", "am", "an", "and", "any", "are", "as", "at", "be", "because", "been", "before", "being", "below", "between", "both", "but", "by", "could", "did", "do", "does", "doing", "down", "during", "each", "few", "for", "from", "further", "had", "has", "have", "having", "he", "he'd", "he'll", "he's", "her", "here", "here's", "hers", "herself", "him", "himself", "his", "how", "how's", "i", "i'd", "i'll", "i'm", "i've", "if", "in", "into", "is", "it", "it's", "its", "itself", "let's", "me", "more", "most", "my", "myself", "nor", "of", "on", "once", "only", "or", "other", "ought", "our", "ours", "ourselves", "out", "over", "own", "same", "she", "she'd", "she'll", "she's", "should", "so", "some", "such", "than", "that", "that's", "the", "their", "theirs", "them", "themselves", "then", "there", "there's", "these", "they", "they'd", "they'll", "they're", "they've", "this", "those", "through", "to", "too", "under", "until", "up", "very", "was", "we", "we'd", "we'll", "we're", "we've", "were", "what", "what's", "when", "when's", "where", "where's", "which", "while", "who", "who's", "whom", "why", "why's", "with", "would", "you", "you'd", "you'll", "you're", "you've", "your", "yours", "yourself", "yourselves" ]
print(len(stopwords))
# Expected Output
# 153
with open("/tmp/bbc-text.csv", 'r') as csvfile:
reader = csv.reader(csvfile, delimiter=',')
next(reader)
for row in reader:
labels.append(row[0])
sentence = row[1]
for word in stopwords:
token = " " + word + " "
sentence = sentence.replace(token, " ")
sentences.append(sentence)
print(len(labels))
print(len(sentences))
print(sentences[0])
# Expected Output
# 2225
# 2225
# tv future hands viewers home theatre systems plasma high-definition tvs digital video recorders moving living room way people watch tv will radically different five years time. according expert panel gathered annual consumer electronics show las vegas discuss new technologies will impact one favourite pastimes. us leading trend programmes content will delivered viewers via home networks cable satellite telecoms companies broadband service providers front rooms portable devices. one talked-about technologies ces digital personal video recorders (dvr pvr). set-top boxes like us s tivo uk s sky+ system allow people record store play pause forward wind tv programmes want. essentially technology allows much personalised tv. also built-in high-definition tv sets big business japan us slower take off europe lack high-definition programming. not can people forward wind adverts can also forget abiding network channel schedules putting together a-la-carte entertainment. us networks cable satellite companies worried means terms advertising revenues well brand identity viewer loyalty channels. although us leads technology moment also concern raised europe particularly growing uptake services like sky+. happens today will see nine months years time uk adam hume bbc broadcast s futurologist told bbc news website. likes bbc no issues lost advertising revenue yet. pressing issue moment commercial uk broadcasters brand loyalty important everyone. will talking content brands rather network brands said tim hanlon brand communications firm starcom mediavest. reality broadband connections anybody can producer content. added: challenge now hard promote programme much choice. means said stacey jolna senior vice president tv guide tv group way people find content want watch simplified tv viewers. means networks us terms channels take leaf google s book search engine future instead scheduler help people find want watch. 
# kind channel model might work younger ipod generation used taking control gadgets play them. might not suit everyone panel recognised. older generations comfortable familiar schedules channel brands know getting. perhaps not want much choice put hands mr hanlon suggested. end kids just diapers pushing buttons already - everything possible available said mr hanlon. ultimately consumer will tell market want. 50 000 new gadgets technologies showcased ces many enhancing tv-watching experience. high-definition tv sets everywhere many new models lcd (liquid crystal display) tvs launched dvr capability built instead external boxes. one example launched show humax s 26-inch lcd tv 80-hour tivo dvr dvd recorder. one us s biggest satellite tv companies directtv even launched branded dvr show 100-hours recording capability instant replay search function. set can pause rewind tv 90 hours. microsoft chief bill gates announced pre-show keynote speech partnership tivo called tivotogo means people can play recorded programmes windows pcs mobile devices. reflect increasing trend freeing multimedia people can watch want want.
train_size = int(len(sentences) * training_portion)
train_sentences = sentences[:train_size]
train_labels = labels[:train_size]
validation_sentences = sentences[train_size:]
validation_labels = labels[train_size:]
print(train_size)
print(len(train_sentences))
print(len(train_labels))
print(len(validation_sentences))
print(len(validation_labels))
# Expected output (if training_portion=.8)
# 1780
# 1780
# 1780
# 445
# 445
tokenizer = Tokenizer(num_words = vocab_size, oov_token=oov_tok)
tokenizer.fit_on_texts(train_sentences)
word_index = tokenizer.word_index
train_sequences = tokenizer.texts_to_sequences(train_sentences)
train_padded = pad_sequences(train_sequences, padding=padding_type, maxlen=max_length)
print(len(train_sequences[0]))
print(len(train_padded[0]))
print(len(train_sequences[1]))
print(len(train_padded[1]))
print(len(train_sequences[10]))
print(len(train_padded[10]))
# Expected Output
# 449
# 120
# 200
# 120
# 192
# 120
validation_sequences = tokenizer.texts_to_sequences(validation_sentences)
validation_padded = pad_sequences(validation_sequences, padding=padding_type, maxlen=max_length)
print(len(validation_sequences))
print(validation_padded.shape)
# Expected output
# 445
# (445, 120)
label_tokenizer = Tokenizer()
label_tokenizer.fit_on_texts(labels)
training_label_seq = np.array(label_tokenizer.texts_to_sequences(train_labels))
validation_label_seq = np.array(label_tokenizer.texts_to_sequences(validation_labels))
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(24, activation='relu'),
tf.keras.layers.Dense(6, activation = 'softmax')
])
model.compile(loss='sparse_categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
model.summary()
num_epochs = 30
history = model.fit(train_padded, training_label_seq, epochs=num_epochs, validation_data=(validation_padded, validation_label_seq), verbose=2)
import matplotlib.pyplot as plt
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
plot_graphs(history, "accuracy")
plot_graphs(history, "loss")
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_sentence(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
e = model.layers[0]
weights = e.get_weights()[0]
print(weights.shape) # shape: (vocab_size, embedding_dim)
# Expected output
# (1000, 16)
```
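As a possible follow-up (not part of the original notebook), the embedding weights can be exported in the TSV format the TensorFlow Embedding Projector expects. This sketch uses small stand-in data in place of the `weights` and `reverse_word_index` computed above:

```python
import io
import numpy as np

# Sketch: export embedding weights in the two-file TSV format expected by
# the TensorFlow Embedding Projector (vectors + metadata). `weights` and
# `reverse_word_index` are stand-ins for the variables of the same names
# computed above.
weights = np.array([[0.0, 0.0], [0.1, 0.2], [0.3, 0.4]])  # (vocab_size, dim)
reverse_word_index = {1: 'hello', 2: 'world'}

vecs, meta = io.StringIO(), io.StringIO()
for i in range(1, weights.shape[0]):  # index 0 is the padding token, skip it
    meta.write(reverse_word_index[i] + '\n')
    vecs.write('\t'.join(str(x) for x in weights[i]) + '\n')

print(meta.getvalue())
```

In a real run you would write the two buffers to `vecs.tsv` and `meta.tsv` and upload them at projector.tensorflow.org.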
# Libraries and setup variables
```
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from utils import *
%matplotlib inline
sns.set()
```
### Loading the processed dataset
Here we'll load the data into a dataframe and run a series of initial exploratory analyses.
```
df_train = pd.read_csv('../data/interim/preprocessed_train.csv')
df_test = pd.read_csv('../data/interim/preprocessed_test.csv')
df = pd.concat([df_train, df_test], ignore_index=True)
# Education level, we'll group undergrad and postgrad
df['education'] = df['education'].apply(lambda x: 'college' if x in ['Undergraduate', 'Postgraduate'] else 'no-college')
# Just married or not married categories, simpler
df['marital stat'] = df['marital stat'].apply(lambda x: 'married' if x=='Married' else 'not married')
# Grouping occupations
well_paid_occupations = ['Professional specialty', 'Executive admin and managerial', 'Sales']
df['major occupation code'] = df['major occupation code'].apply(lambda x: 'well-paid occ' if x in well_paid_occupations else 'not well-paid occ')
# Grouping industries
well_paid_industries = ['Other professional services', 'Manufacturing-durable goods', 'Finance insurance and real estate']
df['major industry code'] = df['major industry code'].apply(lambda x: 'well-paid ind' if x in well_paid_industries else 'not well-paid ind')
# Separating householders from other types.
df['detailed household summary in household'] = df['detailed household summary in household'].apply(lambda x: x if x=='Householder' else 'Not householder')
# Private workers (I'm assuming this might be something like self employed)
df['class of worker'] = df['class of worker'].apply(lambda x: x if x=='Private' else 'Other')
# Grouping joint tax filers
df['tax filer stat'] = df['tax filer stat'].apply(lambda x: x if x=='Joint both under 65' else 'Other')
# Dropping the detailed industry and occupation recodes, these are numbers but not really
drop_columns = ['detailed industry recode', 'detailed occupation recode', 'year', 'veterans benefits',
"fill inc questionnaire for veteran's admin"]
df = df.drop(columns=drop_columns)
# Train and test
df['set'] = df['set'].apply(lambda x: 1 if x=='train' else 0)
# We want to one-hot the categorical columns
cont_columns, nominal_columns = columns_types(df=df)
# Isolate numeric columns
df_num_columns = df[cont_columns]
# One-hot categories
df_cat_columns = one_hot(df, excl_columns=['set', 'salary'])
# Save the target and set columns
df_excl_columns = df[['set', 'salary']]
# Put it back together
df = pd.concat([df_num_columns, df_cat_columns, df_excl_columns], axis=1)
# Split the dataset again and save under the processed folder
Y_train = df[df['set']==1][['salary']]
Y_test = df[df['set']==0][['salary']]
X_train = df[df['set']==1].drop(columns=['set', 'salary'])
X_test = df[df['set']==0].drop(columns=['set', 'salary'])
# Save
Y_train.to_csv('../data/processed/Y_train.csv', index=False)
Y_test.to_csv('../data/processed/Y_test.csv', index=False)
X_train.to_csv('../data/processed/X_train.csv', index=False)
X_test.to_csv('../data/processed/X_test.csv', index=False)
```
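The `one_hot` helper imported from `utils` is not shown here; with plain pandas, the same kind of encoding can be sketched as follows (illustrative only, using `pd.get_dummies`; the helper's actual behavior may differ):

```python
import pandas as pd

# Illustrative stand-in for the `one_hot` helper used above: one-hot encode
# every object-typed column except those listed in `excl_columns`.
def one_hot_sketch(df, excl_columns):
    cat_cols = [c for c in df.select_dtypes(include='object').columns
                if c not in excl_columns]
    return pd.get_dummies(df[cat_cols])

demo = pd.DataFrame({'education': ['college', 'no-college'],
                     'set': ['train', 'test']})
encoded = one_hot_sketch(demo, excl_columns=['set'])
print(list(encoded.columns))  # ['education_college', 'education_no-college']
```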
# Learning to play connect 4 using minimax Deep Q-learning
In this notebook we will train a reinforcement learning (RL) agent using minimax deep Q-learning on a classic game: Connect 4.
In Connect 4, your objective is to get 4 of your checkers in a row horizontally, vertically, or diagonally on the game board before your opponent does. On your turn, you “drop” one of your checkers into one of the columns at the top of the board, and your opponent then does the same. Each move can therefore serve either to advance your own line or to block your opponent from completing theirs.
See the [Kaggle competition](https://www.kaggle.com/c/connectx) for more background and the [thread](https://www.kaggle.com/c/connectx/discussion/129145) that discusses this notebook. This [high level presentation](https://docs.google.com/presentation/d/1bNwOMZq1_poMRm6zPEFtEuCTSYd5u8OD9HbM4X6PsuI/edit?usp=sharing) on using RL in board games may also be useful. I adapted code from some of the public notebooks but developed all of the RL logic myself.
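Before diving in, here is a toy sketch of the minimax Q-learning backup that the title refers to (illustrative only; a hypothetical function, not the notebook's implementation). In a zero-sum, turn-based game, the one-step target backs up the minimum of our Q-estimates over the opponent's replies, since the opponent is assumed to play the move that is worst for us.

```python
import numpy as np

def minimax_q_target(reward, done, opponent_q_values, discount=0.99):
    """One-step minimax Q-learning target for a zero-sum, turn-based game.

    `opponent_q_values` are our Q-estimates for the position the opponent
    faces after our move; the opponent is assumed to pick the reply that
    is worst for us, so we back up the minimum rather than the maximum.
    """
    if done:
        return reward
    return reward + discount * np.min(opponent_q_values)

# Non-terminal step: the opponent's best reply leaves us with Q = -0.5,
# so the target is approximately 0.99 * -0.5 = -0.495
print(minimax_q_target(0.0, False, np.array([0.2, -0.5, 0.1])))
```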
Most of the successful ideas in this competition are the result of my interaction with the wonderful people of DeepMind. Thank you all for teaching me Reinforcement Learning and so much more.

Last modified on Feb 10th 2020 (Tom Van de Wiele)
## Install dependencies
```
!pip install 'kaggle-environments>=0.1.6'
!pip install 'recordtype'
```
## Google Colab keyboard shortcuts
* Create a new cell : (CTRL + m) + a
* Delete a cell : (CTRL + m) + d
* Execute cell and move on to the next one: CTRL + ENTER
* Toggle the contents of cells under a title: CTRL + #
## Some Colab tips
Double click on the white input fields to toggle between hiding and showing the code. Try it out on this cell; it also works for text cells!
You can use the arrows on the left to toggle between hiding and showing the code for one/multiple cells (equivalent to CTRL + #).
## Imports
```
#@title Imports
import numpy as np
import pandas as pd
# See https://colab.research.google.com/notebooks/tensorflow_version.ipynb
%tensorflow_version 1.x
import tensorflow as tf
import time
from IPython.display import clear_output
from IPython.display import display
from IPython.display import Image
from kaggle_environments import evaluate as evaluate_game
from kaggle_environments import make as make_game
from kaggle_environments import utils as utils_game
from random import choice
from recordtype import recordtype
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import plot_model
```
## Utility functions
```
#@title Various utilities
ExperienceStep = recordtype('ExperienceStep', [
'game_id',
'current_network_input',
'action',
'next_network_input',
'last_episode_action',
'episode_reward',
])
# Collect user input - Modified from https://www.kaggle.com/marcovasquez/how-to-play-with-computer-and-check-winner
def get_input(user, observation, configuration):
ncol = configuration.columns
time.sleep(0.1)
  input1 = 'Input from player {}: '.format(user)
  while True:
    print('Enter Value from 1 to 7 (q to quit)')
    raw_input = input(input1)
    if raw_input == 'q':
      break  # Falls through to an implicit `return None`, signalling exit
    try:
      user_input = int(raw_input)
    except ValueError:
      print('Invalid input:', raw_input)
      continue
np_board = obs_to_board(observation, configuration)
valid_actions = np.where(np_board[0] == 0)[0]
if user_input <= 0 or user_input > ncol or (
user_input-1) not in valid_actions:
print('invalid input:', user_input)
print('Valid actions: {}'.format(valid_actions+1))
else:
return user_input-1
# Convert the 1D observation list to a 2D numpy array
def obs_to_board(observation, configuration):
return np.array(observation.board).reshape(
configuration.rows, configuration.columns)
def check_winner(observation):
'''
Source: https://www.kaggle.com/marcovasquez/how-to-play-with-computer-and-check-winner
This function returns the value of the winner.
INPUT: observation
  OUTPUT: 1 if player 1 won, 2 if player 2 won, None if there is no winner (yet)
'''
  line1 = observation.board[0:7] # top row (index 0 is the top-left cell)
line2 = observation.board[7:14]
line3 = observation.board[14:21]
line4 = observation.board[21:28]
line5 = observation.board[28:35]
line6 = observation.board[35:42]
board = [line1, line2, line3, line4, line5, line6]
# Check rows for winner
for row in range(6):
for col in range(4):
if (board[row][col] == board[row][col + 1] == board[row][col + 2] == (
board[row][col + 3])) and (board[row][col] != 0):
return board[row][col] #Return Number that match row
# Check columns for winner
for col in range(7):
for row in range(3):
if (board[row][col] == board[row + 1][col] == board[row + 2][col] == (
board[row + 3][col])) and (board[row][col] != 0):
return board[row][col] #Return Number that match column
# Check diagonal (top-left to bottom-right) for winner
for row in range(3):
for col in range(4):
if (board[row][col] == board[row + 1][col + 1] == board[
row + 2][col + 2] ==\
board[row + 3][col + 3]) and (board[row][col] != 0):
return board[row][col] #Return Number that match diagonal
# Check diagonal (bottom-left to top-right) for winner
for row in range(5, 2, -1):
for col in range(4):
if (board[row][col] == board[row - 1][col + 1] == (
board[row - 2][col + 2]) == board[row - 3][col + 3]) and (
board[row][col] != 0):
return board[row][col] #Return Number that match diagonal
# No winner: return None
return None
# Custom class to reuse data from subsequent interactions with the environment.
# FIFO experience buffer (also referred to as the replay buffer).
class ExperienceBuffer:
def __init__(self, buffer_size):
self.buffer_size = buffer_size
self.episode_offset = 0
self.data = []
self.episode_ids = np.array([])
def add(self, data):
episode_ids = np.array([d.game_id for d in data])
num_episodes = episode_ids[-1] + 1
if num_episodes > self.buffer_size:
# Keep most recent experience of the experience batch
data = data[
np.where(episode_ids == (num_episodes-self.buffer_size))[0][0]:]
self.data = data
self.episode_ids = episode_ids
self.episode_offset = 0
return
episode_ids = episode_ids + self.episode_offset
self.data = data + self.data
self.episode_ids = np.concatenate([episode_ids, self.episode_ids])
unique_episode_ids = pd.unique(self.episode_ids)
if unique_episode_ids.size > self.buffer_size:
cutoff_index = np.where(self.episode_ids == unique_episode_ids[
self.buffer_size])[0][0]
self.data = self.data[:cutoff_index]
self.episode_ids = self.episode_ids[:cutoff_index]
self.episode_offset += num_episodes
def get_all_data(self):
return self.data
def size(self):
return len(self.data)
def num_episodes(self):
return np.unique(self.episode_ids).size
```
## Let's play against a random agent to try out the game - q to exit the game
```
your_name = 'Tom' #@param {type:"string"}
play_against_random = False #@param ["False", "True"] {type:"raw"}
plot_resolution = 400 #@param {type:"slider", min:200, max:500, step:1}
# Here we define an agent that picks a random non-full column
def my_random_agent(observation, configuration):
return int(choice([c for c in range(
configuration.columns) if observation.board[c] == 0]))
def play_against_agent(opponent_agent):
# Play as first position against the opposing agent.
# modified from https://www.kaggle.com/marcovasquez/how-to-play-with-computer-and-check-winner
env = make_game("connectx", debug=False, configuration={"timeout": 10})
trainer = env.train([None, opponent_agent])
observation = trainer.reset()
while not env.done:
clear_output(wait=True) # Comment if you want to keep track of every action
print("{}'s color: Blue".format(your_name))
env.render(mode="ipython", width=plot_resolution, height=plot_resolution,
header=False, controls=False)
my_action = get_input(your_name, observation, env.configuration)
if my_action is None:
print("Exiting game after pressing q")
return
# import pdb; pdb.set_trace()
observation, reward, done, info = trainer.step(my_action)
#print(observation, reward, done, info)
if (check_winner(observation) == 1):
print("You Won, Amazing! \nGAME OVER")
elif (check_winner(observation) == 2):
print("The opponent Won! \nGAME OVER")
if (check_winner(observation) is None):
print("That is a draw between you and the opponent")
env.render(mode="ipython", width=plot_resolution, height=plot_resolution,
header=False, controls=False)
if play_against_random:
play_against_agent(my_random_agent)
```
## That was too easy, we should obviously train a deep reinforcement learning agent to play against.
In this tutorial we will be using Q-learning to train an agent. [How does Q-learning work?](https://en.wikipedia.org/wiki/Q-learning) The Q-values represent the expected terminal reward when taking action **a** in board state **s**. A win is associated with a terminal reward of 1, a loss with a reward of 0, and a draw with a reward of 0.5. The learning process looks as follows:
Randomly initialize Q(s) - a mapping from the current board state to the Q-values of all actions for the current state.
While True:
1. Play a fixed number of games using self-play and add the experience to the experience buffer
2. Update the Q network using the experience from the experience buffer
3. Evaluate the trained agent against a random agent to find out if we are making progress
Attention! This notebook intentionally introduces two obvious bugs and one configuration setting that is clearly suboptimal. The bugs and their fixes are listed at the bottom of the notebook, but please try to find and fix them yourself first!
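The per-transition update behind this scheme can be sketched in tabular form (a toy sketch with integer states and actions; the notebook approximates Q with a neural network instead, and the minimax/negamax variant used later flips the opponent's value):

```python
import numpy as np

# Tabular Q-learning sketch. Rewards follow the 1 / 0.5 / 0 win/draw/loss
# scheme described above; the discount is 1 since only terminal rewards
# exist in Connect 4.
def q_update(Q, s, a, reward, s_next, terminal, lr=0.1):
    # Target is the terminal reward at episode end, otherwise the best
    # estimated value of the successor state.
    target = reward if terminal else Q[s_next].max()
    Q[s, a] += lr * (target - Q[s, a])
    return Q

Q = np.zeros((4, 2))  # 4 toy states, 2 toy actions
Q = q_update(Q, s=0, a=1, reward=1.0, s_next=3, terminal=True)  # a win
```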
```
#@title Define possible network architectures of the Q-network
# Basic MLP Connect 4 network
def mlp_connect4(config):
mlp_layers = config['mlp_layers']
inputs = Input((6, 7, 3), name='encoded_board')
x = inputs
# Flatten the input
x = Flatten()(x)
# MLP layers - sigmoid activation on the final layer
for i, layer_size in enumerate(mlp_layers):
x = Dense(layer_size, activation='linear')(x)
if i < (len(mlp_layers)-1):
x = Activation('relu')(x)
else:
x = Activation('sigmoid', name='Q-values')(x)
outputs = x
return (inputs, outputs)
# Basic convolutional Connect 4 network
def convnet_connect4(config):
inputs = Input((6, 7, 3), name='encoded_board')
x = inputs
# Convolutional layers
for i, (filters, kernel, strides) in enumerate(
config['filters_kernels_strides']):
x = Conv2D(filters=filters, kernel_size=kernel, strides=strides,
padding='same', activation='linear')(x)
x = Activation('relu')(x)
# Flatten the activations
x = Flatten()(x)
# MLP layers - sigmoid activation on the final layer
for i, layer_size in enumerate(config['mlp_layers']):
if i < (len(config['mlp_layers'])-1):
# Non final fully connected layers
x = Dense(layer_size, activation='linear')(x)
x = Activation('relu')(x)
else:
# Head of the network
x = Dense(layer_size, activation='linear')(x)
outputs = Activation('sigmoid', name='Q-values')(x)
return (inputs, outputs)
#@title Network evaluation and action selection
# Convert the raw observation to the network input - this is the place to add
# engineered features
def obs_to_network_input(observation, configuration, player_id):
board = np.array(observation.board).reshape(
configuration.rows, configuration.columns)
# One hot encoding of the inputs (empty, player 1, player 2)
obs_input = (np.arange(3) == board[..., None]).astype(float)
# Swap player 1 and player 2 positions? The network always assumes that 'my'
# stones (player_id) come first
if player_id == 1:
tmp = obs_input[:, :, 1].copy()
obs_input[:, :, 1] = obs_input[:, :, 2]
obs_input[:, :, 2] = tmp
return obs_input
# Predict network outputs where the number of inputs can be large. Use batching
# when there are more inputs than the max_batch_size
def my_keras_predict(model, inputs, max_batch_size=10000):
num_inputs = inputs.shape[0]
num_batches = int(np.ceil(num_inputs/max_batch_size))
outputs = []
for i in range(num_batches):
end_id = num_inputs if i == (num_batches-1) else (i+1)*max_batch_size
batch_inputs = inputs[i*max_batch_size:end_id]
if tf.__version__[0] == '2':
batch_outputs = model(batch_inputs)
batch_outputs = batch_outputs.numpy()
else:
batch_outputs = model.predict(batch_inputs)
outputs.append(batch_outputs)
  return np.concatenate(outputs)
def select_action_from_q(q_values, valid_actions, epsilon_greedy_parameter):
# Select the best valid or a valid exploratory action using epsilon-greedy
best_q = q_values[valid_actions].max()
best_a_ids = np.where(q_values[valid_actions] == best_q)[0]
best_a = valid_actions[np.random.choice(best_a_ids)]
exploratory_a = np.random.choice(valid_actions)
explore = np.random.uniform() < epsilon_greedy_parameter
action = exploratory_a if explore else best_a
return action
def get_agent_q_and_a(agent, board, epsilon_greedy_parameter):
# Obtain the Q-values
q_values = my_keras_predict(agent, np.expand_dims(board, 0))[0]
# Select an action from the Q-values
valid_actions = np.where(board[0, :, 0] == 1)[0]
action = select_action_from_q(q_values, valid_actions,
epsilon_greedy_parameter)
return q_values, action
#@title Self play
def self_play(agent, num_games, verbose, epsilon_greedy_parameter):
experience = []
for game_id in range(num_games):
this_game_data = []
env = make_game('connectx')
env.reset()
episode_step = 0
# Take actions until the game is terminated
while not env.done:
if env.state[0].status == 'ACTIVE':
player_id = 0
elif env.state[1].status == 'ACTIVE':
player_id = 1
# Obtain the Q-values and selected action for the current state
current_network_input = obs_to_network_input(
env.state[player_id].observation, env.configuration, player_id)
q_values, action = get_agent_q_and_a(
agent, current_network_input, epsilon_greedy_parameter)
if episode_step == 0 and game_id == 0:
print("Start move Q-values: {}".format(np.around(q_values, 3)))
env.step([int(action) if i == player_id else None for i in [0, 1]])
# Store the state transition data - swap the player id!
next_network_input = obs_to_network_input(
env.state[player_id].observation, env.configuration, 1-player_id)
this_game_data.append(ExperienceStep(
game_id,
current_network_input,
action,
next_network_input,
False, # Last episode action, overwritten at the end of the episode
np.nan, # Terminal reward, overwritten at the end of the episode
))
episode_step += 1
# Overwrite the terminal reward for all actions
first_terminal_reward = env.state[0].reward
for i in range(len(this_game_data)):
if i % 2 == 0:
this_game_data[i].episode_reward = first_terminal_reward
else:
this_game_data[i].episode_reward = 1-first_terminal_reward
# Update statistics which can not be computed before the episode is over.
this_game_data[-1].last_episode_action = True # Last episode action
experience.extend(this_game_data)
if verbose and game_id % 10 == 9:
print('Completed playing game {} of {}'.format(game_id+1, num_games))
return experience
#@title One step minimax Q-learning target computation - Source of name: https://arxiv.org/pdf/1901.00137.pdf. To be more precise, we actually use negamax Q-learning since we rely on the property that the game is a two-player zero sum game.
def one_step_minimax_q_targets(next_q_vals, experience):
next_q_minimax_star = (1-next_q_vals.max(1)).tolist() # Negamax - 2 player 0-sum
terminal_rewards = [e.episode_reward for e in experience]
last_episode_actions = [e.last_episode_action for e in experience]
target_qs = np.array([t if l else n for(t, l, n) in zip(
terminal_rewards, last_episode_actions, next_q_minimax_star)])
return target_qs
# N-step minimax Q-learning target computation
def minimax_q_n_step_targets(this_q_vals, next_q_vals, experience,
return_steps_trace):
# Collect generic experience values of interest
return_steps, lambda_ = return_steps_trace
actions = [e.action for e in experience]
terminal_actions = np.concatenate(
[np.array([e.last_episode_action for e in experience]),
np.zeros((return_steps), dtype=np.bool)])
terminal_rewards = 1-np.array([e.episode_reward for e in experience])
num_experience_steps = len(experience)
# Determine if the actions are exploratory or greedy
greedy_actions = np.zeros((num_experience_steps+return_steps),
dtype=np.bool)
for i in range(num_experience_steps):
valid_actions = np.where(experience[i].current_network_input[:, :, 0].sum(
0) > 0)[0]
best_valid_q = this_q_vals[i][valid_actions].max()
greedy_actions[i] = best_valid_q == this_q_vals[i, actions[i]]
# Extend and overwrite next q vals to include the true rewards
next_q_vals = np.concatenate([next_q_vals, -999*np.ones((
return_steps, next_q_vals.shape[1]))])
next_q_vals[terminal_actions] = np.tile(
np.expand_dims(terminal_rewards[terminal_actions[
:num_experience_steps]], -1), [1, next_q_vals.shape[1]])
# Consider returns, up to 'return_steps' into the future. The trace is cut
# at episode boundaries and before considering a non-exploratory action
consider_targets = np.ones((num_experience_steps), dtype=np.bool)
target_lambda_sums = np.zeros((num_experience_steps))
target_weighted_sums = np.zeros((num_experience_steps))
trace_multiplier = 1
for i in range(return_steps):
best_qs = next_q_vals[i:(i+num_experience_steps)].max(-1)
if i % 2 == 0:
best_qs = 1-best_qs
target_weighted_sums[consider_targets] += trace_multiplier*best_qs[
consider_targets]
target_lambda_sums[consider_targets] += trace_multiplier
# Don't consider any further targets if this was the episode terminal
# action
consider_targets = np.logical_and(
consider_targets, np.logical_not(
terminal_actions[i:(i+num_experience_steps)]))
# Don't consider the target if the next action is exploratory
# Extending greedy actions with return_steps False values makes sure
# we don't consider N step returns where there is no data
consider_targets = np.logical_and(
consider_targets, greedy_actions[(i+1):(i+1+num_experience_steps)])
trace_multiplier *= lambda_
targets = target_weighted_sums/target_lambda_sums
return targets
# Get the q-learning observations and targets
def minimax_q_learning(model, experience, nan_coding_value, return_steps_trace):
# Evaluate the Q-values of the current and next state for all observations
num_actions = experience[0].current_network_input.shape[1]
num_steps = len(experience)
this_states = np.stack([e.current_network_input for e in experience])
next_states = np.stack([e.next_network_input for e in experience])
this_q_vals = my_keras_predict(model, this_states)
next_q_vals = my_keras_predict(model, next_states)
# Filter out next Q-values where the next action is not valid. Filtered out
# since every target computation performs a max operation.
# This was an unintended bug in the original version of the notebook!
next_q_vals[next_states[:, 0, :, 0] == 0] = -1
# Compute the target Q-values
num_return_steps = return_steps_trace[0]
if num_return_steps == 1:
target_qs = one_step_minimax_q_targets(next_q_vals, experience)
else:
target_qs = minimax_q_n_step_targets(this_q_vals, next_q_vals, experience,
return_steps_trace)
# Don't learn about non acted Q-values
all_target_qs = nan_coding_value*np.ones([num_steps, num_actions])
# Set the targets for the actions that were selected
actions = [e.action for e in experience]
all_target_qs[np.arange(num_steps), actions] = target_qs
return this_states, all_target_qs
# Masked mse loss - values equal to mask_val are ignored in the loss
def masked_mse(y, p, mask_val):
mask = K.cast(K.not_equal(y, mask_val), K.floatx())
if tf.__version__[0] == '2':
masked_loss = tf.losses.mse(y*mask, p*mask)
else:
mask = K.cast(mask, 'float32')
masked_loss = K.mean(tf.math.square(p*mask - y*mask), axis=-1)
# masked_loss = tf.compat.v1.losses.mean_squared_error(y*mask, p*mask)
return masked_loss
# Make the masked mse loss
def make_masked_mse(nan_coding_value):
def loss(y, p):
return masked_mse(y, p, mask_val=nan_coding_value)
return loss
# Update the Q-network by minimzing the difference with the target q-values
def update_agent(experience, agent, config):
nan_coding_value = config['nan_coding_value']
return_steps_trace = config['return_steps_trace']
x_train, y_train = minimax_q_learning(
agent, experience, nan_coding_value, return_steps_trace)
adam = Adam(lr=config['learning_rate'])
agent.compile(optimizer=adam, loss=make_masked_mse(nan_coding_value))
agent.fit(
x_train,
y_train,
batch_size=config['batch_size'],
epochs=config['num_epochs'],
verbose=config['verbose_fit']
)
#@title Evaluate a greedy agent against the random agent
def eval_greedy_versus_random(agent, num_eval_games):
rewards = []
env = make_game('connectx', debug=False)
my_agent_starts = False
for i in range(num_eval_games):
my_agent_starts = not my_agent_starts # Alternate start turns
player_id = int(not my_agent_starts)
episode_step = 0
while not env.done:
if env.state[0].status == 'ACTIVE':
active = 0
elif env.state[1].status == 'ACTIVE':
active = 1
current_network_input = obs_to_network_input(
env.state[active].observation, env.configuration, player_id)
if active == player_id:
# Take the greedy action of the agent
_, action = get_agent_q_and_a(
agent, current_network_input, epsilon_greedy_parameter=0)
else:
# Take a random valid action
valid_actions = np.where(current_network_input[0, :, 0] == 1)[0]
action = np.random.choice(valid_actions)
env.step([int(action) if i == active else None for i in [0, 1]])
episode_step += 1
rewards.append(env.state[player_id].reward)
env.reset()
return np.array(rewards).mean()
#@title Main logic - interrupt manually since this will run forever. Without bugs, the agents should consistently beat the random agent > 90% of the time after <2000 games and eventually win all games with slightly better configuration settings. A non-buggy agent usually prefers central opening moves (higher Q-value). The buggy agent has no action preference at any state (hint to find one of the bugs!). Second hint: does the printed number of transitions look weird to you? It should! As training progresses, the games should get longer on average since the agent learns more advanced strategies.
# The network weights are saved in this session which is bound to be brittle - store to and load from file when expanding on this logic!
reset_agent_weights = False #@param ["False", "True"] {type:"raw"}
reset_experience_buffer = False #@param ["False", "True"] {type:"raw"}
plot_agent_architecture = True #@param ["False", "True"] {type:"raw"}
config = {
'model': [mlp_connect4, convnet_connect4][1],
'model_config': {
'filters_kernels_strides': [
(32, 3, 1), (32, 3, 2), (32, 3, 2)],
'mlp_layers': [64, 7],
},
'num_self_play_games_per_iteration': 200,
'verbose_self_play': False, # Show self-play game id progress?
'epsilon_greedy_parameter': 0.2,
'max_experience_buffer_games': 2000,
'num_learning_updates_per_iteration': 2,
'learning_config': {
# Return-steps: N in N-step returns and lambda (only for N > 1)
'return_steps_trace': (20, 0.9),
'nan_coding_value': -999,
'learning_rate': 5e-4,
'batch_size': 32,
'num_epochs': 2,
'verbose_fit': False, # Show learning progress?
},
'num_evaluation_games': 50,
}
# Reset the agent weights when the option is selected or when the agent is not
# defined yet - This initializes the network.
if 'my_trained_agent' not in locals() or reset_agent_weights:
inputs, outputs = config['model'](config['model_config'])
my_trained_agent = Model(inputs=inputs, outputs=outputs)
if plot_agent_architecture:
plot_model(my_trained_agent, show_shapes=True, show_layer_names=True,
to_file='model.png')
display(Image(retina=True, filename='model.png'))
# Clear the experience buffer when the option is selected or when the buffer is
# not defined yet.
if 'experience_buffer' not in locals() or reset_experience_buffer:
experience_buffer = ExperienceBuffer(config['max_experience_buffer_games'])
while True:
# 1) Add self-play experience to the experience replay buffer.
experience = self_play(my_trained_agent,
config['num_self_play_games_per_iteration'],
config['verbose_self_play'],
config['epsilon_greedy_parameter'],
)
experience_buffer.add(experience)
# 2) Update the Q network using the experience from the experience buffer
print("Experience buffer size: {} transitions ({} episodes)".format(
experience_buffer.size(), experience_buffer.num_episodes()))
for _ in range(config['num_learning_updates_per_iteration']):
update_agent(experience_buffer.get_all_data(), my_trained_agent,
config['learning_config'])
# 3) Evaluate the greedy trained agent against a random agent to see if we are
# making progress
mean_eval_reward = eval_greedy_versus_random(
my_trained_agent, config['num_evaluation_games'])
print("Mean reward against random agent in {} games: {}".format(
config['num_evaluation_games'], mean_eval_reward))
print("")
```
## Let's play against our trained agent to see if we can outsmart it!
```
play_against_trained = True #@param ["False", "True"] {type:"raw"}
plot_resolution = 400 #@param {type:"slider", min:200, max:500, step:1}
print(my_trained_agent.predict(np.ones((1, 6, 7, 3))))
# Here we define an agent that picks the best action according to the trained
# Q-value network. There is a lot of code duplication with play_against_agent
# because I ran into issues making the trained agent accessible to the Graph.
# Always play as first position against the opposing agent.
# Adapted from https://www.kaggle.com/marcovasquez/how-to-play-with-computer-and-check-winner
def play_against_trained_agent(agent):
env = make_game("connectx", debug=False, configuration={"timeout": 10})
human_starts = np.random.uniform() > 0.5
human_id = int(not human_starts)
human_move = human_starts
while not env.done:
if human_move:
clear_output(wait=True) # Comment if you want to keep track of every action
print("{}'s color: {}".format(
your_name, "Blue" if human_starts else "Grey"))
env.render(mode="ipython", width=plot_resolution, height=plot_resolution,
header=False, controls=False)
observation = env.state[human_id].observation
# Plot the opponent q-values when they are available
if (np.array(observation.board) == 1).sum() > 0:
print("Previous step agent Q-values: {}".format(np.round(q_values, 2)))
action = get_input(your_name, observation, env.configuration)
if action is None:
print("Exiting game after pressing q")
return
env.step([int(action), None] if human_starts else [None, int(action)])
else:
current_network_input = obs_to_network_input(
env.state[1-human_id].observation, env.configuration,
player_id=1-human_id)
# Take the greedy action of the agent
q_values, action = get_agent_q_and_a(
agent, current_network_input, epsilon_greedy_parameter=0)
env.step([None, int(action)] if human_starts else [int(action), None])
human_move = not human_move
observation = env.state[human_id].observation
if (check_winner(observation) == (human_id+1)):
print("You Won, Amazing! \nGAME OVER")
elif (check_winner(observation) == (2-human_id)):
print("The agent Won! \nGAME OVER")
if (check_winner(observation) is None):
print("That is a draw between you and the agent")
env.render(mode="ipython", width=plot_resolution, height=plot_resolution,
header=False, controls=False)
if play_against_trained:
play_against_trained_agent(my_trained_agent)
```
## Advanced - suggestions on how to make your agent stronger and let the genie out of the bottle.
* Use a [convolutional network](https://keras.io/examples/mnist_cnn/) instead of a multi-layer perceptron (logic already provided - you only need to change the 'model' setting) - experiment with what architecture works best.
* Use N-step returns (N=20, lambda=0.9 is a good start setting) instead of one step returns (logic already provided - you only need to change the 'return_steps_trace' setting).
* Make use of the vertical symmetry of the game - double the learning experience and make the acting more consistent.
* More directed exploration - e.g. [Boltzmann exploration](https://automaticaddison.com/boltzmann-distribution-and-epsilon-greedy-search/)
* Evaluate against the previous iteration: define a new agent iteration if we confidently beat the previous iteration (e.g. >= 70% of the rewards)
* Add the evaluation experience to the experience replay buffer - all experience is valuable since the environment interaction is expensive!
* Checkpoint an agent when it is significantly better than the current iteration and play against all previous and the current checkpoint in self-play
* Override the Q-values of actions that lead to a certain win or loss under perfect play of the opponent.
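The vertical-symmetry suggestion above can be sketched as a simple data augmentation (a hypothetical sketch; `mirror_transition` is not part of the notebook's code): every stored transition is also valid for the left-right mirrored board, so each self-play game yields twice the learning experience.

```python
import numpy as np

def mirror_transition(network_input, action, num_columns=7):
    # network_input has shape (rows, columns, channels), as produced by
    # obs_to_network_input; flip the column axis and mirror the action.
    mirrored_input = network_input[:, ::-1, :].copy()
    mirrored_action = num_columns - 1 - action
    return mirrored_input, mirrored_action

example = np.zeros((6, 7, 3))
example[5, 0, 1] = 1  # a stone in the leftmost column
mirrored, mirrored_action = mirror_transition(example, action=0)
# The mirrored stone sits in the rightmost column, and the mirrored
# action points at column 6.
```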
```
# Intentionally blank - cheat sheet below
```
## Cheat sheet: Fixed bugs and obvious areas for improvements in the original code
####Obvious bugs:
* Bug in encoding of who is playing - **obs_to_network_input** - off by one in player id. Root cause: spending too much time in R. The result is that the network does not know who is playing. Fix:
```
def obs_to_network_input(observation, configuration, player_id):
board = np.array(observation.board).reshape(
configuration.rows, configuration.columns)
# One hot encoding of the inputs (empty, player 1, player 2)
obs_input = (np.arange(3) == board[..., None]).astype(float)
# Swap player 1 and player 2 positions? The network always assumes that 'my'
# stones (player_id) come first
if player_id == 1: # <---------------------
tmp = obs_input[:, :, 1].copy()
obs_input[:, :, 1] = obs_input[:, :, 2]
obs_input[:, :, 2] = tmp
return obs_input
```
* Not using the true reward signal - in **one_step_minimax_q_targets** - Change to:
```
target_qs = np.array([t if l else n for(t, l, n) in zip(
terminal_rewards, last_episode_actions, next_q_minimax_star)])
```
####Obvious areas for improvement:
* Not exploring when collecting experience - set epsilon greedy to something like 0.2 - decay as training progresses according to theory.
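A decay schedule for the epsilon-greedy parameter could look like this (a hypothetical linear schedule; the iteration count and endpoints are illustrative, not tuned values):

```python
# Explore heavily early in training and act (near-)greedily once the
# Q-estimates become reliable.
def decayed_epsilon(iteration, start=0.2, end=0.02, decay_iterations=100):
    fraction = min(iteration / decay_iterations, 1.0)
    return start + fraction * (end - start)
```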
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content-dl/blob/main/tutorials/W1D2_LinearDeepLearning/student/W1D2_Tutorial3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Tutorial 3: Deep linear neural networks
**Week 1, Day 2: Linear Deep Learning**
**By Neuromatch Academy**
__Content creators:__ Andrew Saxe, Saeed Salehi, Spiros Chavlis
__Content reviewers:__ Polina Turishcheva, Antoine De Comite
__Content editors:__ Anoop Kulkarni
__Production editors:__ Khalid Almubarak, Spiros Chavlis
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
<p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
---
# Tutorial Objectives
* Deep linear neural networks
* Learning dynamics and singular value decomposition
* Representational Similarity Analysis
* Illusory correlations & ethics
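As a quick warm-up for the learning-dynamics sections, singular value decomposition can be illustrated with NumPy (an illustrative aside, not part of the tutorial's own setup code):

```python
import numpy as np

# Any matrix factors as U @ diag(S) @ Vt, with the singular values S
# sorted in decreasing order; the product reconstructs the original
# matrix up to numerical precision.
W = np.array([[3.0, 0.0],
              [4.0, 5.0]])
U, S, Vt = np.linalg.svd(W)
reconstruction = U @ np.diag(S) @ Vt
```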
```
# @title Tutorial slides
# @markdown These are the slides for the videos in this tutorial
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/bncr8/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
```
---
# Setup
```
# Imports
import math
import torch
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import torch.nn as nn
import torch.optim as optim
!pip install git+https://github.com/NeuromatchAcademy/evaltools --quiet
from evaltools.airtable import AirtableForm
# @title Figure settings
from matplotlib import gridspec
from ipywidgets import interact, IntSlider, FloatSlider, fixed
from ipywidgets import FloatLogSlider, Layout, VBox
from ipywidgets import interactive_output
from mpl_toolkits.axes_grid1 import make_axes_locatable
import warnings
warnings.filterwarnings("ignore")
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/content-creation/main/nma.mplstyle")
# @title Plotting functions
def plot_x_y_hier_data(im1, im2, subplot_ratio=[1, 2]):
fig = plt.figure(figsize=(12, 5))
gs = gridspec.GridSpec(1, 2, width_ratios=subplot_ratio)
ax0 = plt.subplot(gs[0])
ax1 = plt.subplot(gs[1])
ax0.imshow(im1, cmap="cool")
ax1.imshow(im2, cmap="cool")
# plt.suptitle("The whole dataset as imshow plot", y=1.02)
ax0.set_title("Labels of all samples")
ax1.set_title("Features of all samples")
ax0.set_axis_off()
ax1.set_axis_off()
plt.tight_layout()
plt.show()
def plot_x_y_hier_one(im1, im2, subplot_ratio=[1, 2]):
fig = plt.figure(figsize=(12, 1))
gs = gridspec.GridSpec(1, 2, width_ratios=subplot_ratio)
ax0 = plt.subplot(gs[0])
ax1 = plt.subplot(gs[1])
ax0.imshow(im1, cmap="cool")
ax1.imshow(im2, cmap="cool")
ax0.set_title("Labels of a single sample")
ax1.set_title("Features of a single sample")
ax0.set_axis_off()
ax1.set_axis_off()
plt.tight_layout()
plt.show()
def plot_tree_data(label_list = None, feature_array = None, new_feature = None):
cmap = matplotlib.colors.ListedColormap(['cyan', 'magenta'])
n_features = 10
n_labels = 8
im1 = np.eye(n_labels)
if feature_array is None:
im2 = np.array([[1, 1, 1, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 1, 1, 1],
[1, 1, 1, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 1],
[0, 0, 1, 1, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 1, 0, 0],
[0, 1, 1, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 1, 0, 1]]).T
im2[im2 == 0] = -1
feature_list = ['can_grow',
'is_mammal',
'has_leaves',
'can_move',
'has_trunk',
'can_fly',
'can_swim',
'has_stem',
'is_warmblooded',
'can_flower']
else:
im2 = feature_array
if label_list is None:
label_list = ['Goldfish', 'Tuna', 'Robin', 'Canary',
'Rose', 'Daisy', 'Pine', 'Oak']
fig = plt.figure(figsize=(12, 7))
gs = gridspec.GridSpec(1, 2, width_ratios=[1, 1.35])
ax1 = plt.subplot(gs[0])
ax2 = plt.subplot(gs[1])
ax1.imshow(im1, cmap=cmap)
if feature_array is None:
implt = ax2.imshow(im2, cmap=cmap, vmin=-1.0, vmax=1.0)
else:
implt = ax2.imshow(im2[:, -n_features:], cmap=cmap, vmin=-1.0, vmax=1.0)
divider = make_axes_locatable(ax2)
cax = divider.append_axes("right", size="5%", pad=0.1)
cbar = plt.colorbar(implt, cax=cax, ticks=[-0.5, 0.5])
cbar.ax.set_yticklabels(['no', 'yes'])
ax1.set_title("Labels")
ax1.set_yticks(ticks=np.arange(n_labels))
ax1.set_yticklabels(labels=label_list)
ax1.set_xticks(ticks=np.arange(n_labels))
ax1.set_xticklabels(labels=label_list, rotation='vertical')
ax2.set_title("{} random Features".format(n_features))
ax2.set_yticks(ticks=np.arange(n_labels))
ax2.set_yticklabels(labels=label_list)
if feature_array is None:
ax2.set_xticks(ticks=np.arange(n_features))
ax2.set_xticklabels(labels=feature_list, rotation='vertical')
else:
ax2.set_xticks(ticks=[n_features-1])
ax2.set_xticklabels(labels=[new_feature], rotation='vertical')
plt.tight_layout()
plt.show()
def plot_loss(loss_array, title="Training loss (Mean Squared Error)", c="r"):
plt.figure(figsize=(10, 5))
plt.plot(loss_array, color=c)
plt.xlabel("Epoch")
plt.ylabel("MSE")
plt.title(title)
plt.show()
def plot_loss_sv(loss_array, sv_array):
n_sing_values = sv_array.shape[1]
sv_array = sv_array / np.max(sv_array)
cmap = plt.cm.get_cmap("Set1", n_sing_values)
_, (plot1, plot2) = plt.subplots(2, 1, sharex=True, figsize=(10, 10))
plot1.set_title("Training loss (Mean Squared Error)")
plot1.plot(loss_array, color='r')
plot2.set_title("Evolution of singular values (modes)")
for i in range(n_sing_values):
plot2.plot(sv_array[:, i], c=cmap(i))
plot2.set_xlabel("Epoch")
plt.show()
def plot_loss_sv_twin(loss_array, sv_array):
n_sing_values = sv_array.shape[1]
sv_array = sv_array / np.max(sv_array)
cmap = plt.cm.get_cmap("winter", n_sing_values)
fig = plt.figure(figsize=(10, 5))
ax1 = plt.gca()
ax1.set_title("Learning Dynamics")
ax1.set_xlabel("Epoch")
ax1.set_ylabel("Mean Squared Error", c='r')
ax1.tick_params(axis='y', labelcolor='r')
ax1.plot(loss_array, color='r')
ax2 = ax1.twinx()
ax2.set_ylabel("Singular values (modes)", c='b')
ax2.tick_params(axis='y', labelcolor='b')
for i in range(n_sing_values):
ax2.plot(sv_array[:, i], c=cmap(i))
fig.tight_layout()
plt.show()
def plot_ills_sv_twin(ill_array, sv_array, ill_label):
n_sing_values = sv_array.shape[1]
sv_array = sv_array / np.max(sv_array)
cmap = plt.cm.get_cmap("winter", n_sing_values)
fig = plt.figure(figsize=(10, 5))
ax1 = plt.gca()
ax1.set_title("Network training and the Illusory Correlations")
ax1.set_xlabel("Epoch")
ax1.set_ylabel(ill_label, c='r')
ax1.tick_params(axis='y', labelcolor='r')
ax1.plot(ill_array, color='r', linewidth=3)
ax1.set_ylim(-1.05, 1.05)
# ax1.set_yticks([-1, 0, 1])
# ax1.set_yticklabels(['False', 'Not sure', 'True'])
ax2 = ax1.twinx()
ax2.set_ylabel("Singular values (modes)", c='b')
ax2.tick_params(axis='y', labelcolor='b')
for i in range(n_sing_values):
ax2.plot(sv_array[:, i], c=cmap(i))
fig.tight_layout()
plt.show()
def plot_loss_sv_rsm(loss_array, sv_array, rsm_array, i_ep):
n_ep = loss_array.shape[0]
rsm_array = rsm_array / np.max(rsm_array)
sv_array = sv_array / np.max(sv_array)
n_sing_values = sv_array.shape[1]
cmap = plt.cm.get_cmap("winter", n_sing_values)
fig = plt.figure(figsize=(14, 5))
gs = gridspec.GridSpec(1, 2, width_ratios=[5, 3])
ax0 = plt.subplot(gs[1])
ax0.yaxis.tick_right()
implot = ax0.imshow(rsm_array[i_ep], cmap="Purples", vmin=0.0, vmax=1.0)
divider = make_axes_locatable(ax0)
cax = divider.append_axes("right", size="5%", pad=0.9)
cbar = plt.colorbar(implot, cax=cax, ticks=[])
cbar.ax.set_ylabel('Similarity', fontsize=12)
ax0.set_title("RSM at epoch {}".format(i_ep), fontsize=16)
# ax0.set_axis_off()
ax0.set_yticks(ticks=np.arange(n_sing_values))
ax0.set_yticklabels(labels=item_names)
# ax0.set_xticks([])
ax0.set_xticks(ticks=np.arange(n_sing_values))
ax0.set_xticklabels(labels=item_names, rotation='vertical')
ax1 = plt.subplot(gs[0])
ax1.set_title("Learning Dynamics", fontsize=16)
ax1.set_xlabel("Epoch")
ax1.set_ylabel("Mean Squared Error", c='r')
ax1.tick_params(axis='y', labelcolor='r', direction="in")
ax1.plot(np.arange(n_ep), loss_array, color='r')
ax1.axvspan(i_ep-2, i_ep+2, alpha=0.2, color='m')
ax2 = ax1.twinx()
ax2.set_ylabel("Singular values", c='b')
ax2.tick_params(axis='y', labelcolor='b', direction="in")
for i in range(n_sing_values):
ax2.plot(np.arange(n_ep), sv_array[:, i], c=cmap(i))
ax1.set_xlim(-1, n_ep+1)
ax2.set_xlim(-1, n_ep+1)
plt.show()
#@title Helper functions
atform = AirtableForm('appn7VdPRseSoMXEG',\
'W1D2_T3','https://portal.neuromatchacademy.org/api/redirect/to/f60119ed-1c22-4dae-9e18-b6a767f477e1')
def build_tree(n_levels, n_branches, probability, to_np_array=True):
"""Builds a tree
"""
assert 0.0 <= probability <= 1.0
tree = {}
tree["level"] = [0]
for i in range(1, n_levels+1):
tree["level"].extend([i]*(n_branches**i))
tree["pflip"] = [probability]*len(tree["level"])
tree["parent"] = [None]
k = len(tree["level"])-1
for j in range(k//n_branches):
tree["parent"].extend([j]*n_branches)
if to_np_array:
tree["level"] = np.array(tree["level"])
tree["pflip"] = np.array(tree["pflip"])
tree["parent"] = np.array(tree["parent"])
return tree
def sample_from_tree(tree, n):
""" Generates n samples from a tree
"""
items = [i for i, v in enumerate(tree["level"]) if v == max(tree["level"])]
n_items = len(items)
x = np.zeros(shape=(n, n_items))
rand_temp = np.random.rand(n, len(tree["pflip"]))
flip_temp = np.repeat(tree["pflip"].reshape(1, -1), n, 0)
samp = (rand_temp > flip_temp) * 2 - 1
for i in range(n_items):
j = items[i]
prop = samp[:, j]
while tree["parent"][j] is not None:
j = tree["parent"][j]
prop = prop * samp[:, j]
x[:, i] = prop.T
return x
def generate_hsd():
# building the tree
n_branches = 2 # 2 branches at each node
probability = .15 # flipping probability
n_levels = 3 # number of levels (depth of tree)
tree = build_tree(n_levels, n_branches, probability, to_np_array=True)
tree["pflip"][0] = 0.5
n_samples = 10000 # Sample this many features
tree_labels = np.eye(n_branches**n_levels)
tree_features = sample_from_tree(tree, n_samples).T
return tree_labels, tree_features
def linear_regression(X, Y):
"""Analytical Linear regression
"""
assert isinstance(X, np.ndarray)
assert isinstance(Y, np.ndarray)
M, Dx = X.shape
N, Dy = Y.shape
assert Dx == Dy
W = Y @ X.T @ np.linalg.inv(X @ X.T)
return W
def add_feature(existing_features, new_feature):
    """Appends a new feature column to an existing feature array"""
    assert isinstance(existing_features, np.ndarray)
    assert isinstance(new_feature, list)
    new_feature = np.array([new_feature]).T
    # use the `existing_features` argument (the original referenced the
    # module-level `tree_features` instead, leaving the parameter unused)
    return np.hstack((existing_features, new_feature))
def net_svd(model, in_dim):
"""Performs a Singular Value Decomposition on a given model weights
Args:
model (torch.nn.Module): neural network model
in_dim (int): the input dimension of the model
Returns:
U, Σ, V (Tensors): Orthogonal, diagonal, and orthogonal matrices
"""
W_tot = torch.eye(in_dim)
for weight in model.parameters():
W_tot = weight.detach() @ W_tot
U, SIGMA, V = torch.svd(W_tot)
return U, SIGMA, V
def net_rsm(h):
"""Calculates the Representational Similarity Matrix
Arg:
h (torch.Tensor): activity of a hidden layer
Returns:
(torch.Tensor): Representational Similarity Matrix
"""
rsm = h @ h.T
return rsm
def initializer_(model, gamma=1e-12):
"""(in-place) Re-initialization of weights
Args:
model (torch.nn.Module): PyTorch neural net model
gamma (float): initialization scale
"""
for weight in model.parameters():
n_out, n_in = weight.shape
sigma = gamma / math.sqrt(n_in + n_out)
nn.init.normal_(weight, mean=0.0, std=sigma)
def test_initializer_ex(seed):
torch.manual_seed(seed)
model = LNNet(5000, 5000, 1)
try:
ex_initializer_(model, gamma=1)
std = torch.std(next(iter(model.parameters())).detach()).item()
if -1e-5 <= (std - 0.01) <= 1e-5:
print("Well done! Seems to be correct!")
else:
print("Please double check your implementation!")
except:
print("Faulty Implementation!")
def test_net_svd_ex(seed):
torch.manual_seed(seed)
model = LNNet(8, 30, 100)
try:
U_ex, Σ_ex, V_ex = ex_net_svd(model, 8)
U, Σ, V = net_svd(model, 8)
if (torch.all(torch.isclose(U_ex.detach(), U.detach(), atol=1e-6)) and
torch.all(torch.isclose(Σ_ex.detach(), Σ.detach(), atol=1e-6)) and
torch.all(torch.isclose(V_ex.detach(), V.detach(), atol=1e-6))):
print("Well done! Seems to be correct!")
else:
print("Please double check your implementation!")
except:
print("Faulty Implementation!")
def test_net_rsm_ex(seed):
torch.manual_seed(seed)
x = torch.rand(7, 17)
try:
y_ex = ex_net_rsm(x)
y = x @ x.T
if (torch.all(torch.isclose(y_ex, y, atol=1e-6))):
print("Well done! Seems to be correct!")
else:
print("Please double check your implementation!")
except:
print("Faulty Implementation!")
#@title Set random seed
#@markdown Executing `set_seed(seed=seed)` sets the seed
# For DL it's critical to set the random seed so that students have a
# baseline to compare their results to expected results.
# Read more here: https://pytorch.org/docs/stable/notes/randomness.html
# Call `set_seed` function in the exercises to ensure reproducibility.
import random
import torch
def set_seed(seed=None, seed_torch=True):
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
# In case that `DataLoader` is used
def seed_worker(worker_id):
worker_seed = torch.initial_seed() % 2**32
np.random.seed(worker_seed)
random.seed(worker_seed)
#@title Set device (GPU or CPU). Execute `set_device()`
# especially if torch modules used.
# inform the user if the notebook uses GPU or CPU.
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("GPU is not enabled in this notebook. \n"
"If you want to enable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `GPU` from the dropdown menu")
else:
print("GPU is enabled in this notebook. \n"
"If you want to disable it, in the menu under `Runtime` -> \n"
"`Hardware accelerator.` and select `None` from the dropdown menu")
return device
SEED = 2021
set_seed(seed=SEED)
DEVICE = set_device()
```
This Colab notebook is GPU-free!
---
# Section 0: Prelude
Throughout this tutorial, we will use a linear neural net with a single hidden layer. We have also excluded `bias` from the layers.
**Important to remember**: The forward method returns the hidden activations in addition to the network output (prediction). We will need them in Section 3.
```
class LNNet(nn.Module):
"""A Linear Neural Net with one hidden layer
"""
def __init__(self, in_dim, hid_dim, out_dim):
"""
Args:
in_dim (int): input dimension
out_dim (int): output dimension
hid_dim (int): hidden dimension
"""
super().__init__()
self.in_hid = nn.Linear(in_dim, hid_dim, bias=False)
self.hid_out = nn.Linear(hid_dim, out_dim, bias=False)
def forward(self, x):
"""
Args:
x (torch.Tensor): input tensor
"""
hid = self.in_hid(x) # hidden activity
out = self.hid_out(hid) # output (prediction)
return out, hid
```
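As a quick sanity check (a standalone sketch that re-declares the class above, with an arbitrary batch size of 4), the forward pass returns both the prediction and the hidden activity:

```python
import torch
import torch.nn as nn

class LNNet(nn.Module):
    """A linear neural net with one hidden layer (as defined above)."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.in_hid = nn.Linear(in_dim, hid_dim, bias=False)
        self.hid_out = nn.Linear(hid_dim, out_dim, bias=False)

    def forward(self, x):
        hid = self.in_hid(x)            # hidden activity
        return self.hid_out(hid), hid   # (prediction, hidden activity)

model = LNNet(in_dim=8, hid_dim=30, out_dim=10000)
out, hid = model(torch.rand(4, 8))      # a batch of 4 inputs
print(out.shape, hid.shape)             # torch.Size([4, 10000]) torch.Size([4, 30])
```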
Apart from the `net_svd` and `net_rsm` functions, the training loop should be mostly familiar to you. We will define these functions in the coming sections.
**Important**: Note that the two functions are part of the inner training loop, so they are executed and their outputs recorded at every iteration.
```
def train(model, inputs, targets, n_epochs, lr, illusory_i=0):
"""Training function
Args:
model (torch nn.Module): the neural network
inputs (torch.Tensor): features (input) with shape `[batch_size, input_dim]`
targets (torch.Tensor): targets (labels) with shape `[batch_size, output_dim]`
n_epochs (int): number of training epochs (iterations)
lr (float): learning rate
illusory_i (int): index of illusory feature
Returns:
np.ndarray: record (evolution) of training loss
np.ndarray: record (evolution) of singular values (dynamic modes)
np.ndarray: record (evolution) of representational similarity matrices
np.ndarray: record of network prediction for the last feature
"""
in_dim = inputs.size(1)
losses = np.zeros(n_epochs) # loss records
modes = np.zeros((n_epochs, in_dim)) # singular values (modes) records
rs_mats = [] # representational similarity matrices
illusions = np.zeros(n_epochs) # prediction for the given feature
optimizer = optim.SGD(model.parameters(), lr=lr)
criterion = nn.MSELoss()
for i in range(n_epochs):
optimizer.zero_grad()
predictions, hiddens = model(inputs)
loss = criterion(predictions, targets)
loss.backward()
optimizer.step()
# Section 2 Singular value decomposition
U, Σ, V = net_svd(model, in_dim)
# Section 3 calculating representational similarity matrix
RSM = net_rsm(hiddens.detach())
# Section 4 network prediction of illusory_i inputs for the last feature
pred_ij = predictions.detach()[illusory_i, -1]
# logging (recordings)
losses[i] = loss.item()
modes[i] = Σ.detach().numpy()
rs_mats.append(RSM.numpy())
illusions[i] = pred_ij.numpy()
return losses, modes, np.array(rs_mats), illusions
```
We also need to take over the initialization of the weights. In PyTorch, [`nn.init`](https://pytorch.org/docs/stable/nn.init.html) provides functions to initialize tensors from a given distribution.
**Important**: To make sure the plots are correct (so the tutorial's message is delivered), we test your exercise implementations, but we do not use them for the plots and training.
## Coding Exercise 0: Re-initialization (Optional)
Complete the function `ex_initializer_`, such that the weights are sampled from the following distribution:
\begin{equation}
\mathcal{N}\left(\mu=0, ~~\sigma=\gamma \sqrt{\dfrac{1}{n_{in} + n_{out}}} \right)
\end{equation}
where $\gamma$ is the initialization scale, and $n_{in}$ and $n_{out}$ are respectively the input and output dimensions of the layer. The underscore ("_") in `ex_initializer_` and other function names denotes an "[in-place](https://discuss.pytorch.org/t/what-is-in-place-operation/16244/2)" operation.
**Important note**: Since we did not include bias in the layers, `model.parameters()` returns only the weights of each layer.
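To make the formula concrete before you implement it (a standalone sketch, with the layer sizes below chosen to match `test_initializer_ex`, which uses a 5000-by-5000 layer with $\gamma=1$):

```python
import math

def init_sigma(gamma, n_in, n_out):
    # sigma = gamma * sqrt(1 / (n_in + n_out)), as in the equation above
    return gamma * math.sqrt(1.0 / (n_in + n_out))

print(init_sigma(1.0, 5000, 5000))  # 0.01, the std that test_initializer_ex checks for
print(init_sigma(1e-12, 8, 30))     # the near-zero std used in this tutorial's training
```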
```
#add event to airtable
atform.add_event('Coding Exercise 0: Re-initialization')
def ex_initializer_(model, gamma=1e-12):
"""(in-place) Re-initialization of weights
Args:
model (torch.nn.Module): PyTorch neural net model
gamma (float): initialization scale
"""
for weight in model.parameters():
n_out, n_in = weight.shape
#################################################
## Define the standard deviation (sigma) for the normal distribution
# as given in the equation above
# Complete the function and remove or comment the line below
raise NotImplementedError("Function `ex_initializer_`")
#################################################
sigma = ...
nn.init.normal_(weight, mean=0.0, std=sigma)
## uncomment and run
# test_initializer_ex(SEED)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearDeepLearning/solutions/W1D2_Tutorial3_Solution_e471aef3.py)
---
# Section 1: Deep Linear Neural Nets
```
# @title Video 1: Intro to Representation Learning
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1iM4y1T7eJ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"DqMSU4Bikt0", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 1: Intro to Representation Learning')
display(out)
```
So far, depth just seems to slow down learning. And we know that a single nonlinear hidden layer (given enough neurons and infinite training samples) has the potential to approximate any function. So it seems fair to ask: **What is depth good for?**
One reason may be that shallow nonlinear neural networks hardly meet their true potential in practice. In contrast, deep neural nets are often surprisingly powerful in learning complex functions without sacrificing generalization. A core intuition behind deep learning is that deep nets derive their power through learning internal representations. How does this work? To address representation learning, we have to go beyond the 1D chain.
For this and the next couple of exercises, we use synthetically generated, hierarchically structured data produced by a *branching diffusion process* (see [this reference](https://www.pnas.org/content/pnas/suppl/2019/05/16/1820226116.DCSupplemental/pnas.1820226116.sapp.pdf) for more details).
<center><img src="https://raw.githubusercontent.com/ssnio/statics/main/neuromatch/tree.png" alt="Simple nn graph" width="600"/></center>
<center> hierarchically structured data (a tree) </center>
The inputs to the network are labels (i.e., names), while the outputs are the features (i.e., attributes). For example, for the label "Goldfish", the network has to learn all the (artificially created) features, such as "*can swim*", "*is cold-blooded*", "*has fins*", and more. Given that we are training on hierarchically structured data, the network could also learn the tree structure: that Goldfish and Tuna have rather similar features, and that Robin has more in common with Tuna than with Rose.
```
# @markdown #### Run to generate and visualize training samples from tree
tree_labels, tree_features = generate_hsd()
# convert (cast) data from np.ndarray to torch.Tensor
label_tensor = torch.tensor(tree_labels).float()
feature_tensor = torch.tensor(tree_features).float()
item_names = ['Goldfish', 'Tuna', 'Robin', 'Canary',
'Rose', 'Daisy', 'Pine', 'Oak']
plot_tree_data()
# dimensions
print("---------------------------------------------------------------")
print("Input Dimension: {}".format(tree_labels.shape[1]))
print("Output Dimension: {}".format(tree_features.shape[1]))
print("Number of samples: {}".format(tree_features.shape[0]))
```
To continue this tutorial, it is vital to understand the premise of our training data and what the task is. Therefore, please take your time to discuss them with your pod.
<center><img src="https://raw.githubusercontent.com/ssnio/statics/main/neuromatch/neural_net.png" alt="neural net" width="600"/></center>
<center> The neural network used for this tutorial </center>
## Interactive Demo 1: Training the deep LNN
Training a neural net on our data is straightforward. But before executing the next cell, recall the training loss curve from the previous tutorial.
```
# @markdown #### Make sure you execute this cell to train the network and plot
lr = 100.0 # learning rate
gamma = 1e-12 # initialization scale
n_epochs = 250 # number of epochs
dim_input = 8 # input dimension = `label_tensor.size(1)`
dim_hidden = 30 # hidden neurons
dim_output = 10000 # output dimension = `feature_tensor.size(1)`
# model instantiation
dlnn_model = LNNet(dim_input, dim_hidden, dim_output)
# weights re-initialization
initializer_(dlnn_model, gamma)
# training
losses, *_ = train(dlnn_model,
label_tensor,
feature_tensor,
n_epochs=n_epochs,
lr=lr)
# plotting
plot_loss(losses)
```
**Think!**
Why haven't we seen these "bumps" in training before? Should we look for them in the future? What do these bumps mean?
Recall from the previous tutorial that we are always interested in the learning rate ($\eta$) and initialization scale ($\gamma$) that give us the fastest yet stable (reliable) convergence. Try finding the optimal $\eta$ and $\gamma$ using the following widgets. More specifically, try a large $\gamma$ and see if you can recover the bumps by tuning $\eta$.
```
# @markdown #### Make sure you execute this cell to enable the widget!
def loss_lr_init(lr, gamma):
"""Trains and plots the loss evolution given lr and initialization
Args:
lr (float): learning rate
gamma (float): initialization scale
"""
n_epochs = 250 # number of epochs
dim_input = 8 # input dimension = `label_tensor.size(1)`
dim_hidden = 30 # hidden neurons
dim_output = 10000 # output dimension = `feature_tensor.size(1)`
# model instantiation
dlnn_model = LNNet(dim_input, dim_hidden, dim_output)
# weights re-initialization
initializer_(dlnn_model, gamma)
losses, *_ = train(dlnn_model,
label_tensor,
feature_tensor,
n_epochs=n_epochs,
lr=lr)
plot_loss(losses)
_ = interact(loss_lr_init,
lr = FloatSlider(min=1.0, max=200.0,
step=1.0, value=100.0,
continuous_update=False,
readout_format='.1f',
description='eta'),
epochs = fixed(250),
gamma = FloatLogSlider(min=-15, max=1,
step=1, value=1e-12, base=10,
continuous_update=False,
description='gamma')
)
```
---
# Section 2: Singular Value Decomposition (SVD)
```
# @title Video 2: SVD
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1bw411R7DJ", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"18oNWRziskM", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 2: SVD')
display(out)
```
In this section, we study the learning (training) dynamics we just saw. First, note that a linear neural network performs sequential matrix multiplications, which can be simplified to:
\begin{align}
\mathbf{y} &= \mathbf{W}_{L}~\mathbf{W}_{L-1}~\dots~\mathbf{W}_{1} ~ \mathbf{x} \\
&= (\prod_{i=1}^{L}{\mathbf{W}_{i}}) ~ \mathbf{x} \\
&= \mathbf{W}_{tot} ~ \mathbf{x}
\end{align}
where $L$ denotes the number of layers in our network.
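A minimal NumPy sketch of this collapse (layer sizes chosen arbitrarily): pushing an input through three weight matrices one at a time gives the same result as multiplying once by their product $\mathbf{W}_{tot}$.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((30, 8))    # layer 1: 8 -> 30
W2 = rng.standard_normal((20, 30))   # layer 2: 30 -> 20
W3 = rng.standard_normal((10, 20))   # layer 3: 20 -> 10
x = rng.standard_normal(8)

y_sequential = W3 @ (W2 @ (W1 @ x))  # layer-by-layer forward pass
W_tot = W3 @ W2 @ W1                 # collapse all layers into one matrix
y_collapsed = W_tot @ x

print(np.allclose(y_sequential, y_collapsed))  # True
```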
Learning through gradient descent is very similar to the evolution of a dynamical system: both are described by a set of differential equations. Dynamical systems often have a "time constant" that describes the rate of change, similar to the learning rate, only gradient descent evolves through epochs instead of time.
[Saxe et al. (2013)](https://arxiv.org/abs/1312.6120) showed that to analyze and understand the nonlinear learning dynamics of a deep LNN, we can use [Singular Value Decomposition (SVD)](https://en.wikipedia.org/wiki/Singular_value_decomposition) to decompose $\mathbf{W}_{tot}$ into orthogonal vectors, where the orthogonality of the vectors ensures their "individuality (independence)". This means we can break a deep wide LNN into multiple deep narrow LNNs, so that their activities are untangled from each other.
<br/>
__A Quick intro to SVD__
Any real-valued matrix $A$ (yes, ANY) can be decomposed (factorized) into three matrices:
\begin{equation}
\mathbf{A} = \mathbf{U} \mathbf{Σ} \mathbf{V}^{\top}
\end{equation}
where $U$ is an orthogonal matrix, $\Sigma$ is a diagonal matrix, and $V$ is again an orthogonal matrix. The diagonal elements of $\Sigma$ are called **singular values**.
The main difference between SVD and eigenvalue decomposition (EVD) is that EVD requires $A$ to be square and does not guarantee orthogonal eigenvectors. For a complex-valued matrix $A$, the factorization becomes $A = UΣV^*$, where $U$ and $V$ are unitary matrices.
We strongly recommend the [Singular Value Decomposition (the SVD)](https://www.youtube.com/watch?v=mBcLRGuAFUk) by the amazing [Gilbert Strang](http://www-math.mit.edu/~gs/) if you would like to learn more.
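A quick numerical illustration of the factorization (using NumPy for a standalone check; the exercise itself uses `torch.svd`): any real matrix reconstructs exactly from $U \Sigma V^{\top}$, and $U$ has orthonormal columns.

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((5, 3))          # any real-valued matrix

# note: np.linalg.svd returns V transposed (Vt), unlike torch.svd which returns V
U, S, Vt = np.linalg.svd(A, full_matrices=False)
A_rec = U @ np.diag(S) @ Vt              # U Σ V^T

print(np.allclose(A, A_rec))             # True: exact reconstruction
print(np.allclose(U.T @ U, np.eye(3)))   # True: U has orthonormal columns
print(S)                                 # singular values, non-negative and descending
```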
## Coding Exercise 2: SVD
The goal is to perform SVD on $\mathbf{W}_{tot}$ in every epoch and record the singular values (modes) during training.
Complete the function `ex_net_svd` by first calculating $\mathbf{W}_{tot} = \prod_{i=1}^{L}{\mathbf{W}_{i}}$ and then performing SVD on it. Please use PyTorch's [`torch.svd`](https://pytorch.org/docs/stable/generated/torch.svd.html) instead of NumPy's [`np.linalg.svd`](https://numpy.org/doc/stable/reference/generated/numpy.linalg.svd.html).
```
def ex_net_svd(model, in_dim):
"""Performs a Singular Value Decomposition on a given model weights
Args:
model (torch.nn.Module): neural network model
in_dim (int): the input dimension of the model
Returns:
U, Σ, V (Tensors): Orthogonal, diagonal, and orthogonal matrices
"""
W_tot = torch.eye(in_dim)
for weight in model.parameters():
#################################################
## Calculate the W_tot by multiplication of all weights
# and then perform SVD on the W_tot using pytorch's `torch.svd`
# Remember that weights need to be `.detach()` from the graph
# Complete the function and remove or comment the line below
raise NotImplementedError("Function `ex_net_svd`")
#################################################
W_tot = ...
U, Σ, V = ...
return U, Σ, V
#add event to airtable
atform.add_event('Coding Exercise 2: SVD')
## Uncomment and run
# test_net_svd_ex(SEED)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearDeepLearning/solutions/W1D2_Tutorial3_Solution_51b2e553.py)
```
# @markdown #### Make sure you execute this cell to train the network and plot
lr = 100.0 # learning rate
gamma = 1e-12 # initialization scale
n_epochs = 250 # number of epochs
dim_input = 8 # input dimension = `label_tensor.size(1)`
dim_hidden = 30 # hidden neurons
dim_output = 10000 # output dimension = `feature_tensor.size(1)`
# model instantiation
dlnn_model = LNNet(dim_input, dim_hidden, dim_output)
# weights re-initialization
initializer_(dlnn_model, gamma)
# training
losses, modes, *_ = train(dlnn_model,
label_tensor,
feature_tensor,
n_epochs=n_epochs,
lr=lr)
plot_loss_sv_twin(losses, modes)
```
**Think!**
In eigenvalue decomposition, the amount of variance explained by each eigenvector is proportional to the corresponding eigenvalue. What about the SVD? We see that gradient descent guides the network to first learn the features that carry more information (i.e., have higher singular values)!
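One way to probe the "variance explained" question numerically (a sketch with random data, not part of the tutorial pipeline): for a centered data matrix, the squared singular values are proportional to the eigenvalues of the covariance matrix, so each SVD mode's squared singular value measures the variance it captures.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
X = X - X.mean(axis=0)                   # center the data

_, S, _ = np.linalg.svd(X, full_matrices=False)
# eigenvalues of the sample covariance matrix, sorted descending
cov_eigvals = np.linalg.eigvalsh(X.T @ X / (X.shape[0] - 1))[::-1]

# squared singular values / (n - 1) equal the covariance eigenvalues
print(np.allclose(S**2 / (X.shape[0] - 1), cov_eigvals))  # True
```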
```
# @title Video 3: SVD - Discussion
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1t54y1J7Tb", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"JEbRPPG2kUI", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 3: SVD - Discussion')
display(out)
```
---
# Section 3: Representational Similarity Analysis (RSA)
```
# @title Video 4: RSA
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV19f4y157zD", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"YOs1yffysX8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event(' Video 4: RSA')
display(out)
```
The previous section ended with an interesting remark. SVD helped to break our deep "wide" linear neural net into 8 deep "narrow" linear neural nets. Although the naive interpretation could be that each narrow net is learning one item (e.g., Goldfish), the structure of the modes' evolution implies something deeper. The first narrow net (highest singular value) converges fastest, while the last four narrow nets converge almost simultaneously and have the smallest singular values. Perhaps one narrow net is learning the difference between "living things" and "objects", while another is learning the difference between fish and birds, and the narrow nets learning the more informative distinctions are trained first. So, how could we check this hypothesis?
Representational Similarity Analysis (RSA) is an approach that could help us understand the internal representation of our network. The main idea is that the activity of hidden units (neurons) in the network must be similar when the network is presented with similar input. For our dataset (hierarchically structured data), we expect the activity of neurons in the hidden layer to be more similar for Tuna and Canary, and less similar for Tuna and Oak.
If we perform RSA at every training iteration, we may be able to see whether the narrow nets are learning these representations, or whether our hypothesis is empty.
## Coding Exercise 3: RSA
The task is simple. We need to measure the similarity between the hidden layer activities $\mathbf{h} = \mathbf{x} \mathbf{W_1}$ for every input $\mathbf{x}$.
As a similarity measure, we can use the good old dot (scalar) product (which, for normalized vectors, equals the cosine similarity). To calculate the dot product between multiple vectors (which is our case), we can simply use matrix multiplication. Therefore the Representational Similarity Matrix for a batch of inputs can be calculated as follows:
\begin{equation}
RSM = \mathbf{H} \mathbf{H}^{\top}
\end{equation}
where $\mathbf{H} = \mathbf{X} \mathbf{W_1}$ is the activity of hidden neurons for a given batch $\mathbf{X}$.
Performing RSA at every iteration also lets us watch the evolution of representation learning.
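A toy NumPy sketch of the same computation (the 3-sample hidden activities below are made up for illustration): inputs that map to similar hidden activity get a large off-diagonal entry in $\mathbf{H}\mathbf{H}^{\top}$, dissimilar ones a small entry.

```python
import numpy as np

# hypothetical hidden activities: 3 samples (rows), 4 hidden units (columns)
H = np.array([[1.0, 0.9, 0.0, 0.1],   # sample A
              [0.9, 1.0, 0.1, 0.0],   # sample B: similar to A
              [0.0, 0.1, 1.0, 0.9]])  # sample C: dissimilar to A and B

RSM = H @ H.T                          # representational similarity matrix

print(RSM.shape)                       # (3, 3): one similarity per pair of samples
print(RSM[0, 1] > RSM[0, 2])           # True: A is more similar to B than to C
```

Note that the matrix is symmetric (`RSM[i, j] == RSM[j, i]`), since the dot product is commutative.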
```
def ex_net_rsm(h):
"""Calculates the Representational Similarity Matrix
Arg:
h (torch.Tensor): activity of a hidden layer
Returns:
(torch.Tensor): Representational Similarity Matrix
"""
#################################################
## Calculate the Representational Similarity Matrix
# Complete the function and remove or comment the line below
raise NotImplementedError("Function `ex_net_rsm`")
#################################################
rsm = ...
return rsm
#add event to airtable
atform.add_event(' Coding Exercise 3: RSA')
## Uncomment and run
# test_net_rsm_ex(SEED)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearDeepLearning/solutions/W1D2_Tutorial3_Solution_394c7d85.py)
Now we can train the model while recording the losses, modes, and RSMs at every iteration. First, use the epoch slider to explore the evolution of the RSM without changing the default learning rate ($\eta$) and initialization ($\gamma$). Then, as before, set $\eta$ and $\gamma$ to larger values to see whether you can retrieve the sequential, structured learning of representations.
```
#@markdown #### Make sure you execute this cell to enable widgets
def loss_svd_rsm_lr_gamma(lr, gamma, i_ep):
"""
Args:
lr (float): learning rate
gamma (float): initialization scale
i_ep (int): which epoch to show
"""
n_epochs = 250 # number of epochs
dim_input = 8 # input dimension = `label_tensor.size(1)`
dim_hidden = 30 # hidden neurons
dim_output = 10000 # output dimension = `feature_tensor.size(1)`
# model instantiation
dlnn_model = LNNet(dim_input, dim_hidden, dim_output)
# weights re-initialization
initializer_(dlnn_model, gamma)
# training
losses, modes, rsms, _ = train(dlnn_model,
label_tensor,
feature_tensor,
n_epochs=n_epochs,
lr=lr)
plot_loss_sv_rsm(losses, modes, rsms, i_ep)
i_ep_slider = IntSlider(min=10, max=241, step=1, value=61,
continuous_update=False,
description='Epoch',
layout=Layout(width='630px'))
lr_slider = FloatSlider(min=20.0, max=200.0, step=1.0, value=100.0,
continuous_update=False,
readout_format='.1f',
description='eta')
gamma_slider = FloatLogSlider(min=-15, max=1, step=1,
value=1e-12, base=10,
continuous_update=False,
description='gamma')
widgets_ui = VBox([lr_slider, gamma_slider, i_ep_slider])
widgets_out = interactive_output(loss_svd_rsm_lr_gamma,
{'lr': lr_slider,
'gamma': gamma_slider,
'i_ep': i_ep_slider})
display(widgets_ui, widgets_out)
```
Let's take a moment to analyze this further. A deep neural net learns representations, rather than a naive input-output mapping (a look-up table). This is thought to be the reason for deep neural nets' superior generalization and transfer-learning ability. Unsurprisingly, neural nets with no hidden layer are incapable of representation learning, even with extremely small initialization.
```
# @title Video 5: RSA - Discussion
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV18y4y1j7Xr", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"vprldATyq1o", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 5: RSA - Discussion')
display(out)
```
---
# Section 4: Illusory Correlations
```
# @title Video 6: Illusory Correlations
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1vv411E7Sq", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"RxsAvyIoqEo", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 6: Illusory Correlations')
display(out)
```
So far, everything looks great: all our training runs succeed (training loss converging to zero), and very quickly. We could even interpret the dynamics of our deep linear networks and relate them to the data. Unfortunately, this rarely happens in practice. Real-world problems often require very deep, nonlinear networks with many hyperparameters, and these complex networks ordinarily take hours, if not days, to train.
Let's recall the training loss curves. There was often a long plateau (where the weights are stuck at a saddle point), followed by a sudden drop. For very deep, complex neural nets, such plateaus can last for hours of training, and we often stop training early because we believe the model is "as good as it gets"! This raises the question of whether the network has learned all the "intended" hidden representations. More importantly, the network might find an illusory correlation between features that it has never seen together.
To better understand this, let's work through the next demonstration and exercise.
## Demonstration: Illusory Correlations
Our original dataset has 4 animals: Canary, Robin, Goldfish, and Tuna. These animals all have bones. Therefore, if we include a "has bones" feature, the network learns it at the second level (i.e. the second bump, the second mode convergence), when it learns the animal-plant distinction.
What if the dataset had Shark instead of Goldfish? Sharks don't have bones (their skeletons are made of cartilage, which is much lighter and more flexible than true bone). Then we would have a feature which is *True* (i.e. +1) for Tuna, Robin, and Canary, but *False* (i.e. -1) for all the plants and the shark! Let's see what the network does.
First, we add the new feature to the targets. We then train our LNN and, at every epoch, record the network's prediction for "sharks having bones".
<center><img src="https://raw.githubusercontent.com/ssnio/statics/main/neuromatch/shark_tree.png" alt="Simple nn graph" width="600"/></center>
```
# sampling new data from the tree
tree_labels, tree_features = generate_hsd()
# replacing Goldfish with Shark
item_names = ['Shark', 'Tuna', 'Robin', 'Canary',
'Rose', 'Daisy', 'Pine', 'Oak']
# index of label to record
illusion_idx = 0 # Shark is the first element
# the new feature (has bones) vector
new_feature = [-1, 1, 1, 1, -1, -1, -1, -1]
its_label = 'has_bones'
# adding feature has_bones to the feature array
tree_features = add_feature(tree_features, new_feature)
# plotting
plot_tree_data(item_names, tree_features, its_label)
```
You can see the new feature shown in the last column of the plot above.
Now we can train the network on the new data and record the network's prediction (output) for the Shark label (index 0) and the "has bones" feature (last feature, index -1) during training.
Here is the snippet from the training loop that keeps track of network prediction for `illusory_i`th label and last (`-1`) feature:
```python
pred_ij = predictions.detach()[illusory_i, -1]
```
```
#@markdown #### Make sure you execute this cell to train the network and plot
# convert (cast) data from np.ndarray to torch.Tensor
label_tensor = torch.tensor(tree_labels).float()
feature_tensor = torch.tensor(tree_features).float()
lr = 100.0 # learning rate
gamma = 1e-12 # initialization scale
n_epochs = 250 # number of epochs
dim_input = 8 # input dimension = `label_tensor.size(1)`
dim_hidden = 30 # hidden neurons
dim_output = feature_tensor.size(1)
# model instantiation
dlnn_model = LNNet(dim_input, dim_hidden, dim_output)
# weights re-initialization
initializer_(dlnn_model, gamma)
# training
_, modes, _, ill_predictions = train(dlnn_model,
label_tensor,
feature_tensor,
n_epochs=n_epochs,
lr=lr,
illusory_i=illusion_idx)
# a label for the plot
ill_label = f"Prediction for {item_names[illusion_idx]} {its_label}"
# plotting
plot_ills_sv_twin(ill_predictions, modes, ill_label)
```
It seems that the network starts by learning an "illusory correlation" that sharks have bones, and in later epochs, as it learns deeper representations, it learns to see beyond the illusory correlation. It is important to remember that we never presented the network with any data saying that sharks have bones.
## Exercise 4: Illusory Correlations
This exercise is for you to explore the idea of illusory correlations. Think of medical, natural, or possibly social illusory correlations that can test the learning power of deep linear neural nets.
**Important note**: the generated data is independent of the tree labels, so the names are just for convenience.
Here is our example for **Non-human Living things do not speak**. The lines marked by `{edit}` are for you to change in your example.
```
# sampling new data from the tree
tree_labels, tree_features = generate_hsd()
# {edit} replacing Canary with Parrot
item_names = ['Goldfish', 'Tuna', 'Robin', 'Parrot',
'Rose', 'Daisy', 'Pine', 'Oak']
# {edit} index of label to record
illusion_idx = 3 # Parrot is the fourth element
# {edit} the new feature (cannot speak) vector
new_feature = [1, 1, 1, -1, 1, 1, 1, 1]
its_label = 'cannot_speak'
# adding feature cannot_speak to the feature array
tree_features = add_feature(tree_features, new_feature)
# plotting
plot_tree_data(item_names, tree_features, its_label)
# @markdown #### Make sure you execute this cell to train the network and plot
# convert (cast) data from np.ndarray to torch.Tensor
label_tensor = torch.tensor(tree_labels).float()
feature_tensor = torch.tensor(tree_features).float()
lr = 100.0 # learning rate
gamma = 1e-12 # initialization scale
n_epochs = 250 # number of epochs
dim_input = 8 # input dimension = `label_tensor.size(1)`
dim_hidden = 30 # hidden neurons
dim_output = feature_tensor.size(1)
# model instantiation
dlnn_model = LNNet(dim_input, dim_hidden, dim_output)
# weights re-initialization
initializer_(dlnn_model, gamma)
# training
_, modes, _, ill_predictions = train(dlnn_model,
label_tensor,
feature_tensor,
n_epochs=n_epochs,
lr=lr,
illusory_i=illusion_idx)
# a label for the plot
ill_label = f"Prediction for {item_names[illusion_idx]} {its_label}"
# plotting
plot_ills_sv_twin(ill_predictions, modes, ill_label)
# @title Video 7: Illusory Correlations - Discussion
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1vv411E7rg", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"6VLHKQjQJmI", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 7: Illusory Correlations')
display(out)
```
---
# Summary
The second day of the course has ended. In this third tutorial of the linear deep learning day, we covered more advanced topics: we implemented a deep linear neural network, studied its learning dynamics using singular value decomposition, a tool from linear algebra, and learned about representational similarity analysis and illusory correlations.
```
# @title Video 8: Outro
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1AL411n7ns", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"N2szOIsKyXE", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
#add event to airtable
atform.add_event('Video 8: Outro')
display(out)
# @title Airtable Submission Link
from IPython import display as IPydisplay
IPydisplay.HTML(
f"""
<div>
<a href= "{atform.url()}" target="_blank">
<img src="https://github.com/NeuromatchAcademy/course-content-dl/blob/main/tutorials/static/SurveyButton.png?raw=1"
alt="button link end of day Survey" style="width:410px"></a>
</div>""" )
```
---
# Bonus
```
# @title Video 9: Linear Regression
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1Pf4y1L71L", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"uULOAbhYaaE", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
## Section 5.1: Linear Regression
Generally, *regression* refers to a set of methods for modeling the mapping (relationship) between one or more independent variables (i.e., features) and one or more dependent variables (i.e., labels). For example, we might examine the relative impacts of calendar date, GPS coordinates, and time of day (the independent variables) on air temperature (the dependent variable). Regression can also be used for predictive analysis, so the independent variables are also called predictors. When the model contains more than one predictor, the method is called *multiple regression*; when it contains more than one dependent variable, it is called *multivariate regression*. Regression problems pop up whenever we want to predict a numerical (usually continuous) value.
The independent variables are collected in vector $\mathbf{x} \in \mathbb{R}^M$, where $M$ denotes the number of independent variables, while the dependent variables are collected in vector $\mathbf{y} \in \mathbb{R}^N$, where $N$ denotes the number of dependent variables. And the mapping between them is represented by the weight matrix $\mathbf{W} \in \mathbb{R}^{N \times M}$ and a bias vector $\mathbf{b} \in \mathbb{R}^{N}$ (generalizing to affine mappings).
The multivariate regression model can be written as:
\begin{equation}
\mathbf{y} = \mathbf{W} ~ \mathbf{x} + \mathbf{b}
\end{equation}
or it can be written in matrix format as:
\begin{equation}
\begin{bmatrix} y_{1} \\ y_{2} \\ \vdots \\ y_{N} \\ \end{bmatrix} = \begin{bmatrix} w_{1,1} & w_{1,2} & \dots & w_{1,M} \\ w_{2,1} & w_{2,2} & \dots & w_{2,M} \\ \vdots & \ddots & \ddots & \vdots \\ w_{N,1} & w_{N,2} & \dots & w_{N,M} \end{bmatrix} \begin{bmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{M} \\ \end{bmatrix} + \begin{bmatrix} b_{1} \\ b_{2} \\ \vdots \\b_{N} \\ \end{bmatrix}
\end{equation}
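As a quick check of the shapes above, here is a minimal numpy sketch (toy numbers, not the tutorial's data) of the affine map $\mathbf{y} = \mathbf{W}\mathbf{x} + \mathbf{b}$:

```python
import numpy as np

M, N = 3, 2                    # M independent variables, N dependent variables
W = np.array([[1., 0., 2.],
              [0., 1., 1.]])   # weight matrix, shape (N, M)
b = np.array([0.5, -0.5])      # bias vector, shape (N,)
x = np.array([1., 2., 3.])     # one input sample, shape (M,)

y = W @ x + b                  # output, shape (N,)
print(y)                       # [7.5 4.5]
```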
## Section 5.2: Vectorized regression
Linear regression extends straightforwardly to a multi-sample ($D$) input-output mapping: we collect the samples in a matrix $\mathbf{X} \in \mathbb{R}^{M \times D}$, sometimes called the design matrix. The sample dimension also shows up in the output matrix $\mathbf{Y} \in \mathbb{R}^{N \times D}$. Thus, linear regression takes the following form:
\begin{equation}
\mathbf{Y} = \mathbf{W} ~ \mathbf{X} + \mathbf{b}
\end{equation}
where the matrix $\mathbf{W} \in \mathbb{R}^{N \times M}$ and the vector $\mathbf{b} \in \mathbb{R}^{N}$ (broadcast over the sample dimension) are the parameters we wish to find.
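The broadcasting of $\mathbf{b}$ over the sample dimension can be sketched directly (toy shapes, assuming a column-per-sample layout as above):

```python
import numpy as np

M, N, D = 3, 2, 4                  # features, outputs, samples
rng = np.random.default_rng(0)
W = rng.standard_normal((N, M))
b = rng.standard_normal((N, 1))    # column vector, broadcast over the D samples
X = rng.standard_normal((M, D))    # design matrix: one sample per column

Y = W @ X + b                      # shape (N, D)

# broadcasting adds the same bias to every sample (column)
assert np.allclose(Y[:, 0], W @ X[:, 0] + b[:, 0])
```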
## Section 5.3: Analytical Linear Regression
Linear regression is a relatively simple optimization problem. Unlike most other models that we will see in this course, linear regression for mean squared loss can be solved analytically.
For $D$ samples (batch size), $\mathbf{X} \in \mathbb{R}^{M \times D}$, and $\mathbf{Y} \in \mathbb{R}^{N \times D}$, the goal of linear regression is to find $\mathbf{W} \in \mathbb{R}^{N \times M}$ such that:
\begin{equation}
\mathbf{Y} = \mathbf{W} ~ \mathbf{X}
\end{equation}
Given the Squared Error loss function, we have:
\begin{equation}
Loss(\mathbf{W}) = ||\mathbf{Y} - \mathbf{W} ~ \mathbf{X}||^2
\end{equation}
So, using matrix notation, the optimization problem is given by:
\begin{align}
\mathbf{W^{*}} &= \underset{\mathbf{W}}{\mathrm{argmin}} \left( Loss (\mathbf{W}) \right) \\
&= \underset{\mathbf{W}}{\mathrm{argmin}} \left( ||\mathbf{Y} - \mathbf{W} ~ \mathbf{X}||^2 \right) \\
&= \underset{\mathbf{W}}{\mathrm{argmin}} \left( \mathrm{Tr} \left[ \left( \mathbf{Y} - \mathbf{W} ~ \mathbf{X}\right) \left( \mathbf{Y} - \mathbf{W} ~ \mathbf{X}\right)^{\top} \right] \right)
\end{align}
To solve the minimization problem, we can simply set the derivative of the loss with respect to $\mathbf{W}$ to zero.
\begin{equation}
\dfrac{\partial Loss}{\partial \mathbf{W}} = 0
\end{equation}
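Carrying out the differentiation (a standard matrix-calculus step, added here for completeness) gives:

\begin{equation}
\dfrac{\partial Loss}{\partial \mathbf{W}} = -2 \left( \mathbf{Y} - \mathbf{W} ~ \mathbf{X} \right) \mathbf{X}^{\top} = 0 \quad \Rightarrow \quad \mathbf{W} ~ \mathbf{X} \mathbf{X}^{\top} = \mathbf{Y} \mathbf{X}^{\top}
\end{equation}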
Assuming that $\mathbf{X}\mathbf{X}^{\top}$ is full-rank, and thus invertible, we can write:
\begin{equation}
\mathbf{W}^{\mathbf{*}} = \mathbf{Y} \mathbf{X}^{\top} \left( \mathbf{X} \mathbf{X}^{\top} \right) ^{-1}
\end{equation}
### Coding Exercise 5.3.1: Analytical solution to LR
Complete the function `linear_regression` for finding the analytical solution to linear regression.
```
def linear_regression(X, Y):
"""Analytical Linear regression
Args:
X (np.ndarray): design matrix
Y (np.ndarray): target outputs
Returns:
np.ndarray: estimated weights (mapping)
"""
assert isinstance(X, np.ndarray)
assert isinstance(Y, np.ndarray)
M, Dx = X.shape
N, Dy = Y.shape
assert Dx == Dy
#################################################
## Complete the linear_regression_exercise function
# Complete the function and remove or comment the line below
raise NotImplementedError("Linear Regression `linear_regression`")
#################################################
W = ...
return W
W_true = np.random.randint(low=0, high=10, size=(3, 3)).astype(float)
X_train = np.random.rand(3, 37) # 37 samples
noise = np.random.normal(scale=0.01, size=(3, 37))
Y_train = W_true @ X_train + noise
## Uncomment and run
# W_estimate = linear_regression(X_train, Y_train)
# print(f"True weights:\n {W_true}")
# print(f"\nEstimated weights:\n {np.round(W_estimate, 1)}")
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearDeepLearning/solutions/W1D2_Tutorial3_Solution_55ea556e.py)
## Demonstration: Linear Regression vs. DLNN
A linear neural network with NO hidden layer is, at its core, very similar to linear regression. We also know that no matter how many hidden layers a linear network has, it can be collapsed into linear regression (no hidden layers), since the product of its weight matrices is itself a single matrix.
In this demonstration, we use the hierarchically structured data to:
* analytically find the mapping between features and labels
* train a zero-depth LNN to find the mapping
* compare them to the $W_{tot}$ from the already trained deep LNN
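The collapsing claim can be checked directly: composing linear layers is the same as multiplying their weight matrices (a toy numpy sketch, not the tutorial's `LNNet`):

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((5, 8))   # input -> hidden
W2 = rng.standard_normal((3, 5))   # hidden -> output
x = rng.standard_normal(8)

W_tot = W2 @ W1                    # the equivalent zero-hidden-layer mapping
assert np.allclose(W2 @ (W1 @ x), W_tot @ x)
```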
```
# sampling new data from the tree
tree_labels, tree_features = generate_hsd()
# convert (cast) data from np.ndarray to torch.Tensor
label_tensor = torch.tensor(tree_labels).float()
feature_tensor = torch.tensor(tree_features).float()
# calculating the W_tot for deep network (already trained model)
lr = 100.0 # learning rate
gamma = 1e-12 # initialization scale
n_epochs = 250 # number of epochs
dim_input = 8 # input dimension = `label_tensor.size(1)`
dim_hidden = 30 # hidden neurons
dim_output = 10000 # output dimension = `feature_tensor.size(1)`
# model instantiation
dlnn_model = LNNet(dim_input, dim_hidden, dim_output)
# weights re-initialization
initializer_(dlnn_model, gamma)
# training
losses, modes, rsms, ills = train(dlnn_model,
label_tensor,
feature_tensor,
n_epochs=n_epochs,
lr=lr)
deep_W_tot = torch.eye(dim_input)
for weight in dlnn_model.parameters():
deep_W_tot = weight @ deep_W_tot
deep_W_tot = deep_W_tot.detach().numpy()
# analytical estimation of the weights
# our data has the batch as its first dimension, so we transpose it
analytical_weights = linear_regression(tree_labels.T, tree_features.T)
class LRNet(nn.Module):
"""A Linear Neural Net with ZERO hidden layer (LR net)
"""
def __init__(self, in_dim, out_dim):
"""
Args:
in_dim (int): input dimension
out_dim (int): output dimension
"""
super().__init__()
self.in_out = nn.Linear(in_dim, out_dim, bias=False)
def forward(self, x):
"""
Args:
x (torch.Tensor): input tensor
"""
out = self.in_out(x) # output (prediction)
return out
lr = 1000.0 # learning rate
gamma = 1e-12 # initialization scale
n_epochs = 250 # number of epochs
dim_input = 8 # input dimension = `label_tensor.size(1)`
dim_output = 10000 # output dimension = `feature_tensor.size(1)`
# model instantiation
LR_model = LRNet(dim_input, dim_output)
optimizer = optim.SGD(LR_model.parameters(), lr=lr)
criterion = nn.MSELoss()
losses = np.zeros(n_epochs) # loss records
for i in range(n_epochs): # training loop
optimizer.zero_grad()
predictions = LR_model(label_tensor)
loss = criterion(predictions, feature_tensor)
loss.backward()
optimizer.step()
losses[i] = loss.item()
# trained weights from zero_depth_model
LR_model_weights = next(iter(LR_model.parameters())).detach().numpy()
plot_loss(losses, "Training loss for zero depth LNN", c="r")
print("The final weights from all methods are approximately equal?! "
"{}!".format(
(np.allclose(analytical_weights, LR_model_weights, atol=1e-02) and \
np.allclose(analytical_weights, deep_W_tot, atol=1e-02))
)
)
```
As you may have guessed, they all arrive at the same results but through very different paths.
```
# @title Video 10: Linear Regression - Discussion
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV18v411E7Wg", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"gG15_J0i05Y", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
# Credit Corp's Consumer Lending Segment
Note: All numbers are in the form of $'000 unless otherwise stated
Let's import some libraries first...
```
import pandas
from pandas.plotting import scatter_matrix
from sklearn import datasets
from sklearn import model_selection
from sklearn import linear_model
# models
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error, r2_score
import matplotlib.pyplot as plt
```
Load the past few years of relevant data.
```
dataset = pandas.read_csv("data/ccp-consumer-lending-full-year.csv")
print(dataset)
```
Let us create a linear regression model with the whole dataset.
```
array = dataset.values
X = array[:,3:5] # data = avg_gross_loan_book, net_lending
Y = array[:,2] # result = NPBT (net profit before tax)
model = LinearRegression()
model.fit(X, Y) # train model
# the model's linear regression coefficients
print("Coefficients: \t%s" % model.coef_)
print("Intercept: \t%s" % model.intercept_)
print("\nThe equation would look like...")
print("p = %sb + %sl + %s" % (model.coef_[0], model.coef_[1], model.intercept_))
```
Where
```
p = Net profit before tax (npbt)
b = Average gross loan book (gross_book_average)
l = Net lending for the period (net_lending)
```
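The fitted equation can be sanity-checked by hand: plugging values of `b` and `l` into the coefficients should match the model's prediction. Here is a minimal self-contained sketch using numpy's least-squares solver with made-up toy figures (the real notebook fits scikit-learn's `LinearRegression` on the CCP csv):

```python
import numpy as np

# toy stand-in data: columns are (avg gross loan book, net lending), target is NPBT
X = np.array([[100., 10.], [150., 20.], [200., 15.], [250., 30.]])
Y = np.array([10., 14., 22., 25.])

# append a column of ones so the last coefficient plays the role of the intercept
A = np.hstack([X, np.ones((len(X), 1))])
c0, c1, intercept = np.linalg.lstsq(A, Y, rcond=None)[0]

# evaluate the fitted equation p = c0*b + c1*l + intercept at a new point
b, l = 180.0, 25.0
p = c0 * b + c1 * l + intercept
assert np.isfinite(p)
```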
# FY19 Predictions
## Based on management's forecasts
Assumptions:
* Gross loan book to end the year at $199.896m (long story on how I got to something so specific)
* Average gross loan book will be $191.496m
* Net lending will be $50m, on the upper range of the forecast. Quoting a high number here will actually reduce NPBT.
```
gross_book_average = 191496
net_lending = 50000
npbt = model.predict([[gross_book_average, net_lending]])[0]
print("NPBT = $%sm" % (npbt/1000))
print("NPAT = $%sm" % (npbt/1000 * 0.7))
```
This sits inside the $17 - 19m range forecast by management, so our model is not crazy bad!
## Based on a zero-growth scenario
The higher the net lending completed by the company, the lower the reported net profit, due to the way the company provisions the expected losses upfront. So you get a situation where NPAT is under-reported unless the company stops growing its loan book. So what happens to NPAT when the loan book stops growing?
* Assume 17.34% of gross loan book is the required net lending to maintain the loan book.
* Last 5 years (FY14 - FY18) this figure has been: 14.22%, 17.88%, 16.82%, 13.99%, 17.34%.
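For reference, the 17.34% used below is the latest (FY18) figure rather than the historical average; averaging the five years quoted above works out to roughly 16.05%:

```python
ratios = [14.22, 17.88, 16.82, 13.99, 17.34]   # FY14 - FY18, net lending as % of gross book
avg = sum(ratios) / len(ratios)
print(round(avg, 2))   # 16.05
```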
```
net_lending = gross_book_average * 0.1734
print("\nNet Lending Assumption = %s\n" % net_lending)
npbt_zero_growth = model.predict([[gross_book_average, net_lending]])[0]
print("NPBT: $%sm" % (npbt_zero_growth / 1000))
print("NPAT: $%sm" % (npbt_zero_growth * 0.7 / 1000))
print("\nNPAT buffer: $%sm" % ((npbt_zero_growth - npbt) / 1000))
```
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
<a href="https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/TensorFlow%20In%20Practice/Course%203%20-%20NLP/Course%203%20-%20Week%202%20-%20Exercise%20-%20Question.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import csv
import tensorflow as tf
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/bbc-text.csv \
-O /tmp/bbc-text.csv
vocab_size = # YOUR CODE HERE
embedding_dim = # YOUR CODE HERE
max_length = # YOUR CODE HERE
trunc_type = # YOUR CODE HERE
padding_type = # YOUR CODE HERE
oov_tok = # YOUR CODE HERE
training_portion = .8
sentences = []
labels = []
stopwords = [ "a", "about", "above", "after", "again", "against", "all", "am", "an", "and", "any", "are", "as", "at", "be", "because", "been", "before", "being", "below", "between", "both", "but", "by", "could", "did", "do", "does", "doing", "down", "during", "each", "few", "for", "from", "further", "had", "has", "have", "having", "he", "he'd", "he'll", "he's", "her", "here", "here's", "hers", "herself", "him", "himself", "his", "how", "how's", "i", "i'd", "i'll", "i'm", "i've", "if", "in", "into", "is", "it", "it's", "its", "itself", "let's", "me", "more", "most", "my", "myself", "nor", "of", "on", "once", "only", "or", "other", "ought", "our", "ours", "ourselves", "out", "over", "own", "same", "she", "she'd", "she'll", "she's", "should", "so", "some", "such", "than", "that", "that's", "the", "their", "theirs", "them", "themselves", "then", "there", "there's", "these", "they", "they'd", "they'll", "they're", "they've", "this", "those", "through", "to", "too", "under", "until", "up", "very", "was", "we", "we'd", "we'll", "we're", "we've", "were", "what", "what's", "when", "when's", "where", "where's", "which", "while", "who", "who's", "whom", "why", "why's", "with", "would", "you", "you'd", "you'll", "you're", "you've", "your", "yours", "yourself", "yourselves" ]
print(len(stopwords))
# Expected Output
# 153
with open("/tmp/bbc-text.csv", 'r') as csvfile:
# YOUR CODE HERE
print(len(labels))
print(len(sentences))
print(sentences[0])
# Expected Output
# 2225
# 2225
# tv future hands viewers home theatre systems plasma high-definition tvs digital video recorders moving living room way people watch tv will radically different five years time. according expert panel gathered annual consumer electronics show las vegas discuss new technologies will impact one favourite pastimes. us leading trend programmes content will delivered viewers via home networks cable satellite telecoms companies broadband service providers front rooms portable devices. one talked-about technologies ces digital personal video recorders (dvr pvr). set-top boxes like us s tivo uk s sky+ system allow people record store play pause forward wind tv programmes want. essentially technology allows much personalised tv. also built-in high-definition tv sets big business japan us slower take off europe lack high-definition programming. not can people forward wind adverts can also forget abiding network channel schedules putting together a-la-carte entertainment. us networks cable satellite companies worried means terms advertising revenues well brand identity viewer loyalty channels. although us leads technology moment also concern raised europe particularly growing uptake services like sky+. happens today will see nine months years time uk adam hume bbc broadcast s futurologist told bbc news website. likes bbc no issues lost advertising revenue yet. pressing issue moment commercial uk broadcasters brand loyalty important everyone. will talking content brands rather network brands said tim hanlon brand communications firm starcom mediavest. reality broadband connections anybody can producer content. added: challenge now hard promote programme much choice. means said stacey jolna senior vice president tv guide tv group way people find content want watch simplified tv viewers. means networks us terms channels take leaf google s book search engine future instead scheduler help people find want watch. 
kind channel model might work younger ipod generation used taking control gadgets play them. might not suit everyone panel recognised. older generations comfortable familiar schedules channel brands know getting. perhaps not want much choice put hands mr hanlon suggested. end kids just diapers pushing buttons already - everything possible available said mr hanlon. ultimately consumer will tell market want. 50 000 new gadgets technologies showcased ces many enhancing tv-watching experience. high-definition tv sets everywhere many new models lcd (liquid crystal display) tvs launched dvr capability built instead external boxes. one example launched show humax s 26-inch lcd tv 80-hour tivo dvr dvd recorder. one us s biggest satellite tv companies directtv even launched branded dvr show 100-hours recording capability instant replay search function. set can pause rewind tv 90 hours. microsoft chief bill gates announced pre-show keynote speech partnership tivo called tivotogo means people can play recorded programmes windows pcs mobile devices. reflect increasing trend freeing multimedia people can watch want want.
train_size = # YOUR CODE HERE
train_sentences = # YOUR CODE HERE
train_labels = # YOUR CODE HERE
validation_sentences = # YOUR CODE HERE
validation_labels = # YOUR CODE HERE
print(train_size)
print(len(train_sentences))
print(len(train_labels))
print(len(validation_sentences))
print(len(validation_labels))
# Expected output (if training_portion=.8)
# 1780
# 1780
# 1780
# 445
# 445
tokenizer = # YOUR CODE HERE
tokenizer.fit_on_texts(# YOUR CODE HERE)
word_index = # YOUR CODE HERE
train_sequences = # YOUR CODE HERE
train_padded = # YOUR CODE HERE
print(len(train_sequences[0]))
print(len(train_padded[0]))
print(len(train_sequences[1]))
print(len(train_padded[1]))
print(len(train_sequences[10]))
print(len(train_padded[10]))
# Expected Output
# 449
# 120
# 200
# 120
# 192
# 120
validation_sequences = # YOUR CODE HERE
validation_padded = # YOUR CODE HERE
print(len(validation_sequences))
print(validation_padded.shape)
# Expected output
# 445
# (445, 120)
label_tokenizer = # YOUR CODE HERE
label_tokenizer.fit_on_texts(# YOUR CODE HERE)
training_label_seq = # YOUR CODE HERE
validation_label_seq = # YOUR CODE HERE
print(training_label_seq[0])
print(training_label_seq[1])
print(training_label_seq[2])
print(training_label_seq.shape)
print(validation_label_seq[0])
print(validation_label_seq[1])
print(validation_label_seq[2])
print(validation_label_seq.shape)
# Expected output
# [4]
# [2]
# [1]
# (1780, 1)
# [5]
# [4]
# [3]
# (445, 1)
model = tf.keras.Sequential([
# YOUR CODE HERE
])
model.compile(loss='sparse_categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
model.summary()
# Expected Output
# Layer (type) Output Shape Param #
# =================================================================
# embedding (Embedding) (None, 120, 16) 16000
# _________________________________________________________________
# global_average_pooling1d (Gl (None, 16) 0
# _________________________________________________________________
# dense (Dense) (None, 24) 408
# _________________________________________________________________
# dense_1 (Dense) (None, 6) 150
# =================================================================
# Total params: 16,558
# Trainable params: 16,558
# Non-trainable params: 0
num_epochs = 30
history = model.fit(# YOUR CODE HERE)
import matplotlib.pyplot as plt
def plot_graphs(history, string):
    plt.plot(history.history[string])
    plt.plot(history.history['val_'+string])
    plt.xlabel("Epochs")
    plt.ylabel(string)
    plt.legend([string, 'val_'+string])
    plt.show()
plot_graphs(history, "acc")
plot_graphs(history, "loss")
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_sentence(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])
e = model.layers[0]
weights = e.get_weights()[0]
print(weights.shape) # shape: (vocab_size, embedding_dim)
# Expected output
# (1000, 16)
import io
out_v = io.open('vecs.tsv', 'w', encoding='utf-8')
out_m = io.open('meta.tsv', 'w', encoding='utf-8')
for word_num in range(1, vocab_size):
    word = reverse_word_index[word_num]
    embeddings = weights[word_num]
    out_m.write(word + "\n")
    out_v.write('\t'.join([str(x) for x in embeddings]) + "\n")
out_v.close()
out_m.close()
try:
    from google.colab import files
except ImportError:
    pass
else:
    files.download('vecs.tsv')
    files.download('meta.tsv')
```
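The expected outputs above pin down what the split cells should produce. As a hedged illustration of the slicing logic only (the real `sentences`, `labels`, and `training_portion` are defined in earlier cells not shown here, so toy stand-ins are substituted), one possible shape of the solution:

```python
# Toy stand-ins for `sentences` and `labels` from earlier cells; the real
# notebook loads 2,225 BBC articles. Only the split arithmetic matters here.
sentences = ["article %d text" % i for i in range(2225)]
labels = ["sport"] * 2225

training_portion = 0.8
train_size = int(len(sentences) * training_portion)

train_sentences = sentences[:train_size]
train_labels = labels[:train_size]
validation_sentences = sentences[train_size:]
validation_labels = labels[train_size:]

print(train_size)                 # 1780
print(len(validation_sentences))  # 445
```

With 2,225 examples and `training_portion=.8`, `int(2225 * 0.8)` gives the 1780/445 split that the expected output in the exercise shows.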
```
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.metrics import confusion_matrix
fashion_mnist = tf.keras.datasets.fashion_mnist
(X_train_full, y_train_full), (X_test, y_test) = fashion_mnist.load_data()
plt.figure()
for k in range(9):
    plt.subplot(3, 3, k+1)
    plt.imshow(X_train_full[k], cmap="gray")
    plt.axis('off')
plt.show()
# Remove 5,000 images from X_train to use as validation
# Convert ints to floats
X_valid = X_train_full[:5000] / 255.0
X_train = X_train_full[5000:] / 255.0
X_test = X_test / 255.0
y_valid = y_train_full[:5000]
y_train = y_train_full[5000:]
# Train fully-connected neural network
from functools import partial
my_dense_layer = partial(tf.keras.layers.Dense, activation="relu", kernel_regularizer=tf.keras.regularizers.l2(0.0001))
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=[28, 28]),
my_dense_layer(400),
my_dense_layer(400),
my_dense_layer(300),
#my_dense_layer(100),
my_dense_layer(100),
my_dense_layer(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=5, validation_data=(X_valid,y_valid))
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1)
#plt.savefig('part1_train4', bbox_inches='tight')
plt.show()
y_pred = np.argmax(model.predict(X_train), axis=1)  # predict_classes was removed in newer TF
conf_train = confusion_matrix(y_train, y_pred)
print(conf_train)
model.evaluate(X_test,y_test)
y_pred = np.argmax(model.predict(X_test), axis=1)
conf_test = confusion_matrix(y_test, y_pred)
print(conf_test)
fig, ax = plt.subplots()
# hide axes
fig.patch.set_visible(False)
ax.axis('off')
ax.axis('tight')
# create table and save to file
df = pd.DataFrame(conf_test)
ax.table(cellText=df.values, rowLabels=np.arange(10), colLabels=np.arange(10), loc='center', cellLoc='center')
fig.tight_layout()
#plt.savefig('conf_test1.pdf')
# Remove 5,000 images from X_train to use as validation
# Convert ints to floats
X_valid = X_train_full[:5000] / 255.0
X_train = X_train_full[5000:] / 255.0
X_test = X_test / 255.0
y_valid = y_train_full[:5000]
y_train = y_train_full[5000:]
X_train = X_train[..., np.newaxis]
X_valid = X_valid[..., np.newaxis]
X_test = X_test[..., np.newaxis]
# Train Convolutional Neural Network
from functools import partial
my_dense_layer = partial(tf.keras.layers.Dense, activation="elu", kernel_regularizer=tf.keras.regularizers.l2(0.0001))
my_conv_layer = partial(tf.keras.layers.Conv2D, activation="elu", padding="valid")
model = tf.keras.models.Sequential([
my_conv_layer(10,5,padding="same",input_shape=[28,28,1]),
tf.keras.layers.MaxPooling2D(4, strides=(2,2)),
my_conv_layer(32,5,padding="same"),
tf.keras.layers.MaxPooling2D(4, strides=(2,2)),
my_conv_layer(120,5),
tf.keras.layers.Flatten(),
my_dense_layer(64),
my_dense_layer(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy",
optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=4, validation_data=(X_valid,y_valid))
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1)
plt.savefig('part2_train2', bbox_inches='tight')
plt.show()
y_pred = np.argmax(model.predict(X_train), axis=1)  # predict_classes was removed in newer TF
conf_train = confusion_matrix(y_train, y_pred)
print(conf_train)
model.evaluate(X_test,y_test)
y_pred = np.argmax(model.predict(X_test), axis=1)
conf_test = confusion_matrix(y_test, y_pred)
print(conf_test)
fig, ax = plt.subplots()
# hide axes
fig.patch.set_visible(False)
ax.axis('off')
ax.axis('tight')
# create table and save to file
df = pd.DataFrame(conf_test)
ax.table(cellText=df.values, rowLabels=np.arange(10), colLabels=np.arange(10), loc='center', cellLoc='center')
fig.tight_layout()
plt.savefig('conf_test2.pdf')
```
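A note on the class-prediction cells: taking the most probable class is just an argmax over each row of predicted probabilities, which is also all that Keras' old `predict_classes` convenience did internally. A framework-free sketch of that reduction:

```python
# Argmax over per-class probabilities -> predicted class index per sample.
def argmax_classes(probs):
    """Return the index of the largest value in each row of probabilities."""
    return [max(range(len(row)), key=row.__getitem__) for row in probs]

probs = [
    [0.1, 0.7, 0.2],    # most likely class 1
    [0.05, 0.15, 0.8],  # most likely class 2
]
print(argmax_classes(probs))  # [1, 2]
```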
# Learning Tree-augmented Naive Bayes (TAN) Structure from Data
In this notebook, we show an example for learning the structure of a Bayesian Network using the TAN algorithm. We will first build a model to generate some data and then attempt to learn the model's graph structure back from the generated data.
For a comparison of the Naive Bayes and TAN classifiers, refer to the blog post [Classification with TAN and Pgmpy](https://loudly-soft.blogspot.com/2020/08/classification-with-tree-augmented.html).
## First, create a Naive Bayes graph
```
import networkx as nx
import matplotlib.pyplot as plt
from pgmpy.models import BayesianNetwork
# class variable is A and feature variables are B, C, D, E and R
model = BayesianNetwork([("A", "R"), ("A", "B"), ("A", "C"), ("A", "D"), ("A", "E")])
nx.draw_circular(
model, with_labels=True, arrowsize=30, node_size=800, alpha=0.3, font_weight="bold"
)
plt.show()
```
## Second, add interaction between the features
```
# feature R correlates with other features
model.add_edges_from([("R", "B"), ("R", "C"), ("R", "D"), ("R", "E")])
nx.draw_circular(
model, with_labels=True, arrowsize=30, node_size=800, alpha=0.3, font_weight="bold"
)
plt.show()
```
## Then, parameterize our graph to create a Bayesian network
```
from pgmpy.factors.discrete import TabularCPD
# add CPD to each edge
cpd_a = TabularCPD("A", 2, [[0.7], [0.3]])
cpd_r = TabularCPD(
"R", 3, [[0.6, 0.2], [0.3, 0.5], [0.1, 0.3]], evidence=["A"], evidence_card=[2]
)
cpd_b = TabularCPD(
"B",
3,
[
[0.1, 0.1, 0.2, 0.2, 0.7, 0.1],
[0.1, 0.3, 0.1, 0.2, 0.1, 0.2],
[0.8, 0.6, 0.7, 0.6, 0.2, 0.7],
],
evidence=["A", "R"],
evidence_card=[2, 3],
)
cpd_c = TabularCPD(
"C",
2,
[[0.7, 0.2, 0.2, 0.5, 0.1, 0.3], [0.3, 0.8, 0.8, 0.5, 0.9, 0.7]],
evidence=["A", "R"],
evidence_card=[2, 3],
)
cpd_d = TabularCPD(
"D",
3,
[
[0.3, 0.8, 0.2, 0.8, 0.4, 0.7],
[0.4, 0.1, 0.4, 0.1, 0.1, 0.1],
[0.3, 0.1, 0.4, 0.1, 0.5, 0.2],
],
evidence=["A", "R"],
evidence_card=[2, 3],
)
cpd_e = TabularCPD(
"E",
2,
[[0.5, 0.6, 0.6, 0.5, 0.5, 0.4], [0.5, 0.4, 0.4, 0.5, 0.5, 0.6]],
evidence=["A", "R"],
evidence_card=[2, 3],
)
model.add_cpds(cpd_a, cpd_r, cpd_b, cpd_c, cpd_d, cpd_e)
```
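Each column of a `TabularCPD` is one conditional distribution, so it has to sum to 1 over the states of the variable. A quick pure-Python sanity check on the `cpd_b` values defined above:

```python
# The cpd_b table from the cell above: rows are the three states of B,
# columns are the six (A, R) evidence combinations (evidence_card 2 x 3).
cpd_b_values = [
    [0.1, 0.1, 0.2, 0.2, 0.7, 0.1],
    [0.1, 0.3, 0.1, 0.2, 0.1, 0.2],
    [0.8, 0.6, 0.7, 0.6, 0.2, 0.7],
]
# Every column must sum to 1, or pgmpy would reject the CPD.
column_sums = [sum(col) for col in zip(*cpd_b_values)]
print(column_sums)  # each entry is (approximately) 1.0
```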
## Next, generate sample data from our Bayesian network
```
from pgmpy.sampling import BayesianModelSampling
# sample data from BN
inference = BayesianModelSampling(model)
df_data = inference.forward_sample(size=10000)
print(df_data)
```
## Now we are ready to learn the TAN structure from sample data
```
from pgmpy.estimators import TreeSearch
# learn graph structure
est = TreeSearch(df_data, root_node="R")
dag = est.estimate(estimator_type="tan", class_node="A")
nx.draw_circular(
dag, with_labels=True, arrowsize=30, node_size=800, alpha=0.3, font_weight="bold"
)
plt.show()
```
## To parameterize the learned graph from data, check out the other tutorials for more info
```
from pgmpy.estimators import BayesianEstimator
# there are many choices of parametrization, here is one example
model = BayesianNetwork(dag.edges())
model.fit(
df_data, estimator=BayesianEstimator, prior_type="dirichlet", pseudo_counts=0.1
)
model.get_cpds()
```
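The `pseudo_counts=0.1` argument implements Dirichlet smoothing: every state's observed count is boosted by a small prior count, so states never seen in the sample still get non-zero probability. A simplified sketch of the estimate for a single distribution, with hypothetical counts (pgmpy's estimator applies this across the full conditional tables):

```python
def dirichlet_estimate(counts, pseudo_count=0.1):
    """P(state) = (count + pseudo) / (total + pseudo * n_states)."""
    total = sum(counts) + pseudo_count * len(counts)
    return [(c + pseudo_count) / total for c in counts]

# Hypothetical observed counts for a 3-state variable; state 2 was never seen.
probs = dirichlet_estimate([7, 3, 0])
print(probs)  # state 2 still receives a small non-zero probability
```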
**This notebook is an exercise in the [Intermediate Machine Learning](https://www.kaggle.com/learn/intermediate-machine-learning) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/introduction).**
---
As a warm-up, you'll review some machine learning fundamentals and submit your initial results to a Kaggle competition.
# Setup
The questions below will give you feedback on your work. Run the following cell to set up the feedback system.
```
# Set up code checking
import os
if not os.path.exists("../input/train.csv"):
    os.symlink("../input/home-data-for-ml-course/train.csv", "../input/train.csv")
    os.symlink("../input/home-data-for-ml-course/test.csv", "../input/test.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.ml_intermediate.ex1 import *
print("Setup Complete")
```
You will work with data from the [Housing Prices Competition for Kaggle Learn Users](https://www.kaggle.com/c/home-data-for-ml-course) to predict home prices in Iowa using 79 explanatory variables describing (almost) every aspect of the homes.

Run the next code cell without changes to load the training and validation features in `X_train` and `X_valid`, along with the prediction targets in `y_train` and `y_valid`. The test features are loaded in `X_test`. (_If you need to review **features** and **prediction targets**, please check out [this short tutorial](https://www.kaggle.com/dansbecker/your-first-machine-learning-model). To read about model **validation**, look [here](https://www.kaggle.com/dansbecker/model-validation). Alternatively, if you'd prefer to look through a full course to review all of these topics, start [here](https://www.kaggle.com/learn/machine-learning).)_
```
import pandas as pd
from sklearn.model_selection import train_test_split
# Read the data
X_full = pd.read_csv('../input/train.csv', index_col='Id')
X_test_full = pd.read_csv('../input/test.csv', index_col='Id')
# Obtain target and predictors
y = X_full.SalePrice
features = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd']
X = X_full[features].copy()
X_test = X_test_full[features].copy()
# Break off validation set from training data
X_train, X_valid, y_train, y_valid = train_test_split(X, y, train_size=0.8, test_size=0.2,
random_state=0)
```
Use the next cell to print the first several rows of the data. It's a nice way to get an overview of the data you will use in your price prediction model.
```
X_train.head()
```
The next code cell defines five different random forest models. Run this code cell without changes. (_To review **random forests**, look [here](https://www.kaggle.com/dansbecker/random-forests)._)
```
from sklearn.ensemble import RandomForestRegressor
# Define the models
model_1 = RandomForestRegressor(n_estimators=50, random_state=0)
model_2 = RandomForestRegressor(n_estimators=100, random_state=0)
model_3 = RandomForestRegressor(n_estimators=100, criterion='mae', random_state=0)  # newer scikit-learn spells this criterion='absolute_error'
model_4 = RandomForestRegressor(n_estimators=200, min_samples_split=20, random_state=0)
model_5 = RandomForestRegressor(n_estimators=100, max_depth=7, random_state=0)
models = [model_1, model_2, model_3, model_4, model_5]
```
To select the best model out of the five, we define a function `score_model()` below. This function returns the mean absolute error (MAE) from the validation set. Recall that the best model will obtain the lowest MAE. (_To review **mean absolute error**, look [here](https://www.kaggle.com/dansbecker/model-validation).)_
Run the code cell without changes.
```
from sklearn.metrics import mean_absolute_error
# Function for comparing different models
def score_model(model, X_t=X_train, X_v=X_valid, y_t=y_train, y_v=y_valid):
    model.fit(X_t, y_t)
    preds = model.predict(X_v)
    return mean_absolute_error(y_v, preds)

for i in range(0, len(models)):
    mae = score_model(models[i])
    print("Model %d MAE: %d" % (i+1, mae))
```
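`mean_absolute_error` is simply the average of the absolute differences between the targets and the predictions. A tiny sketch with made-up prices (the helper name `mae_score` is just for illustration):

```python
def mae_score(y_true, y_pred):
    """Average absolute difference between targets and predictions."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [200000, 150000, 300000]
y_pred = [210000, 140000, 330000]
# Errors are 10000, 10000, 30000 -> mean is 50000 / 3
print(mae_score(y_true, y_pred))
```

Since the best model is the one whose predictions sit closest to the true sale prices on average, the lowest MAE wins.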
# Step 1: Evaluate several models
Use the above results to fill in the line below. Which model is the best model? Your answer should be one of `model_1`, `model_2`, `model_3`, `model_4`, or `model_5`.
```
# Fill in the best model
best_model = model_3
# Check your answer
step_1.check()
# Lines below will give you a hint or solution code
#step_1.hint()
#step_1.solution()
```
# Step 2: Generate test predictions
Great. You know how to evaluate what makes an accurate model. Now it's time to go through the modeling process and make predictions. In the line below, create a Random Forest model with the variable name `my_model`.
```
# Define a model
my_model = best_model
# Check your answer
step_2.check()
# Lines below will give you a hint or solution code
#step_2.hint()
#step_2.solution()
```
Run the next code cell without changes. The code fits the model to the training and validation data, and then generates test predictions that are saved to a CSV file. These test predictions can be submitted directly to the competition!
```
# Fit the model to the training data
my_model.fit(X, y)
# Generate test predictions
preds_test = my_model.predict(X_test)
# Save predictions in format used for competition scoring
output = pd.DataFrame({'Id': X_test.index,
'SalePrice': preds_test})
output.to_csv('submission.csv', index=False)
```
# Submit your results
Once you have successfully completed Step 2, you're ready to submit your results to the leaderboard! First, you'll need to join the competition if you haven't already. So open a new window by clicking on [this link](https://www.kaggle.com/c/home-data-for-ml-course). Then click on the **Join Competition** button.

Next, follow the instructions below:
1. Begin by clicking on the blue **Save Version** button in the top right corner of the window. This will generate a pop-up window.
2. Ensure that the **Save and Run All** option is selected, and then click on the blue **Save** button.
3. This generates a window in the bottom left corner of the notebook. After it has finished running, click on the number to the right of the **Save Version** button. This pulls up a list of versions on the right of the screen. Click on the ellipsis **(...)** to the right of the most recent version, and select **Open in Viewer**. This brings you into view mode of the same page. You will need to scroll down to get back to these instructions.
4. Click on the **Output** tab on the right of the screen. Then, click on the file you would like to submit, and click on the blue **Submit** button to submit your results to the leaderboard.
You have now successfully submitted to the competition!
If you want to keep working to improve your performance, select the blue **Edit** button in the top right of the screen. Then you can change your code and repeat the process. There's a lot of room to improve, and you will climb up the leaderboard as you work.
# Keep going
You've made your first model. But how can you quickly make it better?
Learn how to improve your competition results by incorporating columns with **[missing values](https://www.kaggle.com/alexisbcook/missing-values)**.
---
*Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161289) to chat with other Learners.*
<a href="https://colab.research.google.com/github/magenta/ddsp/blob/master/ddsp/colab/demos/timbre_transfer.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
##### Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
```
# Copyright 2020 Google LLC. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
```
# DDSP Timbre Transfer Demo
This notebook is a demo of timbre transfer using DDSP (Differentiable Digital Signal Processing).
The model here is trained to generate audio conditioned on a time series of fundamental frequency and loudness.
* [DDSP ICLR paper](https://openreview.net/forum?id=B1x1ma4tDr)
* [Audio Examples](http://goo.gl/magenta/ddsp-examples)
This notebook extracts these features from input audio (either uploaded files, or recorded from the microphone) and resynthesizes with the model.
<img src="https://magenta.tensorflow.org/assets/ddsp/ddsp_cat_jamming.png" alt="DDSP Tone Transfer" width="700">
By default, the notebook will download pre-trained models. You can train a model on your own sounds by using the [Train Autoencoder Colab](https://github.com/magenta/ddsp/blob/master/ddsp/colab/demos/train_autoencoder.ipynb).
Have fun! And please feel free to hack this notebook to make your own creative interactions.
### Instructions for running:
* Make sure to use a GPU runtime, click: __Runtime >> Change Runtime Type >> GPU__
* Press ▶️ on the left of each of the cells
* View the code: Double-click any of the cells
* Hide the code: Double click the right side of the cell
```
#@title #Install and Import
#@markdown Install ddsp, define some helper functions, and download the model. This transfers a lot of data and _should take a minute or two_.
%tensorflow_version 2.x
print('Installing from pip package...')
!pip install -qU ddsp
# Ignore a bunch of deprecation warnings
import warnings
warnings.filterwarnings("ignore")
import copy
import os
import time
import crepe
import ddsp
import ddsp.training
from ddsp.colab import colab_utils
from ddsp.colab.colab_utils import (
auto_tune, detect_notes, fit_quantile_transform,
get_tuning_factor, download, play, record,
specplot, upload, DEFAULT_SAMPLE_RATE)
import gin
from google.colab import files
import librosa
import matplotlib.pyplot as plt
import numpy as np
import pickle
import tensorflow.compat.v2 as tf
import tensorflow_datasets as tfds
# Helper Functions
sample_rate = DEFAULT_SAMPLE_RATE # 16000
print('Done!')
#@title Record or Upload Audio
#@markdown * Either record audio from microphone or upload audio from file (.mp3 or .wav)
#@markdown * Audio should be monophonic (single instrument / voice)
#@markdown * Extracts fundamental frequency (f0) and loudness features.
record_or_upload = "Record" #@param ["Record", "Upload (.mp3 or .wav)"]
record_seconds = 5#@param {type:"number", min:1, max:10, step:1}
if record_or_upload == "Record":
    audio = record(seconds=record_seconds)
else:
    # Load audio sample here (.mp3 or .wav file)
    # Just use the first file.
    filenames, audios = upload()
    audio = audios[0]
audio = audio[np.newaxis, :]
print('\nExtracting audio features...')
# Plot.
specplot(audio)
play(audio)
# Setup the session.
ddsp.spectral_ops.reset_crepe()
# Compute features.
start_time = time.time()
audio_features = ddsp.training.metrics.compute_audio_features(audio)
audio_features['loudness_db'] = audio_features['loudness_db'].astype(np.float32)
audio_features_mod = None
print('Audio features took %.1f seconds' % (time.time() - start_time))
TRIM = -15
# Plot Features.
fig, ax = plt.subplots(nrows=3,
ncols=1,
sharex=True,
figsize=(6, 8))
ax[0].plot(audio_features['loudness_db'][:TRIM])
ax[0].set_ylabel('loudness_db')
ax[1].plot(librosa.hz_to_midi(audio_features['f0_hz'][:TRIM]))
ax[1].set_ylabel('f0 [midi]')
ax[2].plot(audio_features['f0_confidence'][:TRIM])
ax[2].set_ylabel('f0 confidence')
_ = ax[2].set_xlabel('Time step [frame]')
#@title Load a model
#@markdown Run for ever new audio input
model = 'Violin' #@param ['Violin', 'Flute', 'Flute2', 'Trumpet', 'Tenor_Saxophone', 'Upload your own (checkpoint folder as .zip)']
MODEL = model
def find_model_dir(dir_name):
    # Iterate through directories until model directory is found
    for root, dirs, filenames in os.walk(dir_name):
        for filename in filenames:
            if filename.endswith(".gin") and not filename.startswith("."):
                model_dir = root
                break
    return model_dir
if model in ('Violin', 'Flute', 'Flute2', 'Trumpet', 'Tenor_Saxophone'):
    # Pretrained models.
    PRETRAINED_DIR = '/content/pretrained'
    # Copy over from gs:// for faster loading.
    !rm -r $PRETRAINED_DIR &> /dev/null
    !mkdir $PRETRAINED_DIR &> /dev/null
    GCS_CKPT_DIR = 'gs://ddsp/models/tf2'
    model_dir = os.path.join(GCS_CKPT_DIR, 'solo_%s_ckpt' % model.lower())
    !gsutil cp $model_dir/* $PRETRAINED_DIR &> /dev/null
    model_dir = PRETRAINED_DIR
    gin_file = os.path.join(model_dir, 'operative_config-0.gin')
else:
    # User models.
    UPLOAD_DIR = '/content/uploaded'
    !mkdir $UPLOAD_DIR
    uploaded_files = files.upload()
    for fnames in uploaded_files.keys():
        print("Unzipping... {}".format(fnames))
        !unzip -o "/content/$fnames" -d $UPLOAD_DIR &> /dev/null
    model_dir = find_model_dir(UPLOAD_DIR)
    gin_file = os.path.join(model_dir, 'operative_config-0.gin')
# Load the dataset statistics.
DATASET_STATS = None
dataset_stats_file = os.path.join(model_dir, 'dataset_statistics.pkl')
print(f'Loading dataset statistics from {dataset_stats_file}')
try:
    if tf.io.gfile.exists(dataset_stats_file):
        with tf.io.gfile.GFile(dataset_stats_file, 'rb') as f:
            DATASET_STATS = pickle.load(f)
except Exception as err:
    print('Loading dataset statistics from pickle failed: {}.'.format(err))
# Parse gin config.
with gin.unlock_config():
    gin.parse_config_file(gin_file, skip_unknown=True)
# Assumes only one checkpoint in the folder, 'ckpt-[iter]`.
ckpt_files = [f for f in tf.io.gfile.listdir(model_dir) if 'ckpt' in f]
ckpt_name = ckpt_files[0].split('.')[0]
ckpt = os.path.join(model_dir, ckpt_name)
# Ensure dimensions and sampling rates are equal
time_steps_train = gin.query_parameter('DefaultPreprocessor.time_steps')
n_samples_train = gin.query_parameter('Additive.n_samples')
hop_size = int(n_samples_train / time_steps_train)
time_steps = int(audio.shape[1] / hop_size)
n_samples = time_steps * hop_size
# print("===Trained model===")
# print("Time Steps", time_steps_train)
# print("Samples", n_samples_train)
# print("Hop Size", hop_size)
# print("\n===Resynthesis===")
# print("Time Steps", time_steps)
# print("Samples", n_samples)
# print('')
gin_params = [
'RnnFcDecoder.input_keys = ("f0_scaled", "ld_scaled")',
'Additive.n_samples = {}'.format(n_samples),
'FilteredNoise.n_samples = {}'.format(n_samples),
'DefaultPreprocessor.time_steps = {}'.format(time_steps),
]
with gin.unlock_config():
    gin.parse_config(gin_params)
# Trim all input vectors to correct lengths
for key in ['f0_hz', 'f0_confidence', 'loudness_db']:
    audio_features[key] = audio_features[key][:time_steps]
audio_features['audio'] = audio_features['audio'][:, :n_samples]
# Set up the model just to predict audio given new conditioning
model = ddsp.training.models.Autoencoder()
model.restore(ckpt)
# Build model by running a batch through it.
start_time = time.time()
_ = model(audio_features, training=False)
print('Restoring model took %.1f seconds' % (time.time() - start_time))
#@title Modify conditioning
#@markdown These models were not explicitly trained to perform timbre transfer, so they may sound unnatural if the incoming loudness and frequencies are very different from the training data (which will always be somewhat true).
#@markdown ## Note Detection
#@markdown You can leave this at 1.0 for most cases
threshold = 1 #@param {type:"slider", min: 0.0, max:2.0, step:0.01}
#@markdown ## Automatic
ADJUST = True #@param{type:"boolean"}
#@markdown Quiet parts without notes detected (dB)
quiet = 20 #@param {type:"slider", min: 0, max:60, step:1}
#@markdown Force pitch to nearest note (amount)
autotune = 0 #@param {type:"slider", min: 0.0, max:1.0, step:0.1}
#@markdown ## Manual
#@markdown Shift the pitch (octaves)
pitch_shift = 0 #@param {type:"slider", min:-2, max:2, step:1}
#@markdown Adjust the overall loudness (dB)
loudness_shift = 0 #@param {type:"slider", min:-20, max:20, step:1}
audio_features_mod = {k: v.copy() for k, v in audio_features.items()}
## Helper functions.
def shift_ld(audio_features, ld_shift=0.0):
    """Shift loudness by a number of decibels."""
    audio_features['loudness_db'] += ld_shift
    return audio_features

def shift_f0(audio_features, pitch_shift=0.0):
    """Shift f0 by a number of octaves."""
    audio_features['f0_hz'] *= 2.0 ** (pitch_shift)
    audio_features['f0_hz'] = np.clip(audio_features['f0_hz'],
                                      0.0,
                                      librosa.midi_to_hz(110.0))
    return audio_features
mask_on = None

if ADJUST and DATASET_STATS is not None:
    # Detect sections that are "on".
    mask_on, note_on_value = detect_notes(audio_features['loudness_db'],
                                          audio_features['f0_confidence'],
                                          threshold)

    if np.any(mask_on):
        # Shift the pitch register.
        target_mean_pitch = DATASET_STATS['mean_pitch']
        pitch = ddsp.core.hz_to_midi(audio_features['f0_hz'])
        mean_pitch = np.mean(pitch[mask_on])
        p_diff = target_mean_pitch - mean_pitch
        p_diff_octave = p_diff / 12.0
        round_fn = np.floor if p_diff_octave > 1.5 else np.ceil
        p_diff_octave = round_fn(p_diff_octave)
        audio_features_mod = shift_f0(audio_features_mod, p_diff_octave)

        # Quantile shift the note_on parts.
        _, loudness_norm = colab_utils.fit_quantile_transform(
            audio_features['loudness_db'],
            mask_on,
            inv_quantile=DATASET_STATS['quantile_transform'])

        # Turn down the note_off parts.
        mask_off = np.logical_not(mask_on)
        loudness_norm[mask_off] -= quiet * (1.0 - note_on_value[mask_off][:, np.newaxis])
        loudness_norm = np.reshape(loudness_norm, audio_features['loudness_db'].shape)
        audio_features_mod['loudness_db'] = loudness_norm

        # Auto-tune.
        if autotune:
            f0_midi = np.array(ddsp.core.hz_to_midi(audio_features_mod['f0_hz']))
            tuning_factor = get_tuning_factor(f0_midi, audio_features_mod['f0_confidence'], mask_on)
            f0_midi_at = auto_tune(f0_midi, tuning_factor, mask_on, amount=autotune)
            audio_features_mod['f0_hz'] = ddsp.core.midi_to_hz(f0_midi_at)
    else:
        print('\nSkipping auto-adjust (no notes detected or ADJUST box empty).')
else:
    print('\nSkipping auto-adjust (box not checked or no dataset statistics found).')
# Manual Shifts.
audio_features_mod = shift_ld(audio_features_mod, loudness_shift)
audio_features_mod = shift_f0(audio_features_mod, pitch_shift)
# Plot Features.
has_mask = int(mask_on is not None)
n_plots = 3 if has_mask else 2
fig, axes = plt.subplots(nrows=n_plots,
ncols=1,
sharex=True,
figsize=(2*n_plots, 8))
if has_mask:
    ax = axes[0]
    ax.plot(np.ones_like(mask_on[:TRIM]) * threshold, 'k:')
    ax.plot(note_on_value[:TRIM])
    ax.plot(mask_on[:TRIM])
    ax.set_ylabel('Note-on Mask')
    ax.set_xlabel('Time step [frame]')
    ax.legend(['Threshold', 'Likelihood', 'Mask'])
ax = axes[0 + has_mask]
ax.plot(audio_features['loudness_db'][:TRIM])
ax.plot(audio_features_mod['loudness_db'][:TRIM])
ax.set_ylabel('loudness_db')
ax.legend(['Original','Adjusted'])
ax = axes[1 + has_mask]
ax.plot(librosa.hz_to_midi(audio_features['f0_hz'][:TRIM]))
ax.plot(librosa.hz_to_midi(audio_features_mod['f0_hz'][:TRIM]))
ax.set_ylabel('f0 [midi]')
_ = ax.legend(['Original','Adjusted'])
#@title #Resynthesize Audio
af = audio_features if audio_features_mod is None else audio_features_mod
# Run a batch of predictions.
start_time = time.time()
audio_gen = model(af, training=False)
print('Prediction took %.1f seconds' % (time.time() - start_time))
# Plot
print('Original')
play(audio)
print('Resynthesis')
play(audio_gen)
specplot(audio)
plt.title("Original")
specplot(audio_gen)
_ = plt.title("Resynthesis")
```
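A closing note on the pitch math used throughout: pitch is logarithmic in frequency, so shifting by one octave doubles f0, and MIDI note numbers advance by 12 per octave with A4 = MIDI 69 = 440 Hz. A small framework-free sketch of the conversions that `librosa` and `ddsp.core` provide:

```python
import math

A4_HZ, A4_MIDI = 440.0, 69

def midi_to_hz(midi):
    # Each MIDI step is a semitone; 12 semitones double the frequency.
    return A4_HZ * 2.0 ** ((midi - A4_MIDI) / 12.0)

def hz_to_midi(hz):
    return A4_MIDI + 12.0 * math.log2(hz / A4_HZ)

def shift_f0_hz(f0_hz, octaves):
    # Shifting pitch by N octaves multiplies frequency by 2**N,
    # which is exactly what shift_f0 above does to the f0 track.
    return f0_hz * 2.0 ** octaves

print(midi_to_hz(69))            # 440.0
print(shift_f0_hz(440.0, 1))     # 880.0 (one octave up)
print(round(hz_to_midi(880.0)))  # 81 (A5 = 69 + 12)
```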
# Tutorial: The Basic Tools of Private Deep Learning
Welcome to PySyft's introductory tutorial on privacy-preserving, decentralized deep learning. This series of notebooks is a step-by-step guide to the new tools and techniques required for doing deep learning on secret/private data/models without centralizing them under one authority.
**Scope:** Note that we won't just be talking about how to decentralize / encrypt data; we'll look at how PySyft can be used to decentralize the entire ecosystem around data, even including the databases where data is stored and queried, and the neural networks that are used to extract information from data. As new extensions to PySyft are created, these notebooks will be extended with new tutorials to explain the new functionality.
Authors:
- Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask)
Translation:
- Bogdan Ivanyuk-Skulskiy - Facebook: [Bogdan Ivanyuk](https://www.facebook.com/bogdan.ivanyuk) - LinkedIn: [Bogdan Ivanyuk](https://www.linkedin.com/in/bogdan-ivanyuk-skulskiy-982739163/)
## Outline:
- Part 1: The Basic Tools of Private Deep Learning
## Why take these notebooks?
**1) A career advantage** - Over the past 20 years, the digital revolution has made data more and more accessible in ever larger quantities as analog processes have become digitized. However, under new regulation such as [GDPR](https://eugdpr.org/), enterprises are under pressure to have less freedom in how they use - and, more importantly, how they analyze - personal information. **Bottom line:** data scientists won't have access to as much data with "old school" tools, but by learning the tools of private deep learning, you can stay ahead of this curve and gain a competitive advantage in your career.
**2) Entrepreneurial opportunities** - There is a whole host of problems in society that deep learning can solve, but many of the most important ones haven't been explored because doing so would require access to incredibly sensitive information about people. Thus, learning private deep learning unlocks a whole host of new startup opportunities for you that were previously unavailable to others without these toolsets.
**3) Social good** - Deep learning can be used to solve a wide variety of problems in the real world, but deep learning on *personal information* is deep learning about people, *for people*. Learning how to do deep learning on data you don't own represents more than a career or entrepreneurial opportunity: it is the opportunity to help solve some of the most personal and important problems in people's lives - and to do it at scale.
## What can I do?
- Star PySyft on GitHub! - [https://github.com/OpenMined/PySyft](https://github.com/OpenMined/PySyft)
- Make a video about the material in this notebook!
# Part -1: Prerequisites
- Know PyTorch - if not, take the course at http://fast.ai and come back
- Read the PySyft framework paper https://arxiv.org/pdf/1811.04017.pdf! This will give you background on how PySyft is built, which will help things make more sense.
# Part 0: Setup
To begin, you'll need to make sure you have the right packages installed. To do so, head to PySyft's readme and follow the setup instructions.
- Install Python 3.6 or higher
- Install PyTorch 1.4
- pip install syft[udacity]
If any part of this doesn't work for you (or any of the tests fail) - first check the [README](https://github.com/OpenMined/PySyft.git) for installation help, then open a GitHub Issue or ping the #beginner channel in our Slack! [slack.openmined.org](http://slack.openmined.org/)
```
# Run this cell to see if things work
import sys
import torch
from torch.nn import Parameter
import torch.nn as nn
import torch.nn.functional as F
import syft as sy
hook = sy.TorchHook(torch)
torch.tensor([1,2,3,4,5])
```
If that cell executed, you are off to the races! Let's do this!
# Part 1: The basic tools of private, decentralized data science
So, the first question you may be wondering is: how in the world do we train a model on data we do not have access to?
Well, the answer is surprisingly simple. If you are used to working in PyTorch, then you are used to working with torch.Tensor objects!
```
x = torch.tensor([1,2,3,4,5])
y = x + x
print(y)
```
Obviously, using these fancy (and powerful!) tensors is important, but it also requires that you have the data on your local machine. This is where our journey begins.
# Section 1.1 - Sending tensors to Bob's machine
Whereas normally we would perform deep learning on the machine that holds the data, now we want to perform this kind of computation on some **other** machine. More specifically, we can no longer simply assume that the data is on our local machine.
Thus, instead of using Torch tensors, we are going to work with **pointers** to tensors. Let me show you what I mean. First, let's create a "pretend" machine owned by a "pretend" person - we will call him Bob.
```
bob = sy.VirtualWorker(hook, id="bob")
```
Let's say Bob's machine is on another planet - perhaps on Mars! But at the moment the machine is empty. Let's create some data so we can send it to Bob and learn about pointers!
```
x = torch.tensor([1,2,3,4,5])
y = torch.tensor([1,1,1,1,1])
```
And now, let's send our tensors to Bob!!
```
x_ptr = x.send(bob)
y_ptr = y.send(bob)
x_ptr
```
BOOM! Now Bob has two tensors! Don't believe me? Have a look for yourself!
```
bob._objects
z = x_ptr + x_ptr
z
bob._objects
```
When we called `x.send(bob)`, it returned a new object that we called `x_ptr`. This is our first *pointer* to a tensor. Pointers to tensors do NOT hold data themselves. Instead, they simply contain metadata about a tensor (with data) stored on another machine. The purpose of these tensors is to give us an intuitive API for telling the other machine to compute functions using this tensor. Let's take a look at the metadata that pointers contain.
```
x_ptr
```
Check out that metadata!
There are two main attributes specific to pointers:
- `x_ptr.location : bob`, the location, a reference to the location that the pointer points to
- `x_ptr.id_at_location : <random integer>`, the id where the tensor is stored at that location
They are printed in the format `<id_at_location>@<location>`
There are also other, more generic attributes:
- `x_ptr.id : <random integer>`, the id of our pointer tensor, allocated randomly
- `x_ptr.owner : "me"`, the worker that owns the pointer tensor; here it is the local worker, named "me"
```
x_ptr.location
bob
bob == x_ptr.location
x_ptr.id_at_location
x_ptr.owner
```
You may wonder why the local worker that owns the pointer is also a VirtualWorker, even though we did not create it.
Interestingly, just as we had a VirtualWorker object for Bob, we (by default) always have one for ourselves as well. This worker is created automatically when we call `hook = sy.TorchHook()`, so you usually do not need to create it yourself.
```
me = sy.local_worker
me
me == x_ptr.owner
```
And finally, just as we can call .send() on a tensor, we can call .get() on a pointer to a tensor to get it back!!!
```
x_ptr
x_ptr.get()
y_ptr
y_ptr.get()
z.get()
bob._objects
```
And as you can see... Bob no longer has any tensors!!! They have moved back to our machine!
# Section 1.2 - Using tensor pointers
So, sending and receiving tensors from Bob is great, but this is hardly deep learning! We want to be able to perform tensor _operations_ on remote tensors. Fortunately, tensor pointers make this quite easy! You can use pointers just like regular tensors!
```
x = torch.tensor([1,2,3,4,5]).send(bob)
y = torch.tensor([1,1,1,1,1]).send(bob)
z = x + y
z
```
And voilà!
Behind the scenes, something very powerful happened. Instead of x and y computing the addition locally, the command was serialized and sent to Bob, who performed the computation, created the tensor z, and then returned a pointer to z back to us!
If we call .get() on that pointer, we receive the result back on our machine!
```
z.get()
```
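The round trip above can be sketched in plain Python. The following is a toy illustration of pointer semantics (NOT PySyft's actual implementation): a pointer holds only a worker reference and a key into that worker's store, arithmetic executes on the worker and yields a new pointer, and `get()` moves the result back locally.

```python
# Toy sketch of pointer semantics -- an illustration, not PySyft internals.
class Worker:
    """Pretend remote machine: a dict mapping ids to stored values."""
    def __init__(self):
        self.store = {}
        self._next_id = 0

    def put(self, value):
        key = self._next_id
        self._next_id += 1
        self.store[key] = value
        return Pointer(self, key)

class Pointer:
    """Holds only metadata (worker + id); the data lives on the worker."""
    def __init__(self, worker, key):
        self.worker, self.key = worker, key

    def __add__(self, other):
        # The "command" executes on the remote worker; we get a new pointer back.
        a = self.worker.store[self.key]
        b = self.worker.store[other.key]
        return self.worker.put([x + y for x, y in zip(a, b)])

    def get(self):
        # Move the data back to the local machine, removing it from the worker.
        return self.worker.store.pop(self.key)

bob = Worker()
x_ptr = bob.put([1, 2, 3, 4, 5])
y_ptr = bob.put([1, 1, 1, 1, 1])
z = (x_ptr + y_ptr).get()
print(z)  # [2, 3, 4, 5, 6]
```

Note that after `get()` the result is removed from Bob's store, mirroring how the real tensors above moved back to our machine.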
### Torch functions
This API has been extended to all of Torch's operations!!!
```
x
y
z = torch.add(x,y)
z
z.get()
```
### Variables (including backpropagation!)
```
x = torch.tensor([1,2,3,4,5.], requires_grad=True).send(bob)
y = torch.tensor([1,1,1,1,1.], requires_grad=True).send(bob)
z = (x + y).sum()
z.backward()
x = x.get()
x
x.grad
```
So as you can see, the API is really quite flexible and capable of performing nearly any operation you would normally perform in Torch, on *remote data*. This lays the groundwork for our more advanced privacy-preserving protocols such as federated learning, secure multi-party computation, and differential privacy!
# Congratulations!!! - Time to join the community!
Congratulations on completing this notebook! If you enjoyed it and would like to join the movement toward privacy-preserving, decentralized ownership of AI, you can do so in the following ways!
### Star PySyft on GitHub
The easiest way to help our community is simply to star the repository on GitHub! This helps raise awareness of the cool tools we are building.
- [Star PySyft](https://github.com/OpenMined/PySyft)
### Join our Slack!
The best way to keep up to date on the latest developments is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org)
### Join a code project!
The best way to contribute to our community is to become a code contributor! At any time you can go to the PySyft GitHub Issues page and filter for "Projects". This will show you all the top-level tickets giving an overview of which projects you can join! If you do not want to join a project but would like to do a bit of coding, you can also look for more "one-off" mini-projects by searching for GitHub issues labeled "good first issue".
- [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject)
- [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
### Donate
If you do not have time to contribute to our codebase but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
[OpenMined's Open Collective Page](https://opencollective.com/openmined)
| github_jupyter |
<img src="images/usm.jpg" width="480" height="240" align="left"/>
# MAT281 - Lab N°06
## Class objectives
* Reinforce the basic concepts of E.D.A.
## Contents
* [Problem 01](#p1)
## Problem 01
<img src="./images/logo_iris.jpg" width="360" height="360" align="center"/>
The **Iris dataset** contains samples of three Iris species (Iris setosa, Iris virginica, and Iris versicolor). Four features were measured from each sample: the length and width of the sepal and petal, in centimeters.
The first step is to load the dataset and look at its first rows:
```
# libraries
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
pd.set_option('display.max_columns', 500) # Show more dataframe columns
# Show matplotlib plots in jupyter notebook/lab
%matplotlib inline
# load the data
df = pd.read_csv(os.path.join("data","iris_contaminados.csv"))
df.columns = ['sepalLength',
'sepalWidth',
'petalLength',
'petalWidth',
'species']
df.head()
```
### Basis of the experiment
The first step is to identify the variables that influence the study and their nature.
* **species**:
    * Description: name of the Iris species.
    * Data type: *string*
    * Constraints: only three types exist (setosa, virginica, and versicolor).
* **sepalLength**:
    * Description: sepal length.
    * Data type: *float*.
    * Constraints: values lie between 4.0 and 7.0 cm.
* **sepalWidth**:
    * Description: sepal width.
    * Data type: *float*.
    * Constraints: values lie between 2.0 and 4.5 cm.
* **petalLength**:
    * Description: petal length.
    * Data type: *float*.
    * Constraints: values lie between 1.0 and 7.0 cm.
* **petalWidth**:
    * Description: petal width.
    * Data type: *float*.
    * Constraints: values lie between 0.1 and 2.5 cm.
Your objective is to carry out a proper **E.D.A.**; to do so, follow these instructions:
1. Count the elements of the **species** column and correct them according to your own criteria. Replace NaN values with "default".
```
df['species'].unique()
```
We can see that the values come in several inconsistent variants, so we correct them as follows:
```
#Fix the species column: lowercase the values,
# strip whitespace, and replace NaN with "default"
df['species'] = df['species'].str.lower().str.strip()
df.loc[df['species'].isnull(),'species'] = "default"
#Build a mask excluding species == "default"
mask_species = df['species'] != "default"
#Count the distinct species
print("The species column contains the following number of distinct values:",
      len(df[mask_species]['species'].unique()))
# Filter
df = df[mask_species]
```
2. Create a box plot of the petal and sepal lengths and widths. Replace NaN values with **0**.
```
df['sepalLength'].unique()
df['sepalWidth'].unique()
df['petalLength'].unique()
df['petalWidth'].unique()
```
We can see that all the columns contain **NaN** values, so we correct them by replacing them with 0:
```
# Replace the NaN values with 0
df.loc[df['sepalLength'].isnull(),'sepalLength'] = 0
df.loc[df['sepalWidth'].isnull(),'sepalWidth'] = 0
df.loc[df['petalLength'].isnull(),'petalLength'] = 0
df.loc[df['petalWidth'].isnull(),'petalWidth'] = 0
```
Now that the values are corrected, we plot:
```
stats_df = df.drop(['species'], axis=1)
### Build the stats_df DataFrame without the indicated column
sns.boxplot(data=stats_df)
### Draw the box plot
```
3. A range of valid values for the petal and sepal lengths and widths was defined above. Add a column named **label** that identifies which of these values fall outside the valid range.
The **label** column will contain 1s and 0s: 0 for rows that do not satisfy the ranges and 1 for those that do.
```
#Valid range for sepalLength
mask_sepalL_inf = df['sepalLength'] >= 4.0
mask_sepalL_sup = df['sepalLength'] <= 7.0
#Valid range for sepalWidth
mask_sepalW_inf = df['sepalWidth'] >= 2.0
mask_sepalW_sup = df['sepalWidth'] <= 4.5
#Valid range for petalLength
mask_petalL_inf = df['petalLength'] >= 1.0
mask_petalL_sup = df['petalLength'] <= 7.0
#Valid range for petalWidth
mask_petalW_inf = df['petalWidth'] >= 0.1
mask_petalW_sup = df['petalWidth'] <= 2.5
#Combine the masks per column
mask_sepalL = mask_sepalL_inf & mask_sepalL_sup
mask_sepalW = mask_sepalW_inf & mask_sepalW_sup
mask_petalL = mask_petalL_inf & mask_petalL_sup
mask_petalW = mask_petalW_inf & mask_petalW_sup
#Mask for rows whose values are all within range
mask_label = mask_sepalL & mask_sepalW & mask_petalL & mask_petalW
#Add a label column initialized to 0
df['label'] = 0
#Mark rows that satisfy every range with 1
#(df.loc avoids the chained-assignment warning that df['label'][i] = 1 raises)
df.loc[mask_label, 'label'] = 1
#Keep the rows that satisfy the ranges; they are reused later
df_aux = df[mask_label]
df.head(13)
```
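The eight comparison masks above can also be written more compactly with `Series.between` (inclusive on both ends by default). A sketch on a small made-up sample, using the same column names as the lab:

```python
import pandas as pd

# Small made-up sample for illustration only
sample = pd.DataFrame({
    'sepalLength': [5.1, 9.9],
    'sepalWidth':  [3.5, 3.0],
    'petalLength': [1.4, 1.5],
    'petalWidth':  [0.2, 0.3],
})
in_range = (sample['sepalLength'].between(4.0, 7.0)
            & sample['sepalWidth'].between(2.0, 4.5)
            & sample['petalLength'].between(1.0, 7.0)
            & sample['petalWidth'].between(0.1, 2.5))
sample['label'] = in_range.astype(int)
print(sample['label'].tolist())  # [1, 0] -- the second row's sepalLength is out of range
```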
4. Create a plot of *sepalLength* vs *petalLength* and another of *sepalWidth* vs *petalWidth*, categorized by the **label** column. Draw conclusions from your results.
```
#Draw a categorized strip plot
grafico = sns.catplot(x = 'sepalLength',
y = 'petalLength',
data = df,
hue = 'label',
col = 'label',
kind = 'strip')
grafico.set_xticklabels(rotation=-50)
#Draw a categorized strip plot
grafico = sns.catplot(x = 'sepalWidth',
y = 'petalWidth',
data = df,
hue = 'label',
col = 'label',
kind = 'strip')
grafico.set_xticklabels(rotation = -50)
```
Both the *sepalLength vs petalLength* and the *sepalWidth vs petalWidth* plots clearly show that the points outside the established ranges are much more dispersed, so they can be considered anomalies in the iris measurements, or cases where the values were recorded incorrectly in the dataset.
5. Filter the valid data and create a plot of *sepalLength* vs *petalLength* categorized by the **species** label.
```
# Keep the data that satisfies the ranges
df = df_aux
palette = sns.color_palette("hls", 3)
sns.lineplot(
x = 'sepalLength',
y = 'petalLength',
hue = 'species', # color by species
data = df,
ci = None,
palette = palette
)
```
```
import pandas as pd
df_all = pd.read_csv("48648_88304_bundle_archive/alldata.csv")
df_all = df_all.loc[df_all['location'].notnull()]
df_all
df_all['location'] = [x.strip().split(',', maxsplit=1)[0] for x in df_all['location']]
df_all['location'].value_counts()[:10]
df_all_location_lat_lon = {
'New York':[40.71, -74],
'Seattle':[47.61, -122.33],
'Cambridge':[42.37, -71.11],
'Boston':[42.36, -71.06],
'San Francisco':[37.77, -122.42],
'Chicago':[41.88, -87.63],
'San Diego':[32.72, -117.16],
'Washington':[38.91, -77.04],
'Mountain View':[37.39, -122.09],
'Atlanta':[33.75, -84.39]
}
df_ny = df_all.loc[df_all['location'] == 'New York']
df_ny_1 = df_ny.loc[[x in df_ny['position'].value_counts().keys().tolist()[:7] for x in df_ny['position']]]
print(df_ny['position'].value_counts()[:7])
sum(df_ny['position'].value_counts()[:7])
df_ny_1
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.basemap import Basemap
df_all_location_text_lat_lon = {
'New York':[39.71, -73],
'Seattle':[48.61, -124.33],
'Cambridge':[45.37, -71.11],
'Boston':[43.36, -72.06],
'San Francisco':[38.77, -123.42],
'Chicago':[43, -89.63],
'San Diego':[32.72, -117.16],
'Washington':[37.91, -76.04],
'Mountain View':[35.39, -121.09],
'Atlanta':[34.75, -85.39]
}
top_10_locs = df_all['location'].value_counts()[:10]
top_10_locs
a = top_10_locs / (max(top_10_locs) - min(top_10_locs))
a = a*10
a
fig = plt.figure(figsize=(8, 8))
m = Basemap(projection='lcc', resolution='c',
width=8E6, height=5E6,
lat_0=40, lon_0=-100,)
#m.etopo(scale=0.5, alpha=0.5)
m.drawcoastlines(color='lightblue', linewidth=1.5)
# Map (long, lat) to (x, y) for plotting
for name, loc in df_all_location_lat_lon.items():
    t_loc = m(loc[1], loc[0])
    text_loc_ll = df_all_location_text_lat_lon[name]
    text_loc = m(text_loc_ll[1], text_loc_ll[0])
    plt.plot(t_loc[0], t_loc[1], marker='o', alpha=0.45, markersize=a[name])
    plt.text(text_loc[0], text_loc[1], name, fontsize=12);
plt.title("Quantity of Data Science jobs in major cities")
text_loc = m(-130, 15)
plt.text(text_loc[0], text_loc[1], "Size of circle represents the quantity of jobs in that location", fontsize=12)
fig, ax = plt.subplots()
ax.bar(range(0,10), height=top_10_locs)
ax.set_xticks(range(0,10))
ax.set_xticklabels(top_10_locs.keys(), rotation=45)
fig, ax = plt.subplots(figsize=[7,7])
pie, texts = ax.pie((top_10_locs/sum(top_10_locs)))
ax.legend(pie, top_10_locs.keys(), loc='center right', bbox_to_anchor=(1, 0, 0.25, 1))
ax.set_title("Percentage share of total jobs in top 10 locations", pad=-100)
df_all.loc[df_all['location']=="Cambridge"]['position'].value_counts()
plots = []
for name in top_10_locs.keys():
    top_5_positions = df_all.loc[df_all['location']==name]['position'].value_counts()[:5]
    fig, ax = plt.subplots()
    colors = ['r','b','g','orange','lightblue']
    bars = ax.bar(range(0,5), top_5_positions, color=colors)
    print(top_5_positions)
    ax.set_xticks(range(0,5))
    ax.set_xticklabels([x[:21] for x in top_5_positions.keys()], rotation=45)
    title = f'{name}, Total jobs: {len(df_all.loc[df_all["location"]==name])}'
    ax.set_title(title)
    ax.legend(handles=bars, labels=[f'{x}: {top_5_positions[x]}' for x,y in top_5_positions.items()])
    plots.append([fig, ax])
    plt.savefig(f"top_job_by_city/{name}.jpg")
    plt.show()
```
```
import pandas as pd
import numpy as np
np.__version__
```
## Predicting diabetes using machine learning
```
# Import all the tools we need
# Regular EDA and plotting libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# Models from Scikit-learn
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
#Model Evaluations
from sklearn.model_selection import train_test_split, cross_val_score,RandomizedSearchCV, GridSearchCV
from sklearn.metrics import confusion_matrix, classification_report, precision_score, recall_score, f1_score, plot_roc_curve
```
## Load data
```
df = pd.read_csv("diabetes.csv")
df
df.shape
```
## Examining the data further.
```
df.head()
df.tail()
df["Outcome"].value_counts()
df["Outcome"].value_counts().plot(kind="bar", color = ["salmon", "lightblue"])
df.info()
df.isna().sum()
df.describe()
```
## Age and body mass index for diabetes
```
#Create another figure
plt.figure(figsize = (10,6))
#Scatter with positive examples
plt.scatter(df.Age[df.Outcome == 1], df.BMI[df.Outcome==1], c="salmon")
#Scatter with negative examples
plt.scatter(df.Age[df.Outcome==0], df.BMI[df.Outcome==0], c = "lightblue")
# Add some helpful info
plt.title("Diabetes as a function of Age and BMI")
plt.xlabel("Age")
plt.ylabel("BMI")
plt.legend(["Disease", "No Disease"])
df.Age.plot.hist()
```
## Feature correlations
```
df.corr()
corr_matrix = df.corr()
fig, ax = plt.subplots(figsize = (15,10))
ax = sns.heatmap(corr_matrix, annot = True, linewidths = 0.5, fmt = ".2f", cmap = "YlGnBu")
```
## Modelling the data
```
df.head()
X = df.drop("Outcome", axis=1)
y = df["Outcome"]
X
y
#Split the data into train and test sets
np.random.seed(42)
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=0.2)
X_train
y_train, len(y_train)
```
Time to build a machine learning model and see which one fits best.
We'll use 6 models:
1. LogisticRegression
2. K_NearestNeighborsClassifier
3. RandomForestClassifier
4. SVM
5. Naive Bayes
6. Decision Tree
```
#put models in a dict
models = {"LogisticRegression": LogisticRegression(), "LinearSVC": LinearSVC(),
"Naive Bayes": GaussianNB(), "KNN": KNeighborsClassifier(), "RandomForest": RandomForestClassifier(),
"Decision Tree": DecisionTreeClassifier()}
#Create a function to fit and score models
def fit_and_score(models, X_train, X_test, y_train, y_test):
    np.random.seed(42)
    model_scores = {}
    # loop through models
    for name, model in models.items():
        model.fit(X_train, y_train)
        model_scores[name] = model.score(X_test, y_test)
    return model_scores
model_scores = fit_and_score(models=models, X_train = X_train, X_test = X_test,y_train = y_train, y_test = y_test)
model_scores
model_compare = pd.DataFrame(model_scores, index=["accuracy"])
model_compare.T.plot.bar()
```
# Hypertune Logistic Regression parameters
```
#Create a hyperparameter grid for LogisticRegression
log_reg_grid = {"C": np.logspace(-4, 4, 20),
"solver": ["liblinear"]}
#Create a hyperparameter grid for RandomForestClassifier
rf_grid = {"n_estimators": np.arange(10,1000,50), "max_depth": [None, 3,5,10], "min_samples_split": np.arange(2,20,2),
"min_samples_leaf": np.arange(1,20,2)}
np.logspace(-4, 4, 20)
# Tune LogisticRegressionModel
np.random.seed(42)
# Setup random hyperparameter search for LogisticRegression
rs_log_reg = RandomizedSearchCV(LogisticRegression(), param_distributions=log_reg_grid, cv=5,n_iter=20,verbose=True)
#Fit random hyperparameter search model for LogisticRegression
rs_log_reg.fit(X_train, y_train)
rs_log_reg.best_params_
rs_log_reg.score(X_test,y_test)
# Different hyperparameters for our LogisticRegression model
log_reg_grid = {"C": np.logspace(-4,4,30), "solver": ["liblinear"]}
#Setup grid hyperparameter search for LogisticRegression
gs_log_reg = GridSearchCV(LogisticRegression(), param_grid=log_reg_grid, cv=5, verbose=True)
#Fit grid hyperparameter search model
gs_log_reg.fit(X_train, y_train)
gs_log_reg.best_params_
gs_log_reg.score(X_test, y_test)
```
### Tuning DecisionTree hyperparameters
```
#Create a hyperparameter grid for the DecisionTree
param_dist = {"max_depth": [5, None]}
np.logspace(-4, 4, 20)
# Tune the Decision Tree
np.random.seed(42)
# Setup random hyperparameter search for the DecisionTree
rs_log_reg1 = RandomizedSearchCV(DecisionTreeClassifier(), param_dist, cv=5)
#Fit random hyperparameter search model for the DecisionTree
rs_log_reg1.fit(X_train, y_train)
rs_log_reg1.best_params_
rs_log_reg1.score(X_test,y_test)
```
### Tuning Naive Bayes hyperparameters
```
nb_classifier = GaussianNB()
params_NB = {'var_smoothing': np.logspace(0,-9, num=100)}
gs_NB = GridSearchCV(estimator=nb_classifier,
param_grid=params_NB,
# use any cross validation technique
verbose=1,
scoring='accuracy')
gs_NB.fit(X_train, y_train)
gs_NB.best_params_
gs_NB.score(X_test, y_test)
gs_NB.best_estimator_.score(X_test, y_test)
```
### Evaluate our tuned classifier model
```
y_preds = rs_log_reg1.predict(X_test)
y_preds
y_test
plot_roc_curve(rs_log_reg1, X_test, y_test)
print(confusion_matrix(y_test, y_preds))
sns.set(font_scale=1.5)
def plot_conf_mat(y_test, y_preds):
    """
    Plot a confusion matrix heatmap of the true vs predicted labels.
    """
    fig, ax = plt.subplots(figsize=(3,3))
    ax = sns.heatmap(confusion_matrix(y_test, y_preds), annot=True, cbar=False)
    plt.xlabel("Predicted label")  # columns of confusion_matrix are predictions
    plt.ylabel("True label")       # rows are the true classes
plot_conf_mat(y_test, y_preds)
print(classification_report(y_test, y_preds))
```
### Doing this same stuff now for LogisticRegression
```
y_preds = gs_log_reg.predict(X_test)
y_preds
y_test
plot_roc_curve(gs_log_reg, X_test, y_test)
print(confusion_matrix(y_test, y_preds))
sns.set(font_scale=1.5)
def plot_conf_mat(y_test, y_preds):
    """
    Plot a confusion matrix heatmap of the true vs predicted labels.
    """
    fig, ax = plt.subplots(figsize=(3,3))
    ax = sns.heatmap(confusion_matrix(y_test, y_preds), annot=True, cbar=False)
    plt.xlabel("Predicted label")  # columns of confusion_matrix are predictions
    plt.ylabel("True label")       # rows are the true classes
plot_conf_mat(y_test, y_preds)
print(classification_report(y_test, y_preds))
gs_log_reg.best_params_
clf = LogisticRegression(C=2.592943797404667, solver="liblinear")
cv_acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
cv_acc
cv_precision = cross_val_score(clf, X, y, cv=5, scoring="precision")
cv_precision = np.mean(cv_precision)
cv_precision
cv_recall = cross_val_score(clf, X, y, cv=5, scoring="recall")
cv_recall = np.mean(cv_recall)
cv_recall
cv_f1 = cross_val_score(clf, X, y, cv=5, scoring="f1")
cv_f1 = np.mean(cv_f1)
cv_f1
```
### Feature importance after data modeling
```
clf = LogisticRegression(C=2.592943797404667, solver = "liblinear")
clf.fit(X_train, y_train)
df.head()
clf.coef_
feature_dict = dict(zip(df.columns, list(clf.coef_[0])))
feature_dict
feature_df = pd.DataFrame(feature_dict, index = [0])
feature_df.T.plot.bar(title = "Feature Importance", legend=False)
```
# 2017 Open Data Day Hackathon at Mobile Web Ghana
### Open Tender Contracts Awarded by Ministry of Health
After investigating the dataset for this small project, I realised that the data wasn't scraped well. The dataset has many empty rows. This limited the scope of the project as it cannot answer all questions I have concerning the dataset.
However, for demonstration of data visualisation, I went ahead to work with the dataset.
```
import csv
# open and read in the file
f = open("Health_Sector_Contracts.csv", 'r')
raw_data = list(csv.reader(f))
raw_data = raw_data[1:]
```
### A snapshot of the dataset
```
raw_data[:3]
```
### Empty fields need to be taken out of the dataset to make it clean
```
# data in file not clean make a new object for cleaned data
contracts = []
for data in raw_data:
    contracts.append(data[:6])
```
### After empty fields taken out
```
contracts[:3]
import pandas as pd
fields = ['Tender Description', 'Awarding Agency/Ministry', 'Contract Awarded To', 'Contract Signed on', 'Estimated Contract Completion Date', 'Contract Award Price']
contracts = pd.DataFrame(contracts, columns=fields)
```
### The first 5 rows of the dataset
```
contracts.head()
```
### Further clean up
I will work with the amount involved in each contract. Hence, the amount must be of a type that makes doing math with it possible. To begin with, I have to strip off the "GH¢" that precedes the amount in each awarded contract.
```
contracts['Contract Award Price'] = contracts['Contract Award Price'].str.replace("GH¢", "").str.replace(",", "").str.replace("\n", "").str.rstrip()
pd.options.display.float_format = '{:20,.2f}'.format
contracts['Contract Award Price'] = pd.to_numeric(contracts['Contract Award Price'])
contracts.head()
```
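The chained `str.replace` calls above can also be collapsed into a single regex pass that keeps only digits and the decimal point before converting to numbers. A sketch on made-up sample values:

```python
import pandas as pd

prices = pd.Series(["GH¢1,234.50\n", "GH¢300.00 "])
# Drop everything that is not a digit or a decimal point, then convert
cleaned = pd.to_numeric(prices.str.replace(r"[^\d.]", "", regex=True))
print(cleaned.tolist())  # [1234.5, 300.0]
```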
### Inconsistent company name in "Contract Awarded To" column.
```
contracts[contracts['Contract Awarded To'].str.contains("ERNEST CHEMIST")]
```
From the above, we realise that the company name "Ernest Chemist" in the "Contract Awarded To" column isn't consistent; it appears in different variations. This will break our analysis: for example, if we wanted to see the unique companies that have been awarded contracts, "ERNEST CHEMISTS LTD", "ERNEST CHEMIST LTD", "ERNEST CHEMIST", and "ERNEST CHEMIST LIMITED" would be counted as 4 different companies by the code.
Hence, there is a strong need to make them consistent.
```
contracts.loc[contracts['Contract Awarded To'].str.contains("ERNEST CHEMIST"), 'Contract Awarded To'] ='ERNEST CHEMIST LTD'
contracts[contracts['Contract Awarded To'].str.contains("ERNEST CHEMIST")]
```
### Replace "LIMITED" with "LTD" - A way to check against redundancy
```
contracts.loc[contracts['Contract Awarded To'].str.contains("LIMITED"), 'Contract Awarded To'] = contracts.loc[contracts['Contract Awarded To'].str.contains("LIMITED"), 'Contract Awarded To'].str.replace('LIMITED', 'LTD')
```
### Make a dictionary of companies that got two or more contracts
```
contract_frequency = {}
for company in contracts['Contract Awarded To']:
    if company and company not in contract_frequency:
        contract_frequency[company] = 1
    elif company and company in contract_frequency:
        contract_frequency[company] += 1
more_than_once_contracts = {}
for key, value in contract_frequency.items():
    if value > 1:
        more_than_once_contracts[key] = value
more_than_once_contracts
```
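The loop-built frequency dictionary can also be produced directly with `value_counts`. A sketch on made-up sample data:

```python
import pandas as pd

awarded = pd.Series(["A LTD", "B LTD", "A LTD", "C LTD", "A LTD", "B LTD"])
counts = awarded.value_counts()
# Keep only companies awarded two or more contracts
repeat_winners = counts[counts > 1].to_dict()
print(repeat_winners)  # {'A LTD': 3, 'B LTD': 2}
```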
### Make a DataFrame of the companies that got two or more contracts
```
more_than_once = []
for key, value in more_than_once_contracts.items():
    temp = [key.title(), value]
    more_than_once.append(temp)
more_than_once
fields = ['Company', 'Contract Frequency']
more_than_once_freq = pd.DataFrame(more_than_once, columns=fields)
```
### Final DataFrame
```
more_than_once_freq.head(n=11)
```
### What is the total amount of contracts awarded to the above companies?
```
tot_amt_more_than_once_contracts = {}
for company in more_than_once_contracts:
    total_contract_price = contracts[contracts['Contract Awarded To'] == company]['Contract Award Price'].sum()
    tot_amt_more_than_once_contracts[company] = round(total_contract_price, 2)
tot_amt_more_than_once_contracts
```
### Make a DataFrame from the above dictionary
```
total_price_more_than_once = []
for key, value in tot_amt_more_than_once_contracts.items():
    temp = [key.title(), value]
    total_price_more_than_once.append(temp)
total_price_more_than_once
fields = ['Company', 'Total Contract Price']
more_once_contracts_amt_totals = pd.DataFrame(total_price_more_than_once, columns=fields)
more_once_contracts_amt_totals.head()
```
### Consolidate the DataFrames
```
more_than_once_freq['Total Contract Price'] = more_once_contracts_amt_totals['Total Contract Price']
more_than_once_freq.head(n=11)
```
### CAUTION:
The dataset provided during the hackathon had over 800 missing rows. Hence, the analysis in this small project is for demonstration purposes only. It would be very misleading to use the analysis here in any way apart from demonstration.
### Quick visualization of the above DataFrame
```
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
company = more_than_once_freq.pop('Company')
more_than_once_freq.index = company
from matplotlib.ticker import FormatStrFormatter
fig, ax = plt.subplots()
ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))
more_than_once_freq['Total Contract Price'].plot(kind='barh', figsize=(14,6), sort_columns=True, fontsize=12)
plt.xlabel("Total Amount in GH¢")
plt.ylabel("Companies")
plt.title("Companies that were awarded Contracts more than once\nThe longer the bar, the more money received")
plt.show()
```
```
import numpy as np
import copy
from sklearn import preprocessing
import tensorflow as tf
from tensorflow import keras
import os
import pandas as pd
from matplotlib import pyplot as plt
from numpy.random import seed
np.random.seed(2095)
data = pd.read_excel('Dataset/CardiacPrediction.xlsx')
data.drop(['SEQN','Annual-Family-Income','Height','Ratio-Family-Income-Poverty','X60-sec-pulse',
'Health-Insurance','Glucose','Vigorous-work','Total-Cholesterol','CoronaryHeartDisease','Blood-Rel-Stroke','Red-Cell-Distribution-Width','Triglycerides','Mean-Platelet-Vol','Platelet-count','Lymphocyte','Monocyte','Eosinophils','Mean-cell-Hemoglobin','White-Blood-Cells','Red-Blood-Cells','Basophils','Mean-Cell-Vol','Mean-Cell-Hgb-Conc.','Hematocrit','Segmented-Neutrophils'], axis = 1, inplace=True)
#data['Diabetes'] = data['Diabetes'].replace('3','1')
#data = data.astype(float)
data.loc[data['Diabetes'] == 3, 'Diabetes'] = 1  # .loc avoids chained assignment
#data= data["Diabetes"].replace({"3": "1"},inplace=True)
data["Diabetes"].value_counts()
data["Diabetes"].describe()
#del data['Basophils']
#del data['Health-Insurance']
#del data['Platelet-count']
data.shape
data.columns
data = data[['Gender', 'Age', 'Systolic', 'Diastolic', 'Weight', 'Body-Mass-Index',
'Hemoglobin', 'Albumin', 'ALP', 'AST', 'ALT', 'Cholesterol',
'Creatinine', 'GGT', 'Iron', 'LDH', 'Phosphorus',
'Bilirubin', 'Protein', 'Uric.Acid', 'HDL',
'Glycohemoglobin', 'Moderate-work',
'Blood-Rel-Diabetes', 'Diabetes']]
data.columns
data.isnull().sum()
data.describe()
data.shape
data['Diabetes'].describe()
data.columns
data["Diabetes"].value_counts().sort_index().plot.barh()
#data["Gender"].value_counts().sort_index().plot.barh()
#balanced
#data.corr()
data.columns
data.shape
data.info()
data = data.astype(float)
data.info()
import seaborn as sns
plt.subplots(figsize=(12,8))
sns.heatmap(data.corr(),cmap='inferno', annot=True)
plt.subplots(figsize=(25,15))
data.boxplot(patch_artist=True, sym="k.")
plt.xticks(rotation=90)
minimum = 0
maximum = 0
def detect_outlier(feature):
    first_q = np.percentile(feature, 25)
    third_q = np.percentile(feature, 75)
    IQR = third_q - first_q
    IQR *= 1.5
    minimum = first_q - IQR
    maximum = third_q + IQR
    flag = False
    if(minimum > np.min(feature)):
        flag = True
    if(maximum < np.max(feature)):
        flag = True
    return flag

def remove_outlier(feature):
    first_q = np.percentile(X[feature], 25)
    third_q = np.percentile(X[feature], 75)
    IQR = third_q - first_q
    IQR *= 1.5
    minimum = first_q - IQR # the acceptable minimum value
    maximum = third_q + IQR # the acceptable maximum value
    median = X[feature].median()
    # any value beyond the acceptable range is considered an outlier;
    # we replace outliers with the median value of that feature
    X.loc[X[feature] < minimum, feature] = median
    X.loc[X[feature] > maximum, feature] = median
# taking all the columns except the last one
# last column is the label
X = data.iloc[:, :-1]
for i in range(len(X.columns)):
remove_outlier(X.columns[i])
X = data.iloc[:, :-1]
for i in range(len(X.columns)):
if(detect_outlier(X[X.columns[i]])):
print(X.columns[i], "Contains Outlier")
# repeat the median-replacement pass several times, since pulling extreme
# values to the median shifts the quartiles and can expose new outliers
for _ in range(50):
    for i in range(len(X.columns)):
        remove_outlier(X.columns[i])
plt.subplots(figsize=(15,6))
X.boxplot(patch_artist=True, sym="k.")
plt.xticks(rotation=90)
for i in range(len(X.columns)):
if(detect_outlier(X[X.columns[i]])):
print(X.columns[i], "Contains Outlier")
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
from sklearn.preprocessing import MinMaxScaler, StandardScaler, LabelEncoder
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
#from xgboost import XGBClassifier, plot_importance
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score,confusion_matrix
scaler = StandardScaler()
scaled_data = scaler.fit_transform(X)
scaled_df = pd.DataFrame(data = scaled_data, columns = X.columns)
scaled_df.head()
label = data["Diabetes"]
encoder = LabelEncoder()
label = encoder.fit_transform(label)
X = scaled_df
y = label
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=420)
print(X_train.shape, X_test.shape)
print(y_train.shape, y_test.shape)
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
xnew2=SelectKBest(f_classif, k=20).fit_transform(X, y)
import sklearn.feature_selection as fs
import matplotlib.pyplot as plt
df2 = fs.SelectKBest(k='all')
df2.fit(X, y)
names = X.columns.values[df2.get_support()]
scores = df2.scores_[df2.get_support()]
names_scores = list(zip(names, scores))
ns_df = pd.DataFrame(data = names_scores, columns=
['Features','F_Scores'])
ns_df_sorted = ns_df.sort_values(['F_Scores','Features'], ascending =
[False, True])
print(ns_df_sorted)
#import statsmodels.api as sm
#import pandas
#from patsy import dmatrices
#logit_model = sm.OLS(y_train, X_train)
#result = logit_model.fit()
#print(result.summary2())
#np.exp(result.params)
#params = result.params
#conf = result.conf_int()
#conf['Odds Ratio'] = params.sort_index()
#conf.columns = ['5%', '95%', 'Odds Ratio']
#print(np.exp(conf))
#result.pvalues.sort_values()
#from sklearn.utils import class_weight
#class_weights = class_weight.compute_class_weight('balanced',
# np.unique(y_train),
# y_train)
#model.fit(X_train, y_train, class_weight=class_weights)
from sklearn.model_selection import GridSearchCV
weights = np.linspace(0.05, 0.95, 20)
gsc = GridSearchCV(
estimator=LogisticRegression(),
param_grid={
'class_weight': [{0: x, 1: 1.0-x} for x in weights]
},
scoring='accuracy',
cv=15
)
grid_result = gsc.fit(X, y)
print("Best parameters : %s" % grid_result.best_params_)
# Plot the weights vs f1 score
dataz = pd.DataFrame({ 'score': grid_result.cv_results_['mean_test_score'],
'weight': weights })
dataz.plot(x='weight')
class_weight = {0: 0.5236842105263158,
1: 0.47631578947368425}
#LR
'''
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, classification_report
from mlxtend.plotting import plot_decision_regions, plot_confusion_matrix
from matplotlib import pyplot as plt
lr = LogisticRegression(class_weight='balanced',random_state=420)
# Fit..
lr.fit(X_train, y_train)
# Predict..
y_pred = lr.predict(X_test)
# Evaluate the model
print(classification_report(y_test, y_pred))
plot_confusion_matrix(confusion_matrix(y_test, y_pred))
from sklearn.metrics import roc_curve, auc
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred)
roc_auc = auc(false_positive_rate, true_positive_rate)
roc_auc
'''
'''
from sklearn.svm import SVC
clf_svc_rbf = SVC(kernel="rbf",class_weight='balanced',random_state=4200)
clf_svc_rbf.fit(X_train,y_train)
y_pred_clf_svc_rbf = clf_svc_rbf.predict(X_test)
import matplotlib.pyplot as plt
cm = confusion_matrix(y_test,y_pred_clf_svc_rbf)
#plt.figure(figsize=(5,5))
#sns.heatmap(cm,annot=True)
#plt.show()
#print(classification_report(y_test,y_pred_clf_svc_rbf))
print(classification_report(y_test, y_pred_clf_svc_rbf))
plot_confusion_matrix(confusion_matrix(y_test, y_pred_clf_svc_rbf))
from sklearn.metrics import roc_curve, auc
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred_clf_svc_rbf)
roc_auc = auc(false_positive_rate, true_positive_rate)
roc_auc
'''
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
rd = RandomForestClassifier(class_weight='balanced',random_state=4200)
rd.fit(X_train,y_train)
y_pred_rd = rd.predict(X_test)
import matplotlib.pyplot as plt
cm = confusion_matrix(y_test,y_pred_rd)
#plt.figure(figsize=(5,5))
#sns.heatmap(cm,annot=True,linewidths=.3)
#plt.show()
print(classification_report(y_test,y_pred_rd))
plot_confusion_matrix(confusion_matrix(y_test, y_pred_rd))
from sklearn.metrics import roc_curve, auc
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred_rd)
roc_auc = auc(false_positive_rate, true_positive_rate)
roc_auc
# CV approach
```
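The outlier handling in the cell above relies on the 1.5×IQR fence. A quick sanity check of that rule on a tiny hypothetical sample (not from the dataset):

```python
import numpy as np

# 1.5*IQR fences, as used by detect_outlier/remove_outlier above,
# on a made-up sample where 100 is an obvious outlier
demo = np.array([1, 2, 3, 4, 100])
q1, q3 = np.percentile(demo, 25), np.percentile(demo, 75)
iqr = q3 - q1
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
print(lo, hi)  # (-1.0, 7.0): only 100 falls outside and would be replaced by the median
```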
## SVM
```
# evaluate a logistic regression model using k-fold cross-validation
from numpy import mean
from numpy import std
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import ShuffleSplit
from sklearn.linear_model import LogisticRegression
# create dataset
#X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
# prepare the cross-validation procedure
#cv = KFold(n_splits=5, test_size= 0.2, random_state=0)
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=42)
# create model
model = SVC(kernel='rbf', C=1, class_weight=class_weight)
# evaluate model
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# report performance
print('Accuracy: %.4f (%.4f)' % (mean(scores), std(scores)))
scores
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.metrics import auc
from sklearn.metrics import plot_roc_curve
from sklearn.model_selection import StratifiedKFold
# #############################################################################
# Classification and ROC analysis
# Run classifier with cross-validation and plot ROC curves
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=42)
classifier = svm.SVC(kernel='rbf', probability=True, class_weight=class_weight,
random_state=42)
tprs = []
aucs = []
mean_fpr = np.linspace(0, 1, 100)
fig, ax = plt.subplots()
for i, (train, test) in enumerate(cv.split(X, y)):
    classifier.fit(X.iloc[train], y[train])  # fit on this fold's training indices only
    viz = plot_roc_curve(classifier, X.iloc[test], y[test],
                         name='ROC fold {}'.format(i),
                         alpha=0.3, lw=1, ax=ax)
interp_tpr = np.interp(mean_fpr, viz.fpr, viz.tpr)
interp_tpr[0] = 0.0
tprs.append(interp_tpr)
aucs.append(viz.roc_auc)
ax.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',
label='Chance', alpha=.8)
mean_tpr = np.mean(tprs, axis=0)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
std_auc = np.std(aucs)
ax.plot(mean_fpr, mean_tpr, color='b',
label=r'Mean ROC (AUC = %0.4f $\pm$ %0.4f)' % (mean_auc, std_auc),
lw=2, alpha=.8)
std_tpr = np.std(tprs, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
ax.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2,
label=r'$\pm$ 1 std. dev.')
ax.set(xlim=[-0.05, 1.05], ylim=[-0.05, 1.05],
title="Receiver operating characteristic")
ax.legend(loc="lower right")
plt.show()
```
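The per-fold and mean AUC values above come from `sklearn.metrics.auc`, which is just the trapezoid rule applied to the (FPR, TPR) points. A minimal pure-Python sketch:

```python
def trapezoid_auc(fpr, tpr):
    # area under the piecewise-linear ROC curve via the trapezoid rule
    area = 0.0
    for i in range(1, len(fpr)):
        area += (fpr[i] - fpr[i - 1]) * (tpr[i] + tpr[i - 1]) / 2
    return area

print(trapezoid_auc([0.0, 0.0, 1.0], [0.0, 1.0, 1.0]))  # 1.0 for a perfect ranking
print(trapezoid_auc([0.0, 1.0], [0.0, 1.0]))            # 0.5 for chance
```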
# LR
```
from numpy import mean
from numpy import std
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import ShuffleSplit
from sklearn.linear_model import LogisticRegression
# create dataset
#X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
# prepare the cross-validation procedure
#cv = KFold(n_splits=5, test_size= 0.2, random_state=0)
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=42)
# create model
model = LogisticRegression(class_weight=class_weight)
# evaluate model
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# report performance
print('Accuracy: %.4f (%.4f)' % (mean(scores), std(scores)))
scores
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.metrics import auc
from sklearn.metrics import plot_roc_curve
from sklearn.model_selection import StratifiedKFold
# #############################################################################
# Data IO and generation
# Import some data to play with
#iris = datasets.load_iris()
#X = iris.data
#y = iris.target
#X, y = X[y != 2], y[y != 2]
#n_samples, n_features = X.shape
# Add noisy features
#random_state = np.random.RandomState(0)
#X = np.c_[X, random_state.randn(n_samples, 200 * n_features)]
# #############################################################################
# Classification and ROC analysis
# Run classifier with cross-validation and plot ROC curves
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=42)
classifier = LogisticRegression(class_weight=class_weight,random_state=42)
tprs = []
aucs = []
mean_fpr = np.linspace(0, 1, 100)
fig, ax = plt.subplots()
for i, (train, test) in enumerate(cv.split(X, y)):
    classifier.fit(X.iloc[train], y[train])  # fit on this fold's training indices only
    viz = plot_roc_curve(classifier, X.iloc[test], y[test],
                         name='ROC fold {}'.format(i),
                         alpha=0.3, lw=1, ax=ax)
interp_tpr = np.interp(mean_fpr, viz.fpr, viz.tpr)
interp_tpr[0] = 0.0
tprs.append(interp_tpr)
aucs.append(viz.roc_auc)
ax.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',
label='Chance', alpha=.8)
mean_tpr = np.mean(tprs, axis=0)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
std_auc = np.std(aucs)
ax.plot(mean_fpr, mean_tpr, color='b',
label=r'Mean ROC (AUC = %0.4f $\pm$ %0.4f)' % (mean_auc, std_auc),
lw=2, alpha=.8)
std_tpr = np.std(tprs, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
ax.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2,
label=r'$\pm$ 1 std. dev.')
ax.set(xlim=[-0.05, 1.05], ylim=[-0.05, 1.05],
title="Receiver operating characteristic example")
ax.legend(loc="lower right")
plt.show()
```
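`ShuffleSplit(n_splits=10, test_size=0.2)`, used throughout these cells, draws ten independent random 80/20 splits (unlike `KFold`, test sets may overlap across splits). A minimal sketch of the idea, with hypothetical sizes:

```python
import random

def shuffle_split(n_samples, n_splits, test_frac, seed=42):
    # each split independently shuffles all indices and carves off a test block
    rng = random.Random(seed)
    n_test = int(n_samples * test_frac)
    splits = []
    for _ in range(n_splits):
        idx = list(range(n_samples))
        rng.shuffle(idx)
        splits.append((idx[n_test:], idx[:n_test]))
    return splits

for train_idx, test_idx in shuffle_split(10, 3, 0.2):
    print(len(train_idx), len(test_idx))  # 8 2, three times
```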
## RF
```
from numpy import mean
from numpy import std
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import ShuffleSplit
# create dataset
#X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
# prepare the cross-validation procedure
#cv = KFold(n_splits=5, test_size= 0.2, random_state=0)
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=42)
# create model
model = RandomForestClassifier(class_weight=class_weight)
# evaluate model
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# report performance
print('Accuracy: %.4f (%.4f)' % (mean(scores), std(scores)))
scores
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.metrics import auc
from sklearn.metrics import plot_roc_curve
from sklearn.model_selection import StratifiedKFold
# #############################################################################
# Data IO and generation
# Import some data to play with
#iris = datasets.load_iris()
#X = iris.data
#y = iris.target
#X, y = X[y != 2], y[y != 2]
#n_samples, n_features = X.shape
# Add noisy features
#random_state = np.random.RandomState(0)
#X = np.c_[X, random_state.randn(n_samples, 200 * n_features)]
# #############################################################################
# Classification and ROC analysis
# Run classifier with cross-validation and plot ROC curves
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=42)
classifier = RandomForestClassifier(class_weight=class_weight,random_state=42)
tprs = []
aucs = []
mean_fpr = np.linspace(0, 1, 100)
fig, ax = plt.subplots()
for i, (train, test) in enumerate(cv.split(X, y)):
    classifier.fit(X.iloc[train], y[train])  # fit on this fold's training indices only
    viz = plot_roc_curve(classifier, X.iloc[test], y[test],
                         name='ROC fold {}'.format(i),
                         alpha=0.3, lw=1, ax=ax)
interp_tpr = np.interp(mean_fpr, viz.fpr, viz.tpr)
interp_tpr[0] = 0.0
tprs.append(interp_tpr)
aucs.append(viz.roc_auc)
ax.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',
label='Chance', alpha=.8)
mean_tpr = np.mean(tprs, axis=0)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
std_auc = np.std(aucs)
ax.plot(mean_fpr, mean_tpr, color='b',
label=r'Mean ROC (AUC = %0.4f $\pm$ %0.4f)' % (mean_auc, std_auc),
lw=2, alpha=.8)
std_tpr = np.std(tprs, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
ax.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2,
label=r'$\pm$ 1 std. dev.')
ax.set(xlim=[-0.05, 1.05], ylim=[-0.05, 1.05],
title="Receiver operating characteristic of Random Forest Classifier")
ax.legend(loc="lower right")
plt.show()
```
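The mean ROC curve plotted above is built by interpolating each fold's TPR onto a shared FPR grid and averaging pointwise. A toy version with two hypothetical folds:

```python
import numpy as np

mean_fpr = np.linspace(0, 1, 5)
fold_curves = [([0.0, 0.5, 1.0], [0.0, 0.8, 1.0]),   # hypothetical fold 1
               ([0.0, 0.5, 1.0], [0.0, 0.6, 1.0])]   # hypothetical fold 2
tprs = [np.interp(mean_fpr, fpr, tpr) for fpr, tpr in fold_curves]
mean_tpr = np.mean(tprs, axis=0)  # pointwise mean of the two interpolated curves
print(mean_tpr)
```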
## DT
```
from numpy import mean
from numpy import std
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import ShuffleSplit
from sklearn.tree import DecisionTreeClassifier
# create dataset
#X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
# prepare the cross-validation procedure
#cv = KFold(n_splits=5, test_size= 0.2, random_state=0)
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=42)
# create model
model = DecisionTreeClassifier(class_weight=class_weight)
# evaluate model
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# report performance
print('Accuracy: %.4f (%.4f)' % (mean(scores), std(scores)))
scores
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.metrics import auc
from sklearn.metrics import plot_roc_curve
from sklearn.model_selection import StratifiedKFold
# #############################################################################
# Data IO and generation
# Import some data to play with
#iris = datasets.load_iris()
#X = iris.data
#y = iris.target
#X, y = X[y != 2], y[y != 2]
#n_samples, n_features = X.shape
# Add noisy features
#random_state = np.random.RandomState(0)
#X = np.c_[X, random_state.randn(n_samples, 200 * n_features)]
# #############################################################################
# Classification and ROC analysis
# Run classifier with cross-validation and plot ROC curves
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=42)
classifier = DecisionTreeClassifier(class_weight=class_weight,random_state=42)
tprs = []
aucs = []
mean_fpr = np.linspace(0, 1, 100)
fig, ax = plt.subplots()
for i, (train, test) in enumerate(cv.split(X, y)):
    classifier.fit(X.iloc[train], y[train])  # fit on this fold's training indices only
    viz = plot_roc_curve(classifier, X.iloc[test], y[test],
                         name='ROC fold {}'.format(i),
                         alpha=0.3, lw=1, ax=ax)
interp_tpr = np.interp(mean_fpr, viz.fpr, viz.tpr)
interp_tpr[0] = 0.0
tprs.append(interp_tpr)
aucs.append(viz.roc_auc)
ax.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',
label='Chance', alpha=.8)
mean_tpr = np.mean(tprs, axis=0)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
std_auc = np.std(aucs)
ax.plot(mean_fpr, mean_tpr, color='b',
label=r'Mean ROC (AUC = %0.4f $\pm$ %0.4f)' % (mean_auc, std_auc),
lw=2, alpha=.8)
std_tpr = np.std(tprs, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
ax.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2,
label=r'$\pm$ 1 std. dev.')
ax.set(xlim=[-0.05, 1.05], ylim=[-0.05, 1.05],
title="Receiver operating characteristic example")
ax.legend(loc="lower right")
plt.show()
#from sklearn.model_selection import cross_val_score
#from sklearn import svm
#clf = svm.SVC(kernel='rbf', C=1, class_weight=class_weight)
#scores = cross_val_score(clf, X, y, cv=5)
#print("Accuracy: %0.4f (+/- %0.4f)" % (scores.mean(), scores.std() * 2))
#clf.score(X_test, y_test)
```
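Every model above passes the same `class_weight` dict. Conceptually this just scales each training sample's loss term by the weight of its true class; a minimal sketch with hypothetical numbers:

```python
import math

def weighted_log_loss(y_true, p_pos, weights):
    # per-sample log-loss scaled by the weight of the sample's true class
    p = p_pos if y_true == 1 else 1.0 - p_pos
    return weights[y_true] * -math.log(p)

w = {0: 0.52, 1: 0.48}
print(weighted_log_loss(1, 0.8, w))  # confident, correct class-1 sample: small loss
print(weighted_log_loss(0, 0.8, w))  # misclassified class-0 sample pays more
```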
## ANN
```
import keras
from keras.models import Sequential
from keras.layers import Dense,Dropout
classifier=Sequential()
classifier.add(Dense(units=256, kernel_initializer='uniform',activation='relu',input_dim=24))
classifier.add(Dense(units=128, kernel_initializer='uniform',activation='relu'))
classifier.add(Dropout(rate=0.5))  # 'p' was the Keras 1 argument name; Keras 2 uses 'rate'
classifier.add(Dense(units=64, kernel_initializer='uniform',activation='relu'))
classifier.add(Dropout(rate=0.4))
classifier.add(Dense(units=32, kernel_initializer='uniform',activation='relu'))
classifier.add(Dense(units=1, kernel_initializer='uniform',activation='sigmoid'))
classifier.compile(optimizer='adam',loss="binary_crossentropy",metrics=['accuracy'])
classifier.fit(X_train,y_train,batch_size=10,epochs=100,class_weight=class_weight,validation_data=(X_test, y_test))
#clf_svc_rbf.fit(X_train,y_train)
from sklearn.metrics import confusion_matrix,classification_report,roc_auc_score,auc,f1_score
y_pred = classifier.predict(X_test) > 0.9  # binarize sigmoid outputs at a 0.9 decision threshold
import matplotlib.pyplot as plt
cm = confusion_matrix(y_test,y_pred)
#plt.figure(figsize=(5,5))
#sns.heatmap(cm,annot=True)
#plt.show()
#print(classification_report(y_test,y_pred_clf_svc_rbf))
print(classification_report(y_test, y_pred))
#plot_confusion_matrix(confusion_matrix(y_test, y_pred))
from sklearn.metrics import roc_curve, auc
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred)
roc_auc = auc(false_positive_rate, true_positive_rate)
roc_auc
from sklearn.metrics import roc_curve,roc_auc_score
from sklearn.metrics import auc
fpr , tpr , thresholds = roc_curve ( y_test , y_pred)
auc_keras = auc(fpr, tpr)
print("AUC Score:",auc_keras)
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.4f)' % auc_keras)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
#from sklearn.tree import DecisionTreeClassifier
#from sklearn.model_selection import cross_val_score
#dt = DecisionTreeClassifier(class_weight=class_weight)
#scores = cross_val_score(clf, X, y, cv=5)
#print("Accuracy: %0.4f (+/- %0.4f)" % (scores.mean(), scores.std() * 2))
'''
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix,classification_report,roc_auc_score,auc,f1_score
lr = LogisticRegression()
lr.fit(X_train,y_train)
y_pred_logistic = lr.predict(X_test)
import matplotlib.pyplot as plt
cm = confusion_matrix(y_test,y_pred_logistic)
plt.figure(figsize=(5,5))
sns.heatmap(cm,annot=True,linewidths=.3)
plt.show()
print(classification_report(y_test,y_pred_logistic))
from sklearn.metrics import roc_curve, auc
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred_logistic)
roc_auc = auc(false_positive_rate, true_positive_rate)
roc_auc
print(f1_score(y_test, y_pred_logistic,average="macro"))
'''
from sklearn import datasets
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import VotingClassifier
clf1 = SVC(kernel='rbf', C=1, class_weight=class_weight,random_state=42)
clf2 = LogisticRegression(class_weight=class_weight,random_state=42)
clf3 = RandomForestClassifier(class_weight=class_weight,random_state=42)
clf4 = DecisionTreeClassifier(class_weight=class_weight,random_state=42)
#clf5 = Sequential()
eclf = VotingClassifier( estimators=[('svm', clf1), ('lr', clf2), ('rf', clf3), ('dt',clf4)],
voting='hard')
for clf, label in zip([clf1, clf2, clf3,clf4 ,eclf], ['SVM', 'LR', 'RF','DT', 'Ensemble']):
scores = cross_val_score(clf, X, y, scoring='accuracy', cv=10)
print("Accuracy: %0.4f (+/- %0.4f) [%s]" % (scores.mean(), scores.std(), label))
scores
```
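With `voting='hard'`, the ensemble above simply takes the majority label across the four fitted classifiers. A minimal sketch:

```python
from collections import Counter

def hard_vote(labels):
    # majority label across the per-model predictions for one sample
    return Counter(labels).most_common(1)[0][0]

print(hard_vote([1, 0, 1, 1]))  # three of four models say 1
```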
# Summarization with blurr
> blurr is a library I started that integrates huggingface transformers with the world of fastai v2, giving fastai devs everything they need to train, evaluate, and deploy transformer-specific models. In this article, I provide a simple example of how to use blurr's new summarization capabilities to train, evaluate, and deploy a BART summarization model.
*Updated on 08/21/2020 to use fastai 2.0.0 and also demo batch-time padding*.
*Updated on 09/25/2020 to use on the fly batch-time tokenization*.
*Updated on 11/12/2020 with support for fastai >= 2.1.5 and mixed precision*.
*Updated on 12/31/2020 (too much for one sentence, see the docs for more).*
- toc: false
- badges: true
- comments: true
- author: Wayde Gilliam
- categories: [fastai, huggingface, blurr, summarization, text generation]
- image: images/articles/blurr-logo-small.png
- hide: false
- search_exclude: false
- show_tags: true
```
# only run this cell if you are in collab
# !pip install ohmeow-blurr -q
# !pip install datasets -q
# !pip install bert-score -q
import datasets
import pandas as pd
from fastai.text.all import *
from transformers import *
from blurr.data.all import *
from blurr.modeling.all import *
```
## Data Preparation
We're going to use the [datasets](https://huggingface.co/datasets) library from huggingface to grab our raw data. This package gives you access to all kinds of NLP-related datasets, explanations of each, and various task-specific metrics to use in evaluating your model. The best part: everything comes down to you in JSON, which makes it a breeze to get up and running quickly!
We'll just use a subset of the training set to build both our training and validation DataLoaders
```
raw_data = datasets.load_dataset('cnn_dailymail', '3.0.0', split='train[:1%]')
df = pd.DataFrame(raw_data)
df.head()
```
We begin by getting the huggingface objects needed for this task (e.g., the architecture, tokenizer, config, and model). We'll use blurr's `get_hf_objects` helper method here.
```
pretrained_model_name = "facebook/bart-large-cnn"
hf_arch, hf_config, hf_tokenizer, hf_model = BLURR_MODEL_HELPER.get_hf_objects(pretrained_model_name,
model_cls=BartForConditionalGeneration)
hf_arch, type(hf_config), type(hf_tokenizer), type(hf_model)
```
Next we need to build out our DataBlock. Remember that a DataBlock is a blueprint describing how to move your raw data into something modelable. That blueprint is executed when we pass it a data source, which in our case will be the DataFrame we created above. We'll use a random subset to move things along a bit faster for the demo as well.
Notice that the blurr DataBlock has been dramatically simplified given the shift to on-the-fly, batch-time tokenization. All we need is to define a single `HF_Seq2SeqBeforeBatchTransform` instance, optionally passing a list to any of the tokenization arguments to differentiate the values for the input and summary sequences. In addition to specifying a custom max length for the inputs, we can do the same for the output sequences ... and with the latest release of blurr, we can even customize text generation by passing in `text_gen_kwargs`.
We pass `noop` as a type transform for our targets because everything is already handled by the batch transform now.
```
text_gen_kwargs = default_text_gen_kwargs(hf_config, hf_model, task='summarization'); text_gen_kwargs
hf_batch_tfm = HF_Seq2SeqBeforeBatchTransform(hf_arch, hf_config, hf_tokenizer, hf_model,
max_length=256, max_tgt_length=130, text_gen_kwargs=text_gen_kwargs)
blocks = (HF_Seq2SeqBlock(before_batch_tfm=hf_batch_tfm), noop)
dblock = DataBlock(blocks=blocks, get_x=ColReader('article'), get_y=ColReader('highlights'), splitter=RandomSplitter())
dls = dblock.dataloaders(df, bs=2)
len(dls.train.items), len(dls.valid.items)
```
It's always a good idea to check out a batch of data and make sure the shapes look right.
```
b = dls.one_batch()
len(b), b[0]['input_ids'].shape, b[1].shape
```
Even better, we can take advantage of blurr's TypeDispatched version of `show_batch` to look at things a bit more intuitively. We pass in the `dls` via the `dataloaders` argument so we can access all tokenization/modeling configuration stored in our batch transform above.
```
dls.show_batch(dataloaders=dls, max_n=2)
```
## Training
We'll prepare our BART model for training by wrapping it in blurr's `HF_BaseModelWrapper` object and using the `HF_BaseModelCallback` callback, as usual. A new `HF_Seq2SeqMetricsCallback` object allows us to specify the Seq2Seq metrics we want to use: things like rouge and bertscore for tasks like summarization, and metrics such as meteor, bleu, and sacrebleu for translation tasks. Using huggingface's metrics library is as easy as specifying a metrics configuration such as the one below.
Once we have everything in place, we'll freeze our model so that only the last layer group's parameters are trainable. See [here](https://docs.fast.ai/basic_train.html#Discriminative-layer-training) for how discriminative learning rates work in fastai.
**Note:** This has been tested with a lot of other Seq2Seq models; see the docs for more information.
```
seq2seq_metrics = {
'rouge': {
'compute_kwargs': { 'rouge_types': ["rouge1", "rouge2", "rougeL"], 'use_stemmer': True },
'returns': ["rouge1", "rouge2", "rougeL"]
},
'bertscore': {
'compute_kwargs': { 'lang': 'en' },
'returns': ["precision", "recall", "f1"]
}
}
model = HF_BaseModelWrapper(hf_model)
learn_cbs = [HF_BaseModelCallback]
fit_cbs = [HF_Seq2SeqMetricsCallback(custom_metrics=seq2seq_metrics)]
learn = Learner(dls,
model,
opt_func=ranger,
loss_func=CrossEntropyLossFlat(),
cbs=learn_cbs,
splitter=partial(seq2seq_splitter, arch=hf_arch)).to_fp16()
learn.create_opt()
learn.freeze()
```
Still experimenting with how to use fastai's learning rate finder for these kinds of models. If you all have any suggestions or interesting insights to share, please let me know. We're only going to train the frozen model for one epoch for this demo, but feel free to progressively unfreeze the model and train the other layers to see if you can best my results below.
```
learn.lr_find(suggestions=True)
```
It's also not a bad idea to run a batch through your model and make sure the shape of what goes in, and comes out, looks right.
```
b = dls.one_batch()
preds = learn.model(b[0])
len(preds),preds[0], preds[1].shape
learn.fit_one_cycle(1, lr_max=3e-5, cbs=fit_cbs)
```
And now we can look at the generated predictions using our `text_gen_kwargs` above
```
learn.show_results(learner=learn, max_n=2)
```
Even better though, blurr augments the fastai Learner with a `blurr_generate` method that allows you to use huggingface's `PreTrainedModel.generate` method to create something more human-like.
```
test_article = """
The past 12 months have been the worst for aviation fatalities so far this decade - with the total of number of people killed if airline
crashes reaching 1,050 even before the Air Asia plane vanished. Two incidents involving Malaysia Airlines planes - one over eastern Ukraine and the other in the Indian Ocean - led to the deaths of 537 people, while an Air Algerie crash in Mali killed 116 and TransAsia Airways crash in Taiwan killed a further 49 people. The remaining 456 fatalities were largely in incidents involving small commercial planes or private aircraft operating on behalf of companies, governments or organisations. Despite 2014 having the highest number of fatalities so far this decade, the total number of crashes was in fact the lowest since the first commercial jet airliner took off in 1949 - totalling just 111 across the whole world over the past 12 months. The all-time deadliest year for aviation was 1972 when a staggering 2,429 people were killed in a total of 55 plane crashes - including the crash of Aeroflot Flight 217, which killed 174 people in Russia, and Convair 990 Coronado, which claimed 155 lives in Spain. However this year's total death count of 1,212, including those presumed dead on board the missing Air Asia flight, marks a significant rise on the very low 265 fatalities in 2013 - which led to it being named the safest year in aviation since the end of the Second World War. Scroll down for videos. Deadly: The past 12 months have been the worst for aviation fatalities so far this decade - with the total of number of people killed if airline crashes reaching 1,158 even before the Air Asia plane (pictured) vanished. Fatal: Two incidents involving Malaysia Airlines planes - one over eastern Ukraine (pictured) and the other in the Indian Ocean - led to the deaths of 537 people. Surprising: Despite 2014 having the highest number of fatalities so far this decade, the total number of crashes was in fact the lowest since the first commercial jet airliner took off in 1949. 
2014 has been a horrific year for Malaysia-based airlines, with 537 people dying on Malaysia Airlines planes, and a further 162 people missing and feared dead in this week's Air Asia incident. In total more than half the people killed in aviation incidents this year had been flying on board Malaysia-registered planes. In January a total of 12 people lost their lives in five separate incidents, while the same number of crashes in February killed 107.
"""
```
We can override the `text_gen_kwargs` we specified for our `DataLoaders` when we generate text using blurr's `Learner.blurr_generate` method
```
outputs = learn.blurr_generate(test_article, early_stopping=True, num_beams=4, num_return_sequences=3)
for idx, o in enumerate(outputs):
print(f'=== Prediction {idx+1} ===\n{o}\n')
```
What about inference? Easy!
```
learn.metrics = None
learn.export(fname='ft_cnndm_export.pkl')
inf_learn = load_learner(fname='ft_cnndm_export.pkl')
inf_learn.blurr_generate(test_article)
```
## That's it
[blurr](https://ohmeow.github.io/blurr/) supports a number of huggingface transformer model tasks in addition to summarization (e.g., sequence classification, token classification, question answering, causal language modeling, and translation). The docs include examples for each of these tasks if you're curious to learn more.
For more information about ohmeow or to get in contact with me, head over to [ohmeow.com](ohmeow.com) for all the details.
Thanks!
# Importing libraries
```
import nltk
import glob
import os
import numpy as np
import string
import pickle
from gensim.models import Doc2Vec
from gensim.models.doc2vec import LabeledSentence
from tqdm import tqdm
from sklearn import utils
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from matplotlib import pyplot
from nltk import sent_tokenize
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from collections import Counter
from collections import defaultdict
X_train_text = []
Y_train = []
X_test_text =[]
Y_test =[]
Vocab = {}
VocabFile = "aclImdb/imdb.vocab"
```
# Create Vocabulary Function
```
def CreateVocab():
with open(VocabFile, encoding='latin-1') as f:
words = f.read().splitlines()
stop_words = set(stopwords.words('english'))
i=0
for word in words:
if word not in stop_words:
Vocab[word] = i
i+=1
print(len(Vocab))
```
# Cleaning Data
```
def clean_review(text):
tokens = word_tokenize(text)
tokens = [w.lower() for w in tokens]
table = str.maketrans('', '', string.punctuation)
stripped = [w.translate(table) for w in tokens]
words = [word for word in stripped if word.isalpha()]
stop_words = set(stopwords.words('english'))
words = [w for w in words if not w in stop_words]
return words
```
# Generating Word Matrices
```
def BoWMatrix(docs):
vectorizer = CountVectorizer(binary=True,vocabulary = Vocab)
Doc_Term_matrix = vectorizer.fit_transform(docs)
return Doc_Term_matrix
def TfidfMatrix(docs):
vectorizer = TfidfVectorizer(vocabulary = Vocab,norm = 'l1')
Doc_Term_matrix = vectorizer.fit_transform(docs)
return Doc_Term_matrix
```
# ROC Curve Function
```
def ROC(Y_train, pred1, Y_test, pred2):
fpr1, tpr1, thresholds1 = roc_curve(Y_train, pred1)
pyplot.plot([0, 1], [0, 1], linestyle='--')
pyplot.plot(fpr1, tpr1, marker='.', color='blue', label="Train", linewidth=1.0)
fpr2, tpr2, thresholds2 = roc_curve(Y_test, pred2)
pyplot.plot(fpr2, tpr2, marker='.', color='red', label="Test", linewidth=1.0)
pyplot.legend()
pyplot.show()
def ROC2(X, pred, pred1, pred2):
fpr, tpr, thresholds = roc_curve(X, pred)
pyplot.plot([0, 1], [0, 1], linestyle='--')
pyplot.plot(fpr, tpr, marker='.')
fpr1, tpr1, thresholds1 = roc_curve(X, pred1)
pyplot.plot([0, 1], [0, 1], linestyle='--')
pyplot.plot(fpr1, tpr1, marker='.')
fpr2, tpr2, thresholds2 = roc_curve(X, pred2)
pyplot.plot([0, 1], [0, 1], linestyle='--')
pyplot.plot(fpr2, tpr2, marker='.')
pyplot.show()
```
# Naive Bayes Function
```
def NB(X,Y_train,Xtest,Y_test,mtype):
if mtype == "Bow":
model = BernoulliNB()
elif mtype == "Tfidf":
model = MultinomialNB()
else:
model = GaussianNB()
model.fit(X,Y_train)
pred1 = model.predict(X)
pred2 = model.predict(Xtest)
acc1 = accuracy_score(Y_train,pred1)
acc2 = accuracy_score(Y_test,pred2)
print("NaiveBayes + " + mtype + " Train Accuracy: " + str(acc1*100) + "%")
print("NaiveBayes + " + mtype + " Test Accuracy: " + str(acc2*100) + "%")
prob1 = model.predict_proba(X)
prob1 = prob1[:, 1]
prob2 = model.predict_proba(Xtest)
prob2 = prob2[:, 1]
#ROC(Y_train, pred1, Y_test, pred2)
ROC(Y_train, prob1, Y_test, prob2)
```
# Logistic Regression Function
```
def LR(X,Y_train,Xtest,Y_test,mtype):
model = LogisticRegression()
model.fit(X,Y_train)
pred1 = model.predict(X)
pred2 = model.predict(Xtest)
acc1 = accuracy_score(Y_train,pred1)
acc2 = accuracy_score(Y_test,pred2)
print("LogisticRegression + " + mtype + " Train Accuracy: " + str(acc1*100) + "%")
print("LogisticRegression + " + mtype + " Test Accuracy: " + str(acc2*100) + "%")
prob1 = model.predict_proba(X)
prob1 = prob1[:, 1]
prob2 = model.predict_proba(Xtest)
prob2 = prob2[:, 1]
#ROC(Y_train, pred1, Y_test, pred2)
ROC(Y_train, prob1, Y_test, prob2)
```
# Random Forest Function
```
def RF(X,Y_train,Xtest,Y_test,mtype):
if mtype == "Bow":
n = 400
md = 100
elif mtype == "Tfidf":
n = 400
md = 100
else:
n = 100
md = 10
model = RandomForestClassifier(n_estimators=n, bootstrap=True,
max_depth=md, max_features='auto',
min_samples_leaf=4, min_samples_split=10)
model.fit(X,Y_train)
pred1 = model.predict(X)
pred2 = model.predict(Xtest)
acc1 = accuracy_score(Y_train,pred1)
acc2 = accuracy_score(Y_test,pred2)
print("RandomForest + " + mtype + " Train Accuracy: " + str(acc1*100) + "%")
print("RandomForest + " + mtype + " Test Accuracy: " + str(acc2*100) + "%")
prob1 = model.predict_proba(X)
prob1 = prob1[:, 1]
prob2 = model.predict_proba(Xtest)
prob2 = prob2[:, 1]
#ROC(Y_train, pred1, Y_test, pred2)
ROC(Y_train, prob1, Y_test, prob2)
```
# Support Vector Machine Function
```
def SVM(X,Y_train,Xtest,Y_test,mtype):
model = LinearSVC()
model.fit(X,Y_train)
pred1 = model.predict(X)
pred2 = model.predict(Xtest)
acc1 = accuracy_score(Y_train,pred1)
acc2 = accuracy_score(Y_test,pred2)
print("SVM + " + mtype + " Train Accuracy: " + str(acc1*100) + "%")
print("SVM + " + mtype + " Test Accuracy: " + str(acc2*100) + "%")
ROC(Y_train, pred1, Y_test, pred2)
```
# Forward Feed Neural Network Function
```
def NN(X,Y_train,Xtest,Y_test,mtype):
model = MLPClassifier(hidden_layer_sizes=(10,10),activation='relu',max_iter=200)
model.fit(X,Y_train)
pred1 = model.predict(X)
pred2 = model.predict(Xtest)
acc1 = accuracy_score(Y_train,pred1)
acc2 = accuracy_score(Y_test,pred2)
print("FFN + " + mtype + " Train Accuracy: " + str(acc1*100) + "%")
print("FFN + " + mtype + " Test Accuracy: " + str(acc2*100) + "%")
prob1 = model.predict_proba(X)
prob1 = prob1[:, 1]
prob2 = model.predict_proba(Xtest)
prob2 = prob2[:, 1]
#ROC(Y_train, pred1, Y_test, pred2)
ROC(Y_train, prob1, Y_test, prob2)
```
# Loading Data
```
path1 = 'aclImdb/train/pos/*.txt'
path2 = 'aclImdb/train/neg/*.txt'
path3 = 'aclImdb/test/pos/*.txt'
path4 = 'aclImdb/test/neg/*.txt'
files1 = glob.glob(path1)
files2 = glob.glob(path2)
files3 = glob.glob(path3)
files4 = glob.glob(path4)
#Positive labels
for i,filename in enumerate(files1):
f = open(filename,"r+", encoding='latin-1')
text = f.read()
f.close()
X_train_text.append(text)
Y_train.append(1)
#Neg labels
for j,filename in enumerate(files2):
f = open(filename,"r+", encoding='latin-1')
text = f.read()
f.close()
X_train_text.append(text)
Y_train.append(0)
#Test labels +
for k,filename in enumerate(files3):
f = open(filename,"r+", encoding='latin-1')
text = f.read()
f.close()
X_test_text.append(text)
Y_test.append(1)
#Test labels -
for l,filename in enumerate(files4):
f = open(filename,"r+", encoding='latin-1')
text = f.read()
f.close()
X_test_text.append(text)
Y_test.append(0)
CreateVocab();
```
# Generating Word Matrix for Test & Train Data
```
def Getbowvec(X_train_text,Y_train,X_test_text,Y_test):
X = BoWMatrix(X_train_text)
Xtest = BoWMatrix(X_test_text)
return X,Xtest
def Gettfidfvec(X_train_text,Y_train,X_test_text,Y_test):
X = TfidfMatrix(X_train_text)
Xtest = TfidfMatrix(X_test_text)
return X,Xtest
```
# Doc2Vec Representation
```
'''
def LabelRev(reviews,label_string):
result = []
prefix = label_string
for i, t in enumerate(reviews):
# print(t)
result.append(LabeledSentence(t, [prefix + '_%s' % i]))
return result
LabelledXtrain = LabelRev(X_train_text,"review")
LabelledXtest = LabelRev(X_test_text,"test")
LabelledData = LabelledXtrain + LabelledXtest
modeld2v = Doc2Vec(dm=1, min_count=2, alpha=0.065, min_alpha=0.065)
modeld2v.build_vocab([x for x in tqdm(LabelledData)])
print("Training the Doc2Vec Model.....")
for epoch in range(50):
print("epoch : ",epoch)
modeld2v.train(utils.shuffle([x for x in tqdm(LabelledData)]), total_examples=len(LabelledData), epochs=1)
modeld2v.alpha -= 0.002
modeld2v.min_alpha = modeld2v.alpha
print("Saving Doc2Vec1 Model....")
modeld2v.save('doc2vec1.model')
#print("Saving Doc2Vec Model....")
#modeld2v.save('doc2vec.model')
'''
def Doc2vec(X_train_text,Y_train,X_test_text,Y_test):
model = Doc2Vec.load('doc2vec.model')
#model = Doc2Vec.load('doc2vec1.model')
X = []
Xtest =[]
for i,l in enumerate(X_train_text):
temp = "review" + "_" + str(i)
X.append(model.docvecs[temp])
for i,l in enumerate(X_test_text):
temp = "test" + "_" + str(i)
Xtest.append(model.docvecs[temp])
return X,Xtest
print("Bag of Words is being built...")
X,Xtest = Getbowvec(X_train_text,Y_train,X_test_text,Y_test)
print("Tf-idf is being built...")
X1,Xtest1 = Gettfidfvec(X_train_text,Y_train,X_test_text,Y_test)
print("Doc2Vec is being built...")
X2,Xtest2 = Doc2vec(X_train_text,Y_train,X_test_text,Y_test)
len(X[0])
```
# Applying Classification Algorithms
```
print("Naive Bayes:")
NB(X,Y_train,Xtest,Y_test,"Bow")
NB(X1,Y_train,Xtest1,Y_test,"Tfidf")
NB(X2,Y_train,Xtest2,Y_test,"Doc2Vec")
print("Logistic Regression:")
LR(X,Y_train,Xtest,Y_test,"Bow")
LR(X1,Y_train,Xtest1,Y_test,"Tfidf")
LR(X2,Y_train,Xtest2,Y_test,"Doc2Vec")
print("Random Forest:")
RF(X,Y_train,Xtest,Y_test,"Bow")
RF(X1,Y_train,Xtest1,Y_test,"Tfidf")
RF(X2,Y_train,Xtest2,Y_test,"Doc2Vec")
print("SVM:")
SVM(X,Y_train,Xtest,Y_test,"Bow")
SVM(X1,Y_train,Xtest1,Y_test,"Tfidf")
SVM(X2,Y_train,Xtest2,Y_test,"Doc2Vec")
print("Neural Networks:")
NN(X,Y_train,Xtest,Y_test,"Bow")
NN(X1,Y_train,Xtest1,Y_test,"Tfidf")
NN(X2,Y_train,Xtest2,Y_test,"Doc2Vec")
```
# Lab 2: Object-Oriented Python
## Overview
Now that we've covered rules, definitions, and semantics, we'll play around with actual classes, writing a fair chunk of code and building several classes to solve a variety of problems.
Recall our starting definitions:
- An *object* has identity
- A *name* is a reference to an object
- A *namespace* is an associative mapping from names to objects
- An *attribute* is any name following a dot ('.')
## Course class
### Basic Class
Let’s create a class to represent courses!
A course will have three attributes to start:
1. a department (like `"AI"` or `"CHEM"`),
2. a course code (like `"42"` or `"92SI"`),
3. and a title (like `"IAP"`).
```Python
class Course:
def __init__(self, department, code, title):
self.department = department
self.code = code
self.title = title
```
You can assume that all arguments to this constructor will be strings.
Running the following code cell will create a class object `Course` and print some information about it.
*Note: If you change the content of this class definition, you will need to re-execute the code cell for it to have any effect. Any instance objects of the old class object will not be automatically updated, so you may need to rerun instantiations of this class object as well.*
```
class Course:
def __init__(self, department, code, title):
self.department = department
self.code = code
self.title = title
print(Course)
print(Course.mro())
print(Course.__init__)
```
We create an instance of the class by instantiating the class object, supplying some arguments.
```Python
iap = Course("AI", "91256", "IAP: Introduction to Algorithms and Programming")
```
Print out the three attributes of the `iap` instance object.
```
iap = Course("AI", "91256", "IAP: Introduction to Algorithms and Programming")
print(iap.department) # Print out the department
print(iap.code) # Print out the code
print(iap.title) # Print out the title
```
### Inheritance
Let's explore inheritance by creating an `AICourse` class that takes an additional parameter `recorded` that defaults to `False`.
```
class AICourse(Course):
def __init__(self, department, code, title, recorded=False):
super().__init__(department, code, title)
self.is_recorded = recorded
```
The `super()` call lets us treat `self` as an instance of the immediate superclass (as determined by the MRO), so we can call the superclass's `__init__` method.
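As a tiny illustration of how `super()` walks the MRO (a hypothetical `A`/`B` pair, not part of the lab):

```python
class A:
    def greet(self):
        return "A"

class B(A):
    def greet(self):
        # super() resolves greet on the next class in B's MRO, i.e. A
        return "B -> " + super().greet()

print(B().greet())                    # B -> A
print([c.__name__ for c in B.mro()])  # ['B', 'A', 'object']
```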
We can instantiate our new class:
```Python
a = Course("AI", "91254", "Image Processing and Computer Vision")
b = AICourse("AI", "91247", "Cognition and Neuroscience")
x = AICourse("AI", "91247X", "Cognition and Neuroscience", recorded=True)
print(a.code) # => "91254"
print(b.code) # => "91247"
```
Read through the following statements and try to predict their output.
```Python
type(a)
isinstance(a, Course)
isinstance(b, Course)
isinstance(x, Course)
isinstance(x, AICourse)
issubclass(Course, AICourse)
issubclass(AICourse, Course)
type(a) == type(b)
type(b) == type(x)
a == b
b == x
```
```
a = Course("AI", "91254", "Image Processing and Computer Vision")
b = AICourse("AI", "91247", "Cognition and Neuroscience")
x = AICourse("AI", "91247X", "Cognition and Neuroscience", recorded=True)
print("1.", type(a))
print("2.", isinstance(a, Course))
print("3.", isinstance(b, Course))
print("4.", isinstance(x, Course))
print("5.", isinstance(x, AICourse))
print("6.", issubclass(Course, AICourse))
print("7.", issubclass(AICourse, Course))
print("8.", type(a) == type(b))
print("9.", type(b) == type(x))
print("10.", a == b)
print("11.", b == x)
```
### Additional Attributes
Let's add more functionality to the `Course` class!
* Add an attribute `students` to the instances of the `Course` class that tracks whether students are present. Initially, `students` should be empty.
* Create a method `mark_attendance(*students)` that takes a variadic number of `students` and marks them as present.
* Create a method `is_present(student)` that takes a student’s name as a parameter and returns `True` if the student is present and `False` otherwise.
```
class Course:
def __init__(self, department, code, title):
self.department = department
self.code = code
self.title = title
self.students = {}
    def mark_attendance(self, *students):
        # Variadic, per the spec: mark any number of students as present.
        for student in students:
            self.students[student] = self.students.get(student, 0) + 1
def is_present(self, student):
return student in self.students
```
### Implementing Prerequisites
Now, we'll focus on `AICourse`. We want to implement functionality to determine whether one course is a prerequisite of another. In our implementation, we will assume that the ordering of courses is determined first by the numeric part of the course code: for example, `140` comes before `255`. If there is a tie, the ordering is determined by the default string ordering of the letters that follow: for example, `91247` comes before `91247X`. After implementing, you should be able to see:
```Python
>>> ai91254 = Course("AI", "91254", "Image Processing and Computer Vision")
>>> ai91247 = AICourse("AI", "91247", "Cognition and Neuroscience")
>>> ai91254 > ai91247
True
```
To accomplish this, you will need to implement a magic method `__le__` that will add functionality to determine if a course is a prerequisite for another course. Read up on [total ordering](https://docs.python.org/3/library/functools.html#functools.total_ordering) to figure out what `__le__` should return based on the argument you pass in.
To give a hint on how this piece of functionality might be implemented, consider how you might extract the actual `int` number from the course code attribute.
Additionally, you should implement `__eq__` on `Course`s. Two courses are equal if they are in the same department and have the same course code: the course title doesn't matter here.
```
class Course:
def __init__(self, department, code, title):
self.department = department
self.code = code
self.title = title
self.students = {}
    def mark_attendance(self, *students):
        # Variadic, per the spec: mark any number of students as present.
        for student in students:
            self.students[student] = self.students.get(student, 0) + 1
def is_present(self, student):
return student in self.students
    def __le__(self, other):
        # "Less than or equal": equal codes must compare as True.
        mycode = int(self.code)
        othercode = int(other.code)
        return mycode <= othercode
def __eq__(self, other):
mycode = int(self.code)
othercode = int(other.code)
mydepartment = self.department
otherdepartment = other.department
return (mycode == othercode) and (mydepartment == otherdepartment)
```
#### Sorting
Now that we've written a `__le__` method and an `__eq__` method, we've implemented everything we need to speak about an "ordering" of `Course`s.
##### Let Python do all the rest (Optional)
Using the [`functools.total_ordering` decorator](https://docs.python.org/3/library/functools.html#functools.total_ordering), go back to the `Course` class definition and "decorate" it by adding `@total_ordering` immediately before the class definition, so that all of the comparison methods are implemented.
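For instance, a minimal decorated version might look like this (a stripped-down, hypothetical `MiniCourse`, not the full class):

```python
from functools import total_ordering

@total_ordering
class MiniCourse:
    def __init__(self, department, code):
        self.department = department
        self.code = code

    def __eq__(self, other):
        return (self.department, int(self.code)) == (other.department, int(other.code))

    def __le__(self, other):
        return int(self.code) <= int(other.code)

a = MiniCourse("AI", "91247")
b = MiniCourse("AI", "91254")
print(a < b)   # True -- __lt__ was derived by @total_ordering
print(b >= a)  # True
```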
Then, you should be able to run:
```
# Let's make ai91254 an AI course
ai91254 = AICourse("AI", "91254", "Image Processing and Computer Vision")
ai91247 = AICourse("AI", "91247", "Cognition and Neuroscience")
ai91762 = AICourse("AI", "91762", "Combinatorial Decision Making and Optimization")
ai91249 = AICourse("AI", "91249", "Machine Learning and Deep Learning")
courses = [ai91247, ai91254, ai91762, ai91249]
courses.sort()
courses  # => [ai91247, ai91249, ai91254, ai91762]
```
### Instructors (optional)
Allow the class to take a splat argument `instructors` that will take any number of strings and store them as a list of instructors.
Modify the way you track attendance in the `Course` class to map a Python date object (you can use the `datetime` module) to a data structure tracking what students are there on that day.
```
class CourseWithInstructors:
pass
```
### Catalog (optional)
Implement a class called `CourseCatalog` that is constructed from a list of `Course`s. Write a method for the `CourseCatalog` which returns a list of courses in a given department. Additionally, write a method for `CourseCatalog` that returns all courses that contain a given piece of search text in their title.
Feel free to implement any other interesting methods you'd like.
```
class CourseCatalog:
def __init__(self, courses):
pass
def courses_by_department(self, department_name):
pass
def courses_by_search_term(self, search_snippet):
pass
```
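One possible sketch of the catalog, assuming each course exposes `department` and `title` attributes (the `SimpleCourse` namedtuple here is a hypothetical stand-in for `Course`, used only for the demo):

```python
from collections import namedtuple

# Hypothetical lightweight stand-in for Course, just for this demo.
SimpleCourse = namedtuple("SimpleCourse", ["department", "code", "title"])

class CourseCatalog:
    def __init__(self, courses):
        self.courses = list(courses)

    def courses_by_department(self, department_name):
        return [c for c in self.courses if c.department == department_name]

    def courses_by_search_term(self, search_snippet):
        # Case-insensitive substring match on the title.
        return [c for c in self.courses if search_snippet.lower() in c.title.lower()]

catalog = CourseCatalog([
    SimpleCourse("AI", "91254", "Image Processing and Computer Vision"),
    SimpleCourse("AI", "91247", "Cognition and Neuroscience"),
    SimpleCourse("CHEM", "42", "Organic Chemistry"),
])
print(len(catalog.courses_by_department("AI")))          # 2
print(catalog.courses_by_search_term("vision")[0].code)  # 91254
```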
## Inheritance
Consider the following code:
```Python
"""Examples of Single Inheritance"""
class Transportation:
wheels = 0
def __init__(self):
self.wheels = -1
def travel_one(self):
print("Travelling on generic transportation")
def travel(self, distance):
for _ in range(distance):
self.travel_one()
def is_car(self):
return self.wheels == 4
class Bike(Transportation):
def travel_one(self):
print("Biking one km")
class Car(Transportation):
wheels = 4
def travel_one(self):
print("Driving one km")
def make_sound(self):
print("VROOM")
class Ferrari(Car):
pass
t = Transportation()
b = Bike()
c = Car()
f = Ferrari()
```
Predict the outcome of each of the following lines of code.
```Python
isinstance(t, Transportation)
isinstance(b, Bike)
isinstance(b, Transportation)
isinstance(b, Car)
isinstance(b, t)
isinstance(c, Car)
isinstance(c, Transportation)
isinstance(f, Ferrari)
isinstance(f, Car)
isinstance(f, Transportation)
issubclass(Bike, Transportation)
issubclass(Car, Transportation)
issubclass(Ferrari, Car)
issubclass(Ferrari, Transportation)
issubclass(Transportation, Transportation)
b.travel(5)
c.is_car()
f.is_car()
b.is_car()
b.make_sound()
c.travel(10)
f.travel(4)
```
```
class Transportation:
wheels = 0
def __init__(self):
self.wheels = -1
def travel_one(self):
print("Travelling on generic transportation")
def travel(self, distance):
for _ in range(distance):
self.travel_one()
def is_car(self):
return self.wheels == 4
class Bike(Transportation):
wheels = 2
def travel_one(self):
print("Biking one km")
class Car(Transportation):
wheels = 4
def travel_one(self):
print("Driving one km")
def make_sound(self):
print("VROOM")
class Ferrari(Car):
pass
t = Transportation()
b = Bike()
c = Car()
f = Ferrari()
print("1.", isinstance(t, Transportation))
print("2.", isinstance(b, Bike))
print("3.", isinstance(b, Transportation))
print("4.", isinstance(b, Car))
print("5.", isinstance(b, type(Car)))
print("6.", isinstance(c, Car))
print("7.", isinstance(c, Transportation))
print("8.", isinstance(f, Ferrari))
print("9.", isinstance(f, Car))
print("10.", isinstance(f, Transportation))
print("11.", issubclass(Bike, Transportation))
print("12.", issubclass(Car, Transportation))
print("13.", issubclass(Ferrari, Car))
print("14.", issubclass(Ferrari, Transportation))
print("15.", issubclass(Transportation, Transportation))
b.travel(5)
print("16.", c.is_car()) # => c.wheels ?
print("17.", f.is_car()) # => f.wheels ?
print("18.", b.is_car()) # => b.wheels ?
# b.make_sound()
c.travel(10)
f.travel(4)
```
## SimpleGraph
In this part, you'll build the implementation for a `SimpleGraph` class in Python.
In particular, you will need to define a `Vertex` class, an `Edge` class, and a `SimpleGraph` class. The specification is as follows:
A `Vertex` has attributes:
* `name`, a string representing the label of the vertex.
* `edges`, a set representing edges outbound from this vertex to its neighbors
A new Vertex should be initialized with an optional `name`, which defaults to `""`, and should be initialized with an empty edge set.
An `Edge` has attributes:
* `start`, a `Vertex` representing the start point of the edge.
* `end`, a `Vertex` representing the end point of the edge.
* `cost`, a `float` (used for graph algorithms) representing the weight of the edge.
* `visited`, a `bool` (used for graph algorithms) representing whether this edge has been visited before.
Note that for our purposes, an `Edge` is directed.
An `Edge` requires a `start` and `end` vertex in order to be instantiated. `cost` should default to 1, and `visited` should default to `False`, but both should be able to be set via an initializer.
A `SimpleGraph` has attributes
* `verts`, a collection of `Vertex`s (you need to decide the collection type)
* `edges`, a collection of `Edge`s (you need to decide the collection type)
as well as several methods:
* `graph.add_vertex(v)`
* `graph.add_edge(v_1, v_2)`
* `graph.contains_vertex(v)`
* `graph.contains_edge(v_1, v_2)`
* `graph.get_neighbors(v)`
* `graph.is_empty()`
* `graph.size()`
* `graph.remove_vertex(v)`
* `graph.remove_edge(v_1, v_2)`
* `graph.is_neighbor(v1, v2)`
* `graph.is_reachable(v1, v2) # Use any algorithm you like`
* `graph.clear_all()`
The actual implementation details are up to you.
*Note: debugging will be significantly easier if you write `__str__` or `__repr__` methods on your custom classes.*
```
class Vertex:
pass
class Edge:
pass
class SimpleGraph:
pass
```
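To echo the note above about `__repr__`: here is one possible starting point for the two small classes (field names follow the spec; everything else is up to you):

```python
class Vertex:
    def __init__(self, name=""):
        self.name = name
        self.edges = set()

    def __repr__(self):
        # A readable repr makes debugging graph code far less painful.
        return f"Vertex({self.name!r}, degree={len(self.edges)})"

class Edge:
    def __init__(self, start, end, cost=1.0, visited=False):
        self.start = start
        self.end = end
        self.cost = cost
        self.visited = visited

    def __repr__(self):
        return f"Edge({self.start.name!r} -> {self.end.name!r}, cost={self.cost})"

v, w = Vertex("a"), Vertex("b")
e = Edge(v, w)
v.edges.add(e)
print(v)  # Vertex('a', degree=1)
print(e)  # Edge('a' -> 'b', cost=1.0)
```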
### Challenge: Graph Algorithms
If you're feeling up to the challenge, and you have sufficient time, implement other graph algorithms, including those covered in ai91247/X, using your SimpleGraph. The point isn't to check whether you still know your graph algorithms - rather, these algorithms will serve to test the correctness of your graph implementation. The particulars are up to you.
As some suggestions:
* Longest path
* Dijkstra's algorithm
* A*
* Max Flow
* K-Clique
* Largest Connected Component
* is_bipartite
* hamiltonian_path_exists
```
graph = SimpleGraph()
# Your extension code here
```
### Challenge: Using Magic Methods
See if you can rewrite the `SimpleGraph` class using magic methods to emulate the behavior and operators of standard Python. In particular,
```
graph[v] # returns neighbors of v
graph[v] = v_2 # Insert an edge from v to v2
len(graph)
# etc
```
## Timed Key-Value Store (challenge)
Let's build an interesting data structure straight out of an interview programming challenge from [Stripe](https://stripe.com/). This is more of an algorithms challenge than a Python challenge, but we hope you're still interested in tackling it.
At a high-level, we'll be building a key-value store (think `dict` or Java's `HashMap`) that has a `get` method that takes an optional second parameter as a `time` object in Python to return the most recent value before that period in time. If no key-value pair was added to the map before that period in time, return `None`.
For consistency’s sake, let’s call this class `TimedKVStore` and put it into a file called `kv_store.py`
You’ll need some sort of `time` object to track when key-value pairs are getting added to this map. Consider using [the `time` module](https://docs.python.org/3/library/time.html).
To give you an idea of how this class works, this is what should happen after you implement `TimedKVStore`.
```Python
d = TimedKVStore()
t0 = time.time()
d.put("1", 1)
t1 = time.time()
d.put("1", 1.1)
d.get("1") # => 1.1
d.get("1", t1) # => 1
d.get("1", t0) # => None
```
```
class TimedKVStore:
pass
d = TimedKVStore()
t0 = time.time()
d.put("1", 1)
t1 = time.time()
d.put("1", 1.1)
print(d.get("1")) # => 1.1
print(d.get("1", t1)) # => 1
print(d.get("1", t0)) # => None
```
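One possible implementation sketch (not the only design): keep, per key, a list of `(timestamp, value)` pairs in time order, and binary-search it with `bisect` on `get`. The tiny sleeps in the demo just make the timestamps strictly increasing.

```python
import time
from bisect import bisect_left
from collections import defaultdict

class TimedKVStore:
    def __init__(self):
        self._store = defaultdict(list)  # key -> [(timestamp, value), ...] in time order

    def put(self, key, value):
        self._store[key].append((time.time(), value))

    def get(self, key, t=None):
        pairs = self._store.get(key, [])
        if t is None:
            return pairs[-1][1] if pairs else None
        # Leftmost pair with timestamp >= t; everything before it was put before t.
        i = bisect_left(pairs, (t,))
        return pairs[i - 1][1] if i > 0 else None

d = TimedKVStore()
t0 = time.time(); time.sleep(0.01)
d.put("1", 1);    time.sleep(0.01)
t1 = time.time(); time.sleep(0.01)
d.put("1", 1.1)
print(d.get("1"))      # 1.1
print(d.get("1", t1))  # 1
print(d.get("1", t0))  # None
```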
### Remove (challenge)
Implement a method on a `TimedKVStore` to `remove(key)` that takes a key and removes that entire key from the key-value store.
Write another `remove(key, time)` method that takes a key and removes all memory of values before that time method.
## Bloom Filter (challenge)
A bloom filter is a fascinating data structure that supports insertion and probabilistic set-membership queries. Read up on it on Wikipedia!
Write a class `BloomFilter` to implement a bloom filter data structure. Override the `__contains__` method so that membership can be tested with `x in bloom_filter`.
```
class BloomFilter:
pass
```
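A minimal sketch, under the assumption that `k` bit positions per item are derived from salted `hashlib.blake2b` digests and the `m`-bit array is packed into a Python int:

```python
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=5):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # k deterministic positions per item, from k salted hashes.
        for i in range(self.k):
            h = hashlib.blake2b(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        # Probabilistic: may report a false positive, never a false negative.
        return all((self.bits >> p) & 1 for p in self._positions(item))

bf = BloomFilter()
bf.add("alice")
print("alice" in bf)  # True
print("bob" in bf)    # False (up to a tiny false-positive chance)
```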
## Silencer Context Manager (challenge)
In some cases, you may want to suppress the output of a given code block. Maybe it's untrusted code, or maybe it's littered with `print`s that you don't want to comment out. We can use the context manager syntax in Python to define a class that serves as a context manager. We want to use this as:
```Python
with Silencer():
noisy_code()
```
Our class will look something like
```Python
class Silencer:
def __init__(self):
pass
def __enter__(self):
pass
def __exit__(self, *exc):
pass
```
The `__enter__` method is called when the with block is entered, and `__exit__` is called when leaving the block, with any relevant information about an active exception passed in. Write the `__enter__` method to redirect standard output and standard error to `io.StringIO()` objects to capture the output, and make sure that `__exit__` restores the saved stdout and stderr. What would a `__str__` method on a `Silencer` object look like?
Recall that the with statement in Python is *almost* implemented as:
```Python
with open(filename) as f:
raw = f.read()
# is (almost) equivalent to
f = open(filename)
f.__enter__()
try:
raw = f.read()
finally:
f.__exit__() # Closes the file
```
```
class Silencer:
pass
```
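A minimal sketch of the context manager described above (design choices, like what `__str__` returns, are assumptions):

```python
import sys, io

class Silencer:
    def __enter__(self):
        # Save the real streams, then swap in StringIO buffers.
        self._saved = (sys.stdout, sys.stderr)
        self._out, self._err = io.StringIO(), io.StringIO()
        sys.stdout, sys.stderr = self._out, self._err
        return self

    def __exit__(self, *exc):
        # Always restore the saved streams, even if an exception was raised.
        sys.stdout, sys.stderr = self._saved
        return False  # don't swallow exceptions

    def __str__(self):
        # One reasonable design: show whatever stdout output was captured.
        return self._out.getvalue()

with Silencer() as quiet:
    print("you can't hear me")
print("captured:", str(quiet), end="")
```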
## Magic Methods
### Reading
Python provides an enormous number of special methods that a class can override to interoperate with builtin Python operations. You can skim through an [approximate visual list](http://diveintopython3.problemsolving.io/special-method-names.html) from Dive into Python3, or a [more verbose explanation](https://rszalski.github.io/magicmethods/), or the [complete Python documentation](https://docs.python.org/3/reference/datamodel.html#specialnames) on special methods. Fair warning, there are a lot of them, so it's probably better to skim than to really take a deep dive, unless you're loving this stuff.
### Writing (Polynomial Class)
We will write a `Polynomial` class that acts like a number. As a reminder, a [polynomial](https://en.wikipedia.org/wiki/Polynomial) is a mathematical object that looks like $1 + x + x^2$ or $4 - 10x + x^3$ or $-4 - 2x^{10}$. A mathematical polynomial can be evaluated at a given value of $x$. For example, if $f(x) = 1 + x + x^2$, then $f(5) = 1 + 5 + 5^2 = 1 + 5 + 25 = 31$.
Polynomials are also added componentwise: If $f(x) = 1 + 4x + 4x^3$ and $g(x) = 2 + 3x^2 + 5x^3$, then $(f + g)(x) = (1 + 2) + (4 + 0)x + (0 + 3)x^2 + (4 + 5)x^3 = 3 + 4x + 3x^2 + 9x^3$.
Construct a polynomial with a variadic list of coefficients: the zeroth argument is the coefficient of the $x^0$ term, the first argument is the coefficient of the $x^1$ term, and so on. For example, `f = Polynomial(1, 3, 5)` should construct a `Polynomial` representing $1 + 3x + 5x^2$.
You will need to override the addition special method (`__add__`) and the callable special method (`__call__`).
You should be able to emulate the following code:
```Python
f = Polynomial(1, 5, 10)
g = Polynomial(1, 3, 5)
print(f(5)) # => Invokes `f.__call__(5)`
print(g(2)) # => Invokes `g.__call__(2)`
h = f + g # => Invokes `f.__add__(g)`
print(h(3)) # => Invokes `h.__call__(3)`
```
Lastly, implement a method to convert a `Polynomial` to an informal string representation. For example, the polynomial `Polynomial(1, 3, 5)` should be represented by the string `"1 * x^0 + 3 * x^1 + 5 * x^2"`.
```
class Polynomial:
def __init__(self):
pass
def __call__(self, x):
"""Implement `self(x)`."""
pass
def __add__(self, other):
"""Implement `self + other`."""
pass
def __str__(self):
"""Implement `str(x)`."""
pass
```
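If you'd like a reference point (try it yourself first!), here is one possible sketch: coefficients stored lowest-degree first, `zip_longest` for zero-padding the shorter polynomial during addition, and Horner's rule for evaluation.

```python
from itertools import zip_longest

class Polynomial:
    def __init__(self, *coeffs):
        # coeffs[i] is the coefficient of x^i.
        self.coeffs = coeffs

    def __call__(self, x):
        # Horner's rule: fold from the highest-degree coefficient down.
        result = 0
        for c in reversed(self.coeffs):
            result = result * x + c
        return result

    def __add__(self, other):
        # Componentwise addition, padding the shorter polynomial with zeros.
        summed = (a + b for a, b in zip_longest(self.coeffs, other.coeffs, fillvalue=0))
        return Polynomial(*summed)

    def __str__(self):
        return " + ".join(f"{c} * x^{i}" for i, c in enumerate(self.coeffs))

f = Polynomial(1, 5, 10)
g = Polynomial(1, 3, 5)
print(f(5))        # 276
print((f + g)(3))  # 161
print(str(g))      # 1 * x^0 + 3 * x^1 + 5 * x^2
```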
#### Polynomial Extensions (optional)
If you are looking for more, implement additional operations on our `Polynomial` class. You may want to implement `__sub__`, `__mul__`, and `__truediv__`.
You can also implement more complicated mathematical operations, such as `f.derivative()`, which returns a new function that is the derivative of `f`, or `.zeros()`, which returns a collection of the function's zeros.
If you need even more, write a `classmethod` to construct a polynomial from a string representation of it. You should be able to write:
```
f = Polynomial.parse("1 * x^0 + 3 * x^1 + 5 * x^2")
```
#### Challenge (`MultivariatePolynomial`)
Write a class called `MultivariatePolynomial` that represents a polynomial in many variables. For example, $f(x, y, z) = 4xy + 10x^2z - 5x^3yz + y^4z^3$ is a polynomial in three variables.
How would you provide coefficients to the constructor? How would you define the arguments to the callable? How would you implement the mathematical operations efficiently?
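One way to answer the representation question, sketched under the assumption that coefficients are supplied as a dict mapping exponent tuples to coefficients (one tuple slot per variable):

```python
class MultivariatePolynomial:
    def __init__(self, coeffs):
        # e.g. {(1, 1, 0): 4, (2, 0, 1): 10} encodes 4xy + 10x^2z.
        self.coeffs = dict(coeffs)

    def __call__(self, *values):
        # Evaluate each term as coefficient * product of variables raised to exponents.
        total = 0
        for exponents, c in self.coeffs.items():
            term = c
            for v, e in zip(values, exponents):
                term *= v ** e
            total += term
        return total

f = MultivariatePolynomial({(1, 1, 0): 4, (2, 0, 1): 10})
print(f(1, 2, 3))  # 4*(1*2) + 10*(1*3) = 38
```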
## KITTI Object Detection finetuning
### This notebook is used to launch the fine-tuning of FPN on the KITTI object detection benchmark; the code fetches COCO weights for weight initialization
```
data_path = "../datasets/KITTI/data_object_image_2/training"
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()
import numpy as np
import cv2
import random
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog
from detectron2.modeling import build_model
from detectron2.evaluation import COCOEvaluator,PascalVOCDetectionEvaluator
import matplotlib.pyplot as plt
from torch import tensor
from detectron2.data import build_detection_test_loader
from detectron2.evaluation import inference_on_dataset
import torch
from detectron2.structures.instances import Instances
%matplotlib inline
```
## Dataset Parsing
```
import os
import numpy as np
import json
from detectron2.structures import BoxMode
def get_kitti_dicts(img_dir):
    dataset_dicts = []
    items = 0
with open('../datasets/KITTI/kitti_train.txt') as f:
for line in f:
record = {}
image_path = os.path.join(img_dir, 'image_2/%s.png'%line.replace('\n',''))
height, width = cv2.imread(image_path).shape[:2]
record["file_name"] = image_path
record["image_id"] = int(line)
record["height"] = height
record["width"] = width
objs = []
ann_path = os.path.join(img_dir,'label_2/%s.txt'%line.replace('\n',''))
with open(ann_path) as ann_file:
for ann_line in ann_file:
line_items = ann_line.split(' ')
if(line_items[0]=='Car'):
class_id=2
elif(line_items[0]=='Pedestrian'):
class_id=0
elif(line_items[0]=='Cyclist'):
class_id=1
else:
continue
obj = {'bbox':[np.round(float(line_items[4])),np.round(float(line_items[5])),
np.round(float(line_items[6])),np.round(float(line_items[7]))],"category_id": class_id,"iscrowd": 0,"bbox_mode": BoxMode.XYXY_ABS}
objs.append(obj)
record["annotations"] = objs
dataset_dicts.append(record)
items+=1
return dataset_dicts
def get_kitti_val(img_dir):
    dataset_dicts = []
    with open('kitti_val.txt') as f:
        for line in f:
            record = {}
            image_path = os.path.join(img_dir, 'image_2/%s.png' % line.replace('\n', '').zfill(6))
            height, width = cv2.imread(image_path).shape[:2]
            record["file_name"] = image_path
            record["image_id"] = int(line)
            record["height"] = height
            record["width"] = width
            objs = []
            ann_path = os.path.join(img_dir, 'label_2/%s.txt' % line.replace('\n', '').zfill(6))
            with open(ann_path) as ann_file:
                for ann_line in ann_file:
                    line_items = ann_line.split(' ')
                    if line_items[0] == 'Car':
                        class_id = 2
                    elif line_items[0] == 'Pedestrian':
                        class_id = 0
                    elif line_items[0] == 'Cyclist':
                        class_id = 1
                    else:
                        continue
                    obj = {'bbox': [np.round(float(line_items[4])), np.round(float(line_items[5])),
                                    np.round(float(line_items[6])), np.round(float(line_items[7]))],
                           "category_id": class_id, "iscrowd": 0, "bbox_mode": BoxMode.XYXY_ABS}
                    objs.append(obj)
            record["annotations"] = objs
            dataset_dicts.append(record)
    return dataset_dicts
from detectron2.data import DatasetCatalog, MetadataCatalog
# Register each split with its own parser (the val split should use get_kitti_val)
DatasetCatalog.register("kitti_train", lambda: get_kitti_dicts(data_path))
DatasetCatalog.register("kitti_val", lambda: get_kitti_val(data_path))
```
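The per-line parsing logic above can be isolated into a small, testable helper. A sketch with the notebook's class mapping (Pedestrian=0, Cyclist=1, Car=2; other object types skipped). `BoxMode` is left out so the snippet runs without detectron2, and the sample label line is made up in KITTI's format:

```python
import numpy as np

# Hypothetical helper mirroring the annotation parsing above.
CLASS_IDS = {"Pedestrian": 0, "Cyclist": 1, "Car": 2}

def parse_kitti_line(ann_line):
    items = ann_line.split(' ')
    class_id = CLASS_IDS.get(items[0])
    if class_id is None:
        return None  # e.g. 'Van' or 'DontCare' entries are skipped
    # Fields 4..7 of a KITTI label line are the 2D bbox: x1 y1 x2 y2
    bbox = [float(np.round(float(v))) for v in items[4:8]]
    return {"bbox": bbox, "category_id": class_id, "iscrowd": 0}

sample = "Car 0.00 0 -1.58 587.01 173.33 614.12 200.12 1.65 1.67 3.64 -0.65 1.71 46.70 -1.59"
ann = parse_kitti_line(sample)
```

Keeping the parsing in one function like this makes it easy to unit-test the class mapping and rounding separately from the dataset loop.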
## Training Parameters
```
from detectron2.engine import DefaultTrainer
from detectron2.config import get_cfg
import os
cfg = get_cfg()
cfg.merge_from_file("../configs/COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("kitti_train",)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 2
#load coco weights
cfg.MODEL.WEIGHTS="https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_R_50_FPN_3x/137849458/model_final_280758.pkl"
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.0025 # pick a good LR
cfg.SOLVER.MAX_ITER = 20000
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512 #(default: 512)
cfg.OUTPUT_DIR='../models/KITTI/KITTI_DET'
```
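With `IMS_PER_BATCH = 2`, it helps to translate `MAX_ITER` into epochs. A rough sketch (the KITTI training-split size of 3712 images is an assumption, not something the notebook states):

```python
# Rough iterations-to-epochs arithmetic for the solver settings above.
num_images = 3712       # assumed KITTI train-split size
ims_per_batch = 2       # cfg.SOLVER.IMS_PER_BATCH
max_iter = 20000        # cfg.SOLVER.MAX_ITER

images_seen = max_iter * ims_per_batch
approx_epochs = images_seen / num_images
```

Under these assumptions the schedule covers roughly 10-11 passes over the data.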
### Initialize the trainer and load the dataset
```
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
```
### Begin Training
```
trainer.resume_or_load(resume=False)
trainer.train()
```
# SP LIME
## Regression explainer with boston housing prices dataset
```
from sklearn.datasets import load_boston
import sklearn.ensemble
import sklearn.linear_model
import sklearn.model_selection
import numpy as np
from sklearn.metrics import r2_score
np.random.seed(1)
#load example dataset
boston = load_boston()
#print a description of the variables
print(boston.DESCR)
#train a regressor
rf = sklearn.ensemble.RandomForestRegressor(n_estimators=1000)
train, test, labels_train, labels_test = sklearn.model_selection.train_test_split(boston.data, boston.target, train_size=0.80, test_size=0.20)
rf.fit(train, labels_train);
#train a linear regressor
lr = sklearn.linear_model.LinearRegression()
lr.fit(train,labels_train)
#print the R^2 score of the random forest
print("Random Forest R^2 Score: " + str(round(r2_score(labels_test, rf.predict(test)), 3)))
print("Linear Regression R^2 Score: " + str(round(r2_score(labels_test, lr.predict(test)), 3)))
# import lime tools
import lime
import lime.lime_tabular
# generate an "explainer" object
categorical_features = np.argwhere(np.array([len(set(boston.data[:,x])) for x in range(boston.data.shape[1])]) <= 10).flatten()
explainer = lime.lime_tabular.LimeTabularExplainer(train, feature_names=boston.feature_names, class_names=['price'], categorical_features=categorical_features, verbose=False, mode='regression',discretize_continuous=False)
#generate an explanation
i = 13
exp = explainer.explain_instance(test[i], rf.predict, num_features=14)
%matplotlib inline
fig = exp.as_pyplot_figure();
print("Input feature names: ")
print(boston.feature_names)
print('\n')
print("Input feature values: ")
print(test[i])
print('\n')
print("Predicted: ")
print(rf.predict(test)[i])
```
# SP-LIME pick step
### Maximize the 'coverage' function:
$c(V,W,I) = \sum_{j=1}^{d^{\prime}}{\mathbb{1}_{[\exists i \in V : W_{ij}>0]}I_j}$
$W = \text{Explanation Matrix, } n\times d^{\prime}$
$V = \text{Set of chosen explanations}$
$I = \text{Global feature importance vector, } I_j = \sqrt{\sum_i{|W_{ij}|}}$
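The coverage objective and its greedy maximization can be written out directly. A toy sketch (the explanation matrix `W` below is made up, not produced by a real LIME explainer):

```python
import numpy as np

def coverage(V, W, I):
    # A feature j is covered if some chosen explanation i in V has W[i, j] > 0
    covered = (W[list(V)] > 0).any(axis=0)
    return float(I[covered].sum())

def greedy_pick(W, num_exps):
    I = np.sqrt(np.abs(W).sum(axis=0))  # global feature importance I_j
    V = set()
    for _ in range(num_exps):
        best = max(set(range(W.shape[0])) - V,
                   key=lambda i: coverage(V | {i}, W, I))
        V.add(best)
    return V

# Toy explanation matrix: 4 explanations x 3 features
W = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 0.5]])
picked = greedy_pick(W, 2)
```

The greedy pick favors explanation 2 (it alone covers the two most important features) and then explanation 3, which is the only one adding coverage of the remaining feature.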
```
import lime
import warnings
from lime import submodular_pick
sp_obj = submodular_pick.SubmodularPick(explainer, train, rf.predict, sample_size=20, num_features=14, num_exps_desired=5)
[exp.as_pyplot_figure() for exp in sp_obj.sp_explanations];
import pandas as pd
W=pd.DataFrame([dict(this.as_list()) for this in sp_obj.explanations])
W.head()
im=W.hist('NOX',bins=20)
```
## Text explainer using the newsgroups
```
# run the text explainer example notebook, up to single explanation
import sklearn
import numpy as np
import sklearn
import sklearn.ensemble
import sklearn.metrics
# from __future__ import print_function
from sklearn.datasets import fetch_20newsgroups
categories = ['alt.atheism', 'soc.religion.christian']
newsgroups_train = fetch_20newsgroups(subset='train', categories=categories)
newsgroups_test = fetch_20newsgroups(subset='test', categories=categories)
class_names = ['atheism', 'christian']
vectorizer = sklearn.feature_extraction.text.TfidfVectorizer(lowercase=False)
train_vectors = vectorizer.fit_transform(newsgroups_train.data)
test_vectors = vectorizer.transform(newsgroups_test.data)
rf = sklearn.ensemble.RandomForestClassifier(n_estimators=500)
rf.fit(train_vectors, newsgroups_train.target)
pred = rf.predict(test_vectors)
sklearn.metrics.f1_score(newsgroups_test.target, pred, average='binary')
from lime import lime_text
from sklearn.pipeline import make_pipeline
c = make_pipeline(vectorizer, rf)
from lime.lime_text import LimeTextExplainer
explainer = LimeTextExplainer(class_names=class_names)
idx = 83
exp = explainer.explain_instance(newsgroups_test.data[idx], c.predict_proba, num_features=6)
print('Document id: %d' % idx)
print('Probability(christian) =', c.predict_proba([newsgroups_test.data[idx]])[0,1])
print('True class: %s' % class_names[newsgroups_test.target[idx]])
sp_obj = submodular_pick.SubmodularPick(explainer, newsgroups_test.data, c.predict_proba, sample_size=2, num_features=6,num_exps_desired=2)
[exp.as_pyplot_figure(label=exp.available_labels()[0]) for exp in sp_obj.sp_explanations];
from sklearn.datasets import load_iris
iris=load_iris()
from sklearn.model_selection import train_test_split as tts
Xtrain,Xtest,ytrain,ytest=tts(iris.data,iris.target,test_size=.2)
from sklearn.ensemble import RandomForestClassifier
rf=RandomForestClassifier()
rf.fit(Xtrain,ytrain)
rf.score(Xtest,ytest)
explainer = lime.lime_tabular.LimeTabularExplainer(Xtrain,
                                                   feature_names=iris.feature_names,
                                                   class_names=iris.target_names,
                                                   verbose=False,
                                                   mode='classification',
                                                   discretize_continuous=False)
exp=explainer.explain_instance(Xtrain[i],rf.predict_proba,top_labels=3)
exp.available_labels()
sp_obj = submodular_pick.SubmodularPick(data=Xtrain,explainer=explainer,num_exps_desired=5,predict_fn=rf.predict_proba, sample_size=20, num_features=4, top_labels=3)
import pandas as pd
df=pd.DataFrame({})
for this_label in range(3):
    dfl = []
    for i, exp in enumerate(sp_obj.sp_explanations):
        l = exp.as_list(label=this_label)
        l.append(("exp number", i))
        dfl.append(dict(l))
    dftest = pd.DataFrame(dfl)
    df = df.append(pd.DataFrame(dfl, index=[iris.target_names[this_label] for i in range(len(sp_obj.sp_explanations))]))
df
```
## Dependencies
```
import random, os, warnings, math, glob
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow.keras.layers as L
import tensorflow.keras.backend as K
from tensorflow.keras import Model
from transformers import TFAutoModelForSequenceClassification, TFAutoModel, AutoTokenizer
def seed_everything(seed=0):
    random.seed(seed)
    np.random.seed(seed)
    tf.random.set_seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    os.environ['TF_DETERMINISTIC_OPS'] = '1'
seed = 0
seed_everything(seed)
warnings.filterwarnings('ignore')
pd.set_option('display.max_colwidth', 150)
```
### Hardware configuration
```
# TPU or GPU detection
# Detect hardware, return appropriate distribution strategy
try:
    tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
    print(f'Running on TPU {tpu.master()}')
except ValueError:
    tpu = None

if tpu:
    tf.config.experimental_connect_to_cluster(tpu)
    tf.tpu.experimental.initialize_tpu_system(tpu)
    strategy = tf.distribute.experimental.TPUStrategy(tpu)
else:
    strategy = tf.distribute.get_strategy()
AUTO = tf.data.experimental.AUTOTUNE
REPLICAS = strategy.num_replicas_in_sync
print(f'REPLICAS: {REPLICAS}')
```
# Load data
```
test_filepath = '/kaggle/input/commonlitreadabilityprize/test.csv'
test = pd.read_csv(test_filepath)
print(f'Test samples: {len(test)}')
display(test.head())
```
# Model parameters
```
BATCH_SIZE = 8 * REPLICAS
SEQ_LEN = 256
BASE_MODEL = '/kaggle/input/huggingface-roberta/roberta-base/'
```
## Auxiliary functions
```
# Datasets utility functions
def custom_standardization(text):
    text = text.lower()  # if encoder is uncased
    text = text.strip()
    return text

def sample_target(features, target):
    mean, stddev = target
    sampled_target = tf.random.normal([], mean=tf.cast(mean, dtype=tf.float32),
                                      stddev=tf.cast(stddev, dtype=tf.float32), dtype=tf.float32)
    return (features, sampled_target)

def get_dataset(pandas_df, tokenizer, labeled=True, ordered=False, repeated=False,
                is_sampled=False, batch_size=32, seq_len=128):
    """
    Return a TensorFlow dataset ready for training or inference.
    """
    text = [custom_standardization(t) for t in pandas_df['excerpt']]
    # Tokenize inputs
    tokenized_inputs = tokenizer(text, max_length=seq_len, truncation=True,
                                 padding='max_length', return_tensors='tf')
    if labeled:
        dataset = tf.data.Dataset.from_tensor_slices(({'input_ids': tokenized_inputs['input_ids'],
                                                       'attention_mask': tokenized_inputs['attention_mask']},
                                                      (pandas_df['target'], pandas_df['standard_error'])))
        if is_sampled:
            dataset = dataset.map(sample_target, num_parallel_calls=tf.data.AUTOTUNE)
    else:
        dataset = tf.data.Dataset.from_tensor_slices({'input_ids': tokenized_inputs['input_ids'],
                                                      'attention_mask': tokenized_inputs['attention_mask']})
    if repeated:
        dataset = dataset.repeat()
    if not ordered:
        dataset = dataset.shuffle(1024)
    dataset = dataset.batch(batch_size)
    dataset = dataset.prefetch(tf.data.AUTOTUNE)
    return dataset
!ls /kaggle/input/
model_path_list = glob.glob('/kaggle/input/8-commonlit-roberta-base-seq-256-ep-50/*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep='\n')
```
# Model
```
def model_fn(encoder, seq_len=256):
    input_ids = L.Input(shape=(seq_len,), dtype=tf.int32, name='input_ids')
    input_attention_mask = L.Input(shape=(seq_len,), dtype=tf.int32, name='attention_mask')
    outputs = encoder({'input_ids': input_ids,
                       'attention_mask': input_attention_mask})
    last_hidden_state = outputs['last_hidden_state']
    x = L.GlobalAveragePooling1D()(last_hidden_state)
    output = L.Dense(1, name='output')(x)
    model = Model(inputs=[input_ids, input_attention_mask], outputs=output)
    return model

with strategy.scope():
    encoder = TFAutoModel.from_pretrained(BASE_MODEL)
    model = model_fn(encoder, SEQ_LEN)

model.summary()
```
# Test set predictions
```
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
test_pred = []
for model_path in model_path_list:
    print(model_path)
    if tpu:
        tf.tpu.experimental.initialize_tpu_system(tpu)
    K.clear_session()
    model.load_weights(model_path)
    # Test predictions
    test_ds = get_dataset(test, tokenizer, labeled=False, ordered=True, batch_size=BATCH_SIZE, seq_len=SEQ_LEN)
    test_pred.append(model.predict(test_ds))
```
# Submission
```
submission = test[['id']].copy()
submission['target'] = np.mean(test_pred, axis=0)
submission.to_csv('submission.csv', index=False)
display(submission.head(10))
```
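The `np.mean(test_pred, axis=0)` call above is a simple model-averaging blend across the fold checkpoints. On toy shapes (2 models, 2 samples, 1 output):

```python
import numpy as np

# Blend per-model predictions by simple averaging over the model axis.
fold_preds = [np.array([[1.0], [3.0]]),
              np.array([[2.0], [5.0]])]
blend = np.mean(fold_preds, axis=0)
```

Each test sample's final prediction is the unweighted mean of the individual models' outputs.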
**This notebook is an exercise in the [Feature Engineering](https://www.kaggle.com/learn/feature-engineering) course. You can reference the tutorial at [this link](https://www.kaggle.com/ryanholbrook/target-encoding).**
---
# Introduction #
In this exercise, you'll apply target encoding to features in the [*Ames*](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data) dataset.
Run this cell to set everything up!
```
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.feature_engineering_new.ex6 import *
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import warnings
from category_encoders import MEstimateEncoder
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor
# Set Matplotlib defaults
plt.style.use("seaborn-whitegrid")
plt.rc("figure", autolayout=True)
plt.rc(
    "axes",
    labelweight="bold",
    labelsize="large",
    titleweight="bold",
    titlesize=14,
    titlepad=10,
)
warnings.filterwarnings('ignore')
def score_dataset(X, y, model=XGBRegressor()):
    # Label encoding for categoricals
    for colname in X.select_dtypes(["category", "object"]):
        X[colname], _ = X[colname].factorize()
    # Metric for Housing competition is RMSLE (Root Mean Squared Log Error)
    score = cross_val_score(
        model, X, y, cv=5, scoring="neg_mean_squared_log_error",
    )
    score = -1 * score.mean()
    score = np.sqrt(score)
    return score
df = pd.read_csv("../input/fe-course-data/ames.csv")
```
-------------------------------------------------------------------------------
First you'll need to choose which features you want to apply a target encoding to. Categorical features with a large number of categories are often good candidates. Run this cell to see how many categories each categorical feature in the *Ames* dataset has.
```
df.select_dtypes(["object"]).nunique()
```
We talked about how the M-estimate encoding uses smoothing to improve estimates for rare categories. To see how many times a category occurs in the dataset, you can use the `value_counts` method. This cell shows the counts for `SaleType`, but you might want to consider others as well.
```
df["SaleType"].value_counts()
```
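The M-estimate smoothing mentioned above can be written out directly. This is a sketch of the per-category formula that `MEstimateEncoder` applies column by column (the toy series below are illustrative): `encode(cat) = (n * cat_mean + m * global_mean) / (n + m)`.

```python
import pandas as pd

def m_estimate(categories, target, m):
    # Blend each category's mean with the global mean, weighted by count n and m
    global_mean = target.mean()
    stats = target.groupby(categories).agg(["count", "mean"])
    return (stats["count"] * stats["mean"] + m * global_mean) / (stats["count"] + m)

cats = pd.Series(["a", "a", "a", "b"])
y = pd.Series([10.0, 20.0, 30.0, 100.0])
enc = m_estimate(cats, y, m=1)
```

Rare categories (like `b` here, with a single row) get pulled strongly toward the global mean, which is exactly the protection smoothing provides.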
# 1) Choose Features for Encoding
Which features did you identify for target encoding? After you've thought about your answer, run the next cell for some discussion.
```
# View the solution (Run this cell to receive credit!)
q_1.check()
```
-------------------------------------------------------------------------------
Now you'll apply a target encoding to your choice of feature. As we discussed in the tutorial, to avoid overfitting, we need to fit the encoder on data heldout from the training set. Run this cell to create the encoding and training splits:
```
# Encoding split
X_encode = df.sample(frac=0.20, random_state=0)
y_encode = X_encode.pop("SalePrice")
# Training split
X_pretrain = df.drop(X_encode.index)
y_train = X_pretrain.pop("SalePrice")
```
# 2) Apply M-Estimate Encoding
Apply a target encoding to your choice of categorical features. Also choose a value for the smoothing parameter `m` (any value is okay for a correct answer).
```
# YOUR CODE HERE: Create the MEstimateEncoder
# Choose a set of features to encode and a value for m
encoder = MEstimateEncoder(cols=['Neighborhood'],m=0.5)
# Fit the encoder on the encoding split
encoder.fit(X_encode, y_encode)
# Encode the training split
X_train = encoder.transform(X_pretrain, y_train)
# Check your answer
q_2.check()
# Lines below will give you a hint or solution code
#q_2.hint()
#q_2.solution()
```
If you'd like to see how the encoded feature compares to the target, you can run this cell:
```
encoder.cols
feature = encoder.cols
plt.figure(dpi=90)
ax = sns.distplot(y_train, kde=True, hist=False)
ax = sns.distplot(X_train[feature], color='r', ax=ax, hist=True, kde=False, norm_hist=True)
ax.set_xlabel("SalePrice");
```
From the distribution plots, does it seem like the encoding is informative?
And this cell will show you the score of the encoded set compared to the original set:
```
X = df.copy()
y = X.pop("SalePrice")
score_base = score_dataset(X, y)
score_new = score_dataset(X_train, y_train)
print(f"Baseline Score: {score_base:.4f} RMSLE")
print(f"Score with Encoding: {score_new:.4f} RMSLE")
```
Do you think that target encoding was worthwhile in this case? Depending on which feature or features you chose, you may have ended up with a score significantly worse than the baseline. In that case, it's likely the extra information gained by the encoding couldn't make up for the loss of data used for the encoding.
-------------------------------------------------------------------------------
In this question, you'll explore the problem of overfitting with target encodings. This will illustrate the importance of fitting target encoders on data held out from the training set.
So let's see what happens when we fit the encoder and the model on the *same* dataset. To emphasize how dramatic the overfitting can be, we'll mean-encode a feature that should have no relationship with `SalePrice`, a count: `0, 1, 2, 3, 4, 5, ...`.
```
# Try experimenting with the smoothing parameter m
# Try 0, 1, 5, 50
m = 50
X = df.copy()
y = X.pop('SalePrice')
# Create an uninformative feature
X["Count"] = range(len(X))
X["Count"][1] = 0 # actually need one duplicate value to circumvent error-checking in MEstimateEncoder
# fit and transform on the same dataset
encoder = MEstimateEncoder(cols="Count", m=m)
X = encoder.fit_transform(X, y)
# Results
score = score_dataset(X, y)
print(f"Score: {score:.4f} RMSLE")
```
Almost a perfect score!
```
plt.figure(dpi=90)
ax = sns.distplot(y, kde=True, hist=False)
ax = sns.distplot(X["Count"], color='r', ax=ax, hist=True, kde=False, norm_hist=True)
ax.set_xlabel("SalePrice");
```
And the distributions are almost exactly the same, too.
# 3) Overfitting with Target Encoders
Based on your understanding of how mean-encoding works, can you explain how XGBoost was able to get an almost a perfect fit after mean-encoding the count feature?
```
# View the solution (Run this cell to receive credit!)
q_3.check()
# Uncomment this if you'd like a hint before seeing the answer
#q_3.hint()
```
# The End #
That's it for *Feature Engineering*! We hope you enjoyed your time with us.
Now, are you ready to try out your new skills? Now would be a great time to join our [Housing Prices](https://www.kaggle.com/c/house-prices-advanced-regression-techniques) Getting Started competition. We've even prepared a [Bonus Lesson]() that collects all the work we've done together into a starter notebook.
# References #
Here are some great resources you might like to consult for more information. They all played a part in shaping this course:
- *The Art of Feature Engineering*, a book by Pablo Duboue.
- *An Empirical Analysis of Feature Engineering for Predictive Modeling*, an article by Jeff Heaton.
- *Feature Engineering for Machine Learning*, a book by Alice Zheng and Amanda Casari. The tutorial on clustering was inspired by this excellent book.
- *Feature Engineering and Selection*, a book by Max Kuhn and Kjell Johnson.
---
*Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/221677) to chat with other Learners.*
# Practice: Basic Statistics I: Averages
For this practice, let's use the Boston dataset.
```
# Import the numpy package so that we can use the method mean to calculate averages
import numpy as np
# Import the load_boston method
from sklearn.datasets import load_boston
# Import pandas, so that we can work with the data frame version of the Boston data
import pandas as pd
# Load the Boston data
boston = load_boston()
# This will provide the characteristics for the Boston dataset
print(boston.DESCR)
# Here, I'm including the prices of Boston's houses, which is boston['target'], as a column with the other
# features in the Boston dataset.
boston_data = np.concatenate((boston['data'], pd.DataFrame(boston['target'])), axis = 1)
# Convert the Boston data to a data frame format, so that it's easier to view and process
boston_df = pd.DataFrame(boston_data, columns = np.concatenate((boston['feature_names'], 'MEDV'), axis = None))
boston_df
# Determine the mean of each feature
averages_column = np.mean(boston_df, axis = 0)
print(averages_column)
# Determine the mean of each row
averages_row = np.mean(boston_df, axis = 1)
print(averages_row)
```
So we can determine the averages by row, but should we do this? Why or why not?
**Answer:** It's very hard to interpret these values, because taking an average across different features does not make sense.
Let's put together what you have learned about averages and subsetting to do the next problems.
We will determine the average price for houses along the Charles River and that for houses NOT along the river.
```
# Use the query method to define a subset of boston_df that only includes houses along the river (CHAS = 1).
along_river = boston_df.query('CHAS == 1')
along_river
```
What do you notice about the CHAS column?
**Answer:** It's all 1.0! This means that we successfully subset all the houses that are along the Charles River. Great work!
```
# Now determine the average price for these houses. 'MEDV' is the column name for the prices.
averages_price_along_river = np.mean(along_river['MEDV'])
averages_price_along_river
```
Now try determining the average for houses NOT along the River.
```
# Determine the average price for houses that are NOT along the Charles River (when CHAS = 0).
not_along_river = boston_df.query('CHAS == 0')
averages_price_not_along_river = np.mean(not_along_river['MEDV'])
averages_price_not_along_river
```
Good work! You're becoming an expert in subsetting and determining averages on subsetted data. This will be integral for your capstone projects and future careers as data scientists!
## Homework: Multilingual Embedding-based Machine Translation (7 points)
**In this homework** **<font color='red'>YOU</font>** will build a machine translation system without using parallel corpora, alignment, attention, a 100500-layer super-cool recurrent neural network, or any of that kind of superstuff.
But even without parallel corpora this system can be good enough (hopefully).
For our system we choose two kindred Slavic languages: Ukrainian and Russian.
### Feel the difference!
(_синій кіт_ vs. _синій кит_)

### Fragment of the Swadesh list for some Slavic languages
The Swadesh list is a lexicostatistical tool. It's named after the American linguist Morris Swadesh and contains basic lexis. Such lists are used to define subgroupings of languages and their relatedness.
So we can see some kind of word invariance for different Slavic languages.
| Russian | Belorussian | Ukrainian | Polish | Czech | Bulgarian |
|-----------------|--------------------------|-------------------------|--------------------|-------------------------------|-----------------------|
| женщина | жанчына, кабета, баба | жінка | kobieta | žena | жена |
| мужчина | мужчына | чоловік, мужчина | mężczyzna | muž | мъж |
| человек | чалавек | людина, чоловік | człowiek | člověk | човек |
| ребёнок, дитя | дзіця, дзіцёнак, немаўля | дитина, дитя | dziecko | dítě | дете |
| жена | жонка | дружина, жінка | żona | žena, manželka, choť | съпруга, жена |
| муж | муж, гаспадар | чоловiк, муж | mąż | muž, manžel, choť | съпруг, мъж |
| мать, мама | маці, матка | мати, матір, неня, мама | matka | matka, máma, 'стар.' mateř | майка |
| отец, тятя | бацька, тата | батько, тато, татусь | ojciec | otec | баща, татко |
| много | шмат, багата | багато | wiele | mnoho, hodně | много |
| несколько | некалькі, колькі | декілька, кілька | kilka | několik, pár, trocha | няколко |
| другой, иной | іншы | інший | inny | druhý, jiný | друг |
| зверь, животное | жывёла, звер, істота | тварина, звір | zwierzę | zvíře | животно |
| рыба | рыба | риба | ryba | ryba | риба |
| птица | птушка | птах, птиця | ptak | pták | птица |
| собака, пёс | сабака | собака, пес | pies | pes | куче, пес |
| вошь | вош | воша | wesz | veš | въшка |
| змея, гад | змяя | змія, гад | wąż | had | змия |
| червь, червяк | чарвяк | хробак, черв'як | robak | červ | червей |
| дерево | дрэва | дерево | drzewo | strom, dřevo | дърво |
| лес | лес | ліс | las | les | гора, лес |
| палка | кій, палка | палиця | patyk, pręt, pałka | hůl, klacek, prut, kůl, pálka | палка, пръчка, бастун |
But the context distribution of these languages demonstrates even more invariance. And we can use this fact for our purposes.
## Data
```
import tqdm
import gensim
import numpy as np
from gensim.models import KeyedVectors
```
Download embeddings here:
* [cc.uk.300.vec.zip](https://yadi.sk/d/9CAeNsJiInoyUA)
* [cc.ru.300.vec.zip](https://yadi.sk/d/3yG0-M4M8fypeQ)
Load embeddings for ukrainian and russian.
```
uk_emb = KeyedVectors.load_word2vec_format("data/cc.uk.300.vec")
ru_emb = KeyedVectors.load_word2vec_format("data/cc.ru.300.vec")
ru_emb.most_similar([ru_emb["август"]], topn=10)
uk_emb.most_similar([uk_emb["серпень"]])
ru_emb.most_similar([uk_emb["серпень"]])
```
Load small dictionaries of corresponding word pairs as the train and test sets.
```
def load_word_pairs(filename):
    uk_ru_pairs = []
    uk_vectors = []
    ru_vectors = []
    with open(filename, "r") as inpf:
        for line in inpf:
            uk, ru = line.rstrip().split("\t")
            if uk not in uk_emb or ru not in ru_emb:
                continue
            uk_ru_pairs.append((uk, ru))
            uk_vectors.append(uk_emb[uk])
            ru_vectors.append(ru_emb[ru])
    return uk_ru_pairs, np.array(uk_vectors), np.array(ru_vectors)
uk_ru_train, X_train, Y_train = load_word_pairs("ukr_rus.train.txt")
uk_ru_test, X_test, Y_test = load_word_pairs("ukr_rus.test.txt")
```
## Embedding space mapping
Let $x_i \in \mathrm{R}^d$ be the distributed representation of word $i$ in the source language, and let $y_i \in \mathrm{R}^d$ be the vector representation of its translation. Our purpose is to learn a linear transform $W$ that minimizes the Euclidean distance between $Wx_i$ and $y_i$ for some subset of word embeddings. Thus we can formulate the so-called Procrustes problem:
$$W^*= \arg\min_W \sum_{i=1}^n||Wx_i - y_i||_2$$
or
$$W^*= \arg\min_W ||WX - Y||_F$$
where $||*||_F$ - Frobenius norm.
In Greek mythology, Procrustes or "the stretcher" was a rogue smith and bandit from Attica who attacked people by stretching them or cutting off their legs, so as to force them to fit the size of an iron bed. We do the same bad things to the source embedding space. Our Procrustean bed is the target embedding space.


But wait...$W^*= \arg\min_W \sum_{i=1}^n||Wx_i - y_i||_2$ looks like simple multiple linear regression (without intercept fit). So let's code.
```
from sklearn.linear_model import LinearRegression
mapping = LinearRegression(fit_intercept=False)
mapping.fit(X_train,Y_train)
```
Let's take a look at the neighbours of the vector of the word _"серпень"_ (_"август"_ in Russian) after the linear transform.
```
august = mapping.predict(uk_emb["серпень"].reshape(1, -1))
ru_emb.most_similar(august)
```
We can see that the neighbourhood of this embedding consists of different months, but the right variant is only in ninth place.
As quality measure we will use precision top-1, top-5 and top-10 (for each transformed Ukrainian embedding we count how many right target pairs are found in top N nearest neighbours in Russian embedding space).
```
def precision(pairs, mapped_vectors, topn=1):
    """
    :args:
        pairs = list of right word pairs [(uk_word_0, ru_word_0), ...]
        mapped_vectors = list of embeddings after mapping from source embedding space to destination embedding space
        topn = the number of nearest neighbours in destination embedding space to choose from
    :returns:
        precision_val, float number: the fraction of words for which the right translation is found among the top-N nearest neighbours
    """
    assert len(pairs) == len(mapped_vectors)
    num_matches = 0
    for i, (_, ru) in tqdm.tqdm(enumerate(pairs), total=len(pairs)):
        possible = [x[0] for x in ru_emb.most_similar([mapped_vectors[i]], topn=topn)]
        if ru in possible:
            num_matches += 1
    precision_val = num_matches / len(pairs)
    return precision_val
assert precision([("серпень", "август")], august, topn=5) == 0.0
assert precision([("серпень", "август")], august, topn=9) == 1.0
assert precision([("серпень", "август")], august, topn=10) == 1.0
assert precision(uk_ru_test, X_test) == 0.0
assert precision(uk_ru_test, Y_test) == 1.0
precision_top1 = precision(uk_ru_test, mapping.predict(X_test), 1)
precision_top5 = precision(uk_ru_test, mapping.predict(X_test), 5)
assert precision_top1 >= 0.635
assert precision_top5 >= 0.813
```
## Making it better (orthogonal Procrustean problem)
It can be shown (see original paper) that a self-consistent linear mapping between semantic spaces should be orthogonal.
We can restrict the transform $W$ to be orthogonal. Then we will solve the following problem:
$$W^*= \arg\min_W ||WX - Y||_F \text{, where: } W^TW = I$$
$$I \text{- identity matrix}$$
Instead of making yet another regression problem we can find optimal orthogonal transformation using singular value decomposition. It turns out that optimal transformation $W^*$ can be expressed via SVD components:
$$X^TY=U\Sigma V^T\text{, singular value decomposition}$$
$$W^*=UV^T$$
```
def learn_transform(X_train, Y_train):
    """
    :returns: W* : float matrix[emb_dim x emb_dim] as defined in formulae above
    """
    u, _, vt = np.linalg.svd(X_train.T @ Y_train)
    return u @ vt
W = learn_transform(X_train, Y_train)
ru_emb.most_similar([np.matmul(uk_emb["серпень"], W)])
assert precision(uk_ru_test, np.matmul(X_test, W)) >= 0.653
assert precision(uk_ru_test, np.matmul(X_test, W), 5) >= 0.824
```
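As a standalone sanity check, the SVD recipe in `learn_transform` recovers a known rotation from synthetic pairs $(x, xR)$, and the result is orthogonal by construction:

```python
import numpy as np

def learn_transform(X, Y):
    # Orthogonal Procrustes solution: W* = U V^T from SVD of X^T Y
    u, _, vt = np.linalg.svd(X.T @ Y)
    return u @ vt

rng = np.random.default_rng(0)
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = rng.normal(size=(50, 2))
W_hat = learn_transform(X, X @ R)
```

Because the targets are an exact rotation of the inputs, the learned map matches `R` up to floating-point error.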
## UK-RU Translator
Now we are ready to make a simple word-based translator: for each word in the source language, we find the nearest word in the target language in the shared embedding space.
```
with open("fairy_tale.txt", "r") as inpf:
    uk_sentences = [line.rstrip().lower() for line in inpf]
def translate(sentence):
    """
    :args:
        sentence - sentence in Ukrainian (str)
    :returns:
        translation - sentence in Russian (str)

    * find ukrainian embedding for each word in sentence
    * transform ukrainian embedding vector
    * find nearest russian word and replace
    """
    # One possible solution: map each word through W and take the nearest
    # Russian neighbour; out-of-vocabulary tokens (numbers, punctuation)
    # are kept as-is.
    translated = []
    for word in sentence.split():
        if word in uk_emb:
            mapped = np.matmul(uk_emb[word], W)
            translated.append(ru_emb.most_similar([mapped], topn=1)[0][0])
        else:
            translated.append(word)
    return " ".join(translated)
assert translate(".") == "."
assert translate("1 , 3") == "1 , 3"
assert translate("кіт зловив мишу") == "кот поймал мышку"
for sentence in uk_sentences:
    print("src: {}\ndst: {}\n".format(sentence, translate(sentence)))
```
Not so bad, right? We can easily improve the translation using a language model and not one but several nearest neighbours in the shared embedding space. But that's for next time.
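The reranking idea can be sketched on toy data: keep the top-k candidates per word and let a (here, unigram) language-model score break ties. The candidate lists and scores below are made up for illustration:

```python
# Toy top-k candidates per source word: (russian_word, embedding_similarity)
candidates = {
    "кіт": [("кот", 0.9), ("кошка", 0.8)],
    "зловив": [("поймал", 0.9), ("словил", 0.7)],
}
# Toy unigram language-model scores for target words
lm_score = {"кот": 0.6, "кошка": 0.3, "поймал": 0.5, "словил": 0.1}

def translate_rerank(sentence, alpha=0.5):
    out = []
    for word in sentence.split():
        cands = candidates.get(word, [(word, 1.0)])  # OOV words pass through
        best = max(cands, key=lambda c: alpha * c[1] + (1 - alpha) * lm_score.get(c[0], 0.0))
        out.append(best[0])
    return " ".join(out)
```

In a real system the candidates would come from `most_similar(..., topn=k)` on the mapped vectors, and the scorer would be an n-gram or neural language model over the whole sentence.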
## Would you like to learn more?
### Articles:
* [Exploiting Similarities among Languages for Machine Translation](https://arxiv.org/pdf/1309.4168) - entry point for multilingual embedding studies by Tomas Mikolov (the author of W2V)
* [Offline bilingual word vectors, orthogonal transformations and the inverted softmax](https://arxiv.org/pdf/1702.03859) - orthogonal transform for unsupervised MT
* [Word Translation Without Parallel Data](https://arxiv.org/pdf/1710.04087)
* [Loss in Translation: Learning Bilingual Word Mapping with a Retrieval Criterion](https://arxiv.org/pdf/1804.07745)
* [Unsupervised Alignment of Embeddings with Wasserstein Procrustes](https://arxiv.org/pdf/1805.11222)
### Repos (with ready-to-use multilingual embeddings):
* https://github.com/facebookresearch/MUSE
* https://github.com/Babylonpartners/fastText_multilingual
# Chapter 2: The Scalar Advection Equation (Fundamentals of Numerical Methods)
## 2.2 [3] Using a First-Order Upwind Difference for the Spatial Derivative
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
(1) $\Delta t = 0.05, \Delta x = 0.1$
Initialization
```
c = 1
dt = 0.05
dx = 0.1
jmax = 21
nmax = 6
x = np.linspace(0, dx * (jmax - 1), jmax)
q = np.zeros(jmax)
for j in range(jmax):
    if j < jmax / 2:
        q[j] = 1
    else:
        q[j] = 0
```
Main loop (computation + visualization)
```
plt.figure(figsize=(7, 7), dpi=100)  # figure size
plt.rcParams["font.size"] = 22  # font size for the plot
# plot the initial distribution
plt.plot(x, q, marker='o', lw=2, label='n=0')
for n in range(1, nmax + 1):
    qold = q.copy()
    for j in range(1, jmax - 1):
        q[j] = qold[j] - dt * c * (qold[j] - qold[j - 1]) / dx  # Eq. (2.9)
    # plot every second step
    if n % 2 == 0:
        plt.plot(x, q, marker='o', lw=2, label=f'n={n}')
# post-process the figure
plt.grid(color='black', linestyle='dashed', linewidth=0.5)
plt.xlim([0, 2.0])
plt.ylim([0, 1.2])
plt.xlabel('x')
plt.ylabel('q')
plt.legend()
plt.show()
```
(2) $\Delta t = 0.1, \Delta x = 0.1$ (omitted; try changing dt and dx in (1))
(3) $\Delta t = 0.2, \Delta x = 0.1$ (omitted; try changing dt and dx in (1))
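A quick way to see why these cases behave differently is the Courant number $\nu = c\Delta t/\Delta x$: the first-order upwind scheme is stable for $\nu \le 1$, and case (3) violates this.

```python
# Courant number nu = c * dt / dx for each case in this section.
c = 1.0
cases = {'(1)': (0.05, 0.1), '(2)': (0.1, 0.1), '(3)': (0.2, 0.1),
         '(4)': (0.025, 0.05), '(5)': (0.01, 0.02)}
nu = {k: c * dt / dx for k, (dt, dx) in cases.items()}
```

Cases (1), (4), and (5) all share $\nu = 0.5$, so refining the grid while holding $\nu$ fixed only sharpens the resolved profile; case (2) sits exactly at the stability limit, and case (3) with $\nu = 2$ is unstable.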
(4) $\Delta t = 0.025, \Delta x = 0.05$
```
c = 1
dt = 0.025
dx = 0.05
jmax = 20 * 2 + 1
nmax = 6 * 2
x = np.linspace(0, dx * (jmax - 1), jmax)
q = np.zeros(jmax)
for j in range(jmax):
    if j < jmax / 2:
        q[j] = 1
    else:
        q[j] = 0
plt.figure(figsize=(7, 7), dpi=100)  # figure size
plt.rcParams["font.size"] = 22  # font size for the plot
# plot the initial distribution
plt.plot(x, q, marker='o', lw=2, label='n=0')
for n in range(1, nmax + 1):
    qold = q.copy()
    for j in range(1, jmax - 1):
        q[j] = qold[j] - dt * c * (qold[j] - qold[j - 1]) / dx  # Eq. (2.9)
    # plot selected steps
    if n % (2 * 2) == 0:
        plt.plot(x, q, marker='o', lw=2, label=f'n={n}')
# post-process the figure
plt.grid(color='black', linestyle='dashed', linewidth=0.5)
plt.xlim([0, 2.0])
plt.ylim([0, 1.2])
plt.xlabel('x')
plt.ylabel('q')
plt.legend()
plt.show()
```
(5) $\Delta t = 0.01, \Delta x = 0.02$
```
c = 1
dt = 0.01
dx = 0.02
jmax = 20 * 5 + 1
nmax = 6 * 5
x = np.linspace(0, dx * (jmax - 1), jmax)
q = np.zeros(jmax)
for j in range(jmax):
    if j < jmax / 2:
        q[j] = 1
    else:
        q[j] = 0
plt.figure(figsize=(7,7), dpi=100)  # figure size
plt.rcParams["font.size"] = 22      # font size for the plots
# plot the initial distribution
plt.plot(x, q, marker='o', lw=2, label='n=0')
for n in range(1, nmax + 1):
    qold = q.copy()
    for j in range(1, jmax-1):
        q[j] = qold[j] - dt * c * (qold[j] - qold[j - 1]) / dx  # Eq. (2.9)
    # plot selected steps
    if n % (2 * 5) == 0:
        plt.plot(x, q, marker='o', lw=2, label=f'n={n}')
# finalize the plot
plt.grid(color='black', linestyle='dashed', linewidth=0.5)
plt.xlim([0, 2.0])
plt.ylim([0, 1.2])
plt.xlabel('x')
plt.ylabel('q')
plt.legend()
plt.show()
```
| github_jupyter |
```
from functools import wraps
import time

def show_args(function):
    @wraps(function)
    def wrapper(*args, **kwargs):
        print('hi from decorator - args:')
        print(args)
        result = function(*args, **kwargs)
        print('hi again from decorator - kwargs:')
        print(kwargs)
        return result
    # return wrapper as the decorated function
    return wrapper

@show_args
def get_profile(name, active=True, *sports, **awards):
    print('\n\thi from the get_profile function\n')

get_profile('bob', True, 'basketball', 'soccer',
            pythonista='special honor of the community', topcoder='2017 code camp')
```
### Using @wraps
```
def timeit(func):
    '''Decorator to time a function'''
    @wraps(func)
    def wrapper(*args, **kwargs):
        # before calling the decorated function
        print('== starting timer')
        start = time.time()
        # call the decorated function
        func(*args, **kwargs)
        # after calling the decorated function
        end = time.time()
        print(f'== {func.__name__} took {int(end-start)} seconds to complete')
    return wrapper

@timeit
def generate_report():
    '''Function to generate revenue report'''
    time.sleep(2)
    print('(actual function) Done, report links ...')

generate_report()
```
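To see concretely what `@wraps` buys you, here is a standalone sketch (not from the original notebook) of the same pass-through decorator with and without it: without `@wraps`, the wrapper swallows the decorated function's name and docstring, which breaks introspection and debugging.

```python
from functools import wraps

def plain(func):
    # no @wraps: wrapper keeps its own identity
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

def preserving(func):
    @wraps(func)  # copies __name__, __doc__, etc. from func onto wrapper
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@plain
def foo():
    '''some docstring'''

@preserving
def bar():
    '''some docstring'''

print(foo.__name__, foo.__doc__)  # wrapper None
print(bar.__name__, bar.__doc__)  # bar some docstring
```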
### stacking decorators
```
def timeit(func):
    '''Decorator to time a function'''
    @wraps(func)
    def wrapper(*args, **kwargs):
        # before calling the decorated function
        print('== starting timer')
        start = time.time()
        # call the decorated function
        func(*args, **kwargs)
        # after calling the decorated function
        end = time.time()
        print(f'== {func.__name__} took {int(end-start)} seconds to complete')
    return wrapper

def print_args(func):
    '''Decorator to print function arguments'''
    @wraps(func)
    def wrapper(*args, **kwargs):
        # before
        print()
        print('*** args:')
        for arg in args:
            print(f'- {arg}')
        print('**** kwargs:')
        for k, v in kwargs.items():
            print(f'- {k}: {v}')
        print()
        # call func
        func(*args, **kwargs)
    return wrapper

def generate_report(*months, **parameters):
    time.sleep(2)
    print('(actual function) Done, report links ...')

@timeit
@print_args
def generate_report(*months, **parameters):
    time.sleep(2)
    print('(actual function) Done, report links ...')

parameters = dict(split_geos=True, include_suborgs=False, tax_rate=33)
generate_report('October', 'November', 'December', **parameters)
```
### Passing arguments to a decorator
Another powerful capability of decorators is that you can pass arguments to them like normal functions; after all, they are functions too. Let's write a simple decorator that returns a noun in a given format:
```
def noun(i):
    def tag(func):
        def wrapper(name):
            return "My {0} is {1}".format(i, func(name))
        return wrapper
    return tag

@noun("name")
def say_something(something):
    return something

print(say_something('Ant'))

@noun("age")
def say_something(something):
    return something

print(say_something(44))

def noun(i):
    def tag(func):
        def wrapper(name):
            return "<{0}>{1}</{0}>".format(i, func(name))
        return wrapper
    return tag

@noun("p")
@noun("strong")
def say_something(something):
    return something

# print(say_something('Coding with PyBites!'))
print(say_something('abc'))

def make_html(i):
    # @wraps(element)
    def tag(func):
        def wrapper(*args):
            return "<{0}>{1}</{0}>".format(i, func(*args))
        return wrapper
    return tag

@make_html("p")
@make_html("strong")
def get_text(text='I can code with PyBites'):
    return text

print(get_text('Some random text here'))
# how do I get the default text to print though?
print(get_text)
print(get_text('text'))
print(get_text())

@make_html('p')
@make_html('strong')
def get_text(text='I code with PyBites'):
    return text

from functools import wraps

def make_html(element):
    pass

from functools import wraps

def exponential_backoff(func):
    @wraps(func)
    def function_wrapper(*args, **kwargs):
        pass
    return function_wrapper

@exponential_backoff
def test():
    pass

print(test)  # with @wraps: <function test at 0x7fcc343a4400>
# comment out the `@wraps(func)` line above and you would instead see:
# <function exponential_backoff.<locals>.function_wrapper at 0x7fcc343a4268>
```
```
@exponential_backoff()
def test():
    pass
```
is equivalent to:
```
def test():
    pass

test = exponential_backoff()(test)
```
| github_jupyter |
```
import os
import sys
from pathlib import Path
import pandas as pd
import numpy as np
import torch as T
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
sns.set()
def make_sparse(x):
nz = np.where(np.logical_not(np.isclose(x.numpy(), 0)))
sparse_x = T.sparse_coo_tensor(nz, x[nz], size=x.size(), dtype=T.float32)
return sparse_x
class SparseTensorDataset(Dataset):
def __init__(self, X, Y):
assert len(X) == len(Y)
self.X = X
self.Y = Y
def __len__(self):
return len(self.X)
def __getitem__(self, idx):
x = self.X[idx]
y = self.Y[idx]
sparse_x = make_sparse(x)
return (sparse_x, y)
x = T.rand(10) * T.randint(0, 2, size=(10,))
y = T.rand(10) * T.randint(0, 2, size=(10,))
T.allclose(
make_sparse(x + y).coalesce().values(),
(make_sparse(x) + make_sparse(y)).coalesce().values()
)
class Net(nn.Module):
def __init__(self, input_size, output_size):
super(Net, self).__init__()
self.seq = nn.Sequential(
# nn.Linear(input_size, input_size),
# nn.ReLU(inplace=True),
nn.Linear(input_size, output_size),
)
def forward(self, sx):
return T.sigmoid(self.seq(sx))
```
### 1. Using `SparseTensor` out-of-the-box
This will raise:
```
NotImplementedError: Cannot access storage of SparseTensorImpl
```
```
N = 10
M = 32
X = T.rand(N, M)
Y = T.randint(0, 2, size=(N,))
ds = SparseTensorDataset(X, Y)
dl = DataLoader(ds, batch_size=4, shuffle=False, pin_memory=True, num_workers=2)
for batch_idx, (sx, y) in enumerate(dl):
print(sx, y)
break
```
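One possible workaround (an assumption on my part, not something the original notebook does) is a custom `collate_fn` that densifies each sample inside the worker process, so the pin-memory and inter-process machinery only ever handles dense tensors:

```python
# Hypothetical workaround: convert sparse samples to dense in collate_fn,
# so workers transfer plain dense tensors instead of sparse ones.
import torch as T

def dense_collate(batch):
    # each sample is (sparse_x, y); densify before stacking into a batch
    xs = T.stack([sx.to_dense() for sx, _ in batch])
    ys = T.stack([T.as_tensor(y) for _, y in batch])
    return xs, ys

# usage sketch: DataLoader(ds, batch_size=4, collate_fn=dense_collate,
#                          pin_memory=True, num_workers=2)
```

The trade-off is that sparsity is lost at batching time, so this only helps when the per-batch dense memory footprint is acceptable.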
### 2. Using `SparseTensor` without multiprocessing and memory pinning
```
N = 1024 * 20
M = 8
X = T.rand(N, M) * T.randint(0, 2, size=(N, M))
Y = (X.sum(dim=-1) < 2).to(T.float32)
ds = SparseTensorDataset(X, Y)
dl = DataLoader(ds, batch_size=256, shuffle=False, pin_memory=False, num_workers=0)
net = Net(input_size=M, output_size=1)
loss_crit = nn.BCELoss()
optimizer = T.optim.Adam(filter(lambda p: p.requires_grad, net.parameters()), lr=0.001)
scheduler = T.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.5)
avg_loss = 0
net.cuda()
for epoch_idx in range(1, 80 + 1):
for batch_idx, (sx, y) in enumerate(dl, start=1):
sx, y = sx.cuda(), y.cuda()
# print(f"{sx=}")
# print('-' * 128)
# print(f"{y=}")
# print('-' * 128)
y_pred = net(sx)
# print(f'{y_pred=}')
# print('-' * 128)
optimizer.zero_grad()
loss = loss_crit(y_pred.to(T.float32), y.unsqueeze(1).to(T.float32))
loss.backward()
optimizer.step()
avg_loss += loss.item() / len(dl)
# ---
info = [
f"{epoch_idx=:2d}", f"lr={scheduler.get_last_lr()}", f"avg_loss={loss.item():.5f}",
f"avg w={net.seq[0].weight.mean().item():.5f}", f"bias={net.seq[0].bias.item():.5f}",
]
print(' | '.join(info), end='\r')
scheduler.step()
avg_loss = 0
x = T.rand(1, M) * T.randint(0, 2, size=(1, M))
true = (x.sum(dim=-1) < 2).item()
net(x.cuda()), net(make_sparse(x).cuda()), true
```
| github_jupyter |
```
import nltk
from nltk import word_tokenize, pos_tag
from nltk.corpus import indian
X3= nltk.corpus.indian
X3_marathi_sent = X3.tagged_sents('marathi.pos')
marathi_numbers = [chr(0x0966), chr(0x0967), chr(0x0968), chr(0x0969), chr(0x096A),
chr(0x096B), chr(0x096C), chr(0x096D), chr(0x096E), chr(0x096F)]
print("Marathi numbers",marathi_numbers)
def isNumberMarathi(word):
    isNum = True
    for i in list(word):
        if i not in marathi_numbers:
            isNum = False
            break
    return isNum
def marathi_features(sentence, index):
    return {
        'word': sentence[index],
        'is_first': index == 0,
        'is_last': index == len(sentence) - 1,
        'prefix-1': sentence[index][0] if sentence[index] != '' else '',
        'prefix-2': sentence[index][:2] if sentence[index] != '' else '',
        'prefix-3': sentence[index][:3] if sentence[index] != '' else '',
        'suffix-1': sentence[index][-1] if sentence[index] != '' else '',
        'suffix-2': sentence[index][-2:] if sentence[index] != '' else '',
        'suffix-3': sentence[index][-3:] if sentence[index] != '' else '',
        'prev_word': '' if index == 0 else sentence[index - 1],
        'next_word': '' if index == len(sentence) - 1 else sentence[index + 1],
        'has_hyphen': '-' in sentence[index],
        'is_numeric': sentence[index].isdigit() or isNumberMarathi(sentence[index])
    }
import pprint
pprint.pprint(marathi_features(['महाराष्ट्र', 'अध्यक्ष', 'यशवंत', 'होते'], 1))
def untag(tagged_sentence):
    return [w for w, t in tagged_sentence]
cutoff = int(.75 * len(X3_marathi_sent))
training_sentences = X3_marathi_sent[:cutoff]
test_sentences = X3_marathi_sent[cutoff:]
print(len(training_sentences))
print(len(test_sentences))
def transform_to_dataset(tagged_sentences):
    X, y = [], []
    for tagged in tagged_sentences:
        for index in range(len(tagged)):
            X.append(marathi_features(untag(tagged), index))
            y.append(tagged[index][1])
    return X, y
X, y = transform_to_dataset(training_sentences)
from sklearn.naive_bayes import GaussianNB
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import Pipeline
size=10000
clf = Pipeline([
('vectorizer', DictVectorizer(sparse=False)),
('classifier', GaussianNB())
])
clf.fit(X[:size], y[:size])
print('training OK')
X_test, y_test = transform_to_dataset(test_sentences)
print("Accuracy:", clf.score(X_test, y_test))
def pos_tag(sentence):  # note: this shadows nltk's pos_tag imported above
    print('checking...')
    tags = clf.predict([marathi_features(sentence, index) for index in range(len(sentence))])
    return zip(sentence, tags)
import platform
print(platform.python_version())
print(list(pos_tag(word_tokenize('मोठ्या देशांमध्ये, या सारख्या गोष्टी लहान असतात.'))))
```
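For intuition about the `DictVectorizer` step above: string-valued features are one-hot encoded while numeric (and boolean) features pass through as numbers. A small standalone illustration with made-up, transliterated feature dicts (not from the corpus):

```python
from sklearn.feature_extraction import DictVectorizer

# two toy feature dicts in the same shape as marathi_features() output
feats = [
    {'word': 'ghar', 'is_first': True, 'prefix-1': 'g'},
    {'word': 'aahe', 'is_first': False, 'prefix-1': 'a'},
]
vec = DictVectorizer(sparse=False)
X = vec.fit_transform(feats)
# strings become one-hot columns; the boolean passes through as 0/1, giving
# 5 columns: is_first, prefix-1=a, prefix-1=g, word=aahe, word=ghar
print(X.shape)  # (2, 5)
```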
| github_jupyter |
### Implementing backpropagation for simple multiplication and addition layers
```
# coding: utf-8
class MulLayer:
    def __init__(self):
        self.x = None
        self.y = None
    def forward(self, x, y):
        self.x = x
        self.y = y
        out = x * y
        return out
    def backward(self, dout):
        dx = dout * self.y
        dy = dout * self.x
        return dx, dy

class AddLayer:
    def __init__(self):  # no special initialization is needed
        pass
    def forward(self, x, y):
        out = x + y
        return out
    def backward(self, dout):
        dx = dout * 1
        dy = dout * 1
        return dx, dy
```
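As a quick numeric sanity check of the backward pass (a standalone sketch; `MulLayer` is re-defined compactly here so the snippet runs on its own):

```python
# Compact mirror of the MulLayer class above
class MulLayer:
    def __init__(self):
        self.x = self.y = None
    def forward(self, x, y):
        self.x, self.y = x, y
        return x * y
    def backward(self, dout):
        # d(xy)/dx = y and d(xy)/dy = x, each scaled by the upstream gradient
        return dout * self.y, dout * self.x

layer = MulLayer()
out = layer.forward(3.0, 4.0)
dx, dy = layer.backward(2.0)
print(out, dx, dy)  # 12.0 8.0 6.0
```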
#### Buying apples at the supermarket
- Multiplication-layer implementation
```
apple = 100
apple_num = 2
tax = 1.1
# layers
mul_apple_layer = MulLayer()
mul_tax_layer = MulLayer()
# forward pass to get the apple price
apple_price = mul_apple_layer.forward(apple, apple_num)
apple_taxed = mul_tax_layer.forward(apple_price, tax)
print(f'apple_taxed : {apple_taxed: >5.1f}\n')
# backward pass
'''
The argument of backward() is the derivative with respect to the
corresponding forward-pass output variable.
Forward output variable: apple_price
Backward argument: the derivative of apple_price (1 for the final output)
'''
dprice = 1  # set the derivative of the final price to 1
dapple_price , d_tax = mul_tax_layer.backward(dprice)
print(f'dapple_price : {dapple_price: >5.1f}')
print(f'd_tax : {d_tax: >5.1f}\n')
dapple , dapple_num = mul_apple_layer.backward(dapple_price)
print(f'dapple : {dapple: >5.1f}')
print(f'dapple_num : {dapple_num: >5.1f}')
print(f'MulLayer\t: {MulLayer()}')
print(f'mul_apple_layer : {mul_apple_layer}')
print(f'mul_tax_layer\t: {mul_tax_layer}')
```
#### Buying apples and oranges at the supermarket
- Addition-layer implementation
```
# coding: utf-8
from layer_naive import *
apple = 100
apple_num = 2
orange = 150
orange_num = 3
tax = 1.1
# layer
mul_apple_layer = MulLayer()
mul_orange_layer = MulLayer()
add_apple_orange_layer = AddLayer()
mul_tax_layer = MulLayer()
# forward
apple_price = mul_apple_layer.forward(apple, apple_num) # (1)
orange_price = mul_orange_layer.forward(orange, orange_num) # (2)
all_price = add_apple_orange_layer.forward(apple_price, orange_price) # (3)
price_taxed = mul_tax_layer.forward(all_price, tax) # (4)
# backward
dprice_taxed = 1
dall_price, dtax = mul_tax_layer.backward(dprice_taxed) # (4)
dapple_price, dorange_price = add_apple_orange_layer.backward(dall_price) # (3)
dorange, dorange_num = mul_orange_layer.backward(dorange_price) # (2)
dapple, dapple_num = mul_apple_layer.backward(dapple_price) # (1)
print("price_taxed\t:", int(price_taxed))
print("dApple\t\t:", dapple)
print("dApple_num\t:", int(dapple_num))
print("dOrange\t\t:", round(dorange,1))
print("dOrange_num\t:", int(dorange_num))
print("dTax\t\t:", dtax)
```
### Implementing backpropagation for activation functions
#### Backpropagation through ReLU
```
# Why mask x <= 0? ReLU passes the gradient through only where x > 0
class Relu:
    def __init__(self):
        self.mask = None
    def forward(self, x):
        self.mask = (x <= 0)
        out = x.copy()
        out[self.mask] = 0
        return out
    def backward(self, dout):
        dout[self.mask] = 0
        dx = dout
        return dx

import numpy as np
relu_layer = Relu()
x = np.array([10])
# x = 10
x_f = relu_layer.forward(x)
print(x_f)
dx_f = np.array([1])
df = relu_layer.backward(dx_f)
df
'''
if x > 0  => y = x , dx = 1
if x <= 0 => y = 0 , dx = 0
'''
```
#### Backpropagation through Sigmoid
```
import numpy as np

def sigmoid(x):  # helper assumed by the class below; defined here so the cell runs
    return 1 / (1 + np.exp(-x))

class Sigmoid:
    def __init__(self):
        self.out = None
    def forward(self, x):
        out = sigmoid(x)
        self.out = out
        return out
    def backward(self, dout):
        dx = dout * (1.0 - self.out) * self.out
        return dx
```
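The backward pass relies on the identity $\sigma'(x) = \sigma(x)(1 - \sigma(x))$. A self-contained numerical check against a central finite difference (with `sigmoid` defined inline, since the class above assumes it exists elsewhere):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = 0.7
y = sigmoid(x)
analytic = y * (1.0 - y)  # what backward() computes for dout = 1
h = 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)  # central difference
print(abs(analytic - numeric))  # agreement to many decimal places
```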
### Affine layer
- The matrix product computed in a network's forward pass is known in geometry as an "affine transformation"
```
class Affine:
    def __init__(self, W, b):
        self.W = W
        self.b = b
        self.x = None
        self.original_x_shape = None
        # derivatives of the weights W and bias b
        self.dW = None
        self.db = None
    def forward(self, x):
        # tensor support (handle inputs of arbitrary dimension)
        self.original_x_shape = x.shape
        x = x.reshape(x.shape[0], -1)
        self.x = x
        out = np.dot(self.x, self.W) + self.b
        return out
    def backward(self, dout):
        dx = np.dot(dout, self.W.T)
        self.dW = np.dot(self.x.T, dout)
        self.db = np.sum(dout, axis=0)
        dx = dx.reshape(*self.original_x_shape)  # restore the input shape (tensor support)
        return dx
```
### Softmax-with-Loss layer
```
# note: softmax and cross_entropy_error are assumed to come from the book's
# common functions module
class SoftmaxWithLoss:
    def __init__(self):
        self.loss = None
        self.y = None  # output of softmax
        self.t = None  # teacher (label) data
    def forward(self, x, t):
        self.t = t
        self.y = softmax(x)
        self.loss = cross_entropy_error(self.y, self.t)
        return self.loss
    def backward(self, dout=1):
        batch_size = self.t.shape[0]
        if self.t.size == self.y.size:  # labels are one-hot encoded
            dx = (self.y - self.t) / batch_size
        else:
            dx = self.y.copy()
            dx[np.arange(batch_size), self.t] -= 1
            dx = dx / batch_size
        return dx
```
| github_jupyter |
> # Using the fast "kmc" kmer counter to create kmers and count them per sequence
> ***
> kmc is an external tool (an exe app) that needs to be downloaded.
> It is a very efficient data-preprocessing tool, well known in bioinformatics.
> * Download link: https://github.com/refresh-bio/KMC
> ## Attention!
> ***kmer_counter.exe*** and ***kmc_tools.exe*** need to be in the same directory as the files produced by the splitting step, otherwise the tool doesn't work (a limitation of the app itself: poor implementation or documentation)
***
> ## Requirements for this notebook to work:
> - kmer_counter.exe
> - kmc_tools.exe
> - put the multi-fasta file in the same working directory
```
import os
import sys
import argparse
import subprocess as sp
import timeit
import glob
```
## Define the kmer range: low, up
```
# Set the range of k-mers to create: lower and upper bounds.
low = 5
up = 5
# get the working directory path
seqPath = os.getcwd()
print(seqPath)
# set the path of the KMC counter executable
counterPath = seqPath + r"\kmer_counter.exe"  # e.g. r"D:\DataSet\MULTI\bow\4mer\kmer_counter.exe"
# set the fasta file path
file_path = seqPath  # e.g. r'D:\DataSet\MULTI\bow\4mer'
```
# 1. Split a Single Multi-sequence $FASTA$ file into Multiple Single-sequence $FASTA$ Files
```
def split():
    with open(seqPath + r'\sequences.fasta', "r") as fa:
        lines = fa.read().split('>')
    lines = lines[1:]
    lines = ['>' + seq for seq in lines]
    for name in lines:
        # extract the sequence id to use it as the file name
        file_name = name.split('\n')[0][1:]
        out_file = open(file_path + "/" + file_name + ".fasta", "a")
        out_file.write(name)
        out_file.close()
# uncomment the call to split() below to split the fasta file into multiple sequence files:
# split()
```
# 2. Create kmers by looping all the $FASTA$ files
> - Don't forget to include kmer_counter.exe and kmc_tools.exe in the same directory as the fasta files, otherwise it doesn't work
> - Example invocation: "D:\DataSet\MULTI\split test\raw count\kmer_counter.exe" -k10 -fm D:\DataSet\MULTI\split test\raw count\Synthetic_sequence_0.fasta k10seq0 "D:\DataSet\MULTI\split test\raw count\"
```
%%timeit
def count():
    for k in range(low, up+1):
        for seqNb in range(60000):
            createKmers = [counterPath, "-k" + str(k), "-fm", "-r", "-b",
                           "Synthetic_sequence_" + str(seqNb) + ".fasta",
                           "kmers-" + str(k) + "-seqNb-" + str(seqNb), seqPath]
            # print(createKmers)
            sp.run(createKmers)
# count()
```
## 2.1 Delete $FASTA$ Files to free disk
```
def deleteFasta():
    # remove the single-sequence fasta files produced by split()
    # (one wildcard delete is enough; the original loops referenced an
    # undefined deleteRaw variable)
    deleteCmd = ['del', "/f", "/q", 'Synthetic_sequence_*.*']
    sp.call(deleteCmd, shell=True)
# deleteFasta()
```
# 3. Transform $ *.kmc\_* $ raw files into $ *.txt $ files containing kmers & counts
##### kmc_tools.exe transform
```
def transform():
    for k in range(low, up+1):
        for seqNb in range(60000):
            transformKmer = ["kmc_tools.exe", "transform",
                             seqPath + r"\kmers-" + str(k) + "-seqNb-" + str(seqNb), "dump",
                             seqPath + r"\kmers-" + str(k) + "-seqNb-" + str(seqNb) + ".txt"]
            sp.run(transformKmer)
            # print(transformKmer)
# transform()
```
# 4. Delete $ *.kmc\_* $ Files
```
def delete():
    # one wildcard delete removes all raw kmc output files; no loop needed
    deleteRaw = ['del', "/f", "/q", '*.kmc*']
    sp.call(deleteRaw, shell=True)
# delete()
```
| github_jupyter |
```
import pysam
import time
import pickle
import os
from collections import defaultdict
#==========================================================================================================
fa_folder = "/oak/stanford/groups/arend/Xin/AssemblyProj/reference_align_2/Software_Xin/Aquila/source" #The path of the folder which contains your fasta files
fa_name = "ref.fa" #The fasta file you want to process
out_dir = "/oak/stanford/groups/arend/Xin/LiuYC/test3" #The output folder direction (if not exist, the script will create one)
start = 21 #The start chromosome
end = 21 #The end chromosome.
kmer_len = 100 #Length of the kmers generated
mapq_thres = 20 #MAPQ filter threshold (20 recommended)
bowtie_thread = 20 #The number of parallel search threads bowtie2 would use
#==========================================================================================================
```
Configure the variables above to meet your requirements, then run the following blocks one by one.
NOTICE: you need to install Bowtie2 (and samtools) before running this script.
```
def QualStr(kmer_len):
    # generate a fake maximal-quality string ('~' = Phred 93), default k = 100
    return '~' * kmer_len
```
The QualStr function generates a quality string for later use when generating the .fq files.
```
def GetGenome(fa_folder, fa_name):
    genome = {}  # key = chrnum ; value = chrseq
    chro = []
    with open(fa_folder + fa_name) as f:
        for line in f:
            if line.startswith('>'):
                if chro:
                    genome[chrnum] = ''.join(chro)
                    chro = []
                    fw.close()
                chrnum = line[1:].split(' ')[0].strip('\n')
                fw = open(fa_folder + chrnum + '.fa', 'w')
                fw.write(line)
            else:
                chro.append(line.strip('\n'))
                fw.write(line)
    fw.close()  # close the last per-chromosome file as well
    genome[chrnum] = ''.join(chro)
    chro = []
    return genome  # key: chrnum (chr21)  value: chrseq
```
The GetGenome function splits the input fasta file by chromosome: it writes one new fasta file per chromosome and also stores the sequences in the genome dictionary (key: chromosome name; value: chromosome sequence).
Although this function assumes the input fasta file contains multiple chromosomes, it works just as well with a single one.
```
def GenerateFqFile(chrseq, chrnum, qual, kmer_len):
    with open(chrnum + '/' + chrnum + '.fq', 'w') as fw:
        for n in range(len(chrseq) + 1 - kmer_len):
            seq = chrseq[n:n + kmer_len]
            if 'N' not in seq:
                fw.write('@%s\n%s\n+\n%s\n' % (n + 1, seq, qual))
```
The fastq file content would be like:
@123456
ATCGGTAC...
+
~~~~~~~~...
(123456 is the kmer position)
and saved as chrnum.fq eg:chr21.fq
```
def Bowtie2Map(chrnum, fa_folder, bowtie_thread):
    thread = str(bowtie_thread)
    os.chdir(chrnum + '/')
    os.mkdir('./ref' + chrnum)
    os.chdir('./ref' + chrnum)
    # the index folder is refchrnum eg: refchr21
    index_build = os.popen('bowtie2-build --threads ' + thread + ' ' + fa_folder + chrnum + '.fa' + ' ref' + chrnum, 'r')
    print(index_build.read())
    map_result = os.popen('bowtie2 -p ' + thread + ' -x ' + 'ref' + chrnum + ' -U ' + '../' + chrnum + '.fq ' + '-S ../' + chrnum + '.sam', 'r')
    print(map_result.read())
    # the output is chrnum.sam eg: chr21.sam
    os.chdir('..')
    out1 = os.popen('samtools view -bS ' + chrnum + '.sam' + ' > ' + chrnum + '.bam')
    print(out1.read())
    out2 = os.popen('samtools sort ' + chrnum + '.bam -o ' + chrnum + '.sorted.bam')
    print(out2.read())
    out3 = os.popen('samtools view -H ' + chrnum + '.sorted.bam > header.sam')
    print(out3.read())
    out4 = os.popen('samtools view -F 4 ' + chrnum + '.sorted.bam | grep -v "XS:" | cat header.sam - | samtools view -b - > unique' + chrnum + '.bam')
    print(out4.read())
    out5 = os.popen('rm header.sam')
    print(out5.read())
    out6 = os.popen('samtools index unique' + chrnum + '.bam')
    print(out6.read())
    os.chdir('..')
```
Bowtie2Map uses Bowtie2 to get the mapping result and saves uniquely mapped reads to uniquechrnum.bam (eg: uniquechr1.bam).
CAUTION: Make sure you have already installed the samtools and Bowtie2 packages!
```
def Filter(chrnum, mapq_thres, kmer_len):
    filtered = []
    bamfile = pysam.AlignmentFile(chrnum + '/' + 'unique' + chrnum + '.bam', "rb")
    for read in bamfile:
        if int(read.mapq) >= mapq_thres and not read.is_reverse:
            filtered.append([int(read.pos), int(read.pos) + kmer_len - 1])
    return filtered
```
The Filter function discards reads that are reverse-strand or whose MAPQ is below the threshold.
```
def Merge(chrnum, filtered):
    start = None  # None sentinel: a start position of 0 is a valid coordinate
    with open(chrnum + '/' + 'merged.bed', "w") as fm:
        for line in filtered:
            if start is None:
                start, end = line[0], line[1]
            elif line[0] > end + 1:
                fm.write("%s\t%s\t%s\n" % (chrnum, start, end))
                start, end = line[0], line[1]
            else:
                end = line[1]
        if start is not None:
            fm.write("%s\t%s\t%s\n" % (chrnum, start, end))
```
The Merge function merges overlapping uniquely mapped kmers into bigger blocks.
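The merging rule (blocks that overlap or touch within 1 bp are fused) can be checked in isolation; the sketch below mirrors the logic of `Merge` on a made-up list of intervals (assuming, as with reads from a sorted BAM, that intervals arrive sorted by start):

```python
# Standalone mirror of the Merge() interval logic above
def merge_intervals(filtered):
    merged = []
    start = end = None
    for s, e in filtered:
        if start is None:
            start, end = s, e
        elif s > end + 1:          # gap of more than 1 bp: close the block
            merged.append((start, end))
            start, end = s, e
        else:                      # overlapping or adjacent: extend the block
            end = e
    if start is not None:
        merged.append((start, end))
    return merged

print(merge_intervals([(1, 5), (4, 9), (11, 15)]))  # [(1, 9), (11, 15)]
```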
```
def Get_uniqness(chrnum):
    uniq_map = defaultdict(int)
    with open(chrnum + '/' + "merged.bed", "r") as f:
        with open(chrnum + '/' + "500merged.bed", "w") as fw:
            for line in f:
                data = line.rsplit()
                _start = int(data[1])
                _end = int(data[2])
                block_len = _end - _start
                if block_len >= 500:
                    use_start = _start + 10
                    use_end = _end - 10
                    fw.write('%s\t%s\t%s\n' % (chrnum, use_start, use_end))
                    for step in range(use_start, use_end + 1):
                        uniq_map[step] = 1
    pickle.dump(uniq_map, open("Uniqness_map/" + "uniq_map_" + chrnum + ".p", "wb"))
```
The Get_uniqness function uses the merged results to collect uniquely mapped positions and saves them into a pickle file, which is the final output.
```
def run(fa_folder, out_dir, chrseq, chrnum, kmer_len, qual, mapq_thres, bowtie_thread):
    print("Starting to process " + chrnum)
    t = time.time()
    os.mkdir(chrnum)
    GenerateFqFile(chrseq, chrnum, qual, kmer_len)
    print(chrnum, ":Generate .fq DONE")
    Bowtie2Map(chrnum, fa_folder, bowtie_thread)
    print(chrnum, ":Bowtie2 mapping DONE")
    filtered = Filter(chrnum, mapq_thres, kmer_len)
    print(chrnum, ":MAPQ filter DONE")
    Merge(chrnum, filtered)
    print(chrnum, ":Merge DONE")
    Get_uniqness(chrnum)
    print(chrnum, ":Get uniqness DONE")
    #-----------------------------------------------------------------------------------------------
    out7 = os.popen('rm -R ' + chrnum + '/')  # these two lines delete the intermediate results
    print(out7.read())                        # if you want to keep those results, comment them out
    #-----------------------------------------------------------------------------------------------
    print(chrnum, "DONE! Time used:", time.time() - t)

fa_folder = fa_folder + "/"
out_dir = out_dir + "/"
chr_start = start - 1
chr_end = end
qual = QualStr(kmer_len)
Genome = GetGenome(fa_folder, fa_name)
chr_list = list(Genome.keys())
if not os.path.exists(out_dir):
    os.mkdir(out_dir)
os.chdir(out_dir)
if not os.path.exists('Uniqness_map'):
    os.mkdir('Uniqness_map')
for chrnum in chr_list[chr_start:chr_end]:
    chrseq = Genome[chrnum]
    run(fa_folder, out_dir, chrseq, chrnum, kmer_len, qual, mapq_thres, bowtie_thread)
for chrnum in chr_list:
    os.popen('rm ' + fa_folder + chrnum + '.fa')
print("ALL DONE")
```
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
import os
os.getcwd()
import sys, glob, shutil
os.chdir(os.path.dirname(os.getcwd()))
os.getcwd()
import cv2
import numpy as np
import matplotlib.pyplot as plt
import keras
import os
import time
import pickle
import tensorflow as tf
from src import models
from src.utils.image import read_image_bgr, preprocess_image, resize_image
from src.utils.visualization import draw_box, draw_caption
from src.utils.colors import label_color
model = models.load_model('data/inference.h5', backbone_name='resnet50')
labels_to_names = {0: 'crack', 1: 'wrinkle'}
c = 0
im_path = 'data/images/'
def draw_text(coordinates, image_array, text, color, x_offset=0, y_offset=0, font_scale=2, thickness=2):
    x, y = coordinates[:2]
    cv2.putText(image_array, text, (x + x_offset, y + y_offset), cv2.FONT_HERSHEY_SIMPLEX,
                font_scale, color, thickness, cv2.LINE_AA)
file = os.listdir(im_path)[69]
file_path = os.path.join(im_path, file)
image = read_image_bgr(file_path)
# copy to draw on
draw = image.copy()
draw = cv2.cvtColor(draw, cv2.COLOR_BGR2RGB)
# preprocess image for network
image = preprocess_image(image)
image, scale = resize_image(image)
# process image
start = time.time()
boxes, scores, labels = model.predict_on_batch(np.expand_dims(image, axis=0))
print("processing time: ", time.time() - start)
# correct for image scale
boxes /= scale
# visualize detections
for box, score, label in zip(boxes[0], scores[0], labels[0]):
    # scores are sorted so we can break
    if score >= 0.50 and label in range(0, 2):
        color = label_color(label)
        b = box.astype(int)
        draw_box(draw, b, color=color)
        caption = "{} {:.3f}".format(labels_to_names[label], score)
        print(caption, b)
        # use the integer coordinates b; cv2.putText expects int pixel positions
        cv2.putText(draw, caption, (b[0], b[1]), cv2.FONT_HERSHEY_SIMPLEX, 5, (255, 255, 0), 5, cv2.LINE_8)
plt.figure(figsize=(15, 15))
plt.axis('off')
plt.imshow(draw)
plt.show()
cv2.imwrite('example_test_defect.png', draw)
def predict_save(file_path, model, save=False):
    image = read_image_bgr(file_path)
    # copy to draw on
    draw = image.copy()
    draw = cv2.cvtColor(draw, cv2.COLOR_BGR2RGB)
    # preprocess image for network
    image = preprocess_image(image)
    image, scale = resize_image(image)
    # process image
    start = time.time()
    boxes, scores, labels = model.predict_on_batch(np.expand_dims(image, axis=0))
    print("processing time: ", time.time() - start)
    # correct for image scale
    boxes /= scale
    # visualize detections
    for box, score, label in zip(boxes[0], scores[0], labels[0]):
        # scores are sorted so we can break
        if score >= 0.50 and label in range(0, 2):
            color = label_color(label)
            b = box.astype(int)
            draw_box(draw, b, color=color)
            caption = "{} {:.3f}".format(labels_to_names[label], score)
            # draw_caption(draw, b, caption)
            print(caption, b)
            # draw_text(box, draw, labels_to_names[label], [255,255,0], 0, -45, 1, 1)
            cv2.putText(draw, caption, (b[0], b[1]), cv2.FONT_HERSHEY_SIMPLEX, 5, (255, 255, 0), 5, cv2.LINE_8)
    plt.figure(figsize=(15, 15))
    plt.axis('off')
    plt.imshow(draw)
    plt.show()
    filename, _ = os.path.basename(file_path).split(".")
    dirname = os.path.dirname(file_path)
    dest = os.path.join(dirname, filename + "_" + labels_to_names[label] + ".jpg")
    if save:
        cv2.imwrite(dest, draw)
```
| github_jupyter |
<a href="https://colab.research.google.com/github/yukinaga/object_detection/blob/main/section_3/03_exercise.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Exercise
In RetinaNet, let's make the `regression_head`, which outputs the object regions, trainable as well.
Add the missing code to the model-building cell.
## Settings
```
import torch
from torch.utils.data import DataLoader
import torchvision
import torchvision.transforms as transforms
from torchvision.utils import draw_bounding_boxes
import numpy as np
import matplotlib.pyplot as plt
import math
# convert an index to an object name
index2name = [
"person",
"bird",
"cat",
"cow",
"dog",
"horse",
"sheep",
"aeroplane",
"bicycle",
"boat",
"bus",
"car",
"motorbike",
"train",
"bottle",
"chair",
"diningtable",
"pottedplant",
"sofa",
"tvmonitor",
]
print(index2name)
# convert an object name to an index
name2index = {}
for i in range(len(index2name)):
    name2index[index2name[i]] = i
print(name2index)
```
## Function to arrange the targets
```
def arrange_target(target):
    objects = target["annotation"]["object"]
    box_dics = [obj["bndbox"] for obj in objects]
    box_keys = ["xmin", "ymin", "xmax", "ymax"]
    # bounding boxes
    boxes = []
    for box_dic in box_dics:
        box = [int(box_dic[key]) for key in box_keys]
        boxes.append(box)
    boxes = torch.tensor(boxes)
    # object names, converted to indices
    labels = [name2index[obj["name"]] for obj in objects]
    labels = torch.tensor(labels)
    dic = {"boxes":boxes, "labels":labels}
    return dic
```
## Loading the dataset
```
dataset_train=torchvision.datasets.VOCDetection(root="./VOCDetection/2012",
year="2012",image_set="train",
download=True,
transform=transforms.ToTensor(),
target_transform=transforms.Lambda(arrange_target)
)
dataset_test=torchvision.datasets.VOCDetection(root="./VOCDetection/2012",
year="2012",image_set="val",
download=True,
transform=transforms.ToTensor(),
target_transform=transforms.Lambda(arrange_target)
)
```
## DataLoader settings
```
data_loader_train = DataLoader(dataset_train, batch_size=1, shuffle=True)
data_loader_test = DataLoader(dataset_test, batch_size=1, shuffle=True)
```
## Displaying the targets
```
def show_boxes(image, boxes, names):
    drawn_boxes = draw_bounding_boxes(image, boxes, labels=names)
    plt.figure(figsize = (16,16))
    plt.imshow(np.transpose(drawn_boxes, (1, 2, 0)))  # move the channel axis last
    plt.tick_params(labelbottom=False, labelleft=False, bottom=False, left=False)  # hide labels and ticks
    plt.show()
dataiter = iter(data_loader_train)  # iterator
image, target = next(dataiter)  # take out one batch
print(target)
image = image[0]
image = (image*255).to(torch.uint8)  # draw_bounding_boxes expects 0-255
boxes = target["boxes"][0]
labels = target["labels"][0]
names = [index2name[label.item()] for label in labels]
show_boxes(image, boxes, names)
```
# Building the model
Add code to the cell below so that the parameters of the `regression_head`, which outputs the object-region coordinates, also become trainable.
Refer to the RetinaNet code in the official PyTorch documentation:
https://pytorch.org/vision/stable/_modules/torchvision/models/detection/retinanet.html#retinanet_resnet50_fpn
```
model = torchvision.models.detection.retinanet_resnet50_fpn(pretrained=True)
num_classes = len(index2name)+1  # number of classes: add 1 for the background
num_anchors = model.head.classification_head.num_anchors  # number of anchors
# set the number of classes
model.head.classification_head.num_classes = num_classes
# replace the layer that outputs the classification results
cls_logits = torch.nn.Conv2d(256, num_anchors*num_classes, kernel_size=3, stride=1, padding=1)
torch.nn.init.normal_(cls_logits.weight, std=0.01)  # from the RetinaNetClassificationHead class
torch.nn.init.constant_(cls_logits.bias, -math.log((1 - 0.01) / 0.01))  # from the RetinaNetClassificationHead class
model.head.classification_head.cls_logits = cls_logits  # swap in the new layer
# freeze all parameters
for p in model.parameters():
    p.requires_grad = False
# make the classification_head parameters trainable
for p in model.head.classification_head.parameters():
    p.requires_grad = True
# make the regression_head parameters trainable
# ------- write your code below -------

# ------- up to here -------
model.cuda()  # move to the GPU
```
## Training
```
# optimizer
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.001, momentum=0.9)
model.train()  # training mode
epochs = 3
for epoch in range(epochs):
    for i, (image, target) in enumerate(data_loader_train):
        image = image.cuda()  # move to the GPU
        boxes = target["boxes"][0].cuda()
        labels = target["labels"][0].cuda()
        target = [{"boxes":boxes, "labels":labels}]  # the target is a list of dicts
        loss_dic = model(image, target)
        loss = sum(loss for loss in loss_dic.values())  # total loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if i%100 == 0:  # print progress every 100 iterations
            print("epoch:", epoch, "iteration:", i, "loss:", loss.item())
```
## Using the trained model
```
dataiter = iter(data_loader_test)  # iterator
image, target = next(dataiter)  # fetch one batch
image = image.cuda()  # move to GPU

model.eval()
predictions = model(image)
print(predictions)

image = (image[0]*255).to(torch.uint8).cpu()  # draw_bounding_boxes expects values in 0-255
boxes = predictions[0]["boxes"].cpu()
labels = predictions[0]["labels"].cpu().detach().numpy()
labels = np.where(labels >= len(index2name), 0, labels)  # map out-of-range labels to 0
names = [index2name[label.item()] for label in labels]
print(names)
show_boxes(image, boxes, names)
```
## Filtering by score
```
boxes = []
names = []
for i, box in enumerate(predictions[0]["boxes"]):
    score = predictions[0]["scores"][i].cpu().detach().numpy()
    if score > 0.5:  # keep only detections with a score above 0.5
        boxes.append(box.cpu().tolist())
        label = predictions[0]["labels"][i].item()
        if label >= len(index2name):  # map out-of-range labels to 0
            label = 0
        name = index2name[label]
        names.append(name)
boxes = torch.tensor(boxes)
show_boxes(image, boxes, names)
```
# Sample solution
Consult the following only if you are completely stuck.
```
model = torchvision.models.detection.retinanet_resnet50_fpn(pretrained=True)

num_classes = len(index2name) + 1  # number of classes: +1 because the background is classified too
num_anchors = model.head.classification_head.num_anchors  # number of anchors

# set the number of classes
model.head.classification_head.num_classes = num_classes

# replace the layer that outputs the classification results
cls_logits = torch.nn.Conv2d(256, num_anchors*num_classes, kernel_size=3, stride=1, padding=1)
torch.nn.init.normal_(cls_logits.weight, std=0.01)  # from the RetinaNetClassificationHead class
torch.nn.init.constant_(cls_logits.bias, -math.log((1 - 0.01) / 0.01))  # from the RetinaNetClassificationHead class
model.head.classification_head.cls_logits = cls_logits  # swap in the new layer

# freeze all parameters
for p in model.parameters():
    p.requires_grad = False

# make the classification_head parameters trainable
for p in model.head.classification_head.parameters():
    p.requires_grad = True

# make the regression_head parameters trainable
# ------- write your code below -------
for p in model.head.regression_head.parameters():
    p.requires_grad = True
# ------- end of your code -------

model.cuda()  # move to GPU
```
```
from keras.datasets import mnist
import numpy as np
(x_train, _), (x_test, _) = mnist.load_data()
print(x_train.shape, x_test.shape)
# scale to [0, 1] and reshape to (N, 28, 28, 1)
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1))  # adapt this if using `channels_first` image data format
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1))  # adapt this if using `channels_first` image data format
print(x_train.shape)
print(x_test.shape)
noise_factor = 0.5
x_train_noisy = x_train + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_train.shape)
x_test_noisy = x_test + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_test.shape)
x_train_noisy = np.clip(x_train_noisy, 0., 1.)
x_test_noisy = np.clip(x_test_noisy, 0., 1.)
import matplotlib.pyplot as plt
n = 10
plt.figure(figsize=(20, 2))
for i in range(n):
    ax = plt.subplot(1, n, i + 1)
    plt.imshow(x_test_noisy[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model
from keras import backend as K
input_img = Input(shape=(28, 28, 1))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
# at this point the representation is (4, 4, 8) i.e. 128-dimensional
x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
autoencoder.fit(x_train_noisy, x_train,
                epochs=50,
                batch_size=128,
                shuffle=True,
                validation_data=(x_test_noisy, x_test))
import matplotlib.pyplot as plt
decoded_imgs = autoencoder.predict(x_test_noisy)
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
    # display original
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test_noisy[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # display reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
```
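A side note on the encoder shape: the comment in the model cell says the encoded representation is (4, 4, 8), i.e. 128-dimensional. With `padding='same'`, a 2x2 max-pool produces ceil(n/2) outputs per axis, so the spatial size shrinks 28 to 14 to 7 to 4 over the three pooling layers. A quick sketch of that arithmetic (plain Python, no Keras needed):

```python
import math

def pooled_size(n, pool=2):
    # MaxPooling2D with padding='same' yields ceil(n / pool) outputs per axis
    return math.ceil(n / pool)

size = 28
for _ in range(3):  # three MaxPooling2D((2, 2), padding='same') layers
    size = pooled_size(size)

channels = 8  # filters in the last encoder conv layer
print(size, size * size * channels)  # 4 128
```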
<a href="https://colab.research.google.com/github/DannyML-DSC/Hash-analytics/blob/master/Pandas.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# authentication script in GCP
!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!apt-get install software-properties-common
!apt-get install -y -qq software-properties-common module-init-tools
!apt-get install -y -qq python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
import pandas as pd
import numpy as np
obj = pd.Series([5,10,15,20])
print(obj)
print(obj.values)
print(obj.index)
#converting numpy array to Series
A = np.array([1,2,3,4])
series = pd.Series(A)
print(series)
#custom indexes
#we can also assign custom indices to data in a series
series = pd.Series(A, index=[10,20,30,40])
print(series)
# printing values by index
print("this is the value attached to index 10", series[10])
print("this is the value attached to index 20", series[20])

# conditional statements can also be used on indices to query data
print(series[series >= 5])

# boolean conditions
print(10 in series)  # checks whether 10 is in the index of the series

# we can also convert a series to a dict
print(series.to_dict())

# pd.isnull is a boolean operator used to check for null values in the array
print(pd.isnull(series))

# this line counts the null values in our data (each null counts as 1)
series.isnull().sum()

# addition of series
A = pd.Series([1,2,3,4], index=[0,1,2,3])
B = pd.Series([5,6,7,8], index=[0,1,2,3])
print(A+B)  # this sums the values at matching indices and prints them as one series.
#renaming series
A.name = "A_array"
A.index.name = "A_indices"
print(A)
# Working with dataframes
data = pd.read_clipboard()  # reads a table copied to the clipboard into a DataFrame
print(data)
print(data.columns)
print(data['date'])  # prints a specific column
# selecting specific columns to display
print(data[["date", "name", "Gender"]])
#indexes
s1 = pd.Series([10,20,30,40], index=[0,1,2,3])
print(s1)
index_s1 = s1.index
print(index_s1)
# negative indices
print(index_s1[-1:])
print(index_s1[:1])

# we can also create new indices using reindex
new_series = s1.reindex([5,6,7,8])

# using fill_value
# fill_value fills any newly created index positions with the value passed
new_series = s1.reindex([5,6,7,8,9], fill_value=10)  # fills the new positions with 10 to avoid NaN

# creating new dataframes using randn
from numpy.random import randn
df = pd.DataFrame(randn(25).reshape(5,5), index=[0,1,2,3,4], columns=['name','gender','age','time','height'])
# note: the deprecated .ix indexer used to perform reindexing in one line; prefer .loc/.reindex

# when using drop, pass the index label of the value to be removed
s1.drop(0, inplace=True)

# dropna and fillna
s1.dropna(inplace=True)
s1.fillna(0, inplace=True)
# mathematical operations on dataframes
s2 = pd.DataFrame([4,5,6,7])
s3 = pd.DataFrame([0,1,2,3])
s4 = (s2 + s3)
print(s4)
#other statistical operations on dataframes
s1.sum()
s1.max()
s1.idxmax()
s1.cumsum()
s1.describe()
```
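One alignment detail worth adding to the `A + B` example above: the addition works element-by-element only because both series share the index [0, 1, 2, 3]. When indexes merely overlap, pandas aligns on the union of labels and fills the non-matching ones with NaN. A small sketch (hypothetical series, not the notebook's data):

```python
import pandas as pd

A = pd.Series([1, 2, 3], index=[0, 1, 2])
B = pd.Series([10, 20, 30], index=[1, 2, 3])

total = A + B
print(total)  # labels 0 and 3 appear in only one series, so they become NaN

# .add with fill_value treats the missing side as 0 instead of producing NaN
print(A.add(B, fill_value=0))
```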
# ZSL target-shift: synthetic data generation
<br>
```
import pickle
import numpy as np
import os, sys
import csv
```
-----------
### Synthetic data generation.
```
num_feat = 5
num_attr = 2
cov_matrix = np.eye(num_feat)  # a common covariance matrix for all distributions
means = []
for i in range(num_attr):
    x = np.random.multivariate_normal(np.zeros(num_feat), cov_matrix)
    x = np.array([x, x+np.ones(num_feat)])
    means.append(x)

# the distance between means controls how easy the classification task is; the larger the distance, the easier the task
def mean_alteration(means, d1=1, d2=1):
    # ratio between the means of the second task and the first task
    dis1 = np.linalg.norm(means[0][0] - means[0][1])
    dis2 = np.linalg.norm(means[1][0] - means[1][1])
    means[0][1] = means[0][0] + d1*(means[0][1] - means[0][0])/dis1
    means[1][1] = means[1][0] + d2*(means[1][1] - means[1][0])/dis2
    return means
means_alt = mean_alteration(means, 1.5, 1.5)
# Co-occurrence of classes. This ratio differs between the train and test sets
prob0 = 0.5

def get_conditional_prob(corr_prob):
    cond_mat = np.zeros((num_attr, num_attr))
    cond_mat[0,0] = corr_prob
    cond_mat[0,1] = 1 - cond_mat[0,0]
    cond_mat[1,0] = 1 - cond_mat[0,0]
    cond_mat[1,1] = cond_mat[0,0]
    return cond_mat

cond_best = 0.5 * np.ones((num_attr, num_attr))

def gen_data_cond(num_data, cond_prob, means):
    X = []
    Y = []
    prob0 = 0.5
    for i in range(num_data):
        y0 = int(np.random.rand() < prob0)
        y1 = int(np.random.rand() < cond_prob[y0, 1])
        Y.append([y0, y1])
        x0 = np.random.multivariate_normal(means[0][y0], np.eye(num_feat), 1)
        x1 = np.random.multivariate_normal(means[1][y1], np.eye(num_feat), 1)
        X.append(np.append(x0, x1))
    X = np.array(X)
    Y = np.array(Y)
    return X, Y
# Xtrain, Ytrain = gen_data_cond(1000, cond_train, means_alt)
# Xval, Yval = gen_data_cond(1000, cond_train, means_alt)
# Xtest, Ytest = gen_data_cond(1000, cond_test, means_alt)
Xbest, Ybest = gen_data_cond(1000, cond_best, means_alt)
num_train = 1000
dp = 1.5
da = 1.5
corr_train = 0.8
corr_test = 0.5
test_data_size = 50000
# training set
dfilename = 'train_data_n-' + str(num_train) + '_cor-' + str(corr_train) + '_dp-' + str(dp) + '_da-' + str(da) + '.pckl'
means_alt = mean_alteration(means, dp, da)
cond_test = get_conditional_prob(corr_train)
Xtrain, Ytrain = gen_data_cond(num_train, cond_test, means_alt)
with open('../synthetic_data/' + dfilename, 'wb') as fp:  # pickle requires binary mode in Python 3
    pickle.dump({'X':Xtrain, 'Y':Ytrain}, fp)
print(dfilename)
# test set with different correlation between attributes
for corr_test in [0.1, 0.2, 0.3, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9]:
    # test data
    dfilename = 'test_data_' + '_cor-' + str(corr_test) + 'dp-' + str(dp) + '_da-' + str(da) + '.pckl'
    if dfilename not in os.listdir('../synthetic_data/'):
        means_alt = mean_alteration(means, dp, da)
        cond_test = get_conditional_prob(corr_test)
        Xtest, Ytest = gen_data_cond(test_data_size, cond_test, means_alt)
        pickle.dump({'X':Xtest, 'Y':Ytest}, open('../synthetic_data/' + dfilename, 'wb'))  # binary mode for pickle
        print(dfilename)
```
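A sanity check on the construction above: each row of the matrix returned by `get_conditional_prob` is the distribution P(y1 | y0), so it must sum to 1 for every value of `corr_prob`. A self-contained sketch (the function is re-declared here so the check runs on its own):

```python
import numpy as np

def get_conditional_prob(corr_prob, num_attr=2):
    # same construction as in the notebook: the diagonal carries the co-occurrence probability
    cond_mat = np.zeros((num_attr, num_attr))
    cond_mat[0, 0] = corr_prob
    cond_mat[0, 1] = 1 - corr_prob
    cond_mat[1, 0] = 1 - corr_prob
    cond_mat[1, 1] = corr_prob
    return cond_mat

for corr in [0.1, 0.5, 0.8]:
    rows = get_conditional_prob(corr).sum(axis=1)
    assert np.allclose(rows, 1.0)  # every row is a valid conditional distribution
```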
```
#question 2
def add_chars(w1, w2):
    """
    Return a string containing the characters you need to add to w1 to get w2.
    You may assume that w1 is a subsequence of w2.

    >>> add_chars("owl", "howl")
    'h'
    >>> add_chars("want", "wanton")
    'on'
    >>> add_chars("rat", "radiate")
    'diae'
    >>> add_chars("a", "prepare")
    'prepre'
    >>> add_chars("resin", "recursion")
    'curo'
    >>> add_chars("fin", "effusion")
    'efuso'
    >>> add_chars("coy", "cacophony")
    'acphon'
    >>> from construct_check import check
    >>> check(LAB_SOURCE_FILE, 'add_chars',
    ...       ['For', 'While', 'Set', 'SetComp']) # Must use recursion
    True
    """
    "*** YOUR CODE HERE ***"
    if len(w1) == 0:
        return w2
    else:
        return add_chars(w1[1:], w2.replace(w1[0], '', 1))

add_chars("resin", "recursion")
# Tree ADT
def tree(label, branches=[]):
    """Construct a tree with the given label value and a list of branches."""
    for branch in branches:
        assert is_tree(branch), 'branches must be trees'
    return [label] + list(branches)

def label(tree):
    """Return the label value of a tree."""
    return tree[0]

def branches(tree):
    """Return the list of branches of the given tree."""
    return tree[1:]

def is_tree(tree):
    """Returns True if the given tree is a tree, and False otherwise."""
    if type(tree) != list or len(tree) < 1:
        return False
    for branch in branches(tree):
        if not is_tree(branch):
            return False
    return True

def is_leaf(tree):
    """Returns True if the given tree's list of branches is empty, and False
    otherwise.
    """
    return not branches(tree)

def print_tree(t, indent=0):
    """Print a representation of this tree in which each node is
    indented by two spaces times its depth from the root.

    >>> print_tree(tree(1))
    1
    >>> print_tree(tree(1, [tree(2)]))
    1
      2
    >>> numbers = tree(1, [tree(2), tree(3, [tree(4), tree(5)]), tree(6, [tree(7)])])
    >>> print_tree(numbers)
    1
      2
      3
        4
        5
      6
        7
    """
    print('  ' * indent + str(label(t)))
    for b in branches(t):
        print_tree(b, indent + 1)

def copy_tree(t):
    """Returns a copy of t. Only for testing purposes.

    >>> t = tree(5)
    >>> copy = copy_tree(t)
    >>> t = tree(6)
    >>> print_tree(copy)
    5
    """
    return tree(label(t), [copy_tree(b) for b in branches(t)])
#question 3
def acorn_finder(t):
    """Returns True if t contains a node with the value 'acorn' and
    False otherwise.

    >>> scrat = tree('acorn')
    >>> acorn_finder(scrat)
    True
    >>> sproul = tree('roots', [tree('branch1', [tree('leaf'), tree('acorn')]), tree('branch2')])
    >>> acorn_finder(sproul)
    True
    >>> numbers = tree(1, [tree(2), tree(3, [tree(4), tree(5)]), tree(6, [tree(7)])])
    >>> acorn_finder(numbers)
    False
    >>> t = tree(1, [tree('acorn', [tree('not acorn')])])
    >>> acorn_finder(t)
    True
    """
    '*** YOUR CODE HERE ***'
    if label(t) == 'acorn':
        return True
    for b in branches(t):
        if acorn_finder(b):
            return True
    return False

numbers = tree(1, [tree(2), tree(3, [tree(4), tree(5)]), tree(6, [tree(7)])])
acorn_finder(numbers)
#tree additional question 1
def sprout_leaves(t, vals):
    """Sprout new leaves containing the data in vals at each leaf in
    the original tree t and return the resulting tree.

    >>> t1 = tree(1, [tree(2), tree(3)])
    >>> print_tree(t1)
    1
      2
      3
    >>> new1 = sprout_leaves(t1, [4, 5])
    >>> print_tree(new1)
    1
      2
        4
        5
      3
        4
        5
    >>> t2 = tree(1, [tree(2, [tree(3)])])
    >>> print_tree(t2)
    1
      2
        3
    >>> new2 = sprout_leaves(t2, [6, 1, 2])
    >>> print_tree(new2)
    1
      2
        3
          6
          1
          2
    """
    "*** YOUR CODE HERE ***"
    if is_leaf(t):
        return tree(label(t), [tree(val) for val in vals])
    return tree(label(t), [sprout_leaves(b, vals) for b in branches(t)])

t1 = tree(1, [tree(2), tree(3)])
#print_tree(t1)
new1 = sprout_leaves(t1, [4, 5])
print_tree(new1)
#tree additional question 2
def add_trees(t1, t2):
    """
    >>> numbers = tree(1,
    ...                [tree(2,
    ...                      [tree(3),
    ...                       tree(4)]),
    ...                 tree(5,
    ...                      [tree(6,
    ...                            [tree(7)]),
    ...                       tree(8)])])
    >>> print_tree(add_trees(numbers, numbers))
    2
      4
        6
        8
      10
        12
          14
        16
    >>> print_tree(add_trees(tree(2), tree(3, [tree(4), tree(5)])))
    5
      4
      5
    >>> print_tree(add_trees(tree(2, [tree(3)]), tree(2, [tree(3), tree(4)])))
    4
      6
      4
    >>> print_tree(add_trees(tree(2, [tree(3, [tree(4), tree(5)])]), \
    tree(2, [tree(3, [tree(4)]), tree(5)])))
    4
      6
        8
        5
      5
    """
    "*** YOUR CODE HERE ***"
    if not t1:
        return t2
    if not t2:
        return t1
    new_label = label(t1) + label(t2)
    b1, b2 = branches(t1), branches(t2)
    length1, length2 = len(b1), len(b2)
    if length1 <= length2:
        b1 += [None for _ in range(length1, length2)]
    else:
        b2 += [None for _ in range(length2, length1)]
    return tree(new_label, [add_trees(child1, child2) for child1, child2 in zip(b1, b2)])

print_tree(add_trees(tree(2), tree(3, [tree(4), tree(5)])))
```
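As a reminder of what the ADT above actually builds: a tree is just a nested Python list whose first element is the label and whose remaining elements are branches. A minimal standalone check (the constructor and selectors are re-declared so this cell runs on its own):

```python
def tree(label, branches=[]):
    # a tree is represented as [label, branch, branch, ...]
    return [label] + list(branches)

def label(t):
    return t[0]

def branches(t):
    return t[1:]

t = tree(1, [tree(2), tree(3, [tree(4)])])
print(t)  # [1, [2], [3, [4]]]
assert label(t) == 1
assert branches(t) == [[2], [3, [4]]]
```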
### Lecture 7. Exceptions
https://en.cppreference.com/w/cpp/language/exceptions
https://en.cppreference.com/w/cpp/error
https://apprize.info/c/professional/13.html
<br />
##### Why exceptions are needed
To handle exceptional situations.
Error handling is one such use.
<br />
###### How to use exceptions
You need to identify two points in the program:
1. The point where the error is detected
2. The point where the error is handled
At the point where the error occurs, we throw (`throw`) any object into which we put a description of the exceptional situation:
```c++
double my_sqrt(double x)
{
    if (x < 0)
        throw std::invalid_argument("sqrt of negative doubles can not be represented in terms of real numbers");
    ...
}
```
In the calling code we wrap the throwing block in `try`-`catch`:
*(note how the thrown exception will be handled)*
```c++
void run_dialogue()
{
    std::cout << "enter x: ";
    double x;
    std::cin >> x;
    std::cout << "sqrt(x) = " << my_sqrt(x) << std::endl;
}

int main()
{
    try
    {
        run_dialogue();
    }
    catch(const std::invalid_argument& e)
    {
        std::cout << "invalid argument: " << e.what() << std::endl;
        return 1;
    }
    catch(const std::exception& e)
    {
        std::cout << "common exception: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}
```
**Note**: objects of any type can be thrown, but for readability it is recommended to throw dedicated exception objects.
**Question:** `std::invalid_argument` derives from `std::exception`. What happens if the handler blocks are swapped?
<br />
##### Error-handling strategies
There are at least three strategies for handling errors:
* via return codes
* via exceptions
* pretending that errors do not exist
A typical implementation of a function where errors are handled via return codes:
```c++
int get_youngest_student(std::string& student_name)
{
    int err_code = 0;

    // fetch all students from the DB (example: propagating an error)
    std::vector<Student> students;
    err_code = fetch_students(students);
    if (err_code != ErrCode::OK)
        return err_code;

    // find the youngest one (example: manual control flow)
    auto youngest_it = std::min_element(students.begin(),
                                        students.end(),
                                        [](const auto& lhs, const auto& rhs){
                                            return lhs.age < rhs.age;
                                        });
    if (youngest_it == students.end())
        return ErrCode::NoStudents;

    // fetch their name from the DB (example: partial handling)
    err_code = fetch_student_name(youngest_it->id, student_name);
    if (err_code != ErrCode::OK)
        if (err_code == ErrCode::NoObject)
            return ErrCode::CorruptedDatabase;
        else
            return err_code;

    return ErrCode::OK;
}
```
A typical implementation using exceptions:
```c++
std::string get_youngest_student()
{
    // fetch all students from the DB (example: propagating an error)
    std::vector<Student> students = fetch_students();

    // find the youngest one (example: manual control flow)
    auto youngest_it = std::min_element(students.begin(),
                                        students.end(),
                                        [](const auto& lhs, const auto& rhs){
                                            return lhs.age < rhs.age;
                                        });
    if (youngest_it == students.end())
        throw std::runtime_error("students set is empty");

    // fetch their name from the DB (example: partial handling)
    try
    {
        return fetch_student_name(youngest_it->id);
    }
    catch(const MyDBExceptions::NoObjectException& exception)
    {
        throw MyDBExceptions::CorruptedDatabase();
    }
}
```
A typical implementation when errors are ignored:
```c++
std::string get_youngest_student()
{
    // fetch all students from the DB (example: propagating an error)
    std::vector<Student> students = fetch_students();  // throws nothing;
        // no way to find out that DB access failed

    // find the youngest one (example: manual control flow)
    auto youngest_it = std::min_element(students.begin(),
                                        students.end(),
                                        [](const auto& lhs, const auto& rhs){
                                            return lhs.age < rhs.age;
                                        });
    if (youngest_it == students.end())
        return "UNK";  // cannot distinguish "no students in the DB at all"
                       // from "a student named UNK is in the DB"

    // fetch their name from the DB (example: partial handling)
    return fetch_student_name(youngest_it->id);  // throws nothing;
        // cannot distinguish "the student is missing from the names table"
        // from "a student exists with the name UNK"
}
```
**In practice: a reasonable balance between error detail and program complexity.**
<br />
##### How to throw and catch exceptions; stack unwinding
At the moment of the exceptional situation:
```c++
double my_sqrt(double x)
{
    if (x < 0)
        throw std::invalid_argument("sqrt can not be calculated for negatives in terms of double");
    ...;
}
```
Then stack unwinding begins, together with the search for the matching catch block:
(walk through the example; find 3 errors in this code besides "oooops")
<details>
<summary>Hint</summary>
<p>
1. `logger` is leaked
2. `front` is called without checks
3. the order of the handlers
</p>
</details>
```c++
double get_radius_of_first_polyline_point()
{
    auto* logger = new Logger();
    std::vector<Point> polyline = make_polyline();
    Point p = polyline.front();
    logger->log("point is " + std::to_string(p.x) + " " + std::to_string(p.y));
    double r = my_sqrt(p.x * p.x - p.y * p.y); // ooops
    logger->log("front radius is " + std::to_string(r));
    delete logger;
    return r;
}

void func()
{
    try
    {
        std::cout << get_radius_of_first_polyline_point() << std::endl;
    }
    catch (const std::exception& e)
    {
        std::cout << "unknown exception: " << e.what() << std::endl;
    }
    catch (const std::invalid_argument& e)
    {
        std::cout << "aren't we trying to calculate sqrt of negative? " << e.what() << std::endl;
    }
    catch (...) // you should never do that
    {
        std::cout << "what?" << std::endl;
    }
}
```
__Question__:
* Which operations in this code can throw an exception?
* In `catch (const std::invalid_argument& e)`, how do you tell the square root of a negative number apart from other `std::invalid_argument` cases?
* What happens in this variant?
```c++
catch (const std::invalid_argument e)
{
    std::cout << "aren't we trying to calculate sqrt of negative? " << e.what() << std::endl;
}
```
* And in this one?
```c++
catch (std::invalid_argument& e)
{
    std::cout << "aren't we trying to calculate sqrt of negative? " << e.what() << std::endl;
}
```
<br />
##### noexcept
If a function throws no exceptions, it is advisable to mark it `noexcept`.
Calling such a function is slightly cheaper and the binary is slightly smaller (no exception-support code has to be generated).
What happens if a `noexcept` function attempts to throw an exception?
```c++
int get_sum(const std::vector<int>& v) noexcept
{
    return std::reduce(v.begin(), v.end());
}

int get_min(const std::vector<int>& v) noexcept
{
    if (v.empty())
        throw std::invalid_argument("can not find minimum in empty sequence");
    return *std::min_element(v.begin(), v.end());
}
```
<details>
<summary>Answer</summary>
std::terminate is called. The program cannot continue, because the function promised the outside code that it does not throw.
Moreover, if the compiler cannot prove that the function body does not throw, it wraps the whole function in a try-catch block with std::terminate in the catch.
</details>
<br />
##### Standard and custom exception classes
The standard exceptions in C++ share a common base class, `std::exception`:
```c++
class exception
{
public:
    exception() noexcept;
    virtual ~exception() noexcept;
    exception(const exception&) noexcept;
    exception& operator=(const exception&) noexcept;

    virtual const char* what() const noexcept;
};
```
The other standard exceptions derive from it:

How to throw standard exceptions:
```c++
// check for zero before dividing
if (den == 0)
    throw std::runtime_error("unexpected integer division by zero");

// validate the arguments
bool binsearch(const int* a, const int n, const int value)
{
    if (n < 0)
        throw std::invalid_argument("unexpected negative array size");
    ...;
}
```
<br />
Why write your own exception classes?
* more detailed errors
* the ability to attach extra information to the exception class
Recommendations:
* derive your classes from `std::exception`
* if possible, organize your exceptions into a hierarchy (so users can catch either a generic library error or more specific ones)
* if possible, provide information for diagnosis and recovery
Consider an example: you are writing your own JSON reader/writer (why? there are already millions of them!)
```c++
namespace myjson
{

// common base for this parsing library's exceptions,
// so users can simply catch the event
// "something went wrong in this library"
class MyJsonException : public std::exception {};

// exception for the case when a requested field
// is missing from an object
//
// we can give the exception handler more information
// for smarter recovery or at least
// more detailed logging of the problem
class FieldNotFound : public MyJsonException
{
public:
    FieldNotFound(std::string parent_name, std::string field_name);

    const std::string& parent_name() const noexcept;
    const std::string& field_name() const noexcept;

private:
    std::string parent_name_;
    std::string field_name_;
};

// exception for errors while parsing a json string
//
// we can record at which character
// the parsing failed
class ParsingException : public MyJsonException
{
public:
    ParsingException(int symbol_ix);

    int symbol_ix() const noexcept;

private:
    int symbol_ix_;
};

// exception for errors while parsing an int - a refinement of |ParsingException|
class IntegerOverflowOnParsing : public ParsingException {};

// etc.

} // namespace myjson
```
Again, in practice you pick a balance between detail and complexity.
The deeper the detail, the more complex the program, and beyond a certain level extra detail brings little benefit.
<br />
##### Exceptions in constructors and destructors
Why are exceptions useful in constructors?
Because a constructor has no other way to report an error!
```c++
std::vector<int> v = {1, 2, 3, 4, 5};
// there is no other (sane) way to tell the calling code
// that memory ran out and the vector was not created,
// other than throwing an exception
```
**Example**: the order of constructor and destructor calls
```c++
class M { ...; };
class B { ...; };
class D : public B {
public:
    D() {
        throw std::runtime_error("error");
    }
private:
    M m_;
};
```
Which constructors and destructors will be called?
```c++
try {
    D d;
}
catch (const std::exception& e) {
}
```
And in this case?
```c++
class M { ...; };
class B { ...; };
class D : public B {
public:
    D() : D(0) {
        throw std::runtime_error("error");
    }
    D(int x) {}
private:
    M m_;
};
```
What about exceptions thrown from destructors?
```c++
class D
{
public:
    D() {}
    ~D() {
        throw std::runtime_error("can not free resource");
    }
};
```
* Throwing exceptions from destructors is "bad"
* By default a destructor is `noexcept` (unless base classes or members introduce specific problems)
* If a destructor throws during stack unwinding, the program terminates via `std::terminate` per the standard: https://en.cppreference.com/w/cpp/language/destructor (the Exceptions section) "you can not fail to fail"
__Exercise__: to understand why destructors must not throw, try to imagine at your leisure how to correctly implement `resize` for `std::vector<T>` when `~T` sometimes throws an exception
<br />
##### Exception guarantees
* `nothrow` - the function throws no exceptions
```c++
int swap(int& x, int& y)
```
* `strong` - the function works like a transaction: if an exception escapes, the program state rolls back to what it was before the call.
```c++
std::vector::push_back
```
* `basic` - if an exception escapes, the program is still correct (invariants hold). Cleanup may be required.
```c++
void write_to_csv(const char* filename)
{
    std::ofstream ofs(filename);
    ofs << "id,name,age" << std::endl;
    std::vector<std::string> names = ...; // bad_alloc
    ofs << ...;
}
```
* `no exception guarantee` - if an exception escapes the function, start praying
```
any production code (see the stories about error handling in file systems)
```
* `exception-neutral` - for template components only - lets through all exceptions thrown by its template parameters
```c++
std::min_element(v.begin(), v.end())
```
<br />
##### The cost of exceptions
Implementation-dependent.
Reportedly, the major compilers implement exceptions on the principle "free until an exception is thrown, expensive once one is". When an exception is thrown, special exception frames are created, and handlers and cleanup procedures are looked up in tables generated by the compiler ahead of time.
More details:
* https://stackoverflow.com/questions/13835817/are-exceptions-in-c-really-slow
* https://mortoray.com/2013/09/12/the-true-cost-of-zero-cost-exceptions/
At the same time, the exception-support code itself has to be generated. An article on how Microsoft measured how much space the exception machinery takes (spoiler: around 26% for one particular library; it depends on the number of exceptions, of throwing functions, of throw sites, the complexity of objects on the stack, and so on) and how they cut it roughly in half:
https://devblogs.microsoft.com/cppblog/making-cpp-exception-handling-smaller-x64/
<br />
##### noexcept move operations
Explain, using the classic `std::vector::push_back` example, how declaring move operations `noexcept` lets the program run faster:

Similar optimizations are allowed when working with std::any for nothrow-move-constructible types<br />
https://en.cppreference.com/w/cpp/utility/any
<br />
##### Good practices for exceptions
* Destructors must not throw exceptions. You may mark them `noexcept`, or rely on the compiler adding `noexcept` to destructors automatically in most cases.
* Implement move operations and mark them `noexcept`
* Implement the default constructor and mark it `noexcept` (per cppcoreguidelines, for speed)
* `noexcept` everything you can!
* A quote from cppcoreguidelines: `If you know that your application code cannot respond to an allocation failure, it may be appropriate to add noexcept even on functions that allocate.` Explain its meaning in terms of recovering from errors and why this apparent "illogicality" is useful.
* Derive custom exception classes from `std::exception` or its subclasses
* Catch exceptions by const reference
* Use `throw;` instead of `throw e;` in a catch block when a rethrow is needed
* Exceptions are part of a function's contract (or specification)! They should be tested.
* Use exceptions for exceptional situations, not for the normal flow of execution.
* Bad code:
  * an exception to break out of a loop
  * an exception to produce a special return value from a function
* Acceptable:
  * an exception to report an error
  * an exception to report a contract violation (explain why this is not the best use of exceptions)
* Exceptions exist so that the program can recover from an error and keep working:
  * skip non-critical actions. Example:
    * displaying an organization's phone number in an information sheet
  * fallback with recovery or rollback:
    * memory-consuming algorithms
    * error message boxes (msword failed to open a document)
  * die gracefully on critical errors:
    * memory allocation at game start
    * an invalid command in a text interpreter
* A global try-catch (pros: the program exits without crashing and destructors of objects on the stack are called (without a catch block, the stack-unwinding procedure may be skipped - a loophole in the standard); con: no crash dump is produced for analyzing the problem):
```c++
int main() {
    try {
        ...
    } catch(const std::exception& e) {
        std::cout << "Until your last breath!\n";
        std::cout << "ERROR: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}
```
<br />
**Useful materials**:
* [C++ Russia: Roman Rusyaev - "C++ exceptions through the lens of compiler optimizations"](https://www.youtube.com/watch?v=ItemByR4PRg)
* [CppCon 2018: James McNellis "Unwinding the Stack: Exploring How C++ Exceptions Work on Windows"](https://youtu.be/COEv2kq_Ht8)
```
# fundamentals
import os, sys
import numpy as np
import pandas as pd
from calendar import monthrange, month_name
import scipy.stats as stats
import datetime
import imp
import scipy.io as sio
import pickle as pkl
# plotting libraries and setup
from matplotlib.colors import BoundaryNorm
import matplotlib.pyplot as plt
%matplotlib inline
plt.rc('font', family='serif')
plt.rc('font', size=12)
plt.rc('figure', facecolor='white')
# met mast functions and utilities
sys.path.append('../')
import met_funcs as MET
import vis as vis
import utils as utils
# # to read .mat files
# import h5py
datapath = '/Users/nhamilto/Documents/Wake_Dynamics/SiteChar/data/IEC_4/'
monthly_events_files = os.listdir(datapath)
monthly_events_files
EWS_events_files = [file for file in monthly_events_files if 'EWS' in file]
EOG_events_files = [file for file in monthly_events_files if 'EOG' in file]
EDC_events_files = [file for file in monthly_events_files if 'EDC' in file]
ETM_events_files = [file for file in monthly_events_files if 'ETM' in file]
EWS_events = pd.DataFrame()
for file in EWS_events_files:
tmp = pd.read_csv(os.path.join(datapath, file))
EWS_events = pd.concat([EWS_events, tmp])
params = MET.setup_IEC_params()
alpha_pos = np.load(
'/Users/nhamilto/Documents/Wake_Dynamics/SiteChar/data/pos_alpha_limit.npy'
)
alpha_neg = np.load(
'/Users/nhamilto/Documents/Wake_Dynamics/SiteChar/data/neg_alpha_limit.npy'
)
alpha_reference_velocity = np.load(
'/Users/nhamilto/Documents/Wake_Dynamics/SiteChar/data/alpha_reference_velocity.npy'
)
EWSfilt = EWS_events[(EWS_events['alpha_min'].abs() < 10) & (EWS_events['alpha_max'].abs() < 10)]
fig, ax = plt.subplots(figsize=(5,3))
ax.plot(alpha_reference_velocity, alpha_pos, 'k')
ax.plot(alpha_reference_velocity, alpha_neg, 'k')
EWSfilt[(EWSfilt['alpha_min'] < EWSfilt['alpha_neg_limit'])].plot.scatter('WS_mean', 'alpha_min', ax=ax, color='C1')
EWSfilt[(EWSfilt['alpha_max'] > EWSfilt['alpha_pos_limit'])].plot.scatter('WS_mean', 'alpha_max', ax=ax, color='C2')
ax.set_xlabel('Hub-Height Velocity [m/s]')
ax.set_ylabel('Shear Exponent [-]')
fig.tight_layout()
# fig.savefig()
EOG_events = pd.DataFrame()
for file in EOG_events_files:
    # read with the first column as the index so the event timestamps survive the concat
    tmp = pd.read_csv(os.path.join(datapath, file), index_col=0)
    EOG_events = pd.concat([EOG_events, tmp])
EOG_events.index = pd.DatetimeIndex(EOG_events.index)
hourly_EOG_events = EOG_events.groupby(EOG_events.index.hour).count()
hourly_EOG_events.plot.bar(y='WS_max', color='C1')
monthly_EOG_events = EOG_events.groupby(EOG_events.index.month).count()
monthly_EOG_events.plot.bar(y='WS_max', color='C1')
```
```
import numpy as np
from sklearn.datasets import load_iris
# Loading the dataset
iris = load_iris()
X_raw = iris['data']
y_raw = iris['target']
# Isolate our examples for our labeled dataset.
n_labeled_examples = X_raw.shape[0]
training_indices = np.random.randint(low=0, high=n_labeled_examples, size=3)
# Defining the training data
X_training = X_raw[training_indices]
y_training = y_raw[training_indices]
# Isolate the non-training examples we'll be querying.
X_pool = np.delete(X_raw, training_indices, axis=0)
y_pool = np.delete(y_raw, training_indices, axis=0)
from sklearn.decomposition import PCA
# Define our PCA transformer and fit it onto our raw dataset.
pca = PCA(n_components=2)
pca.fit(X=X_raw)
from modAL.models import ActiveLearner
from modAL.batch import uncertainty_batch_sampling
from sklearn.neighbors import KNeighborsClassifier
# Specify our core estimator.
knn = KNeighborsClassifier(n_neighbors=3)
learner = ActiveLearner(
estimator=knn,
query_strategy=uncertainty_batch_sampling,
X_training=X_training, y_training=y_training
)
from modAL.batch import ranked_batch
from modAL.uncertainty import classifier_uncertainty
from sklearn.metrics.pairwise import pairwise_distances
uncertainty = classifier_uncertainty(learner, X_pool)
distance_scores = pairwise_distances(X_pool, X_training, metric='euclidean').min(axis=1)
similarity_scores = 1 / (1 + distance_scores)
alpha = len(X_training)/len(X_raw)
scores = alpha * (1 - similarity_scores) + (1 - alpha) * uncertainty
import matplotlib.pyplot as plt
%matplotlib inline
transformed_pool = pca.transform(X_pool)
transformed_training = pca.transform(X_training)
with plt.style.context('seaborn-white'):
plt.figure(figsize=(8, 8))
plt.scatter(transformed_pool[:, 0], transformed_pool[:, 1], c=scores, cmap='viridis')
plt.colorbar()
plt.scatter(transformed_training[:, 0], transformed_training[:, 1], c='r', s=200, label='labeled')
plt.title('Scores of the first instance')
plt.legend()
query_idx, query_instances = learner.query(X_pool, n_instances=5)
transformed_batch = pca.transform(query_instances)
with plt.style.context('seaborn-white'):
plt.figure(figsize=(8, 8))
plt.scatter(transformed_pool[:, 0], transformed_pool[:, 1], c='0.8', label='unlabeled')
plt.scatter(transformed_training[:, 0], transformed_training[:, 1], c='r', s=100, label='labeled')
plt.scatter(transformed_batch[:, 0], transformed_batch[:, 1], c='k', s=100, label='queried')
plt.title('The instances selected for labeling')
plt.legend()
```
```
import collections
import pickle
import os
import numpy as np
import pandas as pd
import tensorflow as tf
tf.config.experimental.list_physical_devices('GPU')
```
# Utils
```
image_size=(54,128)
def savepickle(fname,*args):
with open(fname+"_pk","wb") as f:
pickle.dump(args,f)
def loadpickle(fname):
with open(fname,"rb") as f:
obj = pickle.load(f)
return obj
def reformat(x):
"""
input: x = array([[array([5.]), 1])
output: x = array([5,0,0,0,0])
"""
x = list(x[0])
    # keep only the first 5 digits, since the model outputs at most 5
x = x[:5]
p = len(x)
x = x + [0]*(5-p)
return np.array(x)
def load_data_len_only(rootdir,pk="digitStruct.mat_pk",num_only=True):
#pk_path = os.path.join(rootdir,pk)
image_names,labels = loadpickle(pk)
labels_x_len = labels[:,-1:]
labels_x_len[labels_x_len>5]=6
labels_x_len = labels_x_len.astype(float)
labels_x_len = tf.keras.utils.to_categorical(labels_x_len,7)
image_names = np.array([os.path.join(rootdir,x) for x in image_names])
return image_names,labels_x_len
def load_data(rootdir,pk="digitStruct.mat_pk",num_only=True):
#pk_path = os.path.join(rootdir,pk)
image_names,labels = loadpickle(pk)
labels_x_len = labels[:,-1:]
labels_x_len[labels_x_len>5]=6
labels_x_len = labels_x_len.astype(float)
labels_num = np.apply_along_axis(reformat,1,labels)
labels = np.concatenate((labels_x_len,labels_num),axis=1)
image_names = np.array([os.path.join(rootdir,x) for x in image_names])
if num_only:
labels_num = tf.keras.utils.to_categorical(labels_num,11)
return image_names,labels_num
return image_names,labels
def to_df(x,y):
xdf = pd.DataFrame(x,columns=["filename"])
    # this construction is problematic; better: dataset = pd.DataFrame({'Column1': data[:, 0], 'Column2': data[:, 1]})
ydf = pd.DataFrame(y,columns=["len","1","2","3","4","5"])
ydf = ydf.astype(int)
return pd.concat([xdf,ydf],axis=1)
def to_one_hot(n_arr,cls_num):
onehot = np.zeros((n_arr.shape[1],cls_num))
onehot[np.arange(n_arr.shape[1]),n_arr]=1
return onehot
def preprocess_image(image):
image = tf.image.decode_jpeg(image, channels=3)
image = tf.image.resize(image, image_size)
image /= 255.0 # normalize to [0,1] range
image -= tf.math.reduce_mean(image,keepdims=True)
image /=tf.math.reduce_std(image,keepdims=True)
return image
def load_and_preprocess_image(path):
image = tf.io.read_file(path)
return preprocess_image(image)
def datset_gen(images,lables,batch_size=16,buffer_size=1000):
AUTOTUNE = tf.data.experimental.AUTOTUNE
path_ds = tf.data.Dataset.from_tensor_slices(images)
image_ds = path_ds.map(load_and_preprocess_image, num_parallel_calls=AUTOTUNE)
label_ds = tf.data.Dataset.from_tensor_slices(lables)
img_lab_ds = tf.data.Dataset.zip((image_ds, label_ds))
return img_lab_ds.shuffle(buffer_size=buffer_size).repeat().batch(batch_size)
```
## Models
```
import matplotlib.pyplot as plt
def plt_images(x,ylab=None,is_path=True,num=8):
indices = np.arange(x.shape[0])
n=1
for i in np.random.choice(indices,num):
plt.subplot(4,4,n)
img = x[i]
if is_path:
img = tf.io.read_file(img)
img = tf.image.decode_jpeg(img, channels=3)
plt.imshow(img)
#plt.axis("off")
n+=2
if ylab is not None:
yl = str(ylab[i])
xs = str(img.shape)
plt.text(0,-10,"".join((yl,xs)),ha="left", va="bottom", size="medium",color="red")
plt.show()
import tensorflow as tf
from tensorflow import keras
from functools import partial
import numpy as np
from tensorflow.keras.layers import Flatten,Dense,Activation,MaxPool2D,GlobalAvgPool2D,BatchNormalization
from tensorflow.keras.layers import Input,Conv2D,Lambda,Dropout
from tensorflow.keras import Model
DefaultConv2D = partial(keras.layers.Conv2D, kernel_size=3, strides=1,kernel_initializer='random_uniform',
padding="SAME", use_bias=False)
def test_cnn(input_):
#image_in_vision = Input(shape=(image_size[0],image_size[1],3))
x = BatchNormalization()(input_)
x = DefaultConv2D(64, strides=1, activation='relu')(x)
pre_filter = 64
for filter in [64]*1+[128]*2+[256]*2 :
strides = 1 if filter == pre_filter else 2
x = DefaultConv2D(filter, strides=strides, activation='relu')(x)
x = BatchNormalization()(x)
x = MaxPool2D(pool_size=(2,2), padding="SAME")(x)
#x = Dropout(0.2)(x)
x = Flatten()(x)
x = Dense(1024, activation='relu')(x)
x = Dropout(0.2)(x)
#x = BatchNormalization()(x)
#x = Dense(1024, activation='relu')(x)
h = BatchNormalization()(x)
return h
def svhn_model_1(input_shape=[54,128,3],N=5,class_num=11):
X=Input(shape=input_shape)
y = test_cnn(X)
#y = Dense(192,activation="relu",kernel_initializer='he_normal')(y)
y = Dense(512,activation="relu")(y)
S = Dense(N+2,activation="softmax")(y)
return Model(inputs=X,outputs=S)
def svhn_model_simple(input_shape=[54,128,3],N=5,class_num=11):
X=Input(shape=input_shape)
y = test_cnn(X)
#y = Dense(192,activation="relu",kernel_initializer='he_normal')(y)
#y = Dense(512,activation="relu",kernel_initializer='he_normal')(y)
#S = [ Dense(class_num,activation="softmax",kernel_initializer='he_normal',name="cls_".join(str(n)))(y) for n in range(N) ]
S = []
for n in range(N):
sn = Dense(512,activation="relu",kernel_initializer='he_normal')(y)
sn = Dense(512,activation="relu",kernel_initializer='he_normal')(sn)
        sn = Dense(class_num,activation="softmax",kernel_initializer='he_normal',name="cls_"+str(n))(sn)
S.append(sn)
S = tf.stack(S,axis=1)
#S = Dense(N+2,activation="softmax",kernel_initializer='he_normal')(y)
return Model(inputs=X,outputs=S)
```
## Train
```
root_path = "/kaggle/input/street-view-house-numbers/"
train_path ="/kaggle/input/svhnpk/train_digitStruct.mat_pk"
test_path = "/kaggle/input/svhnpk/test_digitStruct.mat_pk"
extra_path = "/kaggle/input/svhnpk/extra_digitStruct.mat_pk"
is_num_only=True
X_test,y_test = load_data_len_only(root_path+"test/test/",test_path,num_only=is_num_only)
X_train,y_train = load_data_len_only(root_path+"train/train/",train_path,num_only=is_num_only)
X_extra,y_extra = load_data_len_only(root_path+"extra/extra/",extra_path,num_only=is_num_only)
X_train = np.concatenate([X_train,X_extra])
y_train = np.concatenate([y_train,y_extra])
#y_train = y_train.reshape((-1,5,1))
#y_test = y_test.reshape((-1,5,1))
y_train.shape
batch_size = 16
ds_train = datset_gen(X_train,y_train,batch_size=batch_size,buffer_size=100)
ds_test = datset_gen(X_test,y_test,batch_size=batch_size,buffer_size=100)
print(X_train.shape,y_train.shape)
for x,y in ds_train.take(1):
plt_images(x,None,False)
model_save_file="chk.h5"
tensorboard_cb = keras.callbacks.TensorBoard("run_logdir")
checkpoint_cb = keras.callbacks.ModelCheckpoint(filepath="checkpoint_"+model_save_file, verbose=1)
#model.summary()
loss_cce = tf.keras.losses.CategoricalCrossentropy(from_logits=False)  # the model outputs softmax probabilities, not logits
#loss_cce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model = svhn_model_1(input_shape=[54,128,3])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
# - the model has 5 digit positions; a loss is computed for each position separately
model.compile(optimizer=optimizer,
loss=loss_cce,
metrics=['accuracy'])
num_epoches=15
def scheduler(epoch):
if epoch < 5:
return 0.01
if epoch < 10:
return 0.001
return 0.001 * tf.math.exp(0.1 * (10 - epoch))
lr_cb = tf.keras.callbacks.LearningRateScheduler(scheduler)
with tf.device("/GPU:0"):
    history = model.fit(ds_train,
                        steps_per_epoch=X_train.shape[0]//batch_size,
                        epochs=num_epoches,
                        validation_data=ds_test,
                        validation_steps=100, callbacks=[lr_cb, tensorboard_cb, checkpoint_cb])
weights = model.get_weights()
model.save("model.cnnorg.h5")
savepickle("weitgh_only.h5",weights)
#-- visualize --
import matplotlib.pyplot as plt
#import matplotlib as mpl
#mpl.rcParams['figure.figsize'] = (8, 6)
#mpl.rcParams['axes.grid'] = False
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(num_epoches)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
# class LossHistory(keras.callbacks.Callback):
# def on_train_begin(self, logs={}):
# self.losses = []
# def on_batch_end(self, batch, logs={}):
# self.losses.append(logs.get('loss'))
savepickle("history_only.h5",history)
```
```
import pandas as pd
import numpy as np
import os
from sklearn.metrics import mean_squared_error, mean_absolute_error
import matplotlib.pyplot as plt
import pickle
import random
import train
from model import NNModelEx
pd.set_option('display.max_columns', 999)
# For this model, the data preprocessing part is already completed with the exception of scaling.
# so we just need to scale here.
def get_ref_X_y(df):
X_cols = [c for c in df.columns if c.startswith('tc2x_')]
y_cols = [c for c in df.columns if c.startswith('y')]
return (df[X_cols], df[y_cols])
raw_data = {} # loads raw data and stores as a dict cache
def dataset_key(dataset='', validation=False):
return dataset+('test' if validation else 'train')
def load_data(raw, dataset='', validation=False):
'''
Return dataframe matching data set and validation. Dictionary input will be updated.
Parameters
----------
raw : dict
dictionary which caches the dataframes and will be updated accordingly
dataset : str
which dataset to use? valid input includes: empty str for full set, sample_, and secret_
validation : bool
load validation set? if true then use _test, otherwise use _train. Note secret_ doesn't have _train
'''
key = dataset+('test' if validation else 'train')
if key not in raw:
print(f"Loading data to cache for: {key}")
raw[key] = pd.read_pickle(f'./data/{key}.pkl')
return raw[key]
configurations = {
'dataset' : 't3/', # '', 'sample_', 'secret_'
'model_identifier' : "tc2_4",
'model_path' : f"./models",
'model': NNModelEx,
'device' : 'cpu',
'random_seed' : 0,
'lr' : 3e-3,
'weight_decay' : 0.3, #Adam
'max_epochs' : 50000,
'do_validate' : True,
'model_definition' : [
('l', (600,)), ('r', (True,)),
('l', (600,)), ('r', (True,)),
('l', (600,)), ('r', (True,)),
('l', (600,)), ('r', (True,)),
('l', (600,)), ('r', (True,)),
('l', (600,)), ('r', (True,)),
('l', (600,)), ('r', (True,)),
('l', (1,)), ('r', (True,)),
],
'train_params' : {
'batch_size': 10000,
'shuffle': True,
'num_workers': 3,
'pin_memory': True,
},
'test_params' : {
'batch_size': 200000,
'num_workers': 1,
'pin_memory': True,
},
}
%%time
train_df = load_data(raw_data,dataset=configurations['dataset'],validation=False)
test_df = load_data(raw_data,dataset=configurations['dataset'],validation=True)
X_train, y_train = get_ref_X_y(train_df)
X_test, y_test = get_ref_X_y(test_df)
import torch
model, _, _, mean_losses, _ = train.load_model_with_config(configurations)
tl, vl = zip(*mean_losses)
fig,ax = plt.subplots()
ax.plot(tl, label="Training Loss")
ax.plot(vl, label="Validation Loss")
fig.legend()
plt.show()
mean_losses
trained_model = model
y_train_pred = train.predict(trained_model, X_train, y_train, device="cpu") # get predictions for each train
y_train_pred_df = pd.DataFrame(y_train_pred, columns=y_train.columns) # put results into a dataframe
print(f' Train set MAE (L1) loss: {mean_absolute_error(y_train, y_train_pred_df)}')
print(f' Train set MSE (L2) loss: {mean_squared_error(y_train, y_train_pred_df)}')
# random.seed(0)
# sample = random.sample(list(y_train_pred_df.index), 10)
print("Train")
train_res = pd.concat([y_train, y_train_pred_df], axis=1)
train_res.columns = ['Ground Truth', 'Pred']
train_res['binarize'] = (train_res['Pred'] > 0.5).astype(float)
train_res['correct'] = train_res['Ground Truth'] == train_res['binarize']
display(train_res)
train_res[train_res['Ground Truth']==1]['correct'].value_counts()
train_res[train_res['Ground Truth']==0]['correct'].value_counts()
from sklearn.metrics import confusion_matrix
confusion_matrix(y_train, (y_train_pred > 0.5))
```
```
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
import matplotlib.pyplot as plt
ir= pd.read_csv("C:\\Users\\dhima\\anaconda3\\6th Week\\Iris.csv")
ir1=ir.copy()
ir1
ir1.isnull().sum()
y_true=ir1['Species']
y_true
```
# Ques 1. Apply PCA and select the first two directions to convert the data into 2D. (Exclude the attribute “Species” for PCA)
```
ir2=ir1.iloc[0:,1:5]
ir2
#from sklearn.preprocessing import StandardScaler
#scaler = StandardScaler()
#ir2_ss=ir2.copy()
#StandardScaler().fit(ir2_ss)
#ir2_ss=scaler.fit_transform(ir2_ss)
#ir2_ss
from sklearn.decomposition import PCA
pca = PCA(2)
df = pca.fit_transform(ir2)
pca.n_components_
df.shape
```
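Before clustering on the two PCA directions, it is worth checking how much of the variance they retain. A small stand-alone sketch on the same four iris features (the value quoted is approximate):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris()['data']                  # the four numeric iris attributes
pca = PCA(n_components=2).fit(X)
retained = pca.explained_variance_ratio_.sum()
print(round(retained, 3))                # roughly 0.98 for iris
```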
# 2. Apply K-means (K=3) clustering on the reduced data. Plot the data points in these clusters (use different colors for each cluster). Obtain the sum of squared distances of samples to their closest cluster center. (Use kmeans.fit to train the model and kmeans.labels_ to obtain the cluster labels.)
```
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
clus=[2,3,4,5,6,7]
for y,z in enumerate(clus):
kmeans = KMeans(n_clusters=z, max_iter=600, algorithm = 'auto')
#predict the labels of clusters.
label = kmeans.fit_predict(df)
print(label)
# Final locations of the centroid
print(kmeans.cluster_centers_)
#filter rows of original data
#filtered_label0 = df[label == 0]
#filtered_label1 = df[label == 1]
#filtered_label2 = df[label == 2]
#plotting the results
#plt.scatter(filtered_label0[:,0] , filtered_label0[:,1] , color = 'blue')
#plt.scatter(filtered_label1[:,0] , filtered_label1[:,1] , color = 'black')
#plt.scatter(filtered_label2[:,0] , filtered_label2[:,1] , color = 'red')
#Getting the Centroids
centroids = kmeans.cluster_centers_
centroids
#Getting unique labels
u_labels = np.unique(label)
#plotting the results:
for i in u_labels:
plt.scatter(df[label == i , 0] , df[label == i , 1] , label = i)
plt.scatter(centroids[:,0] , centroids[:,1] , color = 'k')
plt.legend()
plt.show()
```
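The task above also asks for the sum of squared distances of samples to their closest cluster center; scikit-learn exposes exactly this as the fitted estimator's `inertia_` attribute. A minimal sketch on stand-in 2-D data (the notebook itself would use `df`, the PCA output):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X = load_iris()['data'][:, :2]                        # stand-in for the 2-D PCA output
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.inertia_)                                    # sum of squared distances to centroids
```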
# Obtain the purity score for the Kmeans clustering (K=3).
```
import numpy as np
from sklearn import metrics
def purity_score(y_true, y_pred):
# compute contingency matrix (also called confusion matrix)
contingency_matrix = metrics.cluster.contingency_matrix(y_true, y_pred)
# return purity
return np.sum(np.amax(contingency_matrix, axis=0)) / np.sum(contingency_matrix)
kmeans = KMeans(n_clusters=3, max_iter=600, algorithm='auto')
# predict the labels of the K=3 clusters
label3 = kmeans.fit_predict(df)
purity_score(y_true,label3)
```
# 3. Build a GMM with 3 components (use GMM.fit) on the reduced data. Use this GMM to cluster the data points (Use GMM.predict). Plot the points in these clusters.
```
from sklearn.mixture import GaussianMixture
gmm = GaussianMixture(n_components=3)
gmm.fit(df)
print(gmm.means_)
print('\n')
print(gmm.covariances_)
for z in range(2,8,1):
gmm = GaussianMixture(n_components=z)
gmm.fit(df)
#predictions from gmm
labels = gmm.predict(df)
    frame = pd.DataFrame(df, columns=['PC1', 'PC2'])  # the two PCA components, not raw sepal measurements
    frame['cluster'] = labels
    color = ['blue','green','black','red','grey','yellow','cyan']
    for k in range(0,z):
        data = frame[frame["cluster"]==k]
        plt.scatter(data["PC1"], data["PC2"], c=color[k])
plt.show()
```
# 4. Obtain the purity score for the GMM clustering (K=3).
```
gmm = GaussianMixture(n_components=3)
gmm.fit(df)
#predictions from gmm
labels3 = gmm.predict(df)
purity_score(y_true,labels3)
```
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
credit_df = pd.read_csv('German Credit Data.csv')
credit_df
credit_df.info()
X_features = list(credit_df.columns)
X_features.remove('status')
X_features
encoded_df = pd.get_dummies(credit_df[X_features],drop_first = True)
encoded_df
import statsmodels.api as sm
Y = credit_df.status
X = sm.add_constant(encoded_df)
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,Y,test_size = 0.3,random_state = 42)
logit = sm.Logit(y_train,X_train)
logit_model =logit.fit()
logit_model.summary2()
```
## Model Diagnostics using the Chi-Square Test
The model summary suggests that, as per the Wald test, only 8 features are statistically significant at a significance level of alpha = 0.05, as their p-values are less than 0.05.
The p-value for the likelihood ratio test (almost 0.00) indicates that the overall model is statistically significant.
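The likelihood ratio test itself is easy to reproduce by hand: the statistic is twice the gap between the fitted model's log-likelihood and the intercept-only model's, compared against a chi-square distribution with as many degrees of freedom as there are model features. A hedged sketch with made-up numbers (in statsmodels they would come from `logit_model.llf` and `logit_model.llnull`):

```python
from scipy import stats

# Illustrative values only -- not the ones from the summary above.
llf, llnull, df_model = -480.0, -520.0, 20

lr_stat = 2 * (llf - llnull)           # likelihood ratio statistic
p_value = stats.chi2.sf(lr_stat, df_model)
print(lr_stat, p_value)                # a large statistic and a p-value near zero
```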
```
def get_significant_vars(lm):
var_p_values_df = pd.DataFrame(lm.pvalues)
var_p_values_df['vars'] = var_p_values_df.index
var_p_values_df.columns = ['pvals','vars']
return list(var_p_values_df[var_p_values_df['pvals']<=0.05]['vars'])
significant_vars = get_significant_vars(logit_model)
significant_vars
final_logit = sm.Logit(y_train,sm.add_constant(X_train[significant_vars])).fit()
final_logit.summary2()
```
A negative coefficient indicates that as the value of the variable increases, the probability of being a bad credit decreases.
A positive coefficient indicates that the probability of a bad credit increases as the corresponding value of the variable increases.
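A quick way to quantify that statement is to exponentiate a coefficient: exp(beta) is the factor by which the odds of bad credit change per one-unit increase in the variable. A small sketch with an illustrative coefficient (not one taken from the table above):

```python
import numpy as np

beta = -0.8                        # hypothetical negative coefficient
odds_ratio = float(np.exp(beta))
# odds_ratio < 1: each unit increase multiplies the odds of bad credit
# by ~0.45, i.e. the probability of bad credit goes down.
print(round(odds_ratio, 2))        # 0.45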
```
y_pred_df = pd.DataFrame({'actual':y_test,'predicted_prob':final_logit.predict(sm.add_constant(X_test[significant_vars]))})
y_pred_df
```
To understand how many observations the model has classified correctly and how many it has not, a cut-off probability needs to be chosen.
Let us assume a cut-off of 0.5 for now.
```
y_pred_df['predicted'] = y_pred_df['predicted_prob'].map(lambda x: 1 if x>0.5 else 0)
y_pred_df.sample(5)
```
## Creating a Confusion Matrix
```
from sklearn import metrics
def draw_cm(actual,predicted):
    cm = metrics.confusion_matrix(actual, predicted, labels=[1,0])
sns.heatmap(cm,annot=True,fmt= '.2f',
xticklabels=['Bad Credit','Good Credit'],
yticklabels=['Bad Credit','Good Credit'])
plt.ylabel('True Label')
plt.xlabel('Predicted Label')
plt.show()
draw_cm(y_pred_df['actual'],y_pred_df['predicted'])
```
# Building Decision Tree using Gini Criterion
```
Y = credit_df['status']
X = encoded_df
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,Y,test_size= 0.3,random_state=42)
from sklearn.tree import DecisionTreeClassifier
clf_tree = DecisionTreeClassifier(criterion= 'gini',max_depth = 3)
clf_tree.fit(X_train,y_train)
tree_predict = clf_tree.predict(X_test)
metrics.roc_auc_score(y_test,tree_predict)
```
# Displaying the Tree
```
Y = credit_df.status
X =encoded_df
from sklearn.tree import DecisionTreeClassifier,plot_tree
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,Y,test_size = 0.3,random_state = 42)
```
### Using Gini Criterion
```
clf_tree = DecisionTreeClassifier(criterion='gini',max_depth=3)
clf_tree.fit(X_train,y_train)
tree_predict = clf_tree.predict(X_test)
metrics.roc_auc_score(y_test, tree_predict)
## Displaying the Tree
plt.figure(figsize = (20,10))
plot_tree(clf_tree,feature_names=X.columns)
plt.show()
```
### Using Entropy criterion
```
clf_tree_ent = DecisionTreeClassifier(criterion='entropy',max_depth=3)
clf_tree_ent.fit(X_train,y_train)
tree_predict = clf_tree_ent.predict(X_test)
metrics.roc_auc_score(y_test, tree_predict)
## Displaying the Tree
plt.figure(figsize = (20,10))
plot_tree(clf_tree_ent,feature_names=X.columns)
plt.show()
from sklearn.model_selection import GridSearchCV
tuned_params = [{'criterion':['gini','entropy'],
'max_depth':range(2,10)}]
clf_ = DecisionTreeClassifier()
clf = GridSearchCV(clf_,
                   tuned_params, cv=10,
                   scoring='roc_auc')
clf.fit(X_train,y_train)
score = clf.best_score_*100
print("Best Score is:",score)
best_params = clf.best_params_
print("Best Params is:",best_params)
```
The tree with gini and max_depth = 4 is the best model. Finally, we can build a model with these parameters and measure its accuracy on the test set.
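As a hedged sketch of that final refit step (using synthetic data here rather than the credit dataset, so it runs stand-alone; the notebook itself would reuse its own `X_train`/`y_train`):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

# Stand-in data for illustration only.
X, y = make_classification(n_samples=500, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# Refit with the parameters selected by the grid search above.
final_tree = DecisionTreeClassifier(criterion='gini', max_depth=4).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, final_tree.predict_proba(X_te)[:, 1])
print(auc > 0.5)  # comfortably better than chance on this toy problem
```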
# Calculation of Barotropic Streamfunction
```
%matplotlib inline
import matplotlib.pyplot as plt
import cosima_cookbook as cc
from mpl_toolkits.basemap import Basemap, shiftgrid
import numpy as np
import pandas as pd
import netCDF4 as nc
from joblib import Memory
memory = Memory(cachedir='/g/data1/v45/cosima-cookbook/', verbose=0)
## Load tx_trans from expt, pick up one year
tmp = cc.get_nc_variable('mom01v5/KDS75_newbathy_JRA_runoff', 'ocean.nc', 'tx_trans',n=3, time_units = 'days since 1901-01-01').sel(time=slice('1910-01','1910-12')).mean('time').sum('st_ocean', skipna=True)
#tmp = -tmp*1.0e-9
tmp = -tmp
bsf_01_avg = tmp.cumsum('yt_ocean')-tmp.sum('yt_ocean')
#bsf_1 = tmp.cumsum('yt_ocean')
#bsf_2 = tmp.sum('yt_ocean')
#bsf_avg = bsf_1-bsf_2
bsf_01_avg.compute()
del(tmp)
## Load tx_trans from expt
tmp = cc.get_nc_variable('access-om2/1deg_jra55_ryf9091_kds75_sss6p5', 'ocean.nc', 'tx_trans',n=0, time_units = 'days since 1700-01-01').sel(time=slice('1890-01','1900-12')).mean('time').sum('st_ocean', skipna=True)
tmp = -tmp*1.0e-9
bsf_10_avg = tmp.cumsum('yt_ocean')-tmp.sum('yt_ocean')
bsf_10_avg.compute()
del(tmp)
lon_01 = cc.get_nc_variable('mom01v5/KDS75_newbathy_JRA_runoff', 'ocean.nc', 'xu_ocean',n=1, time_units = 'days since 1900-01-01')
lat_01 = cc.get_nc_variable('mom01v5/KDS75_newbathy_JRA_runoff', 'ocean.nc', 'yt_ocean',n=1, time_units = 'days since 1900-01-01')
tx_trans_01 = cc.get_nc_variable('mom01v5/KDS75_newbathy_JRA_runoff', 'ocean.nc', 'tx_trans',n=1, time_units = 'days since 1900-01-01').isel(time=0).isel(st_ocean=0)
bsf_01_mask = bsf_01_avg.where(tx_trans_01.notnull())
lon_01 = np.array(lon_01, dtype=int)
bsf_01_mask, lon_01 = shiftgrid(0., bsf_01_mask, lon_01, start=True)
lon_10 = cc.get_nc_variable('access-om2/1deg_jra55_ryf9091_kds75_sss12', 'ocean.nc', 'xu_ocean',n=1, time_units = 'days since 1900-01-01')
lat_10 = cc.get_nc_variable('access-om2/1deg_jra55_ryf9091_kds75_sss12', 'ocean.nc', 'yt_ocean',n=1, time_units = 'days since 1900-01-01')
tx_trans_10 = cc.get_nc_variable('access-om2/1deg_jra55_ryf9091_kds75_sss12', 'ocean.nc', 'tx_trans',n=1, time_units = 'days since 1900-01-01').isel(time=0).isel(st_ocean=0)
bsf_10_mask = bsf_10_avg.where(tx_trans_10.notnull())
lon_10 = np.array(lon_10, dtype=int)
bsf_10_mask, lon_10 = shiftgrid(0., bsf_10_mask, lon_10, start=True)
fig = plt.figure(figsize=(20,8))
clev = np.arange(-120,180,20)
ax = fig.add_subplot(1, 2, 1)
cax = plt.contourf(lon_01, lat_01, bsf_01_mask, levels=clev, cmap=plt.cm.RdYlBu_r, extend='both')
plt.colorbar(cax)
#plt.imshow(cax1,'gray')
#cbar.outline.set_linewidth(1)
plt.title('Barotropic Streamfunction from MOM-SIS-01', fontsize=16)
ax = fig.add_subplot(1, 2, 2)
cax = plt.contourf(lon_10, lat_10, bsf_10_mask, levels=clev, cmap=plt.cm.RdYlBu_r, extend='both')
plt.colorbar(cax)
plt.title('Barotropic Streamfunction from ACCESS-OM2', fontsize=16)
#plt.imshow(cax1,'gray')
plt.savefig('bsf.png')
```
# Comparing Training and Test and Parking and Sensor Datasets
```
import sys
import pandas as pd
import numpy as np
import datetime as dt
import time
import matplotlib.pyplot as plt
sys.path.append('../')
from common import reorder_street_block, process_sensor_dataframe, get_train, \
feat_eng, add_tt_gps, get_parking, get_test, plot_dataset_overlay, \
parking_join_addr, tt_join_nh
%matplotlib inline
```
### Import City data
```
train = get_train()
train = feat_eng(train)
```
### Import Scraped City Data
```
city_stats = pd.read_csv('../ref_data/nh_city_stats.txt',delimiter='|')
city_stats.head()
```
### Import Parking data with Addresses
```
clean_park = parking_join_addr(True)
clean_park.min_join_dist.value_counts()
clean_park.head(25)
plot_dataset_overlay()
```
### Prototyping Below (joining and Mapping Code)
```
from multiprocessing import cpu_count, Pool
# simple example of parallelizing filling nulls
def parallelize(data, func):
cores = cpu_count()
data_split = np.array_split(data, cores)
pool = Pool(cores)
data = np.concatenate(pool.map(func, data_split), axis=0)
pool.close()
pool.join()
return data
def closest_point(park_dist):
output = np.zeros((park_dist.shape[0], 3), dtype=int)
for i, point in enumerate(park_dist):
x,y, id_ = point
dist = np.sqrt(np.power(gpspts.iloc[:,0]-x,2) + np.power(gpspts.iloc[:,1]-y,2))
output[i,:] = (id_,np.argmin(dist),np.min(dist))
return output
def parking_join_addr(force=False):
save_path = DATA_PATH + 'P_parking_clean.feather'
if os.path.isfile(save_path) and force==False:
print('loading cached copy')
join_parking_df = pd.read_feather(save_path)
return join_parking_df
else:
parking_df = get_parking()
park_dist = parking_df.groupby(['lat','lon'])[['datetime']].count().reset_index()[['lat','lon']]
park_dist['id'] =park_dist.index
gps2addr = pd.read_csv('../ref_data/clean_parking_gps2addr.txt', delimiter='|')
keep_cols = ['full_addr','jlat','jlon','nhood','road','zipcode']
gpspts = gps2addr[['lat','lon']]
lkup = parallelize(park_dist.values, closest_point)
lkup_df = pd.DataFrame(lkup)
lkup_df.columns = ['parking_idx','addr_idx','min_join_dist']
tmp = park_dist.merge(lkup_df, how='left', left_index=True, right_on='parking_idx')
tmp = tmp.merge(gps2addr[keep_cols], how='left', left_on='addr_idx', right_index=True)
join_parking_df = parking_df.merge(tmp, how='left', on=['lat','lon'])
join_parking_df.to_feather(save_path)
return join_parking_df
print("loading parking data 1.7M")
parking_df = get_parking()
park_dist = parking_df.groupby(['lat','lon'])[['datetime']].count().reset_index()[['lat','lon']]
park_dist['id'] =park_dist.index
print("loading address data 30K")
gps2addr = pd.read_csv('../ref_data/clean_parking_gps2addr.txt', delimiter='|')
keep_cols = ['full_addr','jlat','jlon','nhood','road','zipcode']
gpspts = gps2addr[['lat','lon']]
x,y,id_= park_dist.iloc[0,:]
dist = np.sqrt(np.power(gpspts.iloc[:,0]-x,2) + np.power(gpspts.iloc[:,1]-y,2))
np.log(dist)
dist = np.sqrt(np.power(gpspts.iloc[:,0]-x,2) + np.power(gpspts.iloc[:,1]-y,2))
join_parking_df
lkup_df = pd.DataFrame(lkup)
lkup_df.columns = ['parking_idx','addr_idx']
tmp = park_dist.merge(lkup_df, how='left', left_index=True, right_on='parking_idx')
keep_cols = ['full_addr','jlat','jlon','nhood','road','zipcode']
tmp = tmp.merge(gps2addr[keep_cols], how='left', left_on='addr_idx', right_index=True)
tmp = parking_df.merge(tmp, how='left', on=['lat','lon'])
tmp.isna().sum()
gpspts = gps2addr[['lat','lon']]
park_dist['id'] =park_dist.index
park_dist.head()
park_dist.shape, gpspts.shape
def closest_point(park_dist):
output = np.zeros((park_dist.shape[0], 2), dtype=int)
for i, point in enumerate(park_dist):
x,y, id_ = point
dist = np.sqrt(np.power(gpspts.iloc[:,0]-x,2) + np.power(gpspts.iloc[:,1]-y,2))
output[i,:] = (id_,np.argmin(dist))
return output
lkup
122.465370178
gps2addr[(gps2addr['lon'] <= -122.46537) & (gps2addr['lon'] > -122.4654) ].sort_values('lon')
```
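The nearest-neighbour lookup above can be sanity-checked with a tiny self-contained sketch (toy coordinates, plain numpy, no multiprocessing):

```python
import numpy as np

# toy "address" points and two query points (x, y, id) -- hypothetical values
gps = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
queries = np.array([[0.9, 1.1, 0.0], [2.2, 1.9, 1.0]])

out = np.zeros((len(queries), 2), dtype=int)
for i, (x, y, id_) in enumerate(queries):
    # Euclidean distance from this query to every address point
    d = np.sqrt((gps[:, 0] - x) ** 2 + (gps[:, 1] - y) ** 2)
    out[i] = (id_, np.argmin(d))

print(out)  # each row: (query id, index of the closest address)
```

Each query keeps its own id, so the result can be merged back onto the parking rows just like `lkup_df` above.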
| github_jupyter |
This notebook is part of the $\omega radlib$ documentation: https://docs.wradlib.org.
Copyright (c) $\omega radlib$ developers.
Distributed under the MIT License. See LICENSE.txt for more info.
# Export a dataset in GIS-compatible format
In this notebook, we demonstrate how to export a gridded dataset in GeoTIFF and ESRI ASCII format. This will be exemplified using RADOLAN data from the German Weather Service.
```
import wradlib as wrl
import numpy as np
import warnings
warnings.filterwarnings('ignore')
```
### Step 1: Read the original data
```
# We will export this RADOLAN dataset to a GIS compatible format
wdir = wrl.util.get_wradlib_data_path() + '/radolan/grid/'
filename = 'radolan/misc/raa01-sf_10000-1408102050-dwd---bin.gz'
filename = wrl.util.get_wradlib_data_file(filename)
data_raw, meta = wrl.io.read_radolan_composite(filename)
```
### Step 2: Get the projected coordinates of the RADOLAN grid
```
# This is the RADOLAN projection
proj_osr = wrl.georef.create_osr("dwd-radolan")
# Get projected RADOLAN coordinates for corner definition
xy_raw = wrl.georef.get_radolan_grid(900, 900)
```
### Step 3: Check Origin and Row/Column Order
We know that `wrl.read_radolan_composite` returns a 2D-array (rows, cols) with the origin in the lower left corner. The same applies to `wrl.georef.get_radolan_grid`. For the next step, we need to flip the data and the coords up-down. The coordinate corner points also need to be adjusted from the lower left corner to the upper left corner.
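The flip itself can be illustrated with a toy array (a minimal numpy sketch, independent of wradlib):

```python
import numpy as np

# 3x3 toy grid stored with the origin in the lower-left corner,
# i.e. row 0 is the southernmost row
grid = np.array([[1, 2, 3],
                 [4, 5, 6],
                 [7, 8, 9]])

flipped = np.flipud(grid)  # row 0 is now the northernmost row
print(flipped[0])  # [7 8 9]
```

Note that `wrl.georef.set_raster_origin` additionally adjusts the coordinate arrays and corner points, which a plain `flipud` does not.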
```
data, xy = wrl.georef.set_raster_origin(data_raw, xy_raw, 'upper')
```
### Step 4a: Export as GeoTIFF
For RADOLAN grids, this projection will probably not be recognized by
ESRI ArcGIS.
```
# create 3 bands
data = np.stack((data, data+100, data+1000))
ds = wrl.georef.create_raster_dataset(data, xy, projection=proj_osr)
wrl.io.write_raster_dataset(wdir + "geotiff.tif", ds, 'GTiff')
```
### Step 4b: Export as ESRI ASCII file (aka Arc/Info ASCII Grid)
```
# Export to Arc/Info ASCII Grid format (aka ESRI grid)
# It should be possible to import this to most conventional
# GIS software.
# only use first band
proj_esri = proj_osr.Clone()
proj_esri.MorphToESRI()
ds = wrl.georef.create_raster_dataset(data[0], xy, projection=proj_esri)
wrl.io.write_raster_dataset(wdir + "aaigrid.asc", ds, 'AAIGrid', options=['DECIMAL_PRECISION=2'])
```
### Step 5a: Read from GeoTIFF
```
ds1 = wrl.io.open_raster(wdir + "geotiff.tif")
data1, xy1, proj1 = wrl.georef.extract_raster_dataset(ds1, nodata=-9999.)
np.testing.assert_array_equal(data1, data)
np.testing.assert_array_equal(xy1, xy)
```
### Step 5b: Read from ESRI ASCII file (aka Arc/Info ASCII Grid)
```
ds2 = wrl.io.open_raster(wdir + "aaigrid.asc")
data2, xy2, proj2 = wrl.georef.extract_raster_dataset(ds2, nodata=-9999.)
np.testing.assert_array_almost_equal(data2, data[0], decimal=2)
np.testing.assert_array_almost_equal(xy2, xy)
```
| github_jupyter |
# FAO Economic and Employment Stats
Two widgets for the 'People' tab.
- No of people employed full time (```forempl``` x 1000)
- ...of which are female (```femempl``` x 1000)
- Net USD generated by forests ({```usdrev``` - ```usdexp```} x 1000)
- GDP in USD in 2012 (```gdpusd2012``` x 1000) **NOTE: GDP in year=9999**
- Total Population (```totpop1000``` x 1000)
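Since the GDP figure hides in the `year = 9999` sentinel rows, a first step is to split those out (a toy sketch with made-up numbers, not the Carto response):

```python
rows = [
    {'country': 'GBR', 'year': 2010, 'forempl': 18},
    {'country': 'GBR', 'year': 9999, 'gdpusd2012': '2614000000'},
]

# GDP lives in the year == 9999 rows; everything else is annual data
gdp = {r['country']: float(r['gdpusd2012'])
       for r in rows if r['year'] == 9999 and r.get('gdpusd2012')}
annual = [r for r in rows if r['year'] != 9999]

print(gdp)  # {'GBR': 2614000000.0}
```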
```
#Import Global Metadata etc
%run '0.Importable_Globals.ipynb'
# First, get the FAO data from a carto table
sql = ("SELECT fao.country, fao.forempl, fao.femempl, fao.usdrev, fao.usdexp, "
"fao.gdpusd2012, fao.totpop1000, fao.year "
"FROM table_7_economics_livelihood as fao "
"WHERE fao.year = 2000 or fao.year = 2005 or fao.year = 2010 or fao.year = 9999"
)
account = 'wri-01'
urlCarto = "https://{0}.carto.com/api/v2/sql".format(account)
sql = {"q": sql}
r = requests.get(urlCarto, params=sql)
print(r.url,'\n')
pprint(r.json().get('rows')[0:3])
try:
fao_data = r.json().get('rows')
except:
fao_data = None
```
# Widget 1: "Forestry Sector Employment"
Display a pie chart showing the number of male and female employees employed in the Forestry Sector in a given year, as well as a dynamic sentence.
On hover, the segments of the pie chart should show the number of male or female employees, as well as the % of the total.
**If no data for female employees - DO NOT SHOW pie chart!**
User Variables:
- adm0 (see whitelist below, not available for all countries)
- year (2000, 2005, 2010)
### NOTE
Both widgets will use the same requests since it is easier to request all the relevant data in one go.
```
# First, get ALL data from the FAO data from a carto table
sql = ("SELECT fao.country, fao.forempl, fao.femempl, fao.usdrev, fao.usdexp, "
"fao.gdpusd2012, fao.year "
"FROM table_7_economics_livelihood as fao "
"WHERE fao.year = 2000 or fao.year = 2005 or fao.year = 2010 or fao.year = 9999"
)
account = 'wri-01'
urlCarto = "https://{0}.carto.com/api/v2/sql".format(account)
sql = {"q": sql}
r = requests.get(urlCarto, params=sql)
print(r.url,'\n')
try:
fao_data = r.json().get('rows')
except:
fao_data = None
fao_data[0:3]
# Build a whitelist for this widget (not all countries have data!)
empl_whitelist = []
for d in fao_data:
    if d.get('country') not in empl_whitelist:
empl_whitelist.append(d.get('country'))
empl_whitelist[0:3]
adm0 = 'GBR'
year = 2010 #2000, 2005, 2010
# Retrieve data for the relevant country by filtering on iso
iso_filter = list(filter(lambda x: x.get('country') == adm0, fao_data))
iso_filter
# Sanitise the data: fields may be empty, and numbers are scaled by 1000
empl_data = []
for d in iso_filter:
if d.get('year') != 9999:
try:
empl_data.append({
'male': (d.get('forempl') - d.get('femempl'))*1000,
'female': d.get('femempl')*1000,
'year': d.get('year')
})
except:
empl_data.append({
'male': d.get('forempl'),
'female': None,
'year': d.get('year')
})
empl_data
# Create a list for male and female data respectively for the user selected year
for i in empl_data:
if i.get('year') == year:
male_data = i.get('male')
female_data = i.get('female')
if female_data:
labels = ['Male', 'Female']
data = [male_data, female_data]
colors = ['lightblue', 'pink']
fig1, ax1 = plt.subplots()
ax1.pie(data, labels=labels, autopct='%1.1f%%',
shadow=False, startangle=90, colors=colors)
ax1.axis('equal')
centre_circle = plt.Circle((0,0),0.75,color='black', fc='white',linewidth=0.5)
fig1 = plt.gcf()
fig1.gca().add_artist(centre_circle)
plt.title(f'Forestry Employment by Gender in {adm0}')
plt.show()
else:
print(f'No data for {adm0} in {year}')
if female_data:
print(f"According to the FAO there were {male_data + female_data} people employed in {iso_to_countries[adm0]}'s ", end="")
print(f"Forestry sector in {year}, of which {female_data} were female.", end="")
else:
print(f"According to the FAO there were {male_data} people employed in {iso_to_countries[adm0]}'s ", end="")
print(f"Forestry sector in {year}.", end="")
```
# Widget 2: "Economic Impact of X's Forestry Sector"
Displays a bar chart and ranked list (side by side) as well as a dynamic sentence.
The bar chart will display revenue and expenditure bars side-by-side, and display 'contribution relative to GDP' on hover.
The ranked list will show countries with similar contributions (and sort by net or % as described below)
User Variables:
- adm0 (see whitelist)
- year (2000, 2005, 2010)
- net contribution in USD or as a % of the country's GDP
Maths:
```
[net contribution (USD) = (revenue - expenditure) * 1000]
[net contribution (%) = 100 * (revenue - expenditure) * 1000 / GDP]
```
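Those formulas translate into a small helper (a sketch; the function name and the toy numbers below are assumptions, not values from the Carto table):

```python
def net_contribution(revenue, expenditure, gdp=None):
    """Net forestry contribution in USD, or as a % of GDP when gdp is given.

    revenue and expenditure arrive in thousands of USD, as in the FAO table.
    """
    net_usd = (revenue - expenditure) * 1000
    if gdp is None:
        return net_usd
    return 100 * net_usd / gdp

# e.g. revenue 500, expenditure 200 (thousands of USD), GDP 1e9 USD
print(net_contribution(500, 200))           # 300000
print(net_contribution(500, 200, gdp=1e9))  # 0.03
```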
### NOTE
Both widgets will use the same requests since it is easier to request all the relevant data in one go.
```
# First, get ALL data from the FAO data from a carto table
sql = ("SELECT fao.country, fao.forempl, fao.femempl, fao.usdrev, fao.usdexp, "
"fao.gdpusd2012, fao.year "
"FROM table_7_economics_livelihood as fao "
"WHERE fao.year = 2000 or fao.year = 2005 or fao.year = 2010 or fao.year = 9999"
)
account = 'wri-01'
urlCarto = "https://{0}.carto.com/api/v2/sql".format(account)
sql = {"q": sql}
r = requests.get(urlCarto, params=sql)
print(r.url,'\n')
try:
fao_data = r.json().get('rows')
except:
fao_data = None
fao_data[0:3]
# Sanitise data. Note that some revenue, expenditure, and GDP
# values from the table may come back as None, 0 or empty strings...
# Hence we have to account for all of these!
econ_data = []
gdp = []
#Get GDP of each country (found in element with 'year' = 9999)
for d in fao_data:
if d.get('gdpusd2012') and d.get('gdpusd2012') != '-9999' and d.get('year') == 9999:
gdp.append({
'gdp': float(d.get('gdpusd2012')),
'iso': d.get('country')
})
#Build data structure
for d in fao_data:
if d.get('year') != 9999:
for g in gdp:
if g.get('iso') == d.get('country'):
tmp_gdp = g.get('gdp')
break
if d.get('usdrev') and d.get('usdrev') != '' and d.get('usdexp') and d.get('usdexp') != '':
net = (d.get('usdrev') - int(d.get('usdexp')))*1000
econ_data.append({
'iso': d.get('country'),
'rev': d.get('usdrev')*1000,
'exp': int(d.get('usdexp'))*1000,
'net_usd': net,
'gdp': tmp_gdp,
'net_perc': 100*net/tmp_gdp,
'year': d.get('year')
})
econ_data[0:3]
```
### Get available Countries and Build Whitelist
```
# Build whitelist of countries with the data we want to analyse
econ_whitelist = []
for e in econ_data:
if e.get('iso') not in econ_whitelist:
econ_whitelist.append(e.get('iso'))
econ_whitelist[0:3]
```
# Do Ranking (*using functional python!*)
```
adm0 = 'BRA'
year = 2010 #2000, 2005, 2010
#Filter the data for year of interest
# NOTE: IF year equals 2010 ignore Lebanon (LBN) - mistake in data!
if year == 2010:
in_year = list(filter(lambda x: x.get('year') == year and x.get('iso') != 'LBN', econ_data))
else:
in_year = list(filter(lambda x: x.get('year') == year, econ_data))
in_year[0:3]
```
### Net Revenue in USD
```
# Order by net revenue ('net_usd')
rank_list_net = sorted(in_year, key=lambda k: k['net_usd'], reverse=True)
rank_list_net[0:3]
# Get country's rank and print adjacent values ('net_usd' and 'iso' in this case)
rank = 1
for i in rank_list_net:
if i.get('iso') == adm0:
print('RANK =', rank)
break
else:
rank += 1
if rank == 1:
bottom_bound = -1
upper_bound = 4
elif rank == 2:
bottom_bound = 2
upper_bound = 3
elif rank == len(rank_list_net):
bottom_bound = 5
upper_bound = -1
elif rank == len(rank_list_net)-1:
bottom_bound = 4
upper_bound = 0
else:
bottom_bound = 3
upper_bound = 2
rank_list_net[rank-bottom_bound:rank+upper_bound]
```
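The edge-case bookkeeping above can be condensed into one clamped slice (a hedged sketch; `neighbours` and its `window` parameter are assumptions, not part of the original notebook):

```python
def neighbours(ranked, iso, window=5, key='iso'):
    """Return `window` rows of `ranked` centred on the entry whose
    `key` matches `iso`, clamped at both ends of the list."""
    idx = next(i for i, row in enumerate(ranked) if row[key] == iso)
    start = max(0, min(idx - window // 2, len(ranked) - window))
    return ranked[start:start + window]

# toy ranking
rows = [{'iso': c} for c in ['A', 'B', 'C', 'D', 'E', 'F']]
print([r['iso'] for r in neighbours(rows, 'A')])  # ['A', 'B', 'C', 'D', 'E']
print([r['iso'] for r in neighbours(rows, 'F')])  # ['B', 'C', 'D', 'E', 'F']
```

Unlike the inline version, the same helper serves both the `net_usd` and `net_perc` rankings.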
### Net Revenue as a percentage of Nations GDP
```
# Order by net revenue per GDP ('net_perc')
rank_list_perc = sorted(in_year, key=lambda k: k['net_perc'], reverse=True)
rank_list_perc[0:3]
# Get country's rank and print adjacent values ('net_perc' and 'iso' in this case)
rank = 1
for i in rank_list_perc:
if i.get('iso') == adm0:
print('RANK =',rank)
break
else:
rank += 1
if rank == 1:
bottom_bound = -1
upper_bound = 4
elif rank == 2:
bottom_bound = 2
upper_bound = 3
elif rank == len(rank_list_perc):
bottom_bound = 5
upper_bound = -1
elif rank == len(rank_list_perc)-1:
bottom_bound = 4
upper_bound = 0
else:
bottom_bound = 3
upper_bound = 2
rank_list_perc[rank-bottom_bound:rank+upper_bound]
```
# Graph and Dynamic Sentence
```
# Get data for iso and year of interest
iso_and_year = list(filter(lambda x: x.get('year') == year and x.get('iso') == adm0, econ_data))
iso_and_year[0:3]
# Graph
bars = ['Revenue', 'Expenditure']
colors = ['blue','red']
width = 0.35
fig, ax = plt.subplots()
rects1 = ax.bar(bars, [iso_and_year[0].get('rev'), iso_and_year[0].get('exp')], color=colors)
# add some text for labels, title and axes ticks
ax.set_ylabel('USD')
ax.set_title(f'Forestry Revenue vs Expenditure for {adm0} in {year}')
plt.show()
# Dynamic Sentence
print(f"According to the FAO the forestry sector contributed a net ", end="")
print(f"{iso_and_year[0].get('net_usd')/1e9} billion USD to the economy in {year}, ", end="")
print(f"which is approximately {iso_and_year[0].get('net_perc')}% of {iso_to_countries[adm0]}'s GDP.", end="")
```
| github_jupyter |
<a href="https://colab.research.google.com/github/African-Quant/FOREX_RelativeStrengthOscillator/blob/main/Oanda_RelativeStrength_NJ.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#@title Installation
!pip install git+https://github.com/yhilpisch/tpqoa.git --upgrade --quiet
!pip install pykalman --quiet
!pip install --upgrade mplfinance --quiet
#@title Imports
import tpqoa
import numpy as np
import pandas as pd
from pykalman import KalmanFilter
%matplotlib inline
from pylab import mpl, plt
plt.style.use('seaborn')
mpl.rcParams['savefig.dpi'] = 300
mpl.rcParams['font.family'] = 'serif'
from datetime import date, timedelta
import warnings
warnings.filterwarnings("ignore")
#@title Oanda API
path = '/content/drive/MyDrive/Oanda_Algo/pyalgo.cfg'
api = tpqoa.tpqoa(path)
#@title Symbols/Currency Pairs
def symbolsList():
symbols = []
syms = api.get_instruments()
for x in syms:
symbols.append(x[1])
return symbols
symbols = symbolsList()
pairs = ['AUD_CAD', 'AUD_CHF', 'AUD_JPY', 'AUD_NZD', 'AUD_USD', 'CAD_CHF',
'CAD_JPY', 'CHF_JPY', 'EUR_AUD', 'EUR_CAD', 'EUR_CHF', 'EUR_GBP',
'EUR_JPY', 'EUR_NZD', 'EUR_USD', 'GBP_AUD', 'GBP_CAD', 'GBP_CHF',
'GBP_JPY', 'GBP_NZD', 'GBP_USD', 'NZD_CAD', 'NZD_CHF', 'NZD_JPY',
'NZD_USD', 'USD_CAD', 'USD_CHF', 'USD_JPY',]
#@title getData(instr, gran = 'D', td=1000)
def getData(instr, gran = 'D', td=1000):
start = f"{date.today() - timedelta(td)}"
end = f"{date.today() - timedelta(1)}"
granularity = gran
price = 'M' # price: string one of 'A' (ask), 'B' (bid) or 'M' (middle)
data = api.get_history(instr, start, end, granularity, price)
data.drop(['complete'], axis=1, inplace=True)
data.reset_index(inplace=True)
data.rename(columns = {'time':'Date','o':'Open','c': 'Close', 'h':'High', 'l': 'Low'}, inplace = True)
data.set_index('Date', inplace=True)
return data
#@title Indexes
def USD_Index():
'''Creating a USD Index from a basket of instruments
denominated in dollars
'''
USD = ['EUR_USD', 'GBP_USD', 'USD_CAD', 'USD_CHF',
'USD_JPY', 'AUD_USD', 'NZD_USD']
df = pd.DataFrame()
for i in USD:
data = getData(i).ffill(axis='rows')
# setting the Dollar as the base
if '_USD' == i[-4:]:
data[f'{i}'] = (data['Close'])**(-1)
else:
data[f'{i}'] = data['Close']
df = pd.concat([df, data.loc[:,f'{i}']], axis=1)
df['US_index'] = 1
for i in range(len(USD)):
df['US_index'] *= df[USD[i]]
return ((df['US_index'])**(1/(len(USD)))).to_frame()
def EURO_Index():
'''Creating a EUR Index from a basket of instruments
denominated in EUROs
'''
EUR = ['EUR_USD', 'EUR_GBP', 'EUR_JPY', 'EUR_CHF', 'EUR_CAD',
'EUR_AUD', 'EUR_NZD']
df = pd.DataFrame()
for i in EUR:
data = getData(i).ffill(axis='rows')
data[f'{i}'] = data['Close']
df = pd.concat([df, data.loc[:,f'{i}']], axis=1)
df['EUR_index'] = 1
for i in range(len(EUR)):
df['EUR_index'] *= df[EUR[i]]
return ((df['EUR_index'])**(1/(len(EUR)))).to_frame()
def GBP_Index():
'''Creating a GBP Index from a basket of instruments
denominated in Pound Sterling
'''
GBP = ['GBP_USD', 'EUR_GBP', 'GBP_JPY', 'GBP_CHF', 'GBP_CAD',
'GBP_AUD', 'GBP_NZD']
df = pd.DataFrame()
for i in GBP:
data = getData(i).ffill(axis='rows')
# setting the Dollar as the base
if '_GBP' == i[-4:]:
data[f'{i}'] = (data['Close'])**(-1)
else:
data[f'{i}'] = data['Close']
df = pd.concat([df, data.loc[:,f'{i}']], axis=1)
df['GBP_index'] = 1
for i in range(len(GBP)):
df['GBP_index'] *= df[GBP[i]]
return ((df['GBP_index'])**(1/(len(GBP)))).to_frame()
def CHF_Index():
'''Creating a CHF Index from a basket of instruments
denominated in Swiss Francs
'''
CHF = ['CHF_JPY', 'EUR_CHF', 'GBP_CHF', 'USD_CHF', 'CAD_CHF',
'AUD_CHF', 'NZD_CHF']
df = pd.DataFrame()
for i in CHF:
data = getData(i).ffill(axis='rows')
# setting the Dollar as the base
if '_CHF' == i[-4:]:
data[f'{i}'] = (data['Close'])**(-1)
else:
data[f'{i}'] = data['Close']
df = pd.concat([df, data.loc[:,f'{i}']], axis=1)
df['CHF_index'] = 1
for i in range(len(CHF)):
df['CHF_index'] *= df[CHF[i]]
return ((df['CHF_index'])**(1/(len(CHF)))).to_frame()
def CAD_Index():
'''Creating a CAD Index from a basket of instruments
denominated in Canadian Dollars
'''
CAD = ['CAD_JPY', 'EUR_CAD', 'GBP_CAD', 'USD_CAD', 'CAD_CHF',
'AUD_CAD', 'NZD_CAD']
df = pd.DataFrame()
for i in CAD:
data = getData(i).ffill(axis='rows')
# setting the Dollar as the base
if '_CAD' == i[-4:]:
data[f'{i}'] = (data['Close'])**(-1)
else:
data[f'{i}'] = data['Close']
df = pd.concat([df, data.loc[:,f'{i}']], axis=1)
df['CAD_index'] = 1
for i in range(len(CAD)):
df['CAD_index'] *= df[CAD[i]]
return ((df['CAD_index'])**(1/(len(CAD)))).to_frame()
def JPY_Index():
'''Creating a JPY Index from a basket of instruments
    denominated in Japanese Yen
'''
JPY = ['CAD_JPY', 'EUR_JPY', 'GBP_JPY', 'USD_JPY', 'CHF_JPY',
'AUD_JPY', 'NZD_JPY']
df = pd.DataFrame()
for i in JPY:
data = getData(i).ffill(axis='rows')
# setting the Japanese Yen as the base
data[f'{i}'] = (data['Close'])**(-1)
df = pd.concat([df, data.loc[:,f'{i}']], axis=1)
df['JPY_index'] = 1
for i in range(len(JPY)):
df['JPY_index'] *= df[JPY[i]]
return ((df['JPY_index'])**(1/(len(JPY)))).to_frame()
def AUD_Index():
'''Creating a AUD Index from a basket of instruments
denominated in Australian Dollar
'''
AUD = ['AUD_JPY', 'EUR_AUD', 'GBP_AUD', 'AUD_USD', 'AUD_CAD',
'AUD_CHF', 'AUD_NZD']
df = pd.DataFrame()
for i in AUD:
data = getData(i).ffill(axis='rows')
# setting the Aussie Dollar as the base
if '_AUD' == i[-4:]:
data[f'{i}'] = (data['Close'])**(-1)
else:
data[f'{i}'] = data['Close']
df = pd.concat([df, data.loc[:,f'{i}']], axis=1)
df['AUD_index'] = 1
for i in range(len(AUD)):
df['AUD_index'] *= df[AUD[i]]
return ((df['AUD_index'])**(1/(len(AUD)))).to_frame()
def NZD_Index():
'''Creating a NZD Index from a basket of instruments
denominated in New Zealand Dollar
'''
NZD = ['NZD_JPY', 'EUR_NZD', 'GBP_NZD', 'NZD_USD', 'NZD_CAD',
'NZD_CHF', 'AUD_NZD']
df = pd.DataFrame()
for i in NZD:
data = getData(i).ffill(axis='rows')
# setting the Dollar as the base
if '_NZD' == i[-4:]:
data[f'{i}'] = (data['Close'])**(-1)
else:
data[f'{i}'] = data['Close']
df = pd.concat([df, data.loc[:,f'{i}']], axis=1)
df['NZD_index'] = 1
for i in range(len(NZD)):
        df['NZD_index'] *= df[NZD[i]]
return ((df['NZD_index'])**(1/(len(NZD)))).to_frame()
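# The eight *_Index builders above repeat one pattern; a generic factory
# could replace them (a sketch, not part of the original notebook).
# `closes` maps each pair name to an already-fetched Close series, so the
# helper stays independent of the Oanda API behind getData().
def currency_index(base, closes):
    idx_df = pd.DataFrame()
    for pair, close in closes.items():
        # invert quotes where the base currency sits on the quote side
        idx_df[pair] = close ** (-1) if pair.endswith('_' + base) else close
    n = idx_df.shape[1]
    return (idx_df.prod(axis=1) ** (1 / n)).to_frame(name=f'{base}_index')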
def eSuperRCS(df):
"""
This code computes the super smoother introduced by John Ehlers
"""
spr = df.to_frame().copy()
# HighPass filter cyclic components whose periods are shorter than 48 bars
alpha1 = (np.cos(0.707*2*np.pi/48) + np.sin(0.707*2*np.pi/48) - 1)/np.cos(0.707*2*np.pi/48)
hiPass = pd.DataFrame(None, index=spr.index, columns=['filtered'])
for i in range(len(spr)):
if i < 3:
hiPass.iloc[i, 0] = spr.iat[i, 0]
else:
hiPass.iloc[i, 0] = ((1 - alpha1/2)*(1 - alpha1/2)*(spr.iat[i, 0]
- 2*spr.iat[i-1, 0] + spr.iat[i-2, 0] + 2*(1 - alpha1)*hiPass.iat[i-1, 0]
- (1 - alpha1)**2 *hiPass.iat[i-2, 0]))
# SuperSmoother
a1 = np.exp(-1.414*(np.pi) / 10)
b1 = 2*a1*np.cos(1.414*(np.pi) / 10)
c2 = b1
c3 = -a1*a1
c1 = 1 - c2 - c3
Filt = pd.DataFrame(None, index=spr.index, columns=['filtered'])
for i in range(len(spr)):
if i < 3:
Filt.iloc[i, 0] = hiPass.iat[i, 0]
else:
Filt.iloc[i, 0] = c1*(hiPass.iat[i, 0] + hiPass.iat[i - 1, 0]/ 2 + c2*Filt.iat[i-1, 0] + c3*Filt.iat[i-2, 0])
Filt['eSuperRCS'] = RSI(Filt['filtered'])
return Filt['eSuperRCS']
def RSI(series, period=25):
delta = series.diff()
up = delta.clip(lower=0)
dn = -1*delta.clip(upper=0)
ema_up = up.ewm(com=period-1, adjust=False).mean()
ewm_dn = dn.ewm(com=period-1, adjust=False).mean()
rs = (ema_up/ewm_dn)
return 100 - 100 / (1 + rs)
def will_pr(data, lb=14):
df = data[['High', 'Low', 'Close']].copy()
df['max_hi'] = data['High'].rolling(window=lb).max()
df['min_lo'] = data['Low'].rolling(window=lb).min()
df['will_pr'] = 0
for i in range(len(df)):
try:
df.iloc[i, 5] = ((df.iat[i, 3] - df.iat[i, 2])/(df.iat[i, 3] - df.iat[i, 4])) * (-100)
except ValueError:
pass
return df['will_pr']
nzdjpy = getData('NZD_JPY')
jpy = JPY_Index()
nzd = NZD_Index()
df = pd.concat((nzdjpy, jpy, nzd), axis=1).ffill(axis='rows')
df
tickers = ['NZD_index', 'JPY_index']
cumm_rtn = (1 + df[tickers].pct_change()).cumprod()
cumm_rtn.plot();
plt.ylabel('Cumulative Return');
plt.xlabel('Time');
plt.title('Cumulative Plot of NZD_index & JPY_index');
import statsmodels.api as sm
obs_mat = sm.add_constant(df[tickers[0]].values, prepend=False)[:, np.newaxis]
# y is 1-dimensional, (alpha, beta) is 2-dimensional
kf = KalmanFilter(n_dim_obs=1, n_dim_state=2,
initial_state_mean=np.ones(2),
initial_state_covariance=np.ones((2, 2)),
transition_matrices=np.eye(2),
observation_matrices=obs_mat,
observation_covariance=10**2,
transition_covariance=0.01**2 * np.eye(2))
state_means, state_covs = kf.filter(df[tickers[1]])
beta_kf = pd.DataFrame({'Slope': state_means[:, 0], 'Intercept': state_means[:, 1]},
index=df.index)
spread_kf = df[tickers[0]] - df[tickers[1]] * beta_kf['Slope'] - beta_kf['Intercept']
spread_kf.plot();
len(df)
df['spread'] = spread_kf
df['NZD/JPY'] = df['NZD_index']/df['JPY_index']
df['eSuperRCS'] = eSuperRCS(df['spread'])
df = df.iloc[-700:]
fig = plt.figure(figsize=(10, 7))
ax1, ax2 = fig.subplots(nrows=2, ncols=1)
ax1.plot(df.index, df['Close'],color='cyan' )
ax2.plot(df.index, df['NZD/JPY'].values, color='maroon')
ax1.set_title('NZD_JPY')
ax2.set_title('NZD/JPY')
plt.show()
def viewPlot(data, win = 150):
fig = plt.figure(figsize=(17, 10))
ax1, ax2 = fig.subplots(nrows=2, ncols=1)
df1 = data.iloc[-win:, ]
# High and Low prices are plotted
for i in range(len(df1)):
ax1.vlines(x = df1.index[i], ymin = df1.iat[i, 2], ymax = df1.iat[i, 1], color = 'magenta', linewidth = 2)
ax2.plot(df1.index, df1['eSuperRCS'].values, color='maroon')
ax2.axhline(55, color='green')
ax2.axhline(45, color='green')
ax2.axhline(50, color='orange')
ax1.set_title('NZD_JPY')
ax2.set_title('spread oscillator')
return plt.show()
viewPlot(df, win = 150)
```
| github_jupyter |
```
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
import json
import pickle
import joblib  # sklearn.externals.joblib was removed in scikit-learn >= 0.23
import sys
sys.path.append('../src/')
from TFExpMachine import TFExpMachine, simple_batcher
```
# Load data (see movielens-prepare.ipynb)
```
X_tr, y_tr, s_features = joblib.load('tmp/train_categotical.jl')
X_te, y_te, s_features = joblib.load('tmp/test_categorical.jl')
```
# Prepare init from LogReg
```
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
target_rank = 10
oh = OneHotEncoder()
oh.fit(np.vstack((X_tr, X_te))-1)
X_tr_sp = oh.transform(X_tr-1)
X_te_sp = oh.transform(X_te-1)
logreg = LogisticRegression()
logreg.fit(X_tr_sp, y_tr)
y_pred = logreg.predict_proba(X_te_sp)[:, 1]
print(roc_auc_score(y_te, y_pred))
target_rank = 10
num_features = len(s_features)
w_cores = [None] * num_features
coef = logreg.coef_[0]
intercept = logreg.intercept_[0]
# see paper for details about initialization
begin_feature = [0] + list(np.cumsum(s_features))
for i in range(num_features):
n_factors = s_features[i]
if i == 0:
tmp = np.zeros((n_factors+1, 1, target_rank))
        for local_j, global_j in enumerate([-1] + list(range(begin_feature[i], begin_feature[i+1]))):
if local_j==0:
tmp[local_j,:1,:2] = [1, 0]
else:
tmp[local_j,:1,:2] = [0, coef[global_j]]
w_cores[i] = tmp.astype(np.float32)
elif i == num_features-1:
tmp = np.zeros((n_factors+1, target_rank, 1))
        for local_j, global_j in enumerate([-1] + list(range(begin_feature[i], begin_feature[i+1]))):
if local_j==0:
tmp[local_j,:2,:1] = np.array([[intercept], [1]])
else:
tmp[local_j,:2,:1] = [[coef[global_j]], [0]]
w_cores[i] = tmp.astype(np.float32)
else:
tmp = np.zeros((n_factors+1, target_rank, target_rank))
        for local_j, global_j in enumerate([-1] + list(range(begin_feature[i], begin_feature[i+1]))):
if local_j==0:
tmp[local_j,:2,:2] = np.eye(2)
else:
tmp[local_j,:2,:2] = [[0, coef[global_j]], [0,0]]
w_cores[i] = tmp.astype(np.float32)
```
# Init model
```
# release any previous session (no-op on the first run)
try:
    model.destroy()
except NameError:
    pass
model = TFExpMachine(rank=target_rank, s_features=s_features, init_std=0.001, reg=0.012, exp_reg=1.8)
model.init_from_cores(w_cores)
model.build_graph()
model.initialize_session()
```
# Learning
```
epoch_hist = []
for epoch in range(50):
# train phase
loss_hist = []
penalty_hist = []
for x, y in simple_batcher(X_tr, y_tr, 256):
fd = {model.X: x, model.Y: 2*y-1}
run_ops = [model.trainer, model.outputs, model.loss, model.penalty]
_, outs, batch_loss, penalty = model.session.run(run_ops, fd)
loss_hist.append(batch_loss)
penalty_hist.append(penalty)
epoch_train_loss = np.mean(loss_hist)
epoch_train_pen = np.mean(penalty_hist)
epoch_stats = {
'epoch': epoch,
'train_logloss': float(epoch_train_loss)
}
# test phase
if epoch%2==0 and epoch>0:
fd = {model.X: X_te, model.Y: 2*y_te-1}
run_ops = [model.outputs, model.loss, model.penalty, model.penalized_loss]
outs, raw_loss, raw_penalty, loss = model.session.run(run_ops, fd)
epoch_test_loss = roc_auc_score(y_te, outs)
        epoch_stats['test_auc'] = float(epoch_test_loss)
epoch_stats['penalty'] = float(raw_penalty)
print('{}: te_auc: {:.4f}'.format(epoch, epoch_test_loss))
epoch_hist.append(epoch_stats)
# dump to json
json.dump(epoch_hist, open('./tmp/ExM_rank10_ereg1.8.json', 'w'))
# Draw plot
%pylab inline
plot([x['epoch'] for x in epoch_hist if 'test_auc' in x], [x['test_auc'] for x in epoch_hist if 'test_auc' in x])
grid()
ylim(0.775, 0.785)
xlabel('epoch')
ylabel('test auc')
# release resources
model.destroy()
```
| github_jupyter |
```
pip install holidays
import numpy as np
import pandas as pd
import seaborn as sns
import datetime
import ast
import holidays
def load_dataset():
df = pd.read_csv('https://drive.google.com/uc?id=1XzXWsdWV_w95wyCrzaMQT3T-8X6hoAEE',dtype='unicode')
return df
```
Columns to be dropped:
1. belongs_to_collection
2. homepage
3. Overview (we don't know NLP :/)
4. poster_path (we don't know CV :/)
5. Status
6. release_date (what would we even do with this)
7. tagline (we don't know NLP :/)
8. Video (what would we even do with this)
9. Spoken Languages
10. original_language ("en","fr", "it","ja","de") (????)
11. adult (T/F)
Counting:
1. budget (Integer)
=> Remove 0 and null values
2. Genres (Dictionary {id,genre_name})
3. movie_id (Unique int)
=> required for joining with other features not in this database
4. imdb_id (Unique string)
5. original title
6. popularity (float)
7. Production Companies
8. Production Country (?????)
9. Revenue
=> remove samples with 0 and null
10. runtime
11. Title
12. Vote average
13. Vote count
```
genre_set = {'Family', 'Animation', 'Thriller', 'Documentary', 'Music', 'Horror', 'Foreign', 'Western', 'TV Movie',
'Fantasy', 'Mystery', 'Action', 'Romance', 'History', 'Comedy', 'Adventure', 'War', 'Science Fiction', 'Crime', 'Drama'}
def drop_columns(df):
df.drop(["adult",
"belongs_to_collection",
"homepage",
"spoken_languages",
"original_title",
"overview",
"poster_path",
"vote_average",
"production_companies",
"status",
"tagline",
"title",
"video",
"vote_count"],axis = 1, inplace = True)
return df
def removeNa(movies):
movies.dropna(subset=['budget', 'genres', 'id', 'imdb_id','popularity',
"production_countries", "release_date",
"revenue" , "runtime", "original_language"], inplace = True)
movies.reset_index(drop = True, inplace = True)
return movies
def drop_zero_revenue_samples(df):
zero_rows = df[ df['revenue'] == '0' ].index
df = df.drop(zero_rows)
return df
def drop_zero_budget_samples(df):
zero_rows = df[ df['budget'] == '0' ].index
df = df.drop(zero_rows)
return df
def drop_zero_p_com_samples(df):
zero_rows = df[ df['production_companies'] == '[]' ].index
df=df.drop(zero_rows)
return df
def drop_zero_p_con_samples(df):
zero_rows = df[ df['production_countries'] == '[]' ].index
df=df.drop(zero_rows)
return df
def remove_zeros(df):
df = drop_zero_revenue_samples(df)
df = drop_zero_budget_samples(df)
df = df.reset_index(drop=True)
return df
# function to convert data types of different columns
def convert_datatype(movies):
datatype_dict = {"budget": np.int64,
"id": np.int64,
"popularity": np.float64,
"revenue": np.int64,
"runtime": np.float64}
movies = movies.astype(datatype_dict, copy=True)
return movies
def remove_revenue_outliers(df):
df=df[df['revenue']>1000]
return df
# function to call the above functions in order
def preprocess_dataset(df):
df = drop_columns(df)
df = removeNa(df)
df = remove_zeros(df)
df = convert_datatype(df)
print(df.shape)
df = remove_revenue_outliers(df)
df = df.reset_index(drop=True)
print(df.shape)
return df
```
# Preprocessing
1. Removing unwanted features
2. Removing 0/Null '[ ]' values from remaining samples
3. Conversion of datatypes from object to float/int
```
df = load_dataset()
df = preprocess_dataset(df)
df
```
# Process Keywords
1. Get a dictionary of all keywords and their counts.
2. Take top N keywords.
3. Count number of top N keywords present in every sample from the dataset.
4. Add the above count as a column to the main dataset.
```
# After we have the final df post preprocessing, we just add the columns
def convert_datatype_keywords(keyword_df):
keyword_df = keyword_df.astype({"id": np.int64}, copy=True)
keyword_df['keywords'] = keyword_df['keywords'].apply(lambda row : ast.literal_eval(str(row)))
return keyword_df
def get_all_keyword_dictionary(df):
all_keywords={}
for a_list in df['keywords']:
if a_list!=[]:
for string in a_list:
indv_dict = ast.literal_eval(str(string))
keyword = indv_dict["name"]
if keyword in all_keywords:
curr=all_keywords[keyword]
curr+=1
all_keywords[keyword]=curr
else:
all_keywords[keyword]=1
return all_keywords
def keep_max_n_values(all_keywords,n):
all_keywords=dict(sorted(all_keywords.items(), key=lambda item: item[1],reverse=True)[:n])
return list(all_keywords.keys())
def get_counts_keywords(keyword_df,top_n_list):
new_column=[]
for a_list in keyword_df['keywords']:
count=0
if a_list!=[]:
for string in a_list:
indv_dict = ast.literal_eval(str(string))
keyword = indv_dict["name"]
if keyword in top_n_list:
count+=1
new_column.append(count)
return new_column
def get_and_process_keywords(current_ids):
'''
1. Get a dictionary of all keywords and their counts.
2. Take top N keywords.
3. Count number of top N keywords present in every sample from the dataset.
4. Add the above count as a column to the main dataset.
'''
keyword_df = pd.read_csv('https://drive.google.com/uc?id=1eCAlbuNFLj2cxQZlydgLe-K_jx8tVJQy', dtype='unicode')
# print(keyword_df)
keyword_df = convert_datatype_keywords(keyword_df)
keyword_df = keyword_df[keyword_df['id'].isin(current_ids)]
keyword_df.drop_duplicates(subset=['id'],inplace=True)
keyword_df = keyword_df.reset_index(drop=True)
#1.
all_keywords=get_all_keyword_dictionary(keyword_df)
#2.
top_n_list=keep_max_n_values(all_keywords,1500)
#3.
count_column= get_counts_keywords(keyword_df,top_n_list)
keyword_df['keyword'] = count_column
keyword_df.drop(["keywords"],axis=1,inplace=True)
keyword_df = keyword_df.reset_index(drop=True)
return keyword_df
def merge_main_and_keyword(main_df,keyword_df):
new_df=pd.merge(main_df, keyword_df, on="id")
return new_df
def process_keywords(df):
current_ids = df['id'].tolist()
keyword_df = get_and_process_keywords(current_ids)
df = merge_main_and_keyword(df,keyword_df)
return df
df = process_keywords(df)
# Dataset after processing keywords
df
```
# Processing Date
1. Segregrate year of release from the release dates.
2. Separate days (monday = 0, tuesday = 1 ... ) from the release dates.
```
def extract_year(df):
"""
function to extract year from release date and append to `df`
"""
df['release_date'] = pd.to_datetime(df['release_date'],format='%Y-%m-%d')
df['Year'] = pd.DatetimeIndex(df['release_date']).year
return df
def extract_dow(df):
"""
function to find day from release date and append to `df`
"""
df_days = []
for i in range(len(df)):
date = df.iloc[i].release_date
day = datetime.datetime.strptime(date, '%Y-%m-%d').weekday()
df_days.append(day)
df['Day'] = df_days
return df
# Call the helper functions. Note the order: extract_dow parses the raw date strings with strptime, so it must run before extract_year converts the column to datetime.
def process_date(df):
df = extract_dow(df)
df = extract_year(df)
return df
df = process_date(df)
```
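The per-row loop in `extract_dow` can also be written vectorized with pandas' `.dt` accessor. A minimal sketch on synthetic dates (toy data, same `%Y-%m-%d` format as the dataset):

```python
import pandas as pd

# Toy release dates in the same '%Y-%m-%d' format as the dataset
toy = pd.DataFrame({'release_date': ['2017-01-06', '2017-12-25']})
toy['release_date'] = pd.to_datetime(toy['release_date'], format='%Y-%m-%d')

# Vectorized equivalents of extract_year / extract_dow
toy['Year'] = toy['release_date'].dt.year
toy['Day'] = toy['release_date'].dt.dayofweek  # Monday = 0 ... Sunday = 6

print(toy[['Year', 'Day']].values.tolist())  # [[2017, 4], [2017, 0]]
```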
# Processing Genres
1. One-hot encode the genres.
```
# One-hot encode the genres into 20 binary columns (one per genre in genre_set)
def column_management(df, genre_name):
curr_cols = df.columns.values
genre_list = list(genre_name)
new_columns = np.append(curr_cols[:-20], genre_list)
df.columns = new_columns
return df
def get_genre_map(genre_set):
genre_map={}
ind=0
for genres in genre_set:
genre_map[genres]=ind
ind+=1
return genre_map
def genre_process(df,genre_set):
n=df.shape[0]
genre_map=get_genre_map(genre_set)
dummy=np.zeros((n,20))
count=0
for samples in df['genres']:
val = ast.literal_eval(samples)
if(val!=[]):
for string in val:
res = ast.literal_eval(str(string))
genre = res["name"]
dummy[count,genre_map[genre]]=1
count+=1
df.drop(["genres"],axis=1,inplace=True)
df_new = pd.concat([df, pd.DataFrame(dummy)], axis=1)
df=column_management(df_new,genre_set)
return df
df = genre_process(df,genre_set)
df
```
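The manual dummy-matrix construction above can also be expressed with `explode` + `get_dummies`. A sketch on toy genre strings (illustrative data, not the real column):

```python
import ast
import pandas as pd

toy = pd.DataFrame({'genres': [
    "[{'id': 18, 'name': 'Drama'}]",
    "[{'id': 35, 'name': 'Comedy'}, {'id': 18, 'name': 'Drama'}]",
]})

# Parse each stringified list into a list of genre names
names = toy['genres'].apply(lambda s: [d['name'] for d in ast.literal_eval(s)])

# explode -> one row per genre, then one-hot and collapse back per movie
onehot = pd.get_dummies(names.explode()).groupby(level=0).max()
print(onehot)
```

This trades the explicit `genre_map` index bookkeeping for pandas' column handling, at the cost of column order being alphabetical rather than set-iteration order.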
# Processing Holidays
One-hot encode release dates based on whether they fall on a holiday, near a holiday, or on a non-holiday day (using the combined US and UK calendars).
```
# Exploratory data analysis: how many releases fall on US/UK holidays, and which ones
def eda_holidays():
country_holidays = holidays.USA() + holidays.UK()
M = df.shape[0]
count = 0 # Total number of holidays
for i in range(M):
if country_holidays.get(df.iloc[i]['release_date']) != None:
count += 1
print(f"Total number of holidays = {count}\n")
map = {}
for i in range(M):
holiday = country_holidays.get(df.iloc[i]['release_date'])
if holiday != None:
if holiday in map:
map[holiday] += 1
else:
map[holiday] = 1
for itr in map:
print(f"{itr}: {map[itr]}")
eda_holidays()
def preprocess_holidays():
"""
function to pre-process holidays
"""
country_holidays = holidays.USA() + holidays.UK()
M = df.shape[0]
is_holiday = []
is_near_holiday = []
# for all the samples in dataset check if it's a holiday or near some holiday
for i in range(M):
holiday = country_holidays.get(df.iloc[i]['release_date'])
# if it's a holiday append 1 and we're done
if holiday != None:
is_holiday.append(1)
is_near_holiday.append(1)
else:
is_holiday.append(0)
flag = False
# if it's not a holiday then check if any of the nearby (+-2) days were a holiday
for j in range(-2,3):
# find the date j days ahead
date = df.iloc[i]['release_date'] + datetime.timedelta(days = j)
# check if `date` was a holiday
holiday = country_holidays.get(date)
if holiday != None:
flag = True
break
if flag:
is_near_holiday.append(1)
else:
is_near_holiday.append(0)
return df.assign(is_holiday=is_holiday, is_near_holiday=is_near_holiday)
df = preprocess_holidays()
```
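The ±2-day "near holiday" window used above can be sketched independently of the `holidays` package, with a plain set of dates standing in for the combined US/UK calendar (illustrative dates only):

```python
import datetime

# Stand-in for country_holidays: a small set of holiday dates (illustrative only)
holiday_dates = {datetime.date(2017, 12, 25), datetime.date(2017, 1, 1)}

def near_holiday(date, window=2):
    """True if `date` falls within +-`window` days of any holiday."""
    return any(
        date + datetime.timedelta(days=j) in holiday_dates
        for j in range(-window, window + 1)
    )

print(near_holiday(datetime.date(2017, 12, 23)))  # True: 2 days before Christmas
print(near_holiday(datetime.date(2017, 12, 20)))  # False: 5 days before
```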
# Language
Since about 90% of the movies are in English, encode the language as a single binary column: 1 for English, 0 otherwise.
```
def process_language(df):
# Changing datatype of column to string
df['original_language'] = df['original_language'].astype("string")
n = df.shape[0]
language = np.zeros(n)
for i in range(n):
if df.loc[i].at['original_language'] == 'en':
language[i] = 1
language = pd.DataFrame(language)
df['original_language'] = language
df.rename(columns = {'original_language': 'language'}, inplace = True)
return df
df = process_language(df)
```
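The per-row loop above is equivalent to a one-line vectorized comparison; a sketch on toy data:

```python
import pandas as pd

toy = pd.DataFrame({'original_language': ['en', 'fr', 'en', 'hi']})

# 1 if the movie is in English, 0 otherwise
toy['language'] = (toy['original_language'] == 'en').astype(int)
print(toy['language'].tolist())  # [1, 0, 1, 0]
```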
# Country
Out of ~80 unique production countries, 8 have more than 100 entries, so we create a binary column for each of those plus an `Other_Country` column for the rest.
```
def getCountryInfo(df):
# Getting the count
d = {}
n = df.shape[0] # number of entries
for i in range(n):
sample = df.loc[i].at['production_countries']
val = ast.literal_eval(sample) # convert from string to list
if(val != []):
for string in val:
res = ast.literal_eval(str(string))
c = res["name"]
if c in d:
d[c] += 1
else:
d[c] = 1
# print(sorted(d.items(), key=lambda x: x[1], reverse=True))
# Output - top 8 countries have > 100 entries
# Creating a binary column for each of those and one for others
country_set = {'United States of America', 'United Kingdom', 'France', 'Germany', 'Canada', 'India', 'Australia', 'Italy', 'Other_Country'}
def column_management(df, country_name):
curr_cols = df.columns.values
country_list = list(country_name)
m = df.shape[1]
m -= 9
new_columns = np.append(curr_cols[:m], country_list)
df.columns = new_columns
return df
def process_country(df, country_set):
n = df.shape[0]
country_map = {}
ind = 0
for countries in country_set:
country_map[countries] = ind
ind += 1
dummy = np.zeros((n, 9))
count = 0
for samples in df['production_countries']:
val = ast.literal_eval(samples)
if(val != []):
for string in val:
res = ast.literal_eval(str(string))
c = res["name"]
if c in country_map:
dummy[count, country_map[c]] = 1
else:
dummy[count, country_map['Other_Country']] = 1 # Other_Country
count += 1
df.drop(["production_countries"], axis = 1, inplace = True)
df_new = pd.concat([df, pd.DataFrame(dummy)], axis = 1)
df = column_management(df_new, country_set)
return df
getCountryInfo(df)
df = process_country(df, country_set)
df
```
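The top-8-plus-`Other_Country` bucketing can be sketched on toy country lists. This is illustrative: the real column holds stringified dicts parsed with `ast.literal_eval`, and the real top set has 8 countries rather than 2.

```python
import pandas as pd

# Stand-in for the top-country set (the notebook uses 8 countries)
top_countries = {'United States of America', 'United Kingdom'}

toy = pd.DataFrame({'countries': [
    ['United States of America'],
    ['Iceland'],
    ['United Kingdom', 'Iceland'],
]})

# Map any country outside the top set into a single 'Other_Country' bucket
bucketed = toy['countries'].apply(
    lambda cs: sorted({c if c in top_countries else 'Other_Country' for c in cs})
)
onehot = pd.get_dummies(bucketed.explode()).groupby(level=0).max()
print(onehot)
```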
# Processing Cast and Crew
1. Get a dictionary of all cast and crew members and their counts.
2. Take the top N cast and crew members.
3. Parse every sample and count how many of the top N are present in it.
4. Add the resulting counts as columns to the main dataset.
```
def convert_datatype_cast(credit_df):
credit_df = credit_df.astype({"id": np.int64}, copy=True)
credit_df['cast'] = credit_df['cast'].apply(lambda row : ast.literal_eval(str(row)))
credit_df['crew'] = credit_df['crew'].apply(lambda row : ast.literal_eval(str(row)))
return credit_df
def get_all_cast_dictionary(df,label):
all_credits={}
for a_list in df[label]:
if a_list!=[]:
for string in a_list:
indv_dict = ast.literal_eval(str(string))
credit = indv_dict["name"]
if credit in all_credits:
curr=all_credits[credit]
curr+=1
all_credits[credit]=curr
else:
all_credits[credit]=1
return all_credits
def get_counts_cast(credit_df,top_n_list,label):
new_column=[]
for a_list in credit_df[label]:
count=0
if a_list!=[]:
for string in a_list:
indv_dict = ast.literal_eval(str(string))
credit = indv_dict["name"]
if credit in top_n_list:
count+=1
new_column.append(count)
return new_column
def get_and_process_credits(current_ids):
import zipfile
zf = zipfile.ZipFile('data/credits.zip')
credit_df = pd.read_csv(zf.open('credits.csv'))
# credit_df = pd.read_csv('https://drive.google.com/uc?id=1FMYqydZ88tyrVaFIKHHIyaDlqdaZiPAM', dtype='unicode')
print(credit_df)
credit_df=convert_datatype_cast(credit_df)
credit_df=credit_df[credit_df['id'].isin(current_ids)]
credit_df.drop_duplicates(subset=['id'],inplace=True)
credit_df = credit_df.reset_index(drop=True)
#1.
all_cast =get_all_cast_dictionary(credit_df,'cast')
all_crew=get_all_cast_dictionary(credit_df,'crew')
#2.
top_n_list=keep_max_n_values(all_cast,9000)
top_n_list2=keep_max_n_values(all_crew,9000)
#3.
count_column= get_counts_cast(credit_df,top_n_list,'cast')
count_column2= get_counts_cast(credit_df,top_n_list2,'crew')
credit_df.drop(["cast"],axis=1,inplace=True)
credit_df.drop(["crew"],axis=1,inplace=True)
credit_df['cast'] = count_column
credit_df['crew'] = count_column2
credit_df = credit_df.reset_index(drop=True)
return credit_df
def merge_main_and_credits(main_df,credit_df):
new_df=pd.merge(main_df, credit_df, on="id")
return new_df
def process_credits(df):
current_ids=df['id'].tolist()
# print(current_ids)
credit_df = get_and_process_credits(current_ids)
df = merge_main_and_credits(df,credit_df)
return df
df = process_credits(df)
df
# Run this cell to save the pre-processed file to your drive
# https://drive.google.com/file/d//view?usp=sharing
df.to_csv('data/preprocessed.csv', index=False)
```
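The helper `keep_max_n_values`, used throughout for keywords, cast, and crew, is defined earlier in the notebook. Its behavior (keep the N most frequent keys of a count dictionary) can be sketched as follows; the function name and actor names here are illustrative stand-ins:

```python
from collections import Counter

def keep_max_n_values_sketch(count_dict, n):
    """Illustrative stand-in for keep_max_n_values: return the n keys
    with the highest counts."""
    return {name for name, _ in Counter(count_dict).most_common(n)}

credits = {'Actor A': 5, 'Actor B': 3, 'Actor C': 1}
print(keep_max_n_values_sketch(credits, 2))  # {'Actor A', 'Actor B'}
```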