# Chapter 4 - Loops
There are several techniques to repeatedly execute Python code. While loops are like repeated if statements; the for loop is there to iterate over all kinds of data structures. Learn all about them in this chapter.
### while: warming up
The while loop is like a repeated if statement. The code is executed over and over again, as long as the condition is True. Have another look at its recipe.
```python
while condition:
    expression
```
Can you tell how many printouts the following while loop will do?
```python
x = 1
while x < 4:
    print(x)
    x = x + 1
```
__Answer__: 3
### Basic while loop
Below you can find the example where the error variable, initially equal to 50.0, is divided by 4 and printed out on every run:
```python
error = 50.0
while error > 1:
    error = error / 4
    print(error)
```
This example will come in handy, because it's time to build a while loop yourself! We're going to code a while loop that implements a very basic control system for an inverted pendulum. If there's an offset from standing perfectly straight, the while loop will incrementally fix this offset.
```
# Initialize offset
offset = 8
# Code the while loop
while offset != 0:
    print('correcting...')
    offset -= 1
    print(offset)
```
### Add conditionals
The while loop that corrects the offset is a good start, but what if offset is negative? You can try to run the following code where offset is initialized to -6:
```python
# Initialize offset
offset = -6
# Code the while loop
while offset != 0:
    print("correcting...")
    offset = offset - 1
    print(offset)
```
If you do, your session will hang: offset is decreased further on every run, so offset != 0 never becomes False and the while loop continues forever.
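A defensive pattern worth knowing (a sketch, not the exercise's intended fix) is to cap the number of iterations so a wrong condition cannot hang your session:

```python
offset = -6
max_steps = 100   # arbitrary safety limit
steps = 0
while offset != 0 and steps < max_steps:
    offset = offset - 1
    steps += 1
if steps == max_steps:
    print("gave up after", max_steps, "steps; check the loop condition")
```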
Fix things by putting an if-else statement inside the while loop.
```
# Initialize offset
offset = -6
# Code the while loop
while offset != 0:
    print("correcting...")
    if offset > 0:
        offset -= 1
    else:
        offset += 1
    print(offset)
```
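An equivalent fix (just an alternative sketch) computes the step direction once, which keeps the loop body branch-free:

```python
offset = -6
step = 1 if offset > 0 else -1   # step toward zero
while offset != 0:
    print("correcting...")
    offset -= step
    print(offset)
```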
### Loop over a list
```
# areas list
areas = [11.25, 18.0, 20.0, 10.75, 9.50]
# Code the for loop
for area in areas:
    print(area)
```
### Indexes and values (1)
Using a for loop to iterate over a list only gives you access to every list element in each run, one after the other. If you also want the index information, that is, where the list element you're iterating over is located, you can use enumerate().
As an example, have a look:
```python
fam = [1.73, 1.68, 1.71, 1.89]
for index, height in enumerate(fam):
    print("person " + str(index) + ": " + str(height))
```
```
# areas list
areas = [11.25, 18.0, 20.0, 10.75, 9.50]
# Change for loop to use enumerate() and update print()
for index, value in enumerate(areas):
    print("room " + str(index) + ": " + str(value))
```
### Indexes and values (2)
For non-programmer folks, room 0: 11.25 is strange. Wouldn't it be better if the count started at 1?
```
# Code the for loop
for index, area in enumerate(areas):
    print("room " + str(index + 1) + ": " + str(area))
```
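enumerate() also accepts a start argument, so the shifted count can come straight from the iterator instead of adding 1 inside the print call:

```python
areas = [11.25, 18.0, 20.0, 10.75, 9.50]
# start=1 makes the counter begin at 1 instead of 0
for index, area in enumerate(areas, start=1):
    print("room " + str(index) + ": " + str(area))
```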
### Loop over list of lists
Remember the house variable from the Intro to Python course? It's basically a list of lists, where each sublist contains the name and area of a room in your house.
It's up to you to build a for loop from scratch this time!
```
# house list of lists
house = [["hallway", 11.25],
         ["kitchen", 18.0],
         ["living room", 20.0],
         ["bedroom", 10.75],
         ["bathroom", 9.50]]
# Build a for loop from scratch
for room in house:
    print("the " + room[0] + " is " + str(room[1]) + " sqm")
```
### Loop over dictionary
In Python 3, you need the items() method to loop over a dictionary:
```python
world = {"afghanistan": 30.55,
         "albania": 2.77,
         "algeria": 39.21}
for key, value in world.items():
    print(key + " -- " + str(value))
```
Remember the europe dictionary that contained the names of some European countries as key and their capitals as corresponding value? Go ahead and write a loop to iterate over it!
```
# Definition of dictionary
europe = {'spain': 'madrid', 'france': 'paris', 'germany': 'berlin',
          'norway': 'oslo', 'italy': 'rome', 'poland': 'warsaw', 'austria': 'vienna'}
# Iterate over europe
for country, capital in europe.items():
    print("the capital of " + country + " is " + capital)
```
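If you only need the keys or only the values, dictionaries also expose .keys() and .values(); since Python 3.7, all three views follow insertion order. A quick sketch:

```python
europe = {'spain': 'madrid', 'france': 'paris', 'germany': 'berlin'}
for country in europe:           # looping over a dict yields its keys
    print(country)
for capital in europe.values():  # just the values
    print(capital)
```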
### Loop over Numpy array
If you're dealing with a 1D Numpy array, looping over all elements can be as simple as:
```python
for x in my_array:
    ...
```
If you're dealing with a 2D Numpy array, it's more complicated. A 2D array is built up of multiple 1D arrays. To explicitly iterate over all separate elements of a multi-dimensional array, you'll need this syntax:
```python
for x in np.nditer(my_array):
    ...
```
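To see why nditer() is needed, compare the two behaviors on a tiny 2D array: direct iteration yields the 1D rows, while np.nditer() yields every scalar element.

```python
import numpy as np

arr = np.array([[1, 2], [3, 4]])
for row in arr:            # direct iteration yields 1D rows
    print(row)             # [1 2] then [3 4]
for x in np.nditer(arr):   # nditer yields each scalar element
    print(x)               # 1, 2, 3, 4
```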
Two Numpy arrays that you might recognize from the intro course are available in your Python session: np_height, a Numpy array containing the heights of Major League Baseball players, and np_baseball, a 2D Numpy array that contains both the heights (first column) and weights (second column) of those players.
```
import numpy as np
np_height = np.array([74,74,72,72,73,69,69,71,76,71,73,73,74,74,69,70,73,75,78,79,76,74,76,72,71,75,77,74,73,74,78,73,75,73,75,75,74,69,71,74,73,73,76,74,74,70,72,77
,74,70,73,75,76,76,78,74,74,76,77,81,78,75,77,75,76,74,72,72,75,73,73,73,70,70,70,76,68,71,72,75,75,75,75,68,74,78,71,73,76,74,74,79,75,73,76,74
,74,73,72,74,73,74,72,73,69,72,73,75,75,73,72,72,76,74,72,77,74,77,75,76,80,74,74,75,78,73,73,74,75,76,71,73,74,76,76,74,73,74,70,72,73,73,73,73
,71,74,74,72,74,71,74,73,75,75,79,73,75,76,74,76,78,74,76,72,74,76,74,75,78,75,72,74,72,74,70,71,70,75,71,71,73,72,71,73,72,75,74,74,75,73,77,73
,76,75,74,76,75,73,71,76])
np_baseball = np.array([[74,180,74,215,72,210,72,210,73,188,69,176,69,209,71,200,76,231
,71,180,73,188,73,180,74,185,74,160,69,180,70,185,73,189,75,185
,78,219,79,230,76,205,74,230,76,195,72,180,71,192,75,225,77,203
,74,195,73,182,74,188,78,200,73,180,75,200,73,200,75,245,75,240
,74,215,69,185,71,175,74,199,73,200,73,215,76,200,74,205,74,206
,70,186,72,188,77,220,74,210,70,195,73,200,75,200,76,212,76,224
,78,210,74,205,74,220,76,195,77,200,81,260,78,228,75,270,77,200
,75,210,76,190,74,220,72,180,72,205,75,210,73,220,73,211,73,200
,70,180,70,190,70,170,76,230,68,155,71,185,72,185,75,200,75,225
,75,225,75,220,68,160,74,205,78,235,71,250,73,210,76,190,74,160
,74,200,79,205,75,222,73,195,76,205,74,220,74,220,73,170,72,185
,74,195,73,220,74,230,72,180,73,220,69,180,72,180,73,170,75,210
,75,215,73,200,72,213,72,180,76,192,74,235,72,185,77,235,74,210
,77,222,75,210,76,230,80,220,74,180,74,190,75,200,78,210,73,194
,73,180,74,190,75,240,76,200,71,198,73,200,74,195,76,210,76,220
,74,190,73,210,74,225,70,180,72,185,73,170,73,185,73,185,73,180
,71,178,74,175,74,200,72,204,74,211,71,190,74,210,73,190,75,190
,75,185,79,290,73,175,75,185,76,200,74,220,76,170,78,220,74,190
,76,220,72,205,74,200,76,250,74,225,75,215,78,210,75,215,72,195
,74,200,72,194,74,220,70,180,71,180,70,170,75,195,71,180,71,170
,73,206,72,205,71,200,73,225,72,201,75,225,74,233,74,180,75,225
,73,180,77,220,73,180,76,237,75,215,74,190,76,235,75,190,73,180
,71,165,76,195]]).reshape(200, 2)
# Import numpy as np
import numpy as np
# For loop over np_height
for x in np_height:
    print(x, "inches")
# For loop over np_baseball
for x in np.nditer(np_baseball):
    print(x)
```
### Loop over DataFrame (1)
Iterating over a Pandas DataFrame is typically done with the iterrows() method. Used in a for loop, every observation is iterated over and on every iteration the row label and actual row contents are available:
```python
for lab, row in brics.iterrows():
    ...
```
In this and the following exercises you will be working on the cars DataFrame. It contains information on the cars per capita and whether people drive right or left for seven countries in the world.
```
# Import cars data
import pandas as pd
cars = pd.read_csv('cars.csv', index_col = 0)
# Iterate over rows of cars
for lab, row in cars.iterrows():
    print(lab)
    print(row)
```
### Loop over DataFrame (2)
The row data that's generated by iterrows() on every run is a Pandas Series. This format is not very convenient to print out. Luckily, you can easily select variables from the Pandas Series using square brackets:
```python
for lab, row in brics.iterrows():
    print(row['country'])
```
```
# Import cars data
import pandas as pd
cars = pd.read_csv('cars.csv', index_col = 0)
# Adapt for loop
for lab, row in cars.iterrows():
    print(lab, ": ", row['cars_per_cap'], sep="")
```
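When you only need the row values, itertuples() is a common faster alternative to iterrows(): it yields lightweight namedtuples instead of building a Series per row. A sketch with a small stand-in DataFrame (cars.csv itself isn't reproduced here):

```python
import pandas as pd

# Small stand-in for the cars DataFrame
cars = pd.DataFrame({'cars_per_cap': [809, 731]}, index=['US', 'AUS'])
for row in cars.itertuples():
    # row.Index holds the label; other fields are named after the columns
    print(str(row.Index) + ": " + str(row.cars_per_cap))
```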
### Add column (1)
You can add the length of the country names of the brics DataFrame in a new column:
```python
for lab, row in brics.iterrows():
    brics.loc[lab, "name_length"] = len(row["country"])
```
You can do similar things on the cars DataFrame.
```
# Import cars data
import pandas as pd
cars = pd.read_csv('cars.csv', index_col = 0)
# Code for loop that adds COUNTRY column
for lab, row in cars.iterrows():
    cars.loc[lab, "COUNTRY"] = row['country'].upper()
# Print cars
cars
```
### Add column (2)
Using iterrows() to iterate over every observation of a Pandas DataFrame is easy to understand, but not very efficient. On every iteration, you're creating a new Pandas Series.
If you want to add a column to a DataFrame by calling a function on another column, the iterrows() method in combination with a for loop is not the preferred way to go. Instead, you'll want to use apply().
Compare the iterrows() version with the apply() version to get the same result in the brics DataFrame:
```python
for lab, row in brics.iterrows():
    brics.loc[lab, "name_length"] = len(row["country"])

brics["name_length"] = brics["country"].apply(len)
```
We can do a similar thing to call the upper() method on every name in the country column. However, upper() is a method, so we'll need a slightly different approach:
```
cars = pd.read_csv('cars.csv', index_col = 0)
# Use .apply(str.upper)
cars['COUNTRY'] = cars['country'].apply(str.upper)
cars
```
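pandas also provides a vectorized string accessor, so the same column can be upper-cased without apply() at all:

```python
import pandas as pd

cars = pd.DataFrame({'country': ['United States', 'Australia']})
cars['COUNTRY'] = cars['country'].str.upper()  # vectorized; no Python-level loop
print(cars['COUNTRY'].tolist())
```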
#### Great job! It's time to blend everything you've learned together in a case-study. Head over to the next chapter!
# Study friend information
This notebook looks at the players' friend information.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
api_calls_day = pd.Timestamp('2019-06-13')
churn_cutoff = api_calls_day - pd.DateOffset(months=3)  # month-based Timedelta units are not supported
df = pd.read_csv("player_friend_info_200k.csv",dtype={'steamid': str}).drop("Unnamed: 0", axis='columns')
df = df[pd.to_datetime(df['friend_since'], unit='s') < churn_cutoff]
df.head()
df.tail()
df.info()
df['steamid'].nunique()
df['steamid_orig'].nunique()
df['relationship'].nunique()
df = df.drop('relationship',axis='columns')
#df['Root'] = (df['steamid_orig'] == 76561197960434622)
df.head()
df.tail()
#First_tier = list(df[df['Root']]['steamid'])
#df['First_tier'] = df['steamid_orig'].apply(lambda x: x in First_tier)
#Second_tier = list(df[df['First_tier']]['steamid'])
#df['Second_tier'] = df['steamid_orig'].apply(lambda x: x in Second_tier)
#df['Third_tier'] = ((~df['Second_tier']) & (~df['First_tier']))
#df[df['Third_tier']]
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
df_num_Friends = df.groupby('steamid_orig').count()['steamid'].to_frame().reset_index()
df_num_Friends = df_num_Friends.rename(columns={'steamid': 'num_Friends'})
df_num_Friends[df_num_Friends['steamid_orig'] == 76561197960434622]
df_num_Friends = df_num_Friends.rename(columns={'steamid_orig': 'steamid'})
df_num_Friends.head()
df_max = df.loc[df.groupby('steamid_orig')['friend_since'].idxmax()]
df_max = df_max.reset_index()
df_max.head()
df_max[df_max['steamid_orig'] == 76561197960434622]
df_max = df_max.rename(columns={'friend_since': 'newest_friend_time',
                                'steamid': 'newest_friend_steamid',
                                'steamid_orig': 'steamid'})
df_max = df_max.drop('index', axis='columns')
df_max.head()
df_max.info()
df_max.nunique()
df_min = df.loc[df.groupby('steamid_orig')['friend_since'].idxmin()]
df_min = df_min.reset_index()
df_min.head()
df_min[df_min['steamid_orig'] == 76561197960434622]
df_min = df_min.rename(columns={'friend_since': 'oldest_friend_time',
                                'steamid': 'oldest_friend_steamid',
                                'steamid_orig': 'steamid'})
df_min = df_min.drop('index', axis='columns')
df_min.head()
df_min.info()
df_min.nunique()
df_max.head()
df_min.head()
df_friend_extremes = df_max.merge(df_min,on=['steamid'],suffixes=('_max', '_min'))
#df_friend_extremes[df_friend_extremes['Root_max'] != df_friend_extremes['Root_min']]
#df_friend_extremes[df_friend_extremes['First_tier_max'] != df_friend_extremes['First_tier_min']]
#df_friend_extremes[df_friend_extremes['Second_tier_max'] != df_friend_extremes['Second_tier_min']]
df_friend_extremes.head()
#df_friend_extremes['Root'] = df_friend_extremes['Root_max']
#df_friend_extremes['First_tier'] = df_friend_extremes['First_tier_max']
#df_friend_extremes['Second_tier'] = df_friend_extremes['Second_tier_max']
#df_friend_extremes['Third_tier'] = df_friend_extremes['Third_tier_max']
#df_friend_extremes = df_friend_extremes.drop(['Root_max','Root_min','First_tier_max','First_tier_min','Second_tier_max','Second_tier_min','Third_tier_max','Third_tier_min'],axis='columns')
df_friend_extremes.head()
df_num_Friends.head()
df_friend_extremes.info()
df_num_Friends.info()
df_friend_summary = df_num_Friends.merge(df_friend_extremes,on='steamid')
df_friend_summary.info()
df_friend_summary.head()
sns.histplot(np.log(df_friend_summary['num_Friends']), kde=True)  # distplot is deprecated
sns.heatmap(df.isnull(), cbar=False)
import datetime
# plot it
fig, ax = plt.subplots(1,1)
# Convert epoch seconds to datetimes so the date formatter labels the axis correctly
newest = pd.to_datetime(df_friend_summary[df_friend_summary['newest_friend_time'] > 1]['newest_friend_time'], unit='s')
ax.hist(newest, bins=500, color='lightblue')
ax.xaxis.set_major_locator(mdates.YearLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d.%m.%y'))
plt.show()
fig, ax = plt.subplots(1,1)
oldest = pd.to_datetime(df_friend_summary[df_friend_summary['oldest_friend_time'] > 1]['oldest_friend_time'], unit='s')
ax.hist(oldest, bins=500, color='lightblue')
ax.xaxis.set_major_locator(mdates.YearLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d.%m.%y'))
plt.show()
df_friend_summary.to_csv('200k_friend_summary.csv')
df_friend_summary['newest_friend_time'].max()
pd.to_datetime(1552494489,unit='s')
df[df['friend_since'] == 0]
```
```
# Import required modules
import requests
import time
import csv
# Paste your Access token here
# To create an access token - https://github.com/settings/tokens
token = "access_token=" + "98420d11adbcea106010b8600534aeb8ea7b395b"
# Base API Endpoint
base_api_url = 'https://api.github.com/'
# Enter multiple word queries with a '+' sign
# Ex: machine+learning to search for Machine Learning
# Additional headers
additional_headers = {'Accept': 'application/vnd.github.mercy-preview+json'}
print('Enter the Search Query to get the Data ')
query = input()
print('\n Query entered is', query, '\n')
no_of_repo = int(input('Enter the number of repositories to fetch: '))
print('\n Number of repositories entered is ', no_of_repo, '\n')
sort_type = input('Enter the sort type (e.g. stars): ')
print('\n Sort type entered is ', sort_type, '\n')
search_final_url = base_api_url + 'search/repositories?q=' + query + '&sort=' + sort_type + '&' + token
print(search_final_url)
# A CSV file containing the data will be saved with the query as its name
# Ex: machine+learning.csv
filename = query + '.csv'
# Create a CSV file or clear the existing one with the same name
with open(filename, 'w', newline='') as csvfile:
    # Use the same tab delimiter as the append step below
    write_to_csv = csv.writer(csvfile, delimiter='\t')
import math
pages = int(math.ceil(no_of_repo/30.0))
# GitHub returns information of only 30 repositories with every request
# The Search API endpoint only returns up to 1000 results (about 35 pages)
counter = 0
for page in range(1, pages + 1):
    # Building the Search API URL
    search_final_url = base_api_url + 'search/repositories?q=' + \
        query + '&page=' + str(page) + '&sort=' + sort_type + '&' + token
    # try-except block in case the request fails or the token is invalid
    try:
        response = requests.get(search_final_url, headers=additional_headers).json()
    except:
        print("Issue with GitHub API, check your token")
        break
    # Parsing through the response of the search query
    for item in response['items']:
        if counter < no_of_repo:
            # Append to the CSV file
            with open(filename, 'a', newline='') as csvfile:
                write_to_csv = csv.writer(csvfile, delimiter='\t')
                repo_name = item['name']
                repo_description = item['description']
                repo_stars = item['stargazers_count']
                repo_watchers = item['watchers_count']
                repo_forks = item['forks_count']
                repo_issues_count = item['open_issues_count']
                repo_main_language = item['language']
                repo_clone_url = item['clone_url']
                repo_topics = item['topics']
                # repo_score is the relevancy score of a repository to the search query
                # Reference - https://developer.github.com/v3/search/#ranking-search-results
                repo_score = item['score']
                # Many repositories don't have a license; this filters them out
                if item['license']:
                    repo_license = item['license']['name']
                else:
                    repo_license = "NO LICENSE"
                # In case you face GitHub API rate limiting, use the sleep function as a workaround
                # Reference - https://developer.github.com/v3/search/#rate-limit
                # time.sleep(10)
                # Languages URL to access all the languages present in the repository
                language_url = item['url'] + '/languages?' + token
                language_response = requests.get(language_url).json()
                repo_languages = {}
                # Calculate the percentage of each language present in the repository
                count_value = sum(language_response.values())
                for key, value in language_response.items():
                    repo_languages[key] = round((value / count_value) * 100, 2)
                print("Repo Name = ", repo_name, "\tDescription", repo_description, "\tStars = ", repo_stars,
                      "\tWatchers = ", repo_watchers, "\tForks = ", repo_forks,
                      "\tOpen Issues = ", repo_issues_count, "\tPrimary Language = ", repo_main_language,
                      "\tRepo Languages =", repo_languages, '\tRepo Score', repo_score)
                # Write as a row to the CSV file
                write_to_csv.writerow([repo_name, repo_description, repo_topics, repo_stars, repo_watchers,
                                       repo_forks, repo_license, repo_issues_count, repo_score,
                                       repo_clone_url, repo_main_language, repo_languages])
            print('==========')
            counter += 1
import pandas as pd
data = pd.read_csv("tensorflow.csv", delimiter="\t", header= None)
data.columns =['repository_name', 'repository_description', 'repository_topics', 'repository_stars', 'repository_watchers', 'repository_forks','repository_license',
'repository_issues_count', 'repository_score', 'repository_clone_url', 'repository_main_language', 'repository_languages']
data.info()
r=requests.get("https://api.github.com/", headers={"Accept":"application/vnd.github.v3+json"}).json()
r
import pandas as pd
data = pd.DataFrame(columns=['repository_name', 'repository_description', 'repository_topics', 'repository_stars', 'repository_watchers', 'repository_forks','repository_license',
'repository_issues_count', 'repository_score', 'repository_clone_url', 'repository_main_language', 'repository_languages'])
data
```
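If the search loop starts failing mid-run, the cause is often the rate limit rather than a bad token. GitHub reports the remaining quota in response headers such as X-RateLimit-Remaining; the sketch below checks a canned headers dict (the values are made up, not a live response):

```python
# A real call would inspect requests.get(...).headers instead of this dict
headers = {'X-RateLimit-Limit': '30',
           'X-RateLimit-Remaining': '0',
           'X-RateLimit-Reset': '1560000000'}
remaining = int(headers['X-RateLimit-Remaining'])
if remaining == 0:
    print('Rate limited; wait until reset timestamp', headers['X-RateLimit-Reset'])
```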
# Homework 4
## Problem 1
Construct the following `numpy` arrays. For full credit, you should **not** use the code pattern `np.array(my_list)` in any of your answers, nor should you use `for`-loops or any other solution that involves creating or modifying the array one entry at a time.
**Please make sure to show your result so that the grader can evaluate it!**
```
# run this block to import numpy
import numpy as np
```
### (A).
```
array([[0, 1],
[2, 3],
[4, 5],
[6, 7],
[8, 9]])
```
#### Your Solution
### (B).
```
array([[0, 1, 2, 3, 4],
[5, 6, 7, 8, 9]])
```
#### Your Solution
### (C).
```
array([0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. ])
```
#### Your Solution
### (D).
```
array([ 1, 1, 1, 1, 1, 10, 10, 10, 10, 10])
```
#### Your Solution
### (E).
```
array([[30, 1, 2, 30, 4],
[ 5, 30, 7, 8, 30]])
```
#### Your Solution
### (F).
```
array([[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14]])
```
#### Your Solution
### (G).
```
array([[ 1, 3],
[ 5, 7],
[ 9, 11],
[13, 15],
[17, 19]])
```
#### Your Solution
## Problem 2
Consider the following array:
```python
A = np.arange(12).reshape(4, 3)
A
```
```
array([[ 0, 1, 2],
[ 3, 4, 5],
[ 6, 7, 8],
[ 9, 10, 11]])
```
Construct the specified arrays by indexing `A`. For example, if asked for `array([0, 1, 2])`, a correct answer would be `A[0,:]`. Each of the parts below may be performed in a single line.
```
# run this block to initialize A
A = np.arange(12).reshape(4, 3)
A
```
### (A).
```
array([6, 7, 8])
```
#### Your Solution
### (B).
```
array([5, 8])
```
#### Your Solution
### (C).
```
array([ 6, 7, 8, 9, 10, 11])
```
#### Your Solution
### (D).
```
array([ 0, 2, 4, 6, 8, 10])
```
#### Your Solution
### (E).
```
array([ 0, 1, 2, 3, 4, 5, 11])
```
#### Your Solution
### (F).
```
array([ 4, 11])
```
#### Your Solution
## Problem 3
In this problem, we will use `numpy` array indexing to repair an image that has been artificially separated into multiple pieces. The following code will retrieve two images, one of which has been cut out from the other. Your job is to piece them back together again.
You've already seen `urllib.request.urlopen()` to retrieve online data. We'll play with `mpimg.imread()` a bit more in the future. The short story is that it produces a representation of an image as a `numpy` array of `RGB` values; see below. You'll see `imshow()` a lot more in the near future.
```
import matplotlib.image as mpimg
from matplotlib import pyplot as plt
import urllib
f = urllib.request.urlopen("https://philchodrow.github.io/PIC16A/homework/main.jpg")
main = mpimg.imread(f, format = "jpg").copy()
f = urllib.request.urlopen("https://philchodrow.github.io/PIC16A/homework/cutout.jpg")
cutout= mpimg.imread(f, format = "jpg").copy()
fig, ax = plt.subplots(1, 2)
ax[0].imshow(main)
ax[1].imshow(cutout)
```
The images are stored as two `np.array`s `main` and `cutout`. Inspect each one. You'll observe that each is a 3-dimensional `np.array` of shape `(height, width, 3)`. The `3` in this case indicates that the color of each pixel is encoded as an RGB (Red-Green-Blue) value. Each pixel has one RGB value, each of which has three elements.
Use array indexing to fix the image. The result should be that the array `main` also contains the data for the face. This can be done in just a few lines of carefully crafted `numpy`. Once you're done, visualize the result by running the indicated code block.
**The black region in `main` starts at row 0, and column 50**. You can learn more about its shape by inspecting the shape of `cutout`.
```
# your solution
# run this block to check your solution
plt.imshow(main)
```
## Problem 4
### (A).
Read [these notes](https://jakevdp.github.io/PythonDataScienceHandbook/02.05-computation-on-arrays-broadcasting.html) from the Python Data Science Handbook on *array broadcasting*. Broadcasting refers to the automatic expansion of one or more arrays to make a computation "make sense." Here's a simple example:
```python
a = np.array([0, 1, 2])
b = np.ones((3, 3))
a.shape, b.shape
```
```
((3,), (3, 3))
```
```python
a+b
```
```
array([[1., 2., 3.],
[1., 2., 3.],
[1., 2., 3.]])
```
What has happened here is that the first array `a` has been "broadcast" from a 1d array into a 2d array of size 3x3 in order to match the dimensions of `b`. The broadcast version of `a` looks like this:
```
array([[0., 1., 2.],
[0., 1., 2.],
[0., 1., 2.]])
```
Consult the notes above for many more examples of broadcasting. Pay special attention to the discussion of the [rules of broadcasting](https://jakevdp.github.io/PythonDataScienceHandbook/02.05-computation-on-arrays-broadcasting.html#Rules-of-Broadcasting).
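A quick way to check the rules without constructing full arrays (not required for the problems below) is `np.broadcast_shapes()`, available since NumPy 1.20, which applies exactly these rules to shape tuples:

```python
import numpy as np

# Rules 1 and 2: (3,) is padded to (1, 3), then stretched to (3, 3)
print(np.broadcast_shapes((3,), (3, 3)))   # (3, 3)
# Rule 3: incompatible shapes raise an error
try:
    np.broadcast_shapes((2, 3), (3, 3))
except ValueError as e:
    print("incompatible:", e)
```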
### (B).
Review, if needed, the `unittest` module for constructing automated unit tests in Python. You may wish to refer to the [required reading](https://docs.python.org/3/library/unittest.html) from that week, the [lecture notes](https://nbviewer.jupyter.org/github/PhilChodrow/PIC16A/blob/master/content/IO_and_modules/modules/unit_testing.ipynb), or the recorded [lecture video](https://www.youtube.com/watch?v=TwOmk9oSaR8&feature=youtu.be).
### (C).
Implement an automated test class called `TestBroadcastingRules` which tests the three rules of array broadcasting.
> **Rule 1**: If the two arrays differ in their number of dimensions, the shape of the one with fewer dimensions is padded with ones on its leading (left) side.
To test this rule, write a method `test_rule_1()` that constructs the arrays:
```
a = np.ones(3)
b = np.arange(3).reshape(1, 3)
c = a + b
```
Then, within the method, check (a) that the `shape` of `c` has the value you would expect according to Rule 1 and (b) that the final entry of `c` has the value that you would expect. **Note:** you should use `assertEqual()` twice within this method.
In a docstring to this method, explain how this works. In particular, explain which of `a` or `b` is broadcasted, and what its new shape is according to Rule 1. You should also explain the value of the final entry of `c`.
> **Rule 2**: If the shape of the two arrays does not match in any dimension, the array with shape equal to 1 in that dimension is stretched to match the other shape.
To test this rule, write a method `test_rule_2()` that constructs the following two arrays:
```
a = np.ones((1, 3))
b = np.arange(9).reshape(3, 3)
c = a + b
```
Then, within the method, check (a) that the `shape` of `c` has the value you would expect according to Rule 2 and (b) that the entry `c[1,2]` has the value that you would expect. You should again use `assertEqual()` twice within this method.
In a docstring to this method, explain how this works. In particular, explain which of `a` or `b` is broadcasted, and what its new shape is according to Rule 2. You should also explain the value of the entry `c[1,2]`.
> **Rule 3**: If in any dimension the sizes disagree and neither is equal to 1, an error is raised.
To test this rule, write a method `test_rule_3` that constructs the arrays
```
a = np.ones((2, 3))
b = np.ones((3, 3))
```
It should then attempt to construct `c = a + b`. The test should *pass* if the Rule 3 error is raised, and fail otherwise. You will need to figure out what kind of error is raised by Rule 3 (is it a `TypeError`? `ValueError`? `KeyError`?). You will also need to handle the error using the `assertRaises()` method as demonstrated in the readings.
In a docstring to this method, explain why an error is raised according to Rule 3.
You should be able to perform the unit tests like this:
```
tester = TestBroadcastingRules()
tester.test_rule_1()
tester.test_rule_2()
tester.test_rule_3()
```
Your tests have passed if no output is printed when you run this code.
### Your Solution
```
# write your tester class here
# run your tests
# your tests have passed if no output or errors are shown.
tester = TestBroadcastingRules()
tester.test_rule_1()
tester.test_rule_2()
tester.test_rule_3()
```
# Problem 5
Recall the simple random walk. At each step, we flip a fair coin. If heads, we move "forward" one unit; if tails, we move "backward."
## (A).
Way back in Homework 1, you wrote some code to simulate a random walk in Python.
Start with this code, or use posted solutions for HW1. If you have since written random walk code that you prefer, you can use this instead. Regardless, take your code, modify it, and enclose it in a function `rw()`. This function should accept a single argument `n`, the length of the walk. The output should be a list giving the position of the random walker, starting with the position after the first step. For example,
```python
rw(5)
[1, 2, 3, 2, 3]
```
Unlike in the HW1 problem, you should not use upper or lower bounds. The walk should always run for as long as the user-specified number of steps `n`.
Use your function to print out the positions of a random walk of length `n = 10`.
Don't forget a helpful docstring!
```
# solution (with demonstration) here
```
## (B).
Now create a function called `rw2(n)`, where the argument `n` means the same thing that it did in Part A. Do so using `numpy` tools. Demonstrate your function as above, by creating a random walk of length 10. You can (and should) return your walk as a `numpy` array.
**Requirements**:
- No for-loops.
- This function is simple enough to be implemented as a one-liner of fewer than 80 characters, using lambda notation. Even if you choose not to use lambda notation, the body of your function definition should be no more than three lines long. Importing `numpy` does not count as a line.
- A docstring is required if and only if you take more than one line to define the function.
**Hints**:
- Check the documentation for `np.random.choice()`.
- Discussion 9, and `np.cumsum()`.
```
# solution (with demonstration) here
```
## (C).
Use the `%timeit` magic command to compare the runtime of `rw()` and `rw2()`. Test how each function does in computing a random walk of length `n = 10000`.
```
# solution (with demonstration) here
```
## (D).
Write a few sentences in which you comment on (a) the performance of each function and (b) the ease of writing and reading each function.
*Your discussion here*
## (E).
In this problem, we will perform a `d`-dimensional random walk. There are many ways to define such a walk. Here's the definition we'll use for this problem:
> At each timestep, the walker takes one random step forward or backward **in each of `d` directions.**
For example, in a two-dimensional walk on a grid, in each timestep the walker would take a step either north or south, and then another step either east or west. Another way to think about it is as the walker taking a single "diagonal" step either northeast, southeast, southwest, or northwest.
Write a function called `rw_d(n,d)` that implements a `d`-dimensional random walk. `n` is again the number of steps that the walker should take, and `d` is the dimension of the walk. The output should be given as a `numpy` array of shape `(n,d)`, where the `k`th row of the array specifies the position of the walker after `k` steps. For example:
```python
P = rw_d(5, 3)
P
```
```
array([[-1, -1, -1],
[ 0, -2, -2],
[-1, -3, -3],
[-2, -2, -2],
[-1, -3, -1]])
```
In this example, the third row `P[2,:] = [-1, -3, -3]` gives the position of the walk after 3 steps.
Demonstrate your function by generating a 3d walk with 5 steps, as shown in the example above.
All the same requirements and hints from Part B apply in this problem as well. It should be possible to solve this problem by making only a few small modifications to your solution from Part B. If you are finding that this is not possible, you may want to either (a) read the documentation for the relevant `numpy` functions more closely or (b) reconsider your Part B approach.
```
# solution (with demonstration) here
```
## (F).
In a few sentences, describe how you would have solved Part E without `numpy` tools. Take a guess as to how many lines it would have taken you to define the appropriate function. Based on your findings in Parts C and D, how would you expect its performance to compare to your `numpy`-based function from Part E? Which approach would you recommend?
Note: while I obviously prefer the `numpy` approach, it is reasonable and valid to prefer the "vanilla" way instead. Either way, you should be ready to justify your preference on the basis of writeability, readability, and performance.
*Your discussion here*
## (G).
Once you've implemented `rw_d()`, you can run the following code to generate a large random walk and visualize it.
```python
from matplotlib import pyplot as plt
W = rw_d(20000, 2)
plt.plot(W[:,0], W[:,1])
```
You may be interested in looking at several other visualizations of multidimensional random walks [on Wikipedia](https://en.wikipedia.org/wiki/Random_walk). Your result in this part will not look exactly the same, but should look qualitatively fairly similar.
You only need to show one plot. If you like, you might enjoy playing around with the plot settings. While `ax.plot()` is the normal method to use here, `ax.scatter()` with partially transparent points can also produce some intriguing images.
```
# solution (with demonstration) here
```
# Data Discretization and Gaussian Mixture Models
The purpose of this notebook is to explore the use of Gaussian Mixture Models (GMMs) to discretize data. Here, we generate dummy data from 2 populations (male children and female children). The features we generate are all gaussians and hypothetically represent height, weight and sleep, denoted as $X_0$, $X_1$ and $X_2$, respectively. For the male children, these variables are sampled as follows.
* $X_0 \sim \mathcal{N}(60, 5)$
* $X_1 \sim \mathcal{N}(50 + 0.5 X_0, 20)$
* $X_2 \sim \mathcal{N}(1 + 0.1 X_1, 1.5)$
For the female children, these variables are sampled as follows.
* $X_0 \sim \mathcal{N}(50, 5)$
* $X_1 \sim \mathcal{N}(45 + 0.5 X_0, 10)$
* $X_2 \sim \mathcal{N}(1.5 + 0.1 X_1, 1)$
```
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import random
from scipy import stats
random.seed(37)
np.random.seed(37)
plt.style.use('ggplot')
def get_m_ch_data(N=1000):
h = np.random.normal(60, 5, N)
w = np.random.normal(50 + 0.5 * h, 20, N)
s = np.random.normal(1 + 0.1 * w, 1.5, N)
return np.concatenate([h.reshape(-1, 1), w.reshape(-1, 1), s.reshape(-1, 1)], axis=1)
def get_f_ch_data(N=1000):
h = np.random.normal(50, 5, N)
w = np.random.normal(45 + 0.5 * h, 10, N)
s = np.random.normal(1.5 + 0.1 * w, 1, N)
return np.concatenate([h.reshape(-1, 1), w.reshape(-1, 1), s.reshape(-1, 1)], axis=1)
def get_data(N=1000):
m_ch = get_m_ch_data(N)
f_ch = get_f_ch_data(N)
return np.concatenate([m_ch, f_ch], axis=0)
X = get_data()
```
## Visualization
Here are some visualizations of the variables. Had we not known that the full data comes from two populations, it would be difficult to tell from the univariate, bivariate and multivariate plots whether there are any clusters.
### Univariate Density Plot
With the exception of $X_0$, all variables look like well-behaved gaussian distributed variables. For $X_0$, there is a hump that is visually detectable.
```
fig, ax = plt.subplots(1, 3, figsize=(15, 5))
for c in range(X.shape[1]):
sns.distplot(X[:,c], ax=ax[c])
ax[c].set_title(r'$X_{}$'.format(c))
plt.tight_layout()
```
### Bivariate Density Plots
These are pairwise density plots for the three variables. For $X_0$ vs $X_1$ and $X_0$ vs $X_2$, there seems to be a skewing to the top right. Again, not indicative of any clusters or anything separable along the pairwise dimensions.
```
fig, ax = plt.subplots(1, 3, figsize=(15, 5))
sns.kdeplot(X[:,0], X[:,1], ax=ax[0], cmap="Reds", shade=True)
sns.kdeplot(X[:,0], X[:,2], ax=ax[1], cmap="Reds", shade=True)
sns.kdeplot(X[:,1], X[:,2], ax=ax[2], cmap="Reds", shade=True)
ax[0].set_xlabel(r'$X_0$'); ax[0].set_ylabel(r'$X_1$');
ax[1].set_xlabel(r'$X_0$'); ax[1].set_ylabel(r'$X_2$');
ax[2].set_xlabel(r'$X_1$'); ax[2].set_ylabel(r'$X_2$');
plt.tight_layout()
```
### Multivariate Scatter Plot
Here's a scatter plot. It looks nice as a 3-D plot, but it does not reveal any obvious patterns.
```
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X[:,0], X[:,1], X[:,2], c=X[:,2], cmap='viridis', linewidth=0, antialiased=True)
ax.set_xlabel(r'$X_0$')
ax.set_ylabel(r'$X_1$')
_ = ax.set_zlabel(r'$X_2$')
```
## Gaussian Mixture Model (GMM)
Now, let's see if we may use GMM as an unsupervised learning technique to recover meaningful clusters in the full data. We iterate the number of components from 2 to 5 (both ends inclusive) and observe the associated [AIC](https://en.wikipedia.org/wiki/Akaike_information_criterion) score (a lower AIC is better). We discover that the model with 2 components has the lowest AIC score. This observation is already promising: we know that there are 2 populations, and GMM (with the number of components set to 2) has essentially recovered the clusters.
```
from sklearn.mixture import GaussianMixture
def get_gmm_labels(X, k):
gmm = GaussianMixture(n_components=k, max_iter=50, random_state=37)
gmm.fit(X)
aic = gmm.aic(X)
bic = gmm.bic(X)
return aic, bic, k, gmm
gmm_scores = [get_gmm_labels(X, k) for k in range(2, 6)]
gmm_scores = sorted(gmm_scores, key=lambda tup: (tup[0], tup[2]))
gmm = gmm_scores[0][3]
```
Look at the means. They are nearly identical to the expected values for each cluster.
```
gmm.means_
np.sqrt(gmm.covariances_)
```
The weights are a bit off: they should be nearly 50% each, but the zeroth cluster sits at 60%.
```
gmm.weights_
```
## Data discretization
We can actually exploit a GMM model to discretize the data. The logic is very simple. Associated with a GMM are N clusters (or components), each with an associated vector of means and matrix of covariances; these may be used to form a multivariate gaussian, or marginalized out to form a univariate gaussian per feature. Call any cluster/component the i-th cluster, and call any mean the j-th mean (there are as many means per component as there are features; in this running example, 3). For a value $v$ from the j-th variable, compute the probability of the value under each probability density function $P_{ij}(v)$. Take the i-th index associated with the highest $P_{ij}$, and that is the label associated with that value (this step is the discretization).
In this example, since there are 2 components in the GMM, this reduces to mapping continuous values to the binary domain $\{0, 1\}$.
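As a toy illustration of this rule for a single feature, with made-up component means and standard deviations rather than the fitted GMM parameters:

```python
import numpy as np
from scipy import stats

# two hypothetical components for one feature (parameters are made up)
components = [stats.norm(50, 5), stats.norm(60, 5)]

def discretize(v):
    # label = index of the component with the highest density at v
    return int(np.argmax([c.pdf(v) for c in components]))

print(discretize(48))  # 0: the first component is more likely here
print(discretize(63))  # 1: the second component is more likely here
```

The notebook code below does exactly this, but with the per-feature marginals of the fitted GMM.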
### Simulation
Here, we simulate $y$, $S$ and $D$ where
* $y$ is a vector of labels, $y \in \{0, 1\}$,
* $S$ is a matrix of simulated (continuous) data from the GMM, and
* $D$ is a matrix corresponding to $S$ with the data discretized.
Using the `fit` pattern of the Scikit-Learn API, we can conveniently fit models to the pairs $(S, y)$ and $(D, y)$.
```
unvgauss = {}
for i in range(gmm.n_components):
unvgauss[i] = {}
for j in range(len(gmm.means_[i])):
unvgauss[i][j] = stats.norm(gmm.means_[i][j], np.sqrt(gmm.covariances_)[i][j][j])
mvngauss = {}
for i in range(gmm.n_components):
mvngauss[i] = stats.multivariate_normal(gmm.means_[i], gmm.covariances_[i])
from scipy.stats import dirichlet
def get_data_labels(unvgauss, r):
data = []
num_clusters = len(unvgauss)
num_vars = len(unvgauss[0])
for v in range(num_vars):
p = np.argmax([unvgauss[c][v].pdf(r[v]) for c in range(num_clusters)])
data.append(p)
return np.array(data)
def get_simulated_data(X, gmm):
y = np.argmax(dirichlet.rvs(gmm.weights_, size=X.shape[0], random_state=37), axis=1)
S = np.vstack([mvngauss[label].rvs() for label in y])
D = np.vstack([get_data_labels(unvgauss, S[r]) for r in range(S.shape[0])])
return y, S, D
y, S, D = get_simulated_data(X, gmm)
```
### Applying logistic regression on continuous data
When logistic regression learns and validates against the same continuous data, the [AUROC](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) score is 90%.
```
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
lr = LogisticRegression(solver='lbfgs', random_state=37)
lr.fit(S, y)
print(lr.coef_)
y_pred = lr.predict_proba(S)[:,1]
print(lr.score(S, y))
print(roc_auc_score(y, y_pred))
```
### Applying logistic regression on discretized data
When logistic regression learns and validates against the same discretized data, the AUROC is 86%.
```
lr = LogisticRegression(solver='lbfgs', random_state=37)
lr.fit(D, y)
print(lr.coef_)
y_pred = lr.predict_proba(D)[:,1]
print(lr.score(D, y))
print(roc_auc_score(y, y_pred))
```
### Applying random forest on continuous data
When random forest learns and validates against the same continuous data, the AUROC is nearly perfect at 99%.
```
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=100, random_state=37)
rf.fit(S, y)
y_pred = rf.predict_proba(S)[:,1]
print(roc_auc_score(y, y_pred))
```
### Applying random forest on discretized data
When random forest learns and validates against the same discretized data, the AUROC drops to 86%.
```
rf = RandomForestClassifier(n_estimators=100, random_state=37)
rf.fit(D, y)
y_pred = rf.predict_proba(D)[:,1]
print(roc_auc_score(y, y_pred))
```
### Testing logistic regression and random forest on hold-out continuous data
Here we apply logistic regression and random forest to learn from the same continuous data, but the validation is against a hold-out set (never seen before).
```
y_h, S_h, D_h = get_simulated_data(X, gmm)
lr = LogisticRegression(solver='lbfgs', random_state=37)
lr.fit(S, y)
y_pred = lr.predict_proba(S_h)[:,1]
print(roc_auc_score(y_h, y_pred))
print('>')
rf = RandomForestClassifier(n_estimators=100, random_state=37)
rf.fit(S, y)
y_pred = rf.predict_proba(S_h)[:,1]
print(roc_auc_score(y_h, y_pred))
```
### Testing logistic regression and random forest on hold-out discretized data
Here we apply logistic regression and random forest to learn from the same discretized data, but the validation is against a hold-out set (never seen before).
```
lr = LogisticRegression(solver='lbfgs', random_state=37)
lr.fit(D, y)
y_pred = lr.predict_proba(D_h)[:,1]
print(roc_auc_score(y_h, y_pred))
print('>')
rf = RandomForestClassifier(n_estimators=100, random_state=37)
rf.fit(D, y)
y_pred = rf.predict_proba(D_h)[:,1]
print(roc_auc_score(y_h, y_pred))
```
### Testing random forest on hold-out data
Here we want to see how well random forest performs on unseen continuous and discretized data, to check whether we have sacrificed any accuracy by going from the continuous to the discretized domain for classification tasks. We always learn our models from the same original data, $S$ (continuous) or $D$ (discretized), but we validate on 20 new synthetic data sets the model has not seen before.
```
import pandas as pd
def do_validation(S, D, y, X, gmm):
    # generate a fresh hold-out set on every call
    y_h, S_h, D_h = get_simulated_data(X, gmm)
    rf = RandomForestClassifier(n_estimators=100, random_state=37)
    rf.fit(S, y)
    y_pred = rf.predict_proba(S_h)[:,1]
    s_score = roc_auc_score(y_h, y_pred)
    rf = RandomForestClassifier(n_estimators=100, random_state=37)
    rf.fit(D, y)
    y_pred = rf.predict_proba(D_h)[:,1]
    d_score = roc_auc_score(y_h, y_pred)
    return s_score, d_score
valid_scores = pd.DataFrame([do_validation(S, D, y, X, gmm) for _ in range(20)], columns=['continuous', 'discretized'])
```
The mean AUROC scores when using random forest on continuous and discretized data seem very close.
```
print(valid_scores.mean())
print('difference is {:.3f}'.format(
valid_scores.mean().values[0] - valid_scores.mean().values[1]))
```
A simple two-sided t-test indicates that p < 0.01, so we reject the null hypothesis: there is a significant difference between the average AUROC performances.
```
stats.ttest_ind(valid_scores['continuous'], valid_scores['discretized'])
```
## Closing thoughts
It's interesting that GMMs can be used not only for clustering but also for data discretization. As far as this toy study shows, discretization has a statistically significant but small impact on classification performance. One thing to be careful about is that for N components there will be N discrete values. It might make more sense to apply GMM to identify the clusters as a multivariate approach, and then adjust the number of discretized values as a univariate operation (again using GMM). If you decide to do this extra work, please let us know.
# Time series outlier detection with Seq2Seq models on synthetic data
## Method
The [Sequence-to-Sequence](https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf) (Seq2Seq) outlier detector consists of 2 main building blocks: an encoder and a decoder. The encoder consists of a [Bidirectional](https://en.wikipedia.org/wiki/Bidirectional_recurrent_neural_networks) [LSTM](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) which processes the input sequence and initializes the decoder. The LSTM decoder then makes sequential predictions for the output sequence. In our case, the decoder aims to reconstruct the input sequence. If the input data cannot be reconstructed well, the reconstruction error is high and the data can be flagged as an outlier. The reconstruction error is measured as the mean squared error (MSE) between the input and the reconstructed instance.
Since even for normal data the reconstruction error can be state-dependent, we add an outlier threshold estimator network to the Seq2Seq model. This network takes in the hidden state of the decoder at each timestep and predicts the estimated reconstruction error for normal data. As a result, the outlier threshold is not static and becomes a function of the model state. This is similar to [Park et al. (2017)](https://arxiv.org/pdf/1711.00614.pdf), but while they train the threshold estimator separately from the Seq2Seq model with a Support-Vector Regressor, we train a neural net regression network end-to-end with the Seq2Seq model.
The detector is first trained on a batch of unlabeled, but normal (*inlier*) data. Unsupervised training is desirable since labeled data is often scarce. The Seq2Seq outlier detector is suitable for both **univariate and multivariate time series**.
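Stripped of the model details, the scoring rule is just thresholded reconstruction error. A minimal sketch, where a hypothetical `reconstruct` callable stands in for the trained Seq2Seq model and the threshold is fixed rather than state-dependent:

```python
import numpy as np

def outlier_scores(X, reconstruct):
    # per-instance MSE between each sequence and its reconstruction
    X_recon = reconstruct(X)
    return ((X - X_recon) ** 2).mean(axis=(1, 2))  # average over time and features

# toy batch: 3 sequences of length 4 with 2 features
X_toy = np.zeros((3, 4, 2))
X_toy[2] += 5.0  # the third sequence is far from anything the "model" outputs

scores = outlier_scores(X_toy, reconstruct=lambda x: np.zeros_like(x))
is_outlier = scores > 1.0  # fixed threshold; the real detector learns a state-dependent one
print(is_outlier)  # [False False  True]
```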
## Dataset
We test the outlier detector on a synthetic dataset generated with the [TimeSynth](https://github.com/TimeSynth/TimeSynth) package. It allows you to generate a wide range of time series (e.g. pseudo-periodic, autoregressive or Gaussian Process generated signals) and noise types (white or red noise). It can be installed as follows:
```bash
!pip install git+https://github.com/TimeSynth/TimeSynth.git
```
```
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score, recall_score
import tensorflow as tf
import timesynth as ts
from alibi_detect.od import OutlierSeq2Seq
from alibi_detect.utils.perturbation import inject_outlier_ts
from alibi_detect.utils.saving import save_detector, load_detector
from alibi_detect.utils.visualize import plot_feature_outlier_ts, plot_roc
```
## Create multivariate time series
Define number of sampled points and the type of simulated time series. We use [TimeSynth](https://github.com/TimeSynth/TimeSynth) to generate sinusoidal signals with noise.
```
n_points = int(1e6) # number of timesteps
perc_train = 80 # percentage of instances used for training
perc_threshold = 10 # percentage of instances used to determine threshold
n_train = int(n_points * perc_train * .01)
n_threshold = int(n_points * perc_threshold * .01)
n_features = 2 # number of features in the time series
seq_len = 50 # sequence length
# set random seed
np.random.seed(0)
# timestamps
time_sampler = ts.TimeSampler(stop_time=n_points // 4)
time_samples = time_sampler.sample_regular_time(num_points=n_points)
# create time series
ts1 = ts.TimeSeries(
signal_generator=ts.signals.Sinusoidal(frequency=0.25),
noise_generator=ts.noise.GaussianNoise(std=0.1)
)
samples1 = ts1.sample(time_samples)[0].reshape(-1, 1)
ts2 = ts.TimeSeries(
signal_generator=ts.signals.Sinusoidal(frequency=0.15),
noise_generator=ts.noise.RedNoise(std=.7, tau=0.5)
)
samples2 = ts2.sample(time_samples)[0].reshape(-1, 1)
# combine signals
X = np.concatenate([samples1, samples2], axis=1).astype(np.float32)
# split dataset into train, infer threshold and outlier detection sets
X_train = X[:n_train]
X_threshold = X[n_train:n_train+n_threshold]
X_outlier = X[n_train+n_threshold:]
# scale using the normal training data
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
X_train = (X_train - mu) / sigma
X_threshold = (X_threshold - mu) / sigma
X_outlier = (X_outlier - mu) / sigma
print(X_train.shape, X_threshold.shape, X_outlier.shape)
```
Visualize:
```
n_features = X.shape[-1]
istart, istop = 50, 100
for f in range(n_features):
plt.plot(X_train[istart:istop, f], label='X_train')
plt.title('Feature {}'.format(f))
plt.xlabel('Time')
plt.ylabel('Feature value')
plt.legend()
plt.show()
```
## Load or define Seq2Seq outlier detector
```
load_outlier_detector = False
filepath = 'my_path' # change to directory where model is saved
if load_outlier_detector: # load pretrained outlier detector
od = load_detector(filepath)
else: # define model, initialize, train and save outlier detector
# initialize outlier detector
od = OutlierSeq2Seq(n_features,
seq_len,
threshold=None,
latent_dim=100)
# train
od.fit(X_train,
epochs=10,
verbose=False)
# save the trained outlier detector
save_detector(od, filepath)
```
We still need to set the outlier threshold. This can be done with the `infer_threshold` method. We need to pass a time series of instances and specify what percentage of those we consider to be normal via `threshold_perc`. First we create outliers by injecting noise in the time series via `inject_outlier_ts`. The noise can be regulated via the percentage of outliers (`perc_outlier`), the strength of the perturbation (`n_std`) and the minimum size of the noise perturbation (`min_std`). Let's assume we have some data which we know contains around 10% outliers in either of the features:
```
np.random.seed(0)
X_thr = X_threshold.copy()
data = inject_outlier_ts(X_threshold, perc_outlier=10, perc_window=10, n_std=2., min_std=1.)
X_threshold = data.data
print(X_threshold.shape)
```
Visualize outlier data used to determine the threshold:
```
istart, istop = 0, 50
for f in range(n_features):
plt.plot(X_threshold[istart:istop, f], label='outliers')
plt.plot(X_thr[istart:istop, f], label='original')
plt.title('Feature {}'.format(f))
plt.xlabel('Time')
plt.ylabel('Feature value')
plt.legend()
plt.show()
```
Let's infer the threshold. The `inject_outlier_ts` method distributes perturbations evenly across features. As a result, each feature contains about 5% outliers. We can either set the threshold over both features combined or determine a feature-wise threshold. Here we opt for the **feature-wise threshold**; this is useful, for instance, when different features have different variance or sensitivity to outliers. We also manually decrease the threshold a bit to increase the sensitivity of our detector:
```
od.infer_threshold(X_threshold, threshold_perc=[95, 95])
od.threshold -= .15
print('New threshold: {}'.format(od.threshold))
```
Let's save the outlier detector with the updated threshold:
```
save_detector(od, filepath)
```
We can load the same detector via `load_detector`:
```
od = load_detector(filepath)
```
## Detect outliers
Generate the outliers to detect:
```
np.random.seed(1)
X_out = X_outlier.copy()
data = inject_outlier_ts(X_outlier, perc_outlier=10, perc_window=10, n_std=2., min_std=1.)
X_outlier, y_outlier, labels = data.data, data.target.astype(int), data.target_names
print(X_outlier.shape, y_outlier.shape)
```
Predict outliers:
```
od_preds = od.predict(X_outlier,
outlier_type='instance', # use 'feature' or 'instance' level
return_feature_score=True, # scores used to determine outliers
return_instance_score=True)
```
## Display results
F1 score, accuracy, recall and confusion matrix:
```
y_pred = od_preds['data']['is_outlier']
f1 = f1_score(y_outlier, y_pred)
acc = accuracy_score(y_outlier, y_pred)
rec = recall_score(y_outlier, y_pred)
print('F1 score: {:.3f} -- Accuracy: {:.3f} -- Recall: {:.3f}'.format(f1, acc, rec))
cm = confusion_matrix(y_outlier, y_pred)
df_cm = pd.DataFrame(cm, index=labels, columns=labels)
sns.heatmap(df_cm, annot=True, cbar=True, linewidths=.5)
plt.show()
```
Plot the feature-wise outlier scores of the time series for each timestep vs. the outlier threshold:
```
plot_feature_outlier_ts(od_preds,
X_outlier,
od.threshold[0],
window=(150, 200),
t=time_samples,
X_orig=X_out)
```
We can also plot the ROC curve using the instance level outlier scores:
```
roc_data = {'S2S': {'scores': od_preds['data']['instance_score'], 'labels': y_outlier}}
plot_roc(roc_data)
```
## List (continued)
```
fast = [1,2,3,4,]
campus = [5,6,7]
fast + campus
fast.extend(campus)
fast
3 in fast
9 in fast
4 not in fast
9 not in fast
fast
fast[2] = 100
fast
```
## Tuple
```
some_tuple = (1, 2)
some_tuple
some_tuple[1] = 100
# tuples are immutable: a value cannot be reassigned once the tuple is created
def multiple_return(num):
result = num + 2
return (num, result)
multiple_return(2)
a = 1
b = 2
print(a, b)
# to swap the two values manually:
a = 1
b = 2
temp = b
b = a
a = temp
print(a, b)
a = 1
b = 2
(a,b) = (b,a)
print(a,b)
weather_info = (temp, hum, finedust)  # illustrative only: hum and finedust are not defined above
some_tuple = ('python', 'java', 'c', 'cplus', 'golang')
some_tuple
some_listed = list(some_tuple)
some_listed.append('julia')
some_tuple = tuple(some_listed)
some_tuple
# to modify a tuple, cast it to a list, change the list, then cast back to a tuple
three_by_three = [[1,0,0], [0,1,0], [0,0,1]]
three_by_three
```
## Conditional Statements
```
a = 10
if a == 10:
    print("a is 10.")
    a += 1
    print("a is now eleven.", a)
a = 9
if a == 10:
    print("a is 10.")
else:
    print("a is not 10.")
a = 9
result = "a is 10." if a == 10 else "a is not 10."
print(result)
# ternary operator
b = 10
if b < 5:
    print("b is less than 5")
else:
    print("b is greater than or equal to 5")
b = 10
if b < 5:
    print("b is less than 5")
else:
    if b < 20:
        print("b is at least 5 and less than 20")
    else:
        print("b is greater than or equal to 20")
b = 10
if b < 5:
    print("b is less than 5")
else:
    if b < 20:
        print("b is at least 5 and less than 20")
    else:
        if b < 50:
            print("20 <= b < 50")
        else:
            print("b >= 50")
# logically correct, but not PEP 8 style; revised version below
b = 10
if b < 5:
    print("b is less than 5")
elif b < 20:
    print("b is at least 5 and less than 20")
elif b < 50:
    print("20 <= b < 50")
else:
    print("b >= 50")
```
### Small Project: Numguess
```
import random
answer = random.randint(1, 100)
print("Answer: ", answer)
guess = int(input("Enter an integer between 1 and 100: "))
if guess == answer:
    print("Correct! The answer is {}".format(answer))
else:
    print("Wrong. The answer is {}".format(answer))
```
### Ternary Operations
```
a = 9
result = "a is 10." if a == 10 else "a is not 10."
print(result)
# ternary operator
a = 9
if a == 10:
    result = "a is 10."
else:
    result = "a is not 10."
print(result)
a = 9
if a == 10:
    result = "a is 10."
else:
    if a == 9:
        result = "a is 9."
    else:
        result = "a is neither 10 nor 9."
print(result)
# conditional expressions
a = 9
result = "a is 10." if a == 10 else "a is 9." if a == 9 else "a is neither 10 nor 9."
print(result)
```
### Small Project
```
weekday = ["월", "화", "수", "목", "금"]  # Mon-Fri (Korean)
weekend = ["토", "일"]  # Sat, Sun (Korean)
day_input = input("Enter a day of the week: ")
day_input = day_input[0]
if day_input in weekday:
    print("It's a weekday :(")
elif day_input in weekend:
    print("It's the weekend!")
else:
    print("Please enter a valid day.")
weekday = ["월", "화", "수", "목", "금"]  # Mon-Fri (Korean)
weekend = ["토", "일"]  # Sat, Sun (Korean)
day_input = input("Enter a day of the week: ")
day_input = day_input[0]
result = "It's a weekday :(" if day_input in weekday else "It's the weekend!" if day_input in weekend else "Please enter a valid day."
print(result)
```
```
import gym
from IPython import display
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
```
# 1. 100 steps (frames) of one episode
```
env = gym.make('LunarLander-v2')
# env.render()
env.reset()
img = plt.imshow(env.render(mode='rgb_array')) # only call this once
from time import sleep
for _ in range(100):
    img.set_data(env.render(mode='rgb_array'))  # just update the data
    display.display(plt.gcf())
    display.clear_output(wait=True)
    action = env.action_space.sample()
    env.step(action)
    # sleep(0.1)
print('Simulation finished')
```
# 2. Run a training process
```
import random
from keras import Sequential
from collections import deque
from keras.layers import Dense
from keras.optimizers import adam
import matplotlib.pyplot as plt
from keras.activations import relu, linear
import numpy as np
env = gym.make('LunarLander-v2')
env.seed(0)
np.random.seed(0)
class DQN:
""" Implementation of deep q learning algorithm """
def __init__(self, action_space, state_space):
self.action_space = action_space
self.state_space = state_space
self.epsilon = 1.0
self.gamma = .99
self.batch_size = 64
self.epsilon_min = .01
self.lr = 0.001
self.epsilon_decay = .996
self.memory = deque(maxlen=1000000)
self.model = self.build_model()
def build_model(self):
model = Sequential()
model.add(Dense(150, input_dim=self.state_space, activation=relu))
model.add(Dense(120, activation=relu))
model.add(Dense(self.action_space, activation=linear))
model.compile(loss='mse', optimizer=adam(lr=self.lr))
return model
def remember(self, state, action, reward, next_state, done):
self.memory.append((state, action, reward, next_state, done))
def act(self, state):
if np.random.rand() <= self.epsilon:
return random.randrange(self.action_space)
act_values = self.model.predict(state)
return np.argmax(act_values[0])
    def replay(self):
        # train only once enough transitions have been collected
        if len(self.memory) < self.batch_size:
            return
        minibatch = random.sample(self.memory, self.batch_size)
        states = np.array([i[0] for i in minibatch])
        actions = np.array([i[1] for i in minibatch])
        rewards = np.array([i[2] for i in minibatch])
        next_states = np.array([i[3] for i in minibatch])
        dones = np.array([i[4] for i in minibatch])
        states = np.squeeze(states)
        next_states = np.squeeze(next_states)
        # Bellman targets: r + gamma * max_a' Q(s', a'); the bootstrap term is zeroed for terminal states
        targets = rewards + self.gamma*(np.amax(self.model.predict_on_batch(next_states), axis=1))*(1-dones)
        targets_full = self.model.predict_on_batch(states)
        ind = np.array([i for i in range(self.batch_size)])
        targets_full[[ind], [actions]] = targets
        self.model.fit(states, targets_full, epochs=1, verbose=0)
        # anneal the exploration rate
        if self.epsilon > self.epsilon_min:
            self.epsilon *= self.epsilon_decay
def train_dqn(episode):
loss = []
agent = DQN(env.action_space.n, env.observation_space.shape[0])
for e in range(episode):
state = env.reset()
state = np.reshape(state, (1, 8))
score = 0
max_steps = 3000
for i in range(max_steps):
action = agent.act(state)
env.render()
next_state, reward, done, _ = env.step(action)
score += reward
next_state = np.reshape(next_state, (1, 8))
agent.remember(state, action, reward, next_state, done)
state = next_state
agent.replay()
if done:
print("episode: {}/{}, score: {}".format(e, episode, score))
break
loss.append(score)
# Average score of last 100 episode
is_solved = np.mean(loss[-100:])
if is_solved > 200:
print('\n Task Completed! \n')
break
print("Average over last 100 episode: {0:.2f} \n".format(is_solved))
return loss
if __name__ == '__main__':
print(env.observation_space)
print(env.action_space)
episodes = 400
loss = train_dqn(episodes)
plt.plot([i+1 for i in range(0, len(loss), 2)], loss[::2])
plt.show()
```
# Guide to using Python with Jupyter
In this file you can find some of the most important things about how Python works and different functions that might be helpful for getting started, including some examples of how they work.
About using this document: you should run the cell found in **section 2** first (every time you use this document) and then move to the section/example you wanted to check out. For this reason some functions are introduced multiple times throughout the document, so don't let it confuse you.
If you can't remember how to do something in notebook, just press **H** while not in edit mode and you can see a list of shortcuts you can use in Jupyter.
1. [At first](#first)
2. [Modules](#modules)
3. [Data types and modifying data](#data)
4. [Basic calculus and syntax](#basics)
5. [Creating random data](#random)
6. [Plotting diagrams](#plot)
7. [Animations](#anim)
8. [Maps and heatmaps](#maps)
9. [Problems? Check this](#prblm)
<a id="first"></a>
### 1. At first
In programming you can save different values into **variables**, which you can use or change later. Different kinds of variables are integers (int), floating-point numbers (float) and strings (str), for example. In Python creating variables is easy, since you don't have to initialize them.
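For example, a variable springs into existence the moment you assign to it, and the same name can later hold a value of a different type:

```python
count = 3           # an integer (int)
height = 1.75       # a floating-point number (float)
name = "Jupyter"    # a string (str)
print(type(count), type(height), type(name))

count = "three"     # the same name can be reused for another type
print(type(count))
```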
Sometimes bits of memory can be 'left' in the **kernel** running the program, which makes the program not run correctly. It happens regularly, and is nothing to worry about. Just press ***Kernel*** from the top bar menu and choose ***Restart & Clear output***. This resets the kernel memory and clears all output, after which you can start over again. This doesn't affect any changes in the text or code, so it's not for fixing those errors.
<a id="modules"></a>
### 2. Modules
Python is widely used in the scientific community for computing, modifying and analyzing data, and for these purposes Python is greatly optimized. Part of using Python is importing different kinds of *modules*, which are files containing definitions (functions) and statements. These modules are imported using the **import** command, and even if at first it seems like some kind of magic to know which modules to import, it gets easier with time.
If you check the materials used in the Open Data project, you'll probably notice that each GitHub folder contains a text file 'requirements.txt'. These files contain the module names used in the notebooks so that, for example, [MyBinder](https://mybinder.org) can build a working platform for Jupyter. The most important modules we're going to use are:
```
# Most essential modules:
import pandas as pd # includes tools used in reading data
import numpy as np # includes tools for numerical calculus
import matplotlib.pyplot as plt # includes tools used in plotting data
# Other useful modules:
import random as rand # includes functions in generating random data
from scipy import stats # includes tools for statistical analysis
from scipy.stats import norm # tools for normal distribution
import matplotlib.mlab as mlab # more plotting tools for more complicated diagrams
# Not a module, but essential command which makes the output look prettier in notebooks
%matplotlib inline
```
Remember to run the cell above if you want the examples in this notebook to work.
You could write the ```import -- as``` statements above without **as**, which just renames the modules, but the short aliases make your future much easier. If you want to read more about the modules used, select 'Help' from the top bar and you can find some links to documentation.
Of course there are a lot of other modules as well, which you can easily google if need be. Thanks to Python being used so widely, you can find thousands of examples online. If you have some problems or questions, [StackExchange](https://stackexchange.com/) and [StackOverflow](https://stackoverflow.com/) are good places to start. Chances are that someone has run into the exact same problem you are facing before.
<a id="data"></a>
### 3. Data types and modifying data
**Summary of data-manipulation:**
Reading .csv $\rightarrow$
``` Python
name = pd.read_csv('path', varargin)
```
Reading tables $\rightarrow$
``` Python
pd.read_table('path', varargin)
```
Checking what's in the file $\rightarrow$
``` Python
name.head(n)
```
Length $\rightarrow$
``` Python
len(name)
```
Shape $\rightarrow$
``` Python
name.shape
```
Columns $\rightarrow$
``` Python
name.column
name['column']
```
Choosing data within limits $\rightarrow$
``` Python
name[(name.column >= lower_limit) & (name.column <= upper_limit)]
```
Searching for text $\rightarrow$
``` Python
name['column'].str.contains('part_of_text')
```
Add columns $\rightarrow$
``` Python
name = name.assign(column = info)
```
Remove columns $\rightarrow$
``` Python
name.drop(['column1','column2'...], axis = 1)
```
Open data from the CMS experiment is in .csv (comma-separated values) files. For a computer, this kind of data is easy to read using the *pandas* module. Saving the read file into a variable makes the variable type a *dataframe*. If you're interested in dataframes and what you can do with them, you can check [here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) for more information.
The easiest ways to read data are **pandas.read_csv** and **pandas.read_table**. If the data is nice (as in separated by commas, headings are sensible, fonts aren't too exotic...), you don't usually need any extra steps.
```
# Let's load a dataset about particles and save it in to a variable:
doublemu = pd.read_csv('http://opendata.cern.ch/record/545/files/Dimuon_DoubleMu.csv')
```
This kind of form ('...//opendata.cern...') fetches the data directly from the website. It could also be of the form **'Dimuon_DoubleMu.csv'**, if the data you want to read is in the same folder as the notebook. Or if the file is in another folder it could be of the form **'../folder/data.csv'**.
If the data is not in .csv, you can read it using the broader **pandas.read_table** command, which can read many types of files, not just csv. The most common problem is the data being separated by something other than a comma, such as ; or -. In that case you can add an extra argument: **pandas.read_table('path', sep='x')**, where x is the separator. Another common problem is the row numbering starting with something other than zero, or the column headings being on some row other than the first. In that case you can add the argument **header = n**, where n is the number of the row the headers are on. NOTE! In computing you always start counting at zero, unless otherwise mentioned.
More information about possible arguments [here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html).
Below you can see an example with data that doesn't have a header line. The file contains data about the Sun since 1992. If you want to know what each column holds, the meanings are explained [here](http://sidc.oma.be/silso/infosndhem).
```
# Load a set of Sun's data and name it the way we want
sunDat = pd.read_table('http://sidc.oma.be/silso/INFO/sndhemcsv.php', sep = ';', encoding = "ISO-8859-1")
```
For clarity, let's see what the data looks like. For this, the command **name.head(n)** is handy: it shows the first n rows of the chosen data. By default n = 5, in case you don't give it any value.
```
doublemu.head()
sunDat.head()
```
Above you can see that the **sunDat** variable's first real data row has become the header, which is nasty because 1) the headings are confusing and 2) we are missing one line of data. Let's solve this by putting a header argument in read_table like this: **read_table('path', header = None)**, which tells the program that a header row doesn't exist. The command then automatically uses numbers as headings.
```
sunDat = pd.read_table('http://sidc.oma.be/silso/INFO/sndhemcsv.php', sep = ';', encoding = "ISO-8859-1", header = None)
sunDat.head()
```
Which isn't very informative for us... We can of course rename the columns to make them easier (for a human) to read by using the **names = ['name1', 'name2', 'name3', ...]** argument.
```
sunDat = pd.read_table('http://sidc.oma.be/silso/INFO/sndhemcsv.php', sep=';', encoding = "ISO-8859-1", header = None,
names = ['Year','Month','Day','Fraction','$P_{tot}$','$P_{nrth}$','$P_{sth}$','$\sigma_{tot}$','$\sigma_{nrth}$',
'$\sigma_{sth}$','$N_{tot}$','$N_{nrth}$','$N_{sth}$','Prov'])
sunDat.head()
```
Apart from the **name.head()** command there are a couple more commands that are useful when checking the shape of the data. **len(name)** tells you the number of rows (the length of the variable) and **name.shape** tells both the number of rows and columns.
```
# Usually the code cells show only the last line of the code in the output. With the print() command you can make more of the
# values visible. You can try what happens if you remove the print().
print (len(sunDat))
print (sunDat.shape)
```
When the data is saved in a variable, we can start to modify it the way we want. More often than not, we are interested in a single variable in the data. In that case you want to be able to take single columns of the data, or choose just the rows where the values are within certain limits.
You can choose a column by writing **data_name.column** or **data_name['column']**. The latter is useful if the column name starts with a number (in that case the computer probably interprets the number as an ordinal). If you want to make your life a bit easier and don't care about the columns or rows you haven't chosen, you might want to save your selection to a new variable (with _very_ large datasets this might cause memory problems, but you probably don't have to worry about that). You can do this by writing **new_variable_name = data_name.column** and then using the new variable instead. Using the new variable also helps with debugging, as a smaller amount of data is easier to handle and mistakes are easier to notice (for example if the program starts to draw a histogram of multiple variables and looks like it's stuck in an infinite loop).
```
# Let's save the data of invariant masses (column named M in the data) in to a new variable
invMass = doublemu.M
invMass.head()
```
An easy way to choose certain rows is to create a new variable, in which you save the values from the original data that fulfill certain conditions. In this case choosing values between limits would look like this:
```Python
new_var = name[(name.column >= lower_limit) & (name.column <= upper_limit)]
```
Of course the condition can be any other logical expression, such as equality with a certain number (value == number) or, for non-numerical data, a piece of text.
```
# As an example, let's isolate the rows from the original data where both of the particles' energies are at least 30 GeV
highEn = doublemu[(doublemu.E1 >= 30) & (doublemu.E2 >= 30)]
highEn.head()
print ('Amount of particles with energy >= 30 GeV: ', len(highEn))
print ('Amount of total particles: ',len(doublemu))
```
If you want to search for text, you can try the **name.loc[ ]** function:
```Python
new_var = old_var.loc[old_var['column'] == 'wanted_thing']
```
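As a concrete illustration of the exact-match pattern above, here is a tiny made-up table (the dataframe and column names are invented just for this sketch):

```Python
import pandas as pd

# A tiny made-up table, just to demonstrate the .loc pattern
df = pd.DataFrame({'Name': ['Alice', 'Bob', 'Carol'],
                   'City': ['Helsinki', 'Espoo', 'Helsinki']})

# Keep only the rows where City is exactly 'Helsinki'
helsinki = df.loc[df['City'] == 'Helsinki']
print(helsinki)
```

Note that this is an exact match: 'helsinki' in lowercase would not be found.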
In this case you of course have to know exactly what you're looking for. If you want to choose rows more blindly (as in you only know what the column _might_ contain), you can try the **str.contains** function (str.contains() actually returns a boolean value depending on whether the column contains the text or not, which is why we then select from the data the rows for which the statement is true):
```Python
new_var = old_var[old_var['column'].str.contains('contained_text')]
```
This creates a new variable that contains all the rows in which 'column' contains 'contained_text' somewhere in its value. By default str.contains() is case-sensitive, but that can be turned off:
```Python
new_var = old_var[old_var['column'].str.contains('contained_text', case = False)]
```
Negation also works, as in the example below where we delete all Ltd companies (Oy or Oyj in Finnish, Ab in Swedish) from a dataset of all Finnish companies producing alcoholic beverages. (This may also delete companies that have -oy- somewhere in their name, so you should be careful with this method.)
```
alcBev = pd.read_csv('http://avoindata.valvira.fi/alkoholi/alkoholilupa_valmistus.csv',
sep = ';', encoding = "ISO-8859-1", na_filter = False)
# Sorry about the Finnish headings!
alcBev.head()
producers = alcBev[alcBev['Nimi'].str.contains('Oy|Ab') == False]
producers.head()
```
If you want to add or remove columns from the data, you can use **name = name.assign(column = information)** to add columns and
**name.drop(['column1', 'column2', ...], axis = 1)** to remove them. In drop, the **axis = 1** argument is meaningful: it makes the command target columns specifically instead of rows.
```
# Removing a column with .drop.
# Note that .drop returns a new dataframe instead of modifying the original, so remember to save
# the result back into a variable
alcBev = alcBev.drop(['Nimi'], axis = 1)
alcBev.head()
# Inserting a column using assign
# Let's insert a column R with some numbers in it. Remember to check that the length of the column is correct
numb = np.linspace(0, 100, len(alcBev))
alcBev = alcBev.assign(R = numb)
alcBev.head()
```
<a id="basics"></a>
### 4. Basic calculus and syntax
**Summary of basic calculus:**
Absolute values $\rightarrow$
```Python
abs(x)
```
Square root $\rightarrow$
```Python
np.sqrt(x)
```
Addition $\rightarrow$
```Python
x + y
```
Subtraction
```Python
x - y
```
Division $\rightarrow$
```Python
x/y
```
Multiplying $\rightarrow$
```Python
x*y
```
Powers $\rightarrow$
```Python
x**y
```
Maximum value $\rightarrow$
```Python
max(x)
```
Minimum value $\rightarrow$
```Python
min(x)
```
Creating own function $\rightarrow$
```Python
def name(input):
do something to input
return
```
The basic operations are exactly that: you write them as you would in any computer-based calculator. If you want the program to print out more than one thing, remember to use **print()**. You can also combine text and numbers; the function **repr(number)** might come in handy, as it transforms a number into a printable string. [Here](https://docs.python.org/3/library/functions.html) you can find all the functions you can use in Python without importing any modules, and [here](https://docs.python.org/3/library/stdtypes.html) pretty much everything that's built into the Python interpreter, in case you're interested.
```
# You can change what kind of calculation (result) is saved in to the 'num'-variable
num = 14*2+5/2**2
text = 'The result of the day is: '
print (text + repr(num))
# max() finds the largest number in the set
bunch_of_numbers = [3,6,12,67,578,2,5,12,-34]
print('The largest number is: ' + repr(max(bunch_of_numbers)))
```
The more interesting case is creating your own functions for your own needs. This works by **defining** the function as follows:
``` Python
def funcName(input):
do stuff
return
```
A function doesn't actually have to return anything if, for example, it's only used to print things.
```
# Let's create a function that prints out half of the given number
def divide_2(a):
print(a/2)
divide_2(6)
# Let's make an addition function that asks the user for integers
def add(x, y):
summ = x + y
text = '{} and {} together are {}.'.format(x, y, summ)
print(text)
def ownChoice():
a = int(input("Give an integer: "))
b = int(input("And another one: "))
add(a, b)
ownChoice()
# How about a function that converts a given list of radians to degrees. The while loop goes through the list from the first
# element (i=0) to the last one (len(list) - 1) and converts each one
def angling(a):
b = a.copy() # list.copy() is useful so the original list doesn't change
i=0
while i < len(a):
b[i] = b[i]*360/(2*np.pi)
i+=1
    return b
rads = [5,2,4,2,1,3]
angles = angling(rads)
print('Radians: ', rads)
print('Angles: ', angles)
# The same using for-loop:
def angling2(a):
b = a.copy()
for i in range(0,len(a)):
b[i] = b[i]*360/(2*np.pi)
    return b
rad = [1,2,3,5,6]
angle = angling2(rad)
print('Radians: ', rad)
print('Angles: ', angle)
```
<a id="random"></a>
### 5. Creating random data
**Summary:**
Random integer between lower and upper $\rightarrow$
```Python
rand.randint(lower,upper)
```
Random float between 0 and 1 $\rightarrow$
```Python
rand.random()
```
Choose a random (non-uniform) sample $\rightarrow$
```Python
rand.choices(set, weights = probability, k = amount)
```
Generate a random sample of a given size $\rightarrow$
```Python
rand.sample(set, k = amount)
```
Normal distribution $\rightarrow$
```Python
rand.normalvariate(mean, standard deviation)
```
Evenly spaced numbers over interval $\rightarrow$
```Python
np.linspace(begin, end, num = number of samples)
```
Evenly spaced numbers over interval $\rightarrow$
```Python
np.arange(begin, end, stepsize)
```
It is sometimes interesting and useful to generate simulated or random data alongside real data. Generating more complex simulations (such as [Monte Carlo](https://en.wikipedia.org/wiki/Monte_Carlo_method) methods, for example) is outside the scope of this guide, but we can still look at different ways to generate random numbers. Remember that the usual random generation methods are pseudorandom, so you shouldn't use them to protect your bank account or to generate security keys. Leave that to more complex and heavier methods (you should probably just forget it and leave it to the professionals).
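One practical consequence of pseudorandomness is that you can fix the seed to make a 'random' sequence repeatable, which is handy when you want others to reproduce your results. A minimal sketch (assuming the standard random module is imported as rand, as in the cells below):

```Python
import random as rand

# The same seed always produces the same pseudorandom sequence
rand.seed(42)
first = [rand.randint(1, 100) for _ in range(5)]

rand.seed(42)
second = [rand.randint(1, 100) for _ in range(5)]

print(first == second)  # the two 'random' lists are identical
```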
```
# Let's generate a random integer between 1 and 100
lottery = rand.randint(1,100)
text = 'Winning number of the day is: '
print (text + repr(lottery))
# Generate a random float number between 0 and 1 and multiply it by 5
num = rand.random()*5
print(num)
# Let's pick random elements from a list, but make certain elements more likely
kids = ['Pete','Jack','Ida','Nelly','Paula','Bob']
probabilities = [10,30,20,20,5,5]
# k is how many we want to choose, choices-command might take the same name multiple times
names = rand.choices(kids, weights = probabilities, k = 3)
print(names)
# Let's do the same without multiple choices (this is useful for teachers to pick 'volunteers')
volunteers = rand.sample(kids, k = 3)
print (volunteers)
# Random number from a given normal distribution (mean, standard dev.)
num = rand.normalvariate(3, 0.1)
print(num)
# Let's create an evenly spaced list of numbers between 1 and 10, and randomize it a bit
numbers = np.linspace(1, 10, 200)
def randomizer(a):
b = a.copy()
for i in range(0,len(b)):
b[i] = b[i]*rand.uniform(0,b[i])
return b
result = randomizer(numbers)
# print(numbers)
# print(result)
fig = plt.figure(figsize=(15, 10))
plt.plot(result,'g*')
plt.show()
# Another method to create a list of evenly spaced numbers [a,b[ is by arange(a,b,c), where c is the stepsize.
# Notice that b is not included in the result. (The result might be inconsistent if c is not an integer)
numbers = np.arange(1,10,2)
print(numbers)
```
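To see the inconsistency mentioned in the last cell in action, you can compare arange with a float step size against linspace. This is just a small sketch of the rounding issue:

```Python
import numpy as np

# With a float step the endpoint handling of arange depends on rounding:
# here the endpoint 1.3 may or may not appear in the result
a = np.arange(1, 1.3, 0.1)
print(a)

# linspace avoids the problem: you state the number of points directly,
# and both endpoints are included by default
b = np.linspace(1, 1.3, 4)
print(b)
```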
<a id="plot"></a>
### 6. Plotting diagrams
**Summary:**
Basic plot $\rightarrow$
```Python
plt.plot(name, 'style and colour', varargin)
```
Scatterplot $\rightarrow$
```Python
plt.scatter(x-data, y-data, marker = 'markerstyle', color = 'colour', varargin)
```
Histogram $\rightarrow$
```Python
plt.hist(data, amount of bins, range = (begin,end), varargin)
```
Legend $\rightarrow$
```Python
plt.legend()
```
show plot $\rightarrow$
```Python
plt.show()
```
Fitting normal distribution in data $\rightarrow$
```Python
(mu, sigma) = norm.fit(data)
... et cetera
```
Formatting $\rightarrow$
```Python
plt.xlabel('x-axis name')
plt.title('title name')
fig = plt.figure(figsize = (horizontal size, vertical size))
```
Plotting errors$\rightarrow$
```Python
plt.errorbar(val1, val2, xerr = err1, yerr = err2, fmt = 'none')
```
Diagrams might very well be the reason to use programming in scientific teaching. Even for bigger datasets it is somewhat quick and effortless to create clarifying visualizations. Next we're going to see how plotting works with Python.
You can freely (and easily) change the colours and markers of the diagrams. [Here](https://matplotlib.org/api/markers_api.html?highlight=markers#module-matplotlib.markers) you can find one of the most important tools for plotting data: the different marker styles.
```
# Basic diagram with the plot function. If the parameters contain only one set of data, the x-axis shows the ordinal numbers
numbers = [1,3,54,45,52,34,4,1,2,3,2,4,132,12,12,21,12,12,21,34,2,8]
plt.plot(numbers, 'b*')
# plt.show() should always be used if you want to see what the plot looks like. Otherwise the output shows the memory
# location of the picture among other things, which we probably don't want to look at. So use this
plt.show()
# It's good practice to name different plots, so the readers can understand what's going on
# Here you can see how to name different sets
# Two random datasets
result1 = np.linspace(10, 20, 50)*rand.randint(2,5)
result2 = np.linspace(10, 20, 50)*rand.randint(2,5)
# Draw them both
plt.plot(result1, 'r^', label = 'Measurement 1')
plt.plot(result2, 'b*', label = 'Measurement 2')
# Name the axes and title, with fontsize-parameter you can change the size of the font
plt.xlabel('Time (s)', fontsize = 15)
plt.ylabel('Speed (m/s)', fontsize = 15)
plt.title('Measurements of speed \n', fontsize = 15) # \n creates a new line to make the picture look prettier
# Let's add a legend. If the loc parameter is not defined, the legend is automatically placed somewhere where it fits
plt.legend(loc='upper left', fontsize = 15)
# and show the plot
plt.show()
# Just as easily we can plot trigonometric functions
# Let the x-axis be an evenly spaced number line
x = np.linspace(0, 10, 100)
# Define the functions we're going to plot
y1 = np.sin(x)
y2 = np.cos(x)
# and draw
plt.plot(x, y1, color = 'b', label = 'sin(x)')
plt.plot(x, y2, color = 'g', label = 'cos(x)')
plt.legend()
plt.show()
# The default size of the pictures looks somewhat small. The figsize argument is going to help us make them the size we want
x = np.linspace(0, 10, 100)
y1 = np.sin(x)
y2 = np.cos(x)
# Here we define the size, you can try what different sizes look like
fig = plt.figure(figsize=(15, 10))
plt.plot(x, y1, color = 'b', label = 'sin(x)')
plt.plot(x, y2, color = 'g', label = 'cos(x)')
plt.legend()
plt.show()
```
Another traditional diagram is a [scatterplot](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.scatter.html), where both axes are variables. This is very common in for example physics research.
```
def randomizer(a):
b = a.copy()
for i in range(0,len(b)):
b[i] = b[i]*rand.uniform(0,1)
return b
# Let's generate random data, where one value is between 0 and 5, and the other between 0 and 20
val1 = randomizer(np.linspace(3,5,100))
val2 = randomizer(np.linspace(10,20,100))
fig = plt.figure(figsize=(10,5))
plt.scatter(val1, val2, marker ='*', color = 'b')
plt.show()
# Another scatter example. Now both values are scattered by a normal distribution, not uniformly at random
def randomizer(a):
b = a.copy()
for i in range(0,len(b)):
b[i] = b[i]*rand.normalvariate(1, 0.1)
return b
val1 = randomizer(np.linspace(3,5,100))
val2 = randomizer(np.linspace(10,20,100))
fig = plt.figure(figsize=(10,5))
plt.scatter(val1, val2, marker ='*', color = 'b', label = 'Measurements')
# Just for fun: let's fit a line there using linear regression
slope, intercept, r_value, p_value, std_err = stats.linregress(val1, val2)
plt.plot(val1, intercept + slope*val1, 'r', label='Linreg. fit')
plt.legend(fontsize = 15)
plt.show()
# If you want to know more about the fitted line, you can write print(slope), print(r_value) etc.
```
Another significant diagram is the histogram, which represents how many times different results occur in the data. Histograms are fairly common, for example in (particle) physics, medical science and the social sciences.
```
# Let's make a random age distribution and create a histogram out of it
def agegenerator(a):
b = a.copy()
for i in range(0, len(b)):
b[i] = b[i]*rand.randint(1,100)
    return b
ages = agegenerator(np.ones(1000))
fig = plt.figure(figsize = (10,5))
plt.hist(ages, bins = 100, range = (0,110))
plt.xlabel('Ages', fontsize = 15)
plt.ylabel('Amount', fontsize = 15)
plt.title('Age distribution in a sample of %i people \n' %(len(ages)), fontsize = 15 )
plt.show()
# Let's see what a histogram for particle collisions looks like
doublemu = pd.read_csv('http://opendata.cern.ch/record/545/files/Dimuon_DoubleMu.csv')
# So this histogram is about the distribution of invariant masses (column M of the data)
fig = plt.figure(figsize = (10,5))
plt.hist(doublemu.M, bins = 300, range = (0,150))
plt.xlabel('Invariant mass(GeV/$c^2$)', fontsize = 15)
plt.ylabel('Number of events', fontsize = 15)
plt.title('Distribution of invariant masses from muons \n', fontsize = 15 )
plt.show()
# Let's focus on the bump between 80 and 100 GeV. We could just set range = (80,100), but for the sake of example
# we're going to crop the data and choose only the events in the specific range
part = doublemu[(doublemu.M >= 80) & (doublemu.M <= 100)]
fig = plt.figure(figsize = (10,5))
plt.hist(part.M, bins = 200)
plt.xlabel('Invariant mass (GeV/$c^2$)', fontsize = 15)
plt.ylabel('Number of events', fontsize = 15)
plt.title('Invariant mass distribution from muons \n', fontsize = 15 )
plt.show()
```
In general, making non-linear fits to results requires more or less (mostly more) coding, but in the case of distributions (normal, for example, which is what the invariant mass peak looks like) Python has quite a lot of commands to make your life easier.
```
# Here we set the limits for the fit. It is good practice to set these in variables in case you want to change them later,
# makes it much easier
lower = 87
upper = 95
piece = doublemu[(doublemu.M > lower) & (doublemu.M < upper)]
fig = plt.figure(figsize=(15,10))
# Above are the limits for the normal fit; below are the limits for how wide the histogram is drawn.
# Note that the fit isn't made for everything that's seen on the histogram
shw_lower = 80
shw_upper = 100
area = doublemu[(doublemu.M > shw_lower) & (doublemu.M < shw_upper)]
# Because the shown histogram's area is equal to 1, we have to calculate a multiplier for the fitted curve
multip = len(piece)/len(area)
# mean and standard deviation for the fit
(mu, sigma) = norm.fit(piece.M)
# Let's draw the histogram
n, bins, patches = plt.hist(area.M, 300, density = 1, facecolor = 'g', alpha=0.75, histtype = 'stepfilled')
# And make the fit as well
y_fit = multip*norm.pdf(bins, mu, sigma)
line = plt.plot(bins, y_fit, 'r--', linewidth = 2)
# This heading looks bad in the code, but beautiful in the final picture.
plt.title(r'$\mathrm{Histogram\ of\ invariant\ masses\ normed\ to\ one:}\ \mu=%.3f,\ \sigma=%.3f$'
%(mu,sigma),fontsize=15)
# While we're at it, let's give the plot a grid!
plt.grid(True)
plt.show()
```
We can also draw a histogram out of data that contains no numbers. Let's take a look at [collision data from London](http://roads.data.tfl.gov.uk).
```
# Here's all the collisions from 2016, a bit over 40 000 different vehicles. Same events have the same AREFNO.
traffic = pd.read_table('http://roads.data.tfl.gov.uk/AccidentStats/Prod/2016-gla-data-extract-vehicle.csv', sep = ",")
casualties = pd.read_table('http://roads.data.tfl.gov.uk/AccidentStats/Prod/2016-gla-data-extract-casualty.csv', sep = ",")
traffic.head()
casualties.head()
# Let's check the collisions for ages between certain limits
lower = 18
upper = 25
age_collisions = traffic.loc[(traffic['Driver Age'] <= upper) & (traffic['Driver Age'] >= lower)]
# What does the vehicle distribution with this age group look like?
fig = plt.figure(figsize=(10,5))
plt.hist(age_collisions['Vehicle Type'])
# We have to rotate the xticks to see what kind of vehicles are actually used
plt.xticks(rotation = 40, ha='right')
plt.show()
```
Cars seem to dominate this statistic, which isn't too surprising. But a ridden horse? Let's dig deeper into this:
```
# Let's take out all the horses from the data:
horses = traffic.loc[traffic['Vehicle Type'] == '16 Ridden Horse']
horses.head()
# Hmm, same AREFNO, so the horses seem to have collided with each other (Veh. Impact: front hit first, back hit first)
# How severe was this collision?
horseCasualties = casualties.loc[casualties['AREFNO'] == '0116TW60237']
horseCasualties.head()
# Protip: manually entering the ref# is not a good practice, particularly when working with larger datasets. In that
# case you should make a reference to another table and compare the ref# to make this work automatically.
```
Luckily the collision wasn't too severe, and only one of the riders got hurt slightly.
A word on errors when plotting data: in reality there's always some variance in how accurate a measurement is, or even in how accurately something can be measured at all. These precision limits can be found using statistical methods on the fits made to the data, or they can be known for each data point separately (which is often the case in measurements made in schools). Let's make an example of this.
```
# As you may have noticed by now, we have defined randomizer multiple times in this document. That's not how it should
# be done, as it defeats the idea of functions. However, it's done this way so that someone can check out only
# this example without running the first one where this function was introduced.
def randomizer(a):
b = a.copy()
for i in range(0,len(b)):
b[i] = b[i]*rand.normalvariate(1, 0.1)
return b
# Let's generate the random data
val1 = randomizer(np.linspace(3,5,100))
val2 = randomizer(np.linspace(10,20,100))
# And let's give each datapoint a random error
err1 = (1/5)*randomizer(np.ones(len(val1)))
err2 = randomizer(np.ones(len(val2)))
fig = plt.figure(figsize=(10,5))
plt.scatter(val1, val2, marker ='*', color = 'b', label = 'Measurements')
plt.errorbar(val1, val2, xerr = err1, yerr = err2, fmt = 'none')
# Let's throw in a fit based on linear regression as well
slope, intercept, r_value, p_value, std_err = stats.linregress(val1, val2)
plt.plot(val1, intercept + slope*val1, 'r', label='Fit')
plt.legend(fontsize = 15)
plt.show()
# If you want to know more of the mathematical values of the fit, you can write print(slope), print(std_err), etc..
```
<a id="anim"></a>
### 7. Animations
You can also create animations pretty easily using Python. This can be done with multiple different modules, but we recommend **NOT** using plotly with Notebooks, as it slows everything down to the point where nothing can be done. In this example we're going to create an animation of a histogram which nicely shows why more data = better results.
```
data = pd.read_csv('http://opendata.cern.ch/record/545/files/Dimuon_DoubleMu.csv')
iMass = data.M
# Let's define the function that's going to update the histogram
# The variable num is basically the frame number
# The way animations work is that this function calculates a new histogram for each frame
def updt_hist(num, iMass):
plt.cla()
axes = plt.gca()
axes.set_ylim(0,8000)
axes.set_xlim(0,200)
plt.hist(iMass[:num*480], bins = 120)
```
NOTE: cells including animations are $\Large \textbf{ slow }$ to run. The more frames the more time it takes to run.
```
%%capture
# Required for animations. Note that the %%capture cell magic has to be on the first line of the cell
import matplotlib.animation
fig = plt.figure()
# fargs lists the extra arguments passed to the update function (updt_hist); the trailing comma is required
# so that it is a tuple. The first argument of the function, the current frame number, is passed in
# automatically
anim = matplotlib.animation.FuncAnimation(fig, updt_hist, frames = 200, fargs=(iMass, ) )
# anim.to_jshtml() changes the animation to (javascript)html, so it can be shown on Notebook
from IPython.display import HTML
HTML(anim.to_jshtml())
```
The above cell doesn't give output because of the ```%%capture``` -magic command. This is done because otherwise we'd get two different pictures of the animation. It looks prettier this way.
```
HTML(anim.to_jshtml())
```
<a id="maps"></a>
### 8. Maps and heatmaps
Using interactive maps in Jupyter Notebook so you can plot data on them? Yes please! Using them is much simpler than it sounds, as this example shows. The data you're going to plot just needs latitude and longitude columns (or some other coordinate system from which latitude and longitude can be calculated).
```
# Folium has maps:
import folium
# We're also going to need a way to plot a heatmap:
from folium.plugins import HeatMap
# The data includes all earthquake data from the last month, chances are that the newest data of the set are from
# last night or this morning
quakeData = pd.read_csv('https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_month.csv')
quakeData.head()
# This is required as the data we have now is in dataframe, and HeatMap-function reads lists
# First let's make long enough list, in this variable we're going to save the data
dat = [0]*len(quakeData)
# The list is going to consist of tuples containing latitude, longitude and magnitude
# (magnitude is not required, but it's nice to have in case you want to plot only quakes above
# a certain magnitude for example)
for i in range(0, len(quakeData)):
dat[i] = [quakeData['latitude'][i], quakeData['longitude'][i], quakeData['mag'][i]]
# There are some entries (one, in fact) about earthquakes that don't include a magnitude (saved as NaN), so
# we have to remove those values
dat = [x for x in dat if ~np.isnan(x[2])]
# Different map tiles: https://deparkes.co.uk/2016/06/10/folium-map-tiles/
# world_copy_jump = True tells us that the map can be scrolled to the side and the data can be seen there as well
# If you want the map to be a 'single' map you can put an extra argument no_wrap = True
# With control_scale you can see the scale on the bottom left corner
m = folium.Map([15., -75.], tiles='openstreetmap', zoom_start=3, world_copy_jump = True, control_scale = True)
HeatMap(dat, radius = 15).add_to(m)
m
```
Let's check another example where we have to change the coordinate system. This dataset uses an easting-northing system which isn't too different from the easting-northing known from the [UTM](https://en.wikipedia.org/wiki/Universal_Transverse_Mercator_coordinate_system). You can find more about the coordinate system on page 38 of [this document](https://www.ordnancesurvey.co.uk/docs/support/guide-coordinate-systems-great-britain.pdf) and see that the conversion isn't trivial, if it interests you.
```
collData = pd.read_csv('https://files.datapress.com/london/dataset/road-casualties-severity-borough/TFL-road-casualty-data-since-2005.csv')
collData.head()
# Luckily someone has encountered this grid system before and we don't have to do the conversion ourselves
from OSGridConverter import grid2latlong
# Ignore collisions where the severity is 'slight'
part = collData[collData['Casualty_Severity'] != '3 Slight']
# In this example the conversion is done in two steps just to show how it's done; it also makes
# the code more readable
# coords is used to temporarily store the lat&lon data as grid2latlong function returns just one row
coords = [0]*len(part)
# And we have to iterate over the whole dataset row by row..
# Also, the coordinates in the dataset don't include the grid square (TQ, where London is located);
# instead the square is encoded in the leading digit of each easting and northing value, so we
# drop that digit with the syntax (name)[1:]
# ALSO the values are saved as integers and grid2latlong takes in strings, so we have to change
# the datatype using str(value)
# This cell might run for a while
i = 0
for index, row in part.iterrows():
coords[i] = grid2latlong('TQ' + str(row['Easting'])[1:] + str(row['Northing'])[1:])
i += 1
# Because of the type grid2latlong returns, we have to create a new variable (list) so we can use
# the values with the map
latlong = [[0,0]]*len(coords)
# For each value in coords we take its latitude and longitude values and save them
# in the i:th row of latlong
for i in range (0,len(coords)):
latlong[i] = [coords[i].latitude,coords[i].longitude]
m = folium.Map([51.5,-0.1], zoom_start=9, world_copy_jump = True, control_scale = True)
HeatMap(latlong, radius = 10).add_to(m)
m
```
Wouldn't it be nice if everyone used the same coordinate system?
<a id="prblm"></a>
### 9. Problems? Check here
**Summary:**
Bohoo, I can't?
Cell seems stuck and doesn't draw the plot or run the code?
I get an error 'name is not defined' or 'name does not exist'?
I tried to save something in to a variable but print(name) tells me None?
My data won't load?
The data I loaded contains some NaN values?
I combined pieces of data but now I can't do things with the new variable?
My code doesn't work, even if it's correctly written?
The dates in the data are confusing the program, how do I fix this?
I copied the data in to a new variable, but the changes to it also changes the original data?
#### Bohoo, I can't?
No problem, nobody starts as a champion. You learn by doing and errors are part of it (some say 90% of coding is fixing errors..).
Using Python there's this one great thing: there are A LOT of users. No matter the problem, chances are someone has faced it already and posted a solution online. Googling the problem usually gives the right answer within the first few results.
Here's fixes to some common problematic situations (which we faced when making this document).
#### Cell seems stuck and doesn't draw the plot or run the code?
If running the cell takes longer than a few seconds, without it being needlessly complicated or handling **large** datasets, it's probably stuck in an infinite loop. You should stop the kernel (by choosing ***Kernel $\rightarrow$ Interrupt*** from the top bar or pressing the square right below it) and check your code for possible errors. If you can't find the problem, try to simplify the syntax until you're positive there's nothing wrong with your code. (Sometimes also just resetting the kernel and running the cells again makes it work.)
One common problem is that a syntax error makes the program do something you didn't intend. For example: you're drawing a histogram but forgot to choose a specific column, so the program tries to create a histogram of the whole data, which it obviously can't do without further specifications.
#### I get an error 'name is not defined' or 'name does not exist'?
The variable you're referring to doesn't exist. Check that you've run the cell where the variable is defined during this session. Also make sure that the variable name is correct, as they are case-sensitive.
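As a tiny illustration (the variable names here are made up), case-sensitivity alone is enough to trigger this error:

```Python
# 'results' is defined, but 'Results' (capital R) never is:
# variable names are case-sensitive, so these are two different names
results = [1, 2, 3]

try:
    print(Results)  # typo: capital R
except NameError as err:
    print(err)      # e.g. "name 'Results' is not defined"
```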
#### I tried to save something into a variable but print(name) tells me None?
There really isn't anything in the variable. Remember to save the changes you make into a variable, for example
```Python
var = load(data)
var = var*2
```
and not
```Python
var = load(data)
var*2
```
Also double-check that the operation itself is correct, so it doesn't overwrite the data by accident (or do anything unexpected).
#### My data won't load?
You can check what text-based data (such as .csv) looks like in any basic text editor. That way you can see how the values are separated, which rows contain the information you want, and whether the dataset is even the one you intended.
Separators, headers and such can be defined in the arguments of the read_csv and read_table functions, for example
```Python
pd.read_csv('file.csv', sep = ';')
```
would load a csv file named file.csv (in the same folder), with ';' as the separator. You can find more on this in chapter 3 of this document.
#### The data I loaded contains some NaN values?
NaN stands for Not-A-Number, and it's commonly used in computing. Either the data at that point is strange (like sqrt(-1)) or it simply doesn't exist.
Many functions skip NaN values by default, or accept an argument (such as skipna in pandas) that makes them ignore NaNs.
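As a plain-Python sketch (pandas equivalents would be dropna() or a skipna argument), you can see why NaN needs special handling and how to filter it out:

```Python
import math

values = [1.0, float('nan'), 3.0]

# A NaN is not even equal to itself, so use math.isnan() to detect it
print(values[1] == values[1])   # False

# Drop the NaNs before computing, which is roughly what
# NaN-aware functions do for you
clean = [v for v in values if not math.isnan(v)]
print(sum(clean) / len(clean))  # 2.0
```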
#### I combined pieces of data but now I can't do things with the new variable?
Did you combine different kinds of data types? Usually this isn't a problem with Python, as it automatically decides the type of each variable, but combining integer, float, or string variables can still cause trouble. In datasets even numbers are sometimes saved as strings, which is unfortunate to notice only after doing something with the data. Python has built-in functions for checking a variable's type, such as type() and isinstance().
Did you combine the data correctly? If you wanted the columns next to each other, you probably shouldn't combine them so that they end up on top of each other. You can check what your variable holds with the varname.shape attribute or the varname.head() method.
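For example (a made-up toy list), numbers stored as strings look fine when printed but break arithmetic, and isinstance() reveals the problem:

```Python
mixed = ['1', '2', 3]  # two numbers accidentally stored as strings

for item in mixed:
    if isinstance(item, str):
        print(repr(item), 'is a string, not a number')

# Convert everything to int before doing math
numbers = [int(item) for item in mixed]
print(sum(numbers))  # 6
```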
#### My code doesn't work, even if it's correctly written?
Check the code once more. A comma in the wrong place, or a character in a variable's name with the wrong case, creates problems.
If the code _really_ doesn't work, even though it should, the reason might be found in the kernel. Try ***Restart & Clear Output*** from the Kernel menu in the top bar; this usually fixes it.
#### The dates in the data are confusing the program, how do I fix this?
As you're probably aware, different date formats are used all over the world. If the default settings don't make the data behave correctly, check the [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) of pandas.read_csv() for how to change the date settings: **dayfirst** or **date_parser** might solve the problem. There's also a Python module named **[time](https://docs.python.org/3/library/time.html)**, which can also help with these kinds of situations.
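The ambiguity is easy to demonstrate with the standard library alone: the same text parses to two different dates depending on the format string (in pandas, dayfirst=True picks the day-first interpretation):

```Python
from datetime import datetime

text = '03/04/2021'

us_style = datetime.strptime(text, '%m/%d/%Y')  # month first: March 4th
eu_style = datetime.strptime(text, '%d/%m/%Y')  # day first: April 3rd

print(us_style.date())  # 2021-03-04
print(eu_style.date())  # 2021-04-03
```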
#### I copied the data into a new variable, but the changes to it also change the original data?
Instead of saving the actual data into the new variable, Python copies a _pointer_ there. Pointers tell where the data is saved in memory. When creating a new variable like this
```Python
new_var = old_var
```
Python just copies the pointer, and the two variables are practically the same. However, if you only take part of the original data and save it into a new variable, this usually creates a copy of the actual data instead of a pointer. If you want the whole data in two places (if for example you want one variable to have the data multiplied and compare it to the original data; not sure why someone would want to do this, but you never know when working with humans), you should use the command .copy():
```Python
new_var = old_var.copy()
```
This copies the actual data into a new memory location, and changes to new_var won't affect old_var.
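The same behaviour is easy to see with a plain Python list; DataFrames behave analogously:

```Python
old_var = [1, 2, 3]
alias = old_var               # copies the pointer: two names, one list
alias.append(4)
print(old_var)                # [1, 2, 3, 4] -- the original changed too

independent = old_var.copy()  # copies the actual data
independent.append(5)
print(old_var)                # [1, 2, 3, 4] -- unchanged
print(independent)            # [1, 2, 3, 4, 5]
```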
| github_jupyter |
This notebook can be run on Google Colab.
[](https://colab.research.google.com/github/nanotheorygroup/kaldo/blob/master/docs/docsource/crystal_presentation.ipynb)
In Colab, you can enable the GPU acceleration from `Edit` > `Notebook Settings` > `Accelerator` > `GPU`.
# Silicon diamond tutorial
## Remote fetch source code from Github
```
! pip install git+https://github.com/nanotheorygroup/kaldo
```
## Remote fetch supplementary files
```
# Remote fetch kaldo resources from Dropbox
! wget https://www.dropbox.com/s/bvw0qcxy397g25q/kaldo_resources.zip?dl=0
! mv kaldo_resources.zip?dl=0 kaldo_resources.zip
! unzip kaldo_resources.zip
# Copy files to the workspace
! cp -r kaldo_resources/forcefields.zip ./
# Unzip files
!unzip forcefields.zip
# Clean workspace
! rm -r forcefields.zip
! rm -r structure_a_si_512.zip
! rm -r kaldo_resources.zip
```
## Remote fetch precompiled LAMMPS
```
# Fetch prebuilt LAMMPS binary files from Dropbox
! wget https://www.dropbox.com/s/cqqhh1ax5er0u8b/lmp-tesla-t4-intel-xeon.gz?dl=0
! mv lmp-tesla-t4-intel-xeon.gz?dl=0 lmp-tesla-t4-intel-xeon.gz
! tar xvzf lmp-tesla-t4-intel-xeon.gz
! rm /content/lmp-tesla-t4-intel-xeon.gz
# Navigate to the LAMMPS source folder for
# later LAMMPS-Python integration
%cd /content/lammps/src
print('\n')
print('Remote fetching of precompiled LAMMPS is finished!')
```
## Integrate LAMMPS Into Python
```
! make install-python
# Copy the shared library to where the Python module is located
import shutil
src_path = '/usr/lib/python3/dist-packages/liblammps.so'
dist_path = '/usr/local/lib/python3.7/dist-packages/liblammps.so'
shutil.copyfile(src_path,dist_path)
# Navigate back to the main folder before simulation
%cd /content
print('\n')
print('LAMMPS-Python Integration is completed!')
```
## Thermal transport simulation for silicon-bulk
### Compute $2^{nd}$ and $3^{rd}$ order IFCs.
```
from ase.build import bulk
from ase.calculators.lammpslib import LAMMPSlib
from kaldo.forceconstants import ForceConstants
import numpy as np
# We start from the atoms object
atoms = bulk('Si', 'diamond', a=5.432)
# Config super cell and calculator input
supercell = np.array([3, 3, 3])
lammps_inputs = {
'lmpcmds': [
'pair_style tersoff',
'pair_coeff * * forcefields/Si.tersoff Si'],
'log_file': 'lammps-si-bulk.txt',
'keep_alive':True}
# Create a finite difference object
forceconstants_config = {'atoms':atoms,'supercell': supercell,'folder':'fd'}
forceconstants = ForceConstants(**forceconstants_config)
# Compute 2nd and 3rd IFCs with the defined calculators
forceconstants.second.calculate(LAMMPSlib(**lammps_inputs), delta_shift=1e-4)
forceconstants.third.calculate(LAMMPSlib(**lammps_inputs), delta_shift=1e-4)
```
### Create phonons object
```
from kaldo.phonons import Phonons
# Define k-point grids, temperature
# and the assumption for the
# phonon population (i.e. classical vs. quantum)
k = 7
kpts = [k, k, k]
temperature = 300
is_classic = False
k_label = str(k) + '_' + str(k) + '_' + str(k)
# Create a phonon object
phonons = Phonons(forceconstants=forceconstants,
kpts=kpts,
is_classic=is_classic,
temperature=temperature,
folder='si-bulk-ald-' + k_label,
storage='numpy')
```
### Calculate conductivities for infinite-size sample
```
from kaldo.conductivity import Conductivity
# Calculate conductivity with direct inversion approach (inverse)
print('\n')
inv_cond_matrix = (Conductivity(phonons=phonons, method='inverse').conductivity.sum(axis=0))
print('Inverse conductivity (W/mK): %.3f'%(np.mean(np.diag(inv_cond_matrix))))
print(inv_cond_matrix)
# Calculate conductivity with relaxation time approximation (rta)
print('\n')
rta_cond_matrix = Conductivity(phonons=phonons, method='rta').conductivity.sum(axis=0)
print('Rta conductivity (W/mK): %.3f'%(np.mean(np.diag(rta_cond_matrix))))
print(rta_cond_matrix)
# Calculate conductivity with self-consistent approach (sc)
print('\n')
sc_cond_matrix = Conductivity(phonons=phonons, method='sc',n_iterations=20).conductivity.sum(axis=0)
print('Self-consistent conductivity (W/mK): %.3f'%(np.mean(np.diag(sc_cond_matrix))))
print(sc_cond_matrix)
```
### Visualize harmonic properties using built-in plotter
```
import kaldo.controllers.plotter as plotter
import matplotlib.pyplot as plt
plt.style.use('seaborn-poster')
# Plot dispersion relation and group velocity in each direction
plotter.plot_dispersion(phonons,n_k_points=int(k_label))
print('\n')
```
### Access and visualize properties calculated during simulations
```
# Direct access to properties
# calculated during the simulation
# Plot heat capacity vs frequency
freq_full = phonons.frequency.flatten(order='C')
cv_1d = phonons.heat_capacity.flatten(order='C')[3:]
print('\n')
plt.figure()
plt.scatter(freq_full[3:],1e23*cv_1d,s=15)
plt.ylabel (r"$C_{v}$ ($10^{23}$ J/K)", fontsize=25, fontweight='bold')
plt.xlabel ("$\\nu$ (Thz)", fontsize=25, fontweight='bold')
plt.ylim(0.9*1e23*cv_1d[cv_1d>0].min(),
1.05*1e23*cv_1d.max())
plt.show()
# Plot phonon bandwidth vs frequency
band_width_flatten = phonons.bandwidth.flatten(order='C')
freq = freq_full[band_width_flatten!=0]
print('\n')
plt.figure()
plt.scatter(freq, band_width_flatten[band_width_flatten!=0], s=15)
plt.ylabel (r"$\Gamma$ (Thz)", fontsize=25, fontweight='bold')
plt.xlabel ("$\\nu$ (Thz)", fontsize=25, fontweight='bold')
plt.ylim(0.95*band_width_flatten.min(), 1.05*band_width_flatten.max())
plt.show()
# Plot phase space vs frequency
print('\n')
plt.figure()
plt.scatter(freq_full[3:],phonons.phase_space.flatten(order='C')[3:],s=15)
plt.ylabel ("Phase space", fontsize=25, fontweight='bold')
plt.xlabel ("$\\nu$ (Thz)", fontsize=25, fontweight='bold')
plt.ylim(phonons.phase_space.min(), phonons.phase_space.max())
plt.show()
```
### Calculate and visualize $\kappa_{per \ mode}$ and $\kappa_{cum}$
```
def cumulative_cond_cal(freq,full_cond,n_phonons):
conductivity = np.einsum('maa->m', 1/3 * full_cond)
conductivity = conductivity.reshape(n_phonons)
cumulative_cond = np.zeros_like(conductivity)
freq_reshaped = freq.reshape(n_phonons)
for mu in range(cumulative_cond.size):
single_cumulative_cond = conductivity[(freq_reshaped < freq_reshaped[mu])].sum()
cumulative_cond[mu] = single_cumulative_cond
return cumulative_cond
# Compute conductivity with per phonon mode basis using different methods
kappa_rta_per_mode = np.einsum('maa->m',1/3*Conductivity(phonons=phonons, method='rta').conductivity)
kappa_inv_per_mode = np.einsum('maa->m',1/3*Conductivity(phonons=phonons, method='inverse').conductivity)
kappa_sc_per_mode = np.einsum('maa->m',1/3*Conductivity(phonons=phonons, method='sc',n_iterations=20).conductivity)
# Compute cumulative conductivity by frequency using different methods
kappa_rta_cum_freq = cumulative_cond_cal(phonons.frequency,Conductivity(phonons=phonons, method='rta').conductivity,phonons.n_phonons)
kappa_sc_cum_freq = cumulative_cond_cal(phonons.frequency,Conductivity(phonons=phonons, method='sc',n_iterations=20).conductivity,phonons.n_phonons)
kappa_inv_cum_freq = cumulative_cond_cal(phonons.frequency,Conductivity(phonons=phonons, method='inverse').conductivity,phonons.n_phonons)
kappa_qhgk_cum_freq = cumulative_cond_cal(phonons.frequency,Conductivity(phonons=phonons, method='qhgk').conductivity,phonons.n_phonons)
print('\n')
# Visualize the per-mode conductivity vs frequency
plt.figure()
plt.plot(freq_full,kappa_rta_per_mode,'r.',label='RTA')
plt.plot(freq_full,kappa_sc_per_mode,'mo',label='Self Consistent',ms=8)
plt.plot(freq_full,kappa_inv_per_mode,'k.',label='Direct Inversion')
plt.xlabel ("$\\nu$ (Thz)", fontsize=25, fontweight='bold')
plt.ylabel(r'$\kappa(W/m/K)$', fontsize=25, fontweight='bold')
plt.legend(loc=1,frameon=False)
#plt.grid()
plt.show()
print('\n')
# Visualize the cumulative conductivity vs frequency
plt.figure()
plt.plot(freq_full,kappa_rta_cum_freq,'r.',label='RTA')
plt.plot(freq_full,kappa_sc_cum_freq,'mo',label='Self Consistent',ms=8)
plt.plot(freq_full,kappa_inv_cum_freq,'k.',label='Direct Inversion')
plt.xlabel ("$\\nu$ (Thz)", fontsize=25, fontweight='bold')
plt.ylabel(r'$\kappa_{cum}(W/m/K)$', fontsize=25, fontweight='bold')
plt.legend(loc=4,frameon=False)
plt.grid()
plt.show()
```
| github_jupyter |
# Sequence classification by multi-layered LSTM with dropout
* Creating the **data pipeline** with `tf.data`
* Preprocessing word sequences (variable input sequence length) using `tf.keras.preprocessing`
* Using `tf.nn.embedding_lookup` for getting vectors of tokens (e.g. word, character)
* Creating the model as **Class**
* Reference
* https://github.com/golbin/TensorFlow-Tutorials/blob/master/10%20-%20RNN/02%20-%20Autocomplete.py
* https://github.com/aisolab/TF_code_examples_for_Deep_learning/blob/master/Tutorial%20of%20implementing%20Sequence%20classification%20with%20RNN%20series.ipynb
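Before the TensorFlow version used later in this notebook, here is a minimal plain-Python sketch (toy sizes, made up for illustration) of what a one-hot embedding lookup does: each token id simply selects one row of the embedding matrix.

```
num_chars = 4      # toy vocabulary size

# Identity matrix as a non-trainable one-hot embedding,
# mirroring the tf.eye + tf.nn.embedding_lookup pattern
one_hot_matrix = [[1.0 if i == j else 0.0 for j in range(num_chars)]
                  for i in range(num_chars)]

token_ids = [2, 0, 3]                           # a toy token sequence
embedded = [one_hot_matrix[t] for t in token_ids]
print(embedded[0])  # [0.0, 0.0, 1.0, 0.0]
```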
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import time
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import clear_output
import tensorflow as tf
slim = tf.contrib.slim
rnn = tf.contrib.rnn
tf.logging.set_verbosity(tf.logging.INFO)
sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))
os.environ["CUDA_VISIBLE_DEVICES"]="0"
from tensorflow.python.keras.preprocessing.text import Tokenizer
from tensorflow.python.keras.preprocessing.sequence import pad_sequences
```
## Prepare example data
```
x_train_words = ['good', 'bad', 'amazing', 'so good', 'bull shit',
'awesome', 'how dare', 'very much', 'nice', 'god damn it',
'very very very happy', 'what the fuck']
y_train = np.array([0, 1, 0, 0, 1,
0, 1, 0, 0, 1,
0, 1], dtype=np.int32)
# positive sample
index = 0
print("word: {}\nlabel: {}".format(x_train_words[index], y_train[index]))
# negative sample
index = 1
print("word: {}\nlabel: {}".format(x_train_words[index], y_train[index]))
```
## Tokenizer
```
tokenizer = Tokenizer(char_level=True)
%%time
tokenizer.fit_on_texts(x_train_words)
num_chars = len(tokenizer.word_index) + 1
print("number of characters: {}".format(num_chars))
tokenizer.word_index
x_train_tokens = tokenizer.texts_to_sequences(x_train_words)
index = 2
print("text: {}".format(x_train_words[index]))
print("token: {}".format(x_train_tokens[index]))
x_train_seq_length = np.array([len(tokens) for tokens in x_train_tokens], dtype=np.int32)
num_seq_length = x_train_seq_length
max_seq_length = np.max(num_seq_length)
print(max_seq_length)
```
### Create pad_seq data
```
#pad = 'pre'
pad = 'post'
x_train_pad = pad_sequences(sequences=x_train_tokens, maxlen=max_seq_length,
padding=pad, truncating=pad)
index = 7
print("text: {}\n".format(x_train_words[index]))
print("token: {}\n".format(x_train_tokens[index]))
print("pad: {}".format(x_train_pad[index]))
```
### Tokenizer Inverse Map
```
idx = tokenizer.word_index
inverse_map = dict(zip(idx.values(), idx.keys()))
print(inverse_map)
def tokens_to_string(tokens):
# Map from tokens back to words.
words = [inverse_map[token] for token in tokens if token != 0]
# Concatenate all words.
text = "".join(words)
return text
index = 10
print("original text:\n{}\n".format(x_train_words[index]))
print("tokens to string:\n{}".format(tokens_to_string(x_train_tokens[index])))
```
## Create the Recurrent Neural Network
We are now ready to create the Recurrent Neural Network (RNN). We will use the TensorFlow API.
```
# Set the hyperparameter set
batch_size = 4
max_epochs = 50
#embedding_size = 8
num_units = [32, 16] # the number of nodes in RNN hidden layer
num_classes = 2 # Two classes [True, False]
initializer_scale = 0.1
learning_rate = 1e-3
```
### Set up dataset with `tf.data`
#### create input pipeline with `tf.data.Dataset`
```
## create data pipeline with tf.data
train_dataset = tf.data.Dataset.from_tensor_slices((x_train_pad, x_train_seq_length, y_train))
train_dataset = train_dataset.shuffle(buffer_size = 100)
train_dataset = train_dataset.repeat(max_epochs)
train_dataset = train_dataset.batch(batch_size = batch_size)
print(train_dataset)
```
#### Define Iterator
```
handle = tf.placeholder(tf.string, shape=[])
iterator = tf.data.Iterator.from_string_handle(handle,
train_dataset.output_types,
train_dataset.output_shapes)
seq_pad, seq_length, labels = iterator.get_next()
```
### Define CharRNN class
```
class CharRNN:
def __init__(self, num_chars,
seq_pad, seq_length, labels,
num_units=num_units, num_classes=num_classes):
self.num_chars = num_chars
self.seq_pad = seq_pad
self.seq_length = seq_length
self.labels = labels
self.num_units = num_units
self.num_classes = num_classes
self.keep_prob = tf.placeholder(tf.float32)
def build_embeddings(self):
with tf.variable_scope('embedding_layer'):
one_hot = tf.eye(self.num_chars, dtype=tf.float32)
one_hot_matrix = tf.get_variable(name='one_hot_embedding',
initializer=one_hot,
trainable=False) # embedding matrix: No training
self.embeddings = tf.nn.embedding_lookup(params=one_hot_matrix, ids=self.seq_pad)
def build_layers(self):
# MultiLayered LSTM cell with dropout
with tf.variable_scope('multi_lstm_cell_dropout'):
multi_cells = []
for num_units in self.num_units:
cell = rnn.BasicLSTMCell(num_units=num_units, state_is_tuple=True)
cell = rnn.DropoutWrapper(cell=cell, output_keep_prob=self.keep_prob)
multi_cells.append(cell)
multi_cells = rnn.MultiRNNCell(cells=multi_cells, state_is_tuple=True)
_, self.states = tf.nn.dynamic_rnn(cell=multi_cells, inputs=self.embeddings,
sequence_length=self.seq_length,
dtype=tf.float32)
def build_outputs(self):
logits = slim.fully_connected(inputs=self.states[-1].h,
num_outputs=self.num_classes,
activation_fn=None,
scope='logits')
return logits
def bce_loss(self):
one_hot_labels = tf.one_hot(self.labels, depth=self.num_classes)
loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=one_hot_labels,
logits=self.logits,
scope='binary_cross_entropy')
return loss
def predict(self):
with tf.variable_scope('predictions'):
predictions = tf.argmax(input=self.logits, axis=-1, output_type=tf.int32)
return predictions
def build(self):
self.global_step = tf.train.get_or_create_global_step()
self.build_embeddings()
self.build_layers()
self.logits = self.build_outputs()
self.loss = self.bce_loss()
self.predictions = self.predict()
print("complete model build.")
model = CharRNN(num_chars=num_chars,
seq_pad=seq_pad,
seq_length=seq_length,
labels=labels,
num_units=num_units,
num_classes=num_classes)
model.build()
# show info for trainable variables
t_vars = tf.trainable_variables()
slim.model_analyzer.analyze_vars(t_vars, print_info=True)
```
### Create training op
```
# create training op
optimizer = tf.train.AdamOptimizer(learning_rate)
train_op = optimizer.minimize(model.loss, global_step=model.global_step)
```
### `tf.Session()` and train
```
train_dir = './train/seq_classification.multilstm.dropout/exp1'
if not tf.gfile.Exists(train_dir):
print("mkdir: {}".format(train_dir))
tf.gfile.MakeDirs(train_dir)
else:
print("already exist!")
saver = tf.train.Saver(tf.global_variables(), max_to_keep=1000)
tf.logging.info('Start Session.')
sess = tf.Session(config=sess_config)
sess.run(tf.global_variables_initializer())
train_iterator = train_dataset.make_one_shot_iterator()
train_handle = sess.run(train_iterator.string_handle())
tf.logging.info('Start train.')
# save loss values for plot
loss_history = []
pre_epochs = 0
while True:
try:
start_time = time.time()
_, global_step, loss = sess.run([train_op,
model.global_step,
model.loss],
feed_dict={handle: train_handle,
model.keep_prob: 0.5})
epochs = global_step * batch_size / float(len(x_train_words))
duration = time.time() - start_time
print_steps = 1
if global_step % print_steps == 0:
clear_output(wait=True)
examples_per_sec = batch_size / float(duration)
print("Epochs: {:.3f} global_step: {} loss: {:.3f} ({:.2f} examples/sec; {:.3f} sec/batch)".format(
epochs, global_step, loss, examples_per_sec, duration))
loss_history.append([epochs, loss])
# save model checkpoint periodically
save_epochs = 10
if int(epochs) % save_epochs == 0 and pre_epochs != int(epochs):
tf.logging.info('Saving model with global step {} (= {} epochs) to disk.'.format(global_step, int(epochs)))
saver.save(sess, train_dir + 'model.ckpt', global_step=global_step)
pre_epochs = int(epochs)
except tf.errors.OutOfRangeError:
print("End of dataset") # ==> "End of dataset"
tf.logging.info('Saving model with global step {} (= {} epochs) to disk.'.format(global_step, int(epochs)))
saver.save(sess, train_dir + 'model.ckpt', global_step=global_step)
break
tf.logging.info('complete training...')
```
### Plot the loss
```
loss_history = np.array(loss_history)
plt.plot(loss_history[:,0], loss_history[:,1], label='train')
```
### Train accuracy and prediction
```
train_dataset_eval = tf.data.Dataset.from_tensor_slices((x_train_pad, x_train_seq_length, y_train))
train_dataset_eval = train_dataset_eval.batch(batch_size = len(x_train_pad))
train_iterator_eval = train_dataset_eval.make_initializable_iterator()
train_handle_eval = sess.run(train_iterator_eval.string_handle())
sess.run(train_iterator_eval.initializer)
accuracy, acc_op = tf.metrics.accuracy(labels=labels, predictions=model.predictions, name='accuracy')
sess.run(tf.local_variables_initializer())
sess.run(acc_op, feed_dict={handle: train_handle_eval, model.keep_prob: 1.0})
print("test accuracy:", sess.run(accuracy))
sess.run(train_iterator_eval.initializer)
x_test_pad, y_pred = sess.run([model.seq_pad, model.predictions],
feed_dict={handle: train_handle_eval,
model.keep_prob: 1.0})
for x, y in zip(x_test_pad, y_pred):
if y == 0:
print("{} : good".format(tokens_to_string(x)))
else:
print("{} : bad".format(tokens_to_string(x)))
```
| github_jupyter |
<img src="../../../images/qiskit_header.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" align="middle">
## Quantum Volume
---
* **Last Updated:** August 6, 2019
* **Requires:** qiskit-terra 0.8, qiskit-ignis 0.1.1, qiskit-aer 0.2
## Introduction
**Quantum Volume (QV)** is a method to verify device performance and a metric to quantify the computational power of a quantum device. The method is based on the paper "Validating quantum computers using randomized model circuits" (https://arxiv.org/abs/1811.12926).
This notebook gives an example of how to use the ``ignis.verification.quantum_volume`` module. This particular example shows how to run quantum volume circuits up to depth 6, executing them on the noisy Aer simulator.
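In rough terms (a simplified sketch, not the ignis implementation), the quantum volume is $2^n$ for the largest width/depth $n$ whose model circuits pass the success criterion:

```
def quantum_volume(success_by_width):
    # success_by_width: {width/depth n: whether the criterion passed}
    passed = [n for n, ok in success_by_width.items() if ok]
    return 2 ** max(passed) if passed else 1

# Hypothetical results: depths 3 and 4 succeed, depth 5 fails
print(quantum_volume({3: True, 4: True, 5: False}))  # 16
```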
```
#Import general libraries (needed for functions)
import numpy as np
import matplotlib.pyplot as plt
from IPython import display
#Import Qiskit classes
import qiskit
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error, thermal_relaxation_error
#Import the qv function.
import qiskit.ignis.verification.quantum_volume as qv
```
## Select the Parameters of the QV Run
In this example we have 6 qubits Q0,Q1,Q3,Q5,Q7,Q10. We are going to look at subsets up to the full set.
```
#Qubit list
qubit_lists = [[0,1,3],[0,1,3,5],[0,1,3,5,7],[0,1,3,5,7,10]]
ntrials = 50
```
## Generate QV sequences
We generate the quantum volume sequences. We start with a small example (so it doesn't take too long to run).
```
qv_circs, qv_circs_nomeas = qv.qv_circuits(qubit_lists, ntrials)
#pass the first trial of the nomeas through the transpiler to illustrate the circuit
qv_circs_nomeas[0] = qiskit.compiler.transpile(qv_circs_nomeas[0], basis_gates=['u1','u2','u3','cx'])
```
As an example, we print the circuit corresponding to the first QV sequence. Note that the ideal circuits are run on the first n qubits (where n is the number of qubits in the subset).
```
print(qv_circs_nomeas[0][0])
```
## Simulate the ideal circuits
The quantum volume method requires that we know the ideal output for each circuit, so use the statevector simulator in Aer to get the ideal result.
```
#The Unitary is an identity (with a global phase)
backend = qiskit.Aer.get_backend('statevector_simulator')
ideal_results = []
for trial in range(ntrials):
print('Simulating trial %d'%trial)
ideal_results.append(qiskit.execute(qv_circs_nomeas[trial], backend=backend, optimization_level=0).result())
```
Next, load the ideal results into a quantum volume fitter:
```
qv_fitter = qv.QVFitter(qubit_lists=qubit_lists)
qv_fitter.add_statevectors(ideal_results)
```
## Define the noise model
We define a noise model for the simulator. To simulate decay, we add depolarizing error probabilities to the CNOT and U gates.
```
noise_model = NoiseModel()
p1Q = 0.002
p2Q = 0.02
noise_model.add_all_qubit_quantum_error(depolarizing_error(p1Q, 1), 'u2')
noise_model.add_all_qubit_quantum_error(depolarizing_error(2*p1Q, 1), 'u3')
noise_model.add_all_qubit_quantum_error(depolarizing_error(p2Q, 2), 'cx')
#noise_model = None
```
## Execute on Aer simulator
We can execute the QV sequences either using a Qiskit Aer Simulator (with some noise model) or using an IBMQ provider,
and obtain a list of results, `result_list`.
```
backend = qiskit.Aer.get_backend('qasm_simulator')
basis_gates = ['u1','u2','u3','cx'] # use U,CX for now
shots = 1024
exp_results = []
for trial in range(ntrials):
print('Running trial %d'%trial)
exp_results.append(qiskit.execute(qv_circs[trial], basis_gates=basis_gates, backend=backend, noise_model=noise_model, backend_options={'max_parallel_experiments': 0}).result())
```
Load the experimental data into the fitter. The data will keep accumulating if this is re-run (unless the fitter is re-instantiated).
```
qv_fitter.add_data(exp_results)
plt.figure(figsize=(10, 6))
ax = plt.gca()
# Plot the QV data by calling plot_qv_data
qv_fitter.plot_qv_data(ax=ax, show_plt=False)
# Add title and label
ax.set_title('Quantum Volume for up to %d Qubits \n and %d Trials'%(len(qubit_lists[-1]), ntrials), fontsize=18)
plt.show()
```
## Quantum Volume
List statistics for each depth. For each depth list if the depth was successful or not and with what confidence interval. For a depth to be successful the confidence interval must be > 97.5%.
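As a simplified sketch of that criterion (not ignis's exact implementation), the mean heavy-output probability across trials must exceed 2/3, and a normal approximation to the one-sided confidence that the true mean lies above 2/3 must exceed 0.975:

```
import math

def qv_confidence(heavy_output_probs):
    """Simplified sketch: one-sided confidence that the mean
    heavy-output probability exceeds the 2/3 threshold."""
    n = len(heavy_output_probs)
    mean = sum(heavy_output_probs) / n
    # standard error of the mean across trials
    var = sum((p - mean) ** 2 for p in heavy_output_probs) / n
    sigma = math.sqrt(var / n)
    z = (mean - 2.0 / 3.0) / sigma
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical per-trial heavy-output probabilities
probs = [0.72, 0.70, 0.74, 0.69, 0.71]
print('confidence: %.3f' % qv_confidence(probs))
```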
```
qv_success_list = qv_fitter.qv_success()
qv_list = qv_fitter.ydata
for qidx, qubit_list in enumerate(qubit_lists):
if qv_list[0][qidx]>2/3:
if qv_success_list[qidx][0]:
print("Width/depth %d greater than 2/3 (%f) with confidence %f (successful). Quantum volume %d"%
(len(qubit_list),qv_list[0][qidx],qv_success_list[qidx][1],qv_fitter.quantum_volume()[qidx]))
else:
print("Width/depth %d greater than 2/3 (%f) with confidence %f (unsuccessful)."%
(len(qubit_list),qv_list[0][qidx],qv_success_list[qidx][1]))
else:
print("Width/depth %d less than 2/3 (unsuccessful)."%len(qubit_list))
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
| github_jupyter |
<a href="https://colab.research.google.com/github/isb-cgc/Community-Notebooks/blob/Staging-Notebooks/Notebooks/Quick_Start_Guide_to_ISB_CGC.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# ISB-CGC Community Notebooks
Check out more notebooks at our [Community Notebooks Repository](https://github.com/isb-cgc/Community-Notebooks)!
```
Title: Quick Start Guide to ISB-CGC
Author: Lauren Hagen
Created: 2019-06-20
Updated: 2021-07-27
Purpose: Painless intro to working in the cloud
URL: https://github.com/isb-cgc/Community-Notebooks/blob/master/Notebooks/Quick_Start_Guide_to_ISB_CGC.ipynb
Notes:
```
***
# Quick Start Guide to ISB-CGC
[ISB-CGC](https://isb-cgc.appspot.com/)
This Quick Start Guide gives an overview of the data available, account set-up overview, and getting started with a basic example in python. If you have read the R version, you can skip to the Example section.
## Access Requirements
* Google Account to access ISB-CGC
* [Google Cloud Account](https://console.cloud.google.com)
## Access Suggestions
* Favored Programming Language (R or Python)
* Favored IDE (RStudio or Jupyter)
* Some knowledge of SQL
## Outline for this Notebook
* Libraries Needed for this Notebook
* Overview of ISB-CGC
* Overview How to Access Data
* Example of Accessing Data with Python
* Where to go next
## Libraries needed for the Notebook
This notebook requires the BigQuery API to be loaded [(click here for more information)](https://googleapis.github.io/google-cloud-python/latest/bigquery/usage/client.html). This library will allow you to access BigQuery programmatically.
```
# Load BigQuery API
from google.cloud import bigquery
```
## Overview of ISB-CGC
The ISB-CGC provides interactive and programmatic access to data hosted by institutes such as the [Genomic Data Commons (GDC)](https://gdc.cancer.gov/) and [Proteomic Data Commons (PDC)](https://proteomic.datacommons.cancer.gov/pdc/) from the [National Cancer Institute (NCI)](https://www.cancer.gov/) while leveraging many aspects of the Google Cloud Platform. You can also import your data, analyze it side by side with the datasets, and share your data when you see fit.
### About the ISB-CGC Data in the Cloud
ISB-CGC hosts carefully curated, high-level clinical, biospecimen, and molecular datasets and tables in Google BigQuery, including data from programs such as [The Cancer Genome Atlas (TCGA)](https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga), [Therapeutically Applicable Research to Generate Effective Treatments (TARGET)](https://ocg.cancer.gov/programs/target), and [Clinical Proteomic Tumor Analysis Consortium (CPTAC)](https://proteomics.cancer.gov/programs/cptac). For more information about hosted data, please visit: [Programs and DataSets](https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/Hosted-Data.html)
## Overview of How to Access Data
There are several ways to access and explore the data hosted by ISB-CGC. Though in this notebook, we will cover using Python and SQL to access the data.
* [ISB-CGC WebApp](https://isb-cgc.appspot.com/)
* Provides a graphical interface to file and case data
* Easy cohort creation
* Doesn't require knowledge of programming languages
* [ISB-CGC BigQuery Table Search](https://isb-cgc.appspot.com/bq_meta_search/)
* Provides a table search for available ISB-CGC BigQuery Tables
* [ISB-CGC APIs](https://api-dot-isb-cgc.appspot.com/v4/swagger/)
* Provides programmatic access to metadata
* [Google Cloud Platform](https://cloud.google.com/)
* Access and store data in [Google Cloud Storage](https://cloud.google.com/storage) and [BigQuery](https://cloud.google.com/bigquery) via User Interfaces or programmatically
* Suggested Programming Languages and Programs to use
* SQL
* Can be used directly in [BigQuery Console](https://console.cloud.google.com/bigquery)
* Or via API in Python or R
* [Python](https://www.python.org/)
* [gsutil tool](https://cloud.google.com/storage/docs/gsutil)
* [Jupyter Notebooks](https://jupyter.org/)
    * [Google Colaboratory](https://colab.research.google.com/)
* [Cloud Datalab](https://cloud.google.com/datalab/)
* [R](https://www.r-project.org/)
* [RStudio](https://rstudio.com/)
* [RStudio.Cloud](https://rstudio.cloud/)
* Command Line Interfaces
* Cloud Shell via Project Console
* [CLOUD SDK](https://cloud.google.com/sdk/)
### Account Set-up
To run this notebook, you will need to have your Google Cloud Account set up. If you need to set up a Google Cloud Account, follow the "Obtain a Google identity" and "Set up a Google Cloud Project" steps on our [Quick-Start Guide documentation](https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/HowToGetStartedonISB-CGC.html) page.
### ISB-CGC Web Interface
The [ISB-CGC Web Interface](https://isb-cgc.appspot.com/) is an interactive web-based application for accessing and exploring the rich TCGA, TARGET, and CCLE datasets, with more datasets added regularly. Through the WebApp, you can create Cohorts and lists of favorite Genes, miRNAs, and Variables. Cohorts and Variables can be used in Workbooks, allowing you to quickly analyze and export datasets by mixing and matching these selections.
### Google Cloud Platform and BigQuery Overview
The [Google Cloud Platform Console](https://console.cloud.google.com/) is the web-based interface to your GCP Project. From the Console, you can check the overall status of your project, create and delete Cloud Storage buckets, upload and download files, spin up and shut down VMs, add members to your project, access the [Cloud Shell command line](https://cloud.google.com/shell/docs/), etc. Remember that any costs you incur are charged to your *current* project, so make sure you are on the correct one if you belong to multiple projects.
ISB-CGC has uploaded multiple cancer genomic and proteomic datasets into open-access BigQuery tables, including TCGA and TARGET clinical, biospecimen, and molecular data, along with case and file data. This data can be accessed from the Google Cloud Platform Console user interface (UI), programmatically with R and Python, or explored with our [BigQuery Table Search tool](https://isb-cgc.appspot.com/bq_meta_search/).
## Example of Accessing BigQuery Data with Python
### Log in to Google Cloud and authenticate yourself
1. Authenticate yourself with your Google Cloud Login
2. A second tab will open or follow the link provided
3. Follow prompts to Authorize your account to use Google Cloud SDK
4. Copy the code provided and paste it into the box under the command
5. Press Enter
[Alternative authentication methods](https://googleapis.github.io/google-cloud-python/latest/core/auth.html)
```
!gcloud auth application-default login
```
### View ISB-CGC Datasets and Tables in BigQuery
Let us look at the datasets available through ISB-CGC that are in BigQuery.
```
# Create a client to access the data within BigQuery
from google.cloud import bigquery

# Note: you cannot use the project below as a billing project;
# it can only be used to view the tables and table schema
client = bigquery.Client('isb-cgc-bq')

# Create a variable of datasets
datasets = list(client.list_datasets())

# Create a variable for the name of the project
project = client.project

# If there are datasets available then print their names,
# else print that there are no datasets available
if datasets:
    print("Datasets in project {}:".format(project))
    for dataset in datasets:  # API request(s)
        print("\t{}".format(dataset.dataset_id))
else:
    print("{} project does not contain any datasets.".format(project))
```
The ISB-CGC has two datasets for each Program. One dataset contains the most current data, and the other contains versioned tables, which serve as an archive for reproducibility. The current tables are labeled with "_current" and are updated when new data is released. For more information, visit our [ISB-CGC BigQuery Projects](https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/BigQuery/ISBCGC-BQ-Projects.html) page.
Now, let us see which tables are under the TCGA dataset.
```
print("Tables:")

# Create a variable with the list of tables in the dataset
tables = list(client.list_tables('isb-cgc-bq.TCGA'))

# If there are tables then print their names,
# else print that there are no tables
if tables:
    for table in tables:
        print("\t{}".format(table.table_id))
else:
    print("\tThis dataset does not contain any tables.")
```
### Query ISB-CGC BigQuery Tables
First, we use a magic command to call BigQuery; then we write the query in Standard SQL. Click [here](https://googleapis.github.io/google-cloud-python/latest/bigquery/magics.html) for more on IPython Magic Commands for BigQuery. The result will be a [Pandas DataFrame](https://pandas.pydata.org/).
> Note: you will need to update PROJECT_ID in the next cell to your Google Cloud Project ID.
```
%%bigquery --project PROJECT_ID
# A cell magic must be on the first line of the cell;
# replace PROJECT_ID above with your own Google Cloud project ID
SELECT  # Select a few columns to view
    proj__project_id,  # GDC project
    submitter_id,      # case barcode
    proj__name         # GDC project name
FROM  # From the GDC TCGA Clinical Dataset
    `isb-cgc-bq.TCGA.clinical_gdc_current`
LIMIT  # Limit to 5 rows; the dataset is large and we only want a few results
    5

# To list the table's columns instead, you could run:
# SELECT *
# FROM `project_name.dataset_name.INFORMATION_SCHEMA.COLUMNS`
# LIMIT 5
```
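If you prefer plain Python over the cell magic, the same query can be built as a string and submitted with the client library. This is a sketch: `your-project-id` is a placeholder for your own billing project, and the query call itself is left commented out since it needs authenticated access.

```python
# The same query as a plain string
query = """
SELECT
  proj__project_id,  -- GDC project
  submitter_id,      -- case barcode
  proj__name         -- GDC project name
FROM
  `isb-cgc-bq.TCGA.clinical_gdc_current`
LIMIT 5
"""

# To run it (requires the google-cloud-bigquery package and a billing project):
# from google.cloud import bigquery
# client = bigquery.Client(project="your-project-id")  # placeholder project ID
# df = client.query(query).to_dataframe()
```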
Now that wasn't so difficult! Have fun exploring and analyzing the ISB-CGC Data!
## Where to Go Next
Access, Explore and Analyze Large-Scale Cancer Data Through the Google Cloud! :)
Getting Started for Free:
* [Free Cloud Credits from ISB-CGC for Cancer Research](https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/HowtoRequestCloudCredits.html)
* [Google Free Tier with up to 1TB of free queries a month](https://cloud.google.com/free)
ISB-CGC Links:
* [ISB-CGC Landing Page](https://isb-cgc.appspot.com/)
* [ISB-CGC Documentation](https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/)
* [How to Get Started on ISB-CGC](https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/HowToGetStartedonISB-CGC.html)
* [How to access Google BigQuery](https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/progapi/bigqueryGUI/HowToAccessBigQueryFromTheGoogleCloudPlatform.html)
* [Community Notebook Repository](https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/HowTos.html)
* [Query of the Month](https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/QueryOfTheMonthClub.html)
Google Tutorials:
* [Google's What is BigQuery?](https://cloud.google.com/bigquery/docs/introduction)
* [Google Cloud Client Library for Python](https://googleapis.github.io/google-cloud-python/latest/index.html)

# Callysto’s Weekly Data Visualization
## Twitch and Game Popularity
### Recommended Grade levels: 7-12

<br>
### Instructions
#### “Run” the cells to see the graphs
Click “Cell” and select “Run All”.<br> This will import the data and run all the code, so you can see this week's data visualization. Scroll to the top after you’ve run the cells.<br>

**You don’t need to do any coding to view the visualizations**.
The plots generated in this notebook are interactive. You can hover over and click on elements to see more information.
Email contact@callysto.ca if you experience issues.
### About this Notebook
Callysto's Weekly Data Visualization is a learning resource that aims to develop data literacy skills. We provide Grades 5-12 teachers and students with a data visualization, like a graph, to interpret. This companion resource walks learners through how the data visualization is created and interpreted by a data scientist.
The steps of the data analysis process are listed below and applied to each weekly topic.
1. Question - What are we trying to answer?
2. Gather - Find the data source(s) you will need.
3. Organize - Arrange the data, so that you can easily explore it.
4. Explore - Examine the data to look for evidence to answer the question. This includes creating visualizations.
5. Interpret - Describe what's happening in the data visualization.
6. Communicate - Explain how the evidence answers the question.
# Question
**Twitch** is an online streaming service that lets viewers anywhere in the world watch their favourite gamers play video games, live. It is extremely popular, with viewers spending hundreds of hours watching someone else play a game.
Here is a screen shot from the Twitch website, showing a game in play as well as other channels to look at.

Why is Twitch so popular? I do not know!
Have you ever wondered which games on Twitch are most popular, and just how much watching is going on?
### Goal
Our goal is to show an overview of which games are most popular, based on the number of hours watched and the peak number of viewers.
We will use pie charts and bar graphs to visually represent this data in an informative way.
# Gather
### Code:
The code below will import the Python programming libraries we need to gather and organize the data to answer our question.
```
## import libraries
import pandas as pd
import plotly.express as px
```
### Data:
There are many sources for information about Twitch and its usage statistics. We used a site called Twitch Analytics, hosted by SullyGnome. The website is here: https://sullygnome.com/
This web page has several options for downloading information. They asked us not to "scrape" data from the site, so we had a choice of downloading their files in csv format (Comma Separated Values), or copying and pasting directly from the web page into our own spreadsheets and saving as csv. Here are our four files for this project:
- `watch-time-30.csv`: the amount of time (hours) watched, per game, in the last 30 days
- `watch-time-365.csv`: ditto, but over the last 365 days
- `peak-viewers-30.csv`: the peak number of viewers, per game, in the last 30 days
- `peak-viewers-365.csv`: ditto, but over the last 365 days
We did discover that copying and pasting gave us better data. We suspect the CSV files directly downloaded from the website are flawed. (We sent a message to the author of the sullygnome.com web page.) Our data was downloaded on February 28, 2021.
We then uploaded the .csv files to our Jupyter hub, where we can access them with our code. These files are all available when you access this code on the Callysto hub.
### Import the data
```
## import data, from csv into a data frame (df)
time30_df = pd.read_csv('data/watch-time-30.csv')
time365_df = pd.read_csv('data/watch-time-365.csv')
viewers30_df = pd.read_csv('data/peak-viewers-30.csv')
viewers365_df = pd.read_csv('data/peak-viewers-365.csv')
```
### Comment on the data
We can check the size of each data frame by using the "shape" command. This will tell us how many rows and columns are in each data frame.
```
time30_df.shape, time365_df.shape, viewers30_df.shape, viewers365_df.shape
```
From this "shape" inquiry, we see each data frame has 49 or 50 rows and 14 columns.
We can display the first few rows of each data frame using the "head" command, as in the following code:
```
from IPython.display import display
# In a notebook, only the last expression in a cell is shown automatically,
# so we use display() to show all four data frames
display(time30_df.head())
display(time365_df.head())
display(viewers30_df.head())
display(viewers365_df.head())
```
# Organize
The code below will arrange the data cleanly so that we can do analysis on it. This is a quality control step for our data and involves examining the data to detect anything odd with the data (e.g. structure, missing values), fixing the oddities, and checking if the fixes worked.
One thing we notice is that **Just Chatting** shows up as the top item in each data frame. However, this is not really a game but rather a channel that viewers go to in order to chat, not play games. Since it is not a game, we will remove it from the data frame. Similarly, **Special Events** shows up in the "viewers30" and "viewers365" data frames, so we will remove that one as well.
```
# data cleaning
time30_df = time30_df.drop(index=0)  ## drop row 0, which is Just Chatting
time365_df = time365_df.drop(index=0)
viewers30_df = viewers30_df.drop(index=[0, 1])  ## drop rows 0 and 1: Just Chatting and Special Events
viewers365_df = viewers365_df.drop(index=[0, 1])
```
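Dropping rows by positional index assumes the channels we want to remove always appear first. As a more defensive alternative (shown here on a toy data frame, not the actual Twitch files), we could filter by the 'Game' column instead:

```python
import pandas as pd

# Toy stand-in for one of the Twitch data frames
df = pd.DataFrame({'Game': ['Just Chatting', 'Special Events', 'Minecraft', 'Fortnite'],
                   'Watch time': [100, 90, 80, 70]})

# Keep only rows whose 'Game' is not one of the non-game channels
non_games = ['Just Chatting', 'Special Events']
df = df[~df['Game'].isin(non_games)].reset_index(drop=True)
print(df['Game'].tolist())  # ['Minecraft', 'Fortnite']
```

This way the cleaning still works even if the non-game channels move to a different row.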
We also need to convert the columns 'Watch time' and 'Peak viewers' to numbers rather than text. We do this in Python by replacing the text 'hours' with a blank, replacing the commas with blanks, and then converting the text to an integer. We do this for both columns in all four data frames.
```
# convert strings to numbers
time30_df['Watch time'] = time30_df['Watch time'].str.replace(',', '').str.replace('hours','').astype(int)
time365_df['Watch time'] = time365_df['Watch time'].str.replace(',', '').str.replace('hours','').astype(int)
viewers30_df['Watch time'] = viewers30_df['Watch time'].str.replace(',', '').str.replace('hours','').astype(int)
viewers365_df['Watch time'] = viewers365_df['Watch time'].str.replace(',', '').str.replace('hours','').astype(int)
time30_df['Peak viewers'] = time30_df['Peak viewers'].str.replace(',', '').str.replace('hours','').astype(int)
time365_df['Peak viewers'] = time365_df['Peak viewers'].str.replace(',', '').str.replace('hours','').astype(int)
viewers30_df['Peak viewers'] = viewers30_df['Peak viewers'].str.replace(',', '').str.replace('hours','').astype(int)
viewers365_df['Peak viewers'] = viewers365_df['Peak viewers'].str.replace(',', '').str.replace('hours','').astype(int)
```
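To see what this conversion does, here is a minimal, self-contained example on a toy column (not the real data):

```python
import pandas as pd

# A toy 'Watch time' column in the same text format as the downloaded data
toy = pd.Series(['1,234 hours', '56 hours'])

# Same cleaning chain as above: strip commas and the word 'hours', then cast
cleaned = toy.str.replace(',', '').str.replace('hours', '').astype(int)
print(cleaned.tolist())  # [1234, 56]
```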
### Comment on the data
We can look at the data head again, to ensure that we have removed those channels, Just Chatting and Special Events.
We also verify that the two columns "Watch time" and "Peak viewers" show up as plain numbers, not text.
```
from IPython.display import display
display(time30_df.head())
display(viewers30_df.head())
```
# Explore
The code below will be used to help us look for evidence to answer our question. This can involve looking at data in table format, applying math and statistics, and creating different types of visualizations to represent our data.
We will start by displaying the 30 day data in a pie chart.
```
# data exploration
fig = px.pie(time30_df,names='Game',values='Watch time',title='Hours watched, by Game Title')
fig.show()
fig = px.pie(time30_df,names='Game',values='Peak viewers',title='Peak viewers, by Game Title')
fig.show()
```
# Interpret
The pie charts are overwhelming as they contain information about 50 games each. Let's display again, using only the top ten items in each chart.
Notice the code to plot this is very similar to the above, except we restrict the rows with the slice [0:10]. We also sort the values for "Watch time" or "Peak viewers" so that they run from highest to lowest.
```
time30_df = time30_df.sort_values('Watch time', ascending = False) #sort the values first from high to low
fig = px.pie(time30_df[0:10],names='Game',values='Watch time',title='Hours watched, by Game Title')
fig.show()
time30_df = time30_df.sort_values('Peak viewers', ascending = False) #sort the values first from high to low
fig = px.pie(time30_df[0:10],names='Game',values='Peak viewers',title='Peak viewers, by Game Title')
fig.show()
```
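As an aside, sorting and then slicing with [0:10] can be collapsed into a single pandas call, `nlargest`. Here is a toy sketch (not the real Twitch data):

```python
import pandas as pd

toy = pd.DataFrame({'Game': ['A', 'B', 'C', 'D'],
                    'Watch time': [10, 40, 30, 20]})

# Equivalent to toy.sort_values('Watch time', ascending=False)[0:2]
top2 = toy.nlargest(2, 'Watch time')
print(top2['Game'].tolist())  # ['B', 'C']
```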
## Interpreting the charts
We notice that the order of games depends on what you measure: hours watched, or peak number of viewers.
However, there is some consistency. For instance, in both charts, the top five games include these four:
> League of Legends, Minecraft, Fortnite, and Grand Theft Auto.
## Alternate visualizations
Perhaps you might find a bar chart to be more revealing. Not much code is needed for this; we just change the pie chart code above into a bar chart, being sure to plot the "Watch time" column here.
```
time30_df = time30_df.sort_values('Watch time', ascending = False) #sort the values first from high to low
fig = px.bar(time30_df[0:10],x='Game',y='Watch time',title='Hours watched, by Game Title')
fig.show()
```
We can also do a plot of the "Peak Viewers", in bar chart form.
```
time30_df = time30_df.sort_values('Peak viewers', ascending = False) #sort the values first from high to low
fig = px.bar(time30_df[0:10],x='Game',y='Peak viewers',title='Peak viewers, by Game Title')
fig.show()
```
## Sanity check
In our first attempt at making this notebook, we thought there was something weird about the numbers. In that first instance, we found that League of Legends had about 10 billion hours of viewing, and 700 thousand viewers at its peak.
10 billion divided by 700 thousand is about 14,000, which would be the number of hours each viewer spent watching the game over 30 days.
Per day, this is about 475 hours. Yet there are only 24 hours in the day. How can this be?
After some investigation, we determined that the csv files downloaded directly from the sullygnome.com website were flawed. So we used the cut-and-paste method to get better data, which is used in this current version of our Python notebook.
```
## Here are the numbers
10000000000/700000, 10000000000/700000/30
```
## Sanity check number two
With this new, corrected data, we found that League of Legends had about 165 million hours of viewing, and 700 thousand viewers at its peak.
165 million divided by 700 thousand is about 236, which is the number of hours each viewer spends watching the game, over 30 days.
Per day, this is about 7.8 hours. That does fit into a 24 hour day, but it is like a full work day. So it seems a lot of people are viewing these games for a long time each day.
You might ask yourself why the number of hours is so high. Some possibilities:
- people turn on their computer onto Twitch and have the games playing in the background during the work day
- people create bots, or self-running programs that pretend to watch the game, to bring up the numbers
- game players like to "game" the numbers to increase the payments they get for having many viewers. What are some other ways they can game the system?
Here are the numbers, for numbers of hours per viewer, over 30 days, and per day:
```
165000000/700000, 165000000/700000/30
```
# Communicate
Below are some writing prompts to help you reflect on the new information that is presented from the data. When we look at the evidence, think about what you perceive about the information. Is this perception based on what the evidence shows? If others were to view it, what perceptions might they have?
- I used to think ____________________but now I know____________________.
- I wish I knew more about ____________________.
- This visualization reminds me of ____________________.
- I really like ____________________.
## Add your comments here
✏️
## Saving your work
You can download this notebook, with your comments, by using the **File** menu item in the toolbar above. You might like to download this in .html format so that the graphics remain active and can be viewed in your web browser.
[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
<a href="https://colab.research.google.com/github/khipu-ai/practicals-2019/blob/master/1b_optimization.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Practical 1b: Optimisation for Deep Learning
© Deep Learning Indaba. Apache License 2.0.
Adapted for Khipu.
## Introduction
In this practical, we will take a *deep* dive into an essential part of deep learning, and machine learning in general, **optimisation**. We'll take a look at the tools that allow us to turn a random collection of weights into a state-of-the-art model for any number of applications. More specifically, we'll implement a few standard optimisation algorithms for finding the minimum of [Rosenbrock's banana function](https://en.wikipedia.org/wiki/Rosenbrock_function) and then we'll try them out on FashionMNIST.
## Learning Objectives
* Understand **what optimisation algorithms are**, and **how they are used** in the context of deep learning.
* Understand gradient descent, stochastic gradient descent, and mini-batch stochastic gradient descent.
* Understand the roles of batch size, learning rate and other hyper-parameters.
* Implement, using TF2.0, gradient descent and a few variations of it.
* Understand the strengths and weaknesses of the various optimisation algorithms covered in this practical.
**IMPORTANT: Please fill out the exit ticket form before you leave the practical: https://forms.gle/J4i5wehZPUdggCc29**
```
#@title Imports (RUN ME!) { display-mode: "form" }
!pip install tensorflow-gpu==2.0.0-beta0 > /dev/null 2>&1
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from IPython import display
%matplotlib inline
display.clear_output()
print("TensorFlow executing eagerly: {}".format(tf.executing_eagerly()))
```
$$
\newcommand{\vec}[1]{\mathbf{#1}}
\newcommand{\vechat}[1]{\hat{\mathbf{#1}}}
\newcommand{\x}{\vec{x}}
\newcommand{\utheta}{\theta}
\newcommand{\th}{\vec{\utheta}}
\newcommand{\y}{\vec{y}}
\newcommand{\b}{\vec{b}}
\newcommand{\W}{\textrm{W}}
\newcommand{\L}{\mathcal{L}}
\newcommand{\xhat}{\vechat{x}}
\newcommand{\yhat}{\vechat{y}}
\newcommand{\bhat}{\vechat{b}}
\newcommand{\What}{\hat{\W}}
\newcommand{\partialfrac}[2]{\frac{\partial{#1}}{\partial{#2}}}
\newcommand{\ipartialfrac}[2]{{\partial{#1}}/{\partial{#2}}}
\newcommand{\dydx}{\partialfrac{\y}{\x}}
\newcommand{\dld}[1]{\partialfrac{\L}{#1}}
\newcommand{\dldx}{\dld{\x}}
\newcommand{\dldy}{\dld{\y}}
\newcommand{\dldw}{\dld{W}}
\newcommand{\idld}[1]{\ipartialfrac{\L}{#1}}
\newcommand{\idldx}{\idld{\x}}
\newcommand{\idldy}{\idld{\y}}
\newcommand{\idydx}{\ipartialfrac{\y}{\x}}
\newcommand{\red}[1]{\color{red}{#1}}
\newcommand{\green}[1]{\color{green}{#1}}
\newcommand{\blue}[1]{\color{blue}{#1}}
\newcommand{\because}[1]{&& \triangleright \textrm{#1}}
\newcommand{\relu}[1]{\textrm{relu}({#1})}
\newcommand{\step}[1]{\textrm{step}({#1})}
\newcommand{\gap}{\hspace{0.5mm}}
\newcommand{\gapp}{\hspace{1mm}}
\newcommand{\ngap}{\hspace{-0.5mm}}
\newcommand{\ngapp}{\hspace{-1mm}}
$$
## Rosenbrock's Banana Function 🍌
In practice, when evaluating the performance of various optimisation algorithms and hyper-parameters, what we really care about is the performance on a wide range of real-world problems, in our case the minimisation of the loss function of a neural network. However, we are not easily able to visualize what our algorithms are doing because the loss landscape for even simple neural networks trained on just about any real-world dataset will be very high-dimensional (dimension equal to the number of parameters).
To solve this problem, we will use Rosenbrock's (Banana) Function as a playground for investigating how these optimisation algorithms work. The banana function is easy to visualize because it is a function that takes a 2D ($x$ and $y$) input and returns a scalar output. It is defined as:
\begin{equation}
f(x,y) = (a-x)^2+b\times(y-x^2)^2
\end{equation}
where typical values for $a$ and $b$ are $1$ and $100$, respectively. For this practical we'll use $a = 1$ and $b = 20$. The global minimum of this function is at $(a, a^2)$ or $(1, 1)$ in our case.
We can easily define this function using TensorFlow:
```
def rosenbrock_banana(x, y, a=1., b=20.):
    return tf.math.pow(a - x, 2.) + b * tf.math.pow(y - tf.math.pow(x, 2.), 2.)
```
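Before plotting, we can sanity-check the definition at the known minimum $(a, a^2) = (1, 1)$, where the function should be exactly zero. Plain Python arithmetic is enough for this quick check (no TensorFlow needed):

```python
# Pure-Python version of the banana function, just for a quick check
def banana(x, y, a=1., b=20.):
    return (a - x) ** 2 + b * (y - x ** 2) ** 2

print(banana(1., 1.))  # 0.0 at the global minimum
print(banana(0., 0.))  # 1.0: away from the minimum, the value is larger
```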
Let's try visualizing the 🍌, first using a contour plot:
```
#@title Helper functions (RUN ME) (double click to unhide/hide the code)
def gen_2d_loss_surface(loss_func,
                        n_x=100,  # number of discretization points along the x-axis
                        n_y=100,  # number of discretization points along the y-axis
                        min_x=-2., max_x=2.,    # extreme points in the x-axis
                        min_y=-0.2, max_y=1.3   # extreme points in the y-axis
                        ):
    # create a mesh of points at which to evaluate our function
    X, Y = np.meshgrid(np.linspace(min_x, max_x, n_x),
                       np.linspace(min_y, max_y, n_y))
    # evaluate the func at all of the points
    Z = loss_func(X, Y).numpy()
    return X, Y, Z

def make_contour_plot(X, Y, Z, levels=None):
    if levels is None:
        # generate 20 levels on a log scale
        levels = np.insert(np.logspace(0, 2.6, 20, True, base=10), 0, 0)
    fig = plt.figure(figsize=(9.84, 3))
    ax = fig.gca()
    ax.contour(X, Y, Z, levels, alpha=0.5)
    ax.contourf(X, Y, Z, levels, alpha=0.2)
    ax.set_xlabel('x')
    ax.set_ylabel('y')
    return fig, ax

def make_surface_plot(X, Y, Z, elevation=0, azimuth_angle=0, levels=None):
    if levels is None:
        # generate 20 levels on a log scale
        levels = np.insert(np.logspace(0, 2.6, 20, True, base=10), 0, 0)
    fig = plt.figure(figsize=(10, 6))
    ax = fig.gca(projection='3d')
    ax.view_init(elevation, azimuth_angle)
    ax.plot_surface(X, Y, Z, cmap='viridis', alpha=0.2)
    ax.contour(X, Y, Z, levels, cmap='viridis', alpha=0.5)
    ax.set_xlabel('x')
    ax.set_ylabel('y')
    return fig, ax

X, Y, Z = gen_2d_loss_surface(rosenbrock_banana)
fig, ax = make_contour_plot(X, Y, Z)

# add a marker to show the minimum
ax.plot(1, 1, 'r*', ms=30, label='minimum')
ax.legend()
fig.show()
```
And now with a surface plot:
```
#@title {run: "auto"}
elevation = 62 #@param {type:"slider", min:0, max:360, step:1}
azimuth_angle = 102 #@param {type:"slider", min:0, max:360, step:1}
X, Y, Z = gen_2d_loss_surface(rosenbrock_banana)
fig, ax = make_surface_plot(X, Y, Z, elevation, azimuth_angle)
ax.plot([1], [1], 'r*', zs=[0], zdir='z', ms=30, label='minimum')
ax.legend()
fig.show()
```
As you can see, this is called the *banana* function because it contains a banana-shaped valley. Within the valley, we have a **global minimum**. Finding the valley is relatively easy, but finding the global minimum is difficult, which makes this a useful function for testing optimisation algorithms.
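For later reference, the banana function's gradient can be written out by hand (our own derivation, not part of the original practical): $\partial f/\partial x = -2(a-x) - 4bx(y-x^2)$ and $\partial f/\partial y = 2b(y-x^2)$. A quick plain-Python check confirms it vanishes at the minimum:

```python
def banana_grad(x, y, a=1., b=20.):
    # Hand-derived partial derivatives of (a - x)^2 + b * (y - x^2)^2
    dx = -2. * (a - x) - 4. * b * x * (y - x ** 2)
    dy = 2. * b * (y - x ** 2)
    return dx, dy

print(banana_grad(1., 1.))  # both components are zero at (1, 1)
```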
## Optimisation
### What is optimisation?
Optimisation is the process of comparing a set of items and selecting the best one, based on some metric. So, we can define an optimisation algorithm as a program (or series of instructions) that tries to find the best item when compared to other items using a given method of comparison (or metric).
### How do we use it in machine learning?
Optimisation algorithms are used in machine learning to compare different values for the parameters of a model and to try to find the best ones. For the banana function, the optimisation algorithm tries to find the x and y values that lead to the lowest value of the function (here our metric is: "lower values are better"). More practically, optimisation algorithms are used to try to find the best values for our neural network weights and biases. In this case, we use a *loss function* as a metric and try to select the weights and biases that lead to the lowest loss function value. But, we don't need to worry about that yet. First, we need to build up some tools that will let us do this.
**Note:** You may be wondering why we need special optimisation algorithms at all. For example, the minimum for the banana function above can be computed analytically. And indeed, for many optimisation problems, including some problems in machine learning such as linear regression, it is easy to compute solutions analytically. So why are we spending a whole practical on this topic? The reason is **computational complexity**. There are a huge number of problems, including optimising deep neural networks, for which we cannot find an analytical solution in a reasonable amount of time. In these cases, we need to find approximate solutions to our optimisation problems, which is what we will be looking at in this practical.
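To make the idea of an approximate, iterative solution concrete before we tackle the banana function, here is a minimal gradient-descent loop on the one-dimensional quadratic $f(x) = (x - 3)^2$, whose analytic minimum is $x = 3$. This is a plain-Python sketch of the general technique, not the practical's implementation:

```python
def grad(x):
    # Derivative of f(x) = (x - 3)^2
    return 2. * (x - 3.)

x = 0.0    # starting point
lr = 0.1   # learning rate
for _ in range(100):
    x = x - lr * grad(x)  # step downhill along the negative gradient

print(round(x, 4))  # 3.0, the analytic minimum
```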
## Gradients
### What are gradient vectors?
*Note: feel free to skip this if you already understand what a gradient vector represents.*
Let's start with a scalar function $f$, which maps the input $x$ to the output $y$: $$y = \green{f(x)}$$
Here, $f$ could be a polynomial, exponential, or whatever your favorite kind of mathematical function is.
Let's now _approximate_ the function $\green f$ around a particular point $x_0$ with a straight-line function $\blue {d\ngap f}$:
$$\blue{d\ngap f(x)} = f(x_0) + \red{f'(x_0)}(x - x_0)$$
**Reminder:** the equation for a straight-line function can be given as $y = m x + c$, where $m$ is the slope of the line and $c$ is the $y$-intercept (it shifts the line vertically). If we want the line to pass through a given point $(x_0, y_0)$, we can rewrite the equation as $y = m (x - x_0) + y_0$. Notice that we just shifted the line along the $x$-axis and made sure it passes through $y_0$ at $x_0$.
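As a concrete instance of the straight-line approximation (our own example, not from the practical): take $f(x) = x^2$ and $x_0 = 1$, so $f(x_0) = 1$ and $f'(x_0) = 2$, giving $df(x) = 1 + 2(x - 1)$:

```python
def f(x):
    return x ** 2

def df(x, x0=1.0):
    # Straight-line approximation of f around x0: f(x0) + f'(x0) * (x - x0)
    return f(x0) + 2.0 * x0 * (x - x0)

print(round(f(1.1), 3), round(df(1.1), 3))  # 1.21 1.2: close near x0
print(round(f(2.0), 3), round(df(2.0), 3))  # 4.0 3.0: worse far from x0
```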
<center><em>(figure illustrating the straight-line approximation; embedded image data omitted)</em></center>
Fh07zBl219eWbdaudJQ9u4urjYmh85scnIjrWPurZYat9gLv7vqa7b+AlSK0jbu46PwmDT/iKgu1Vbtzi/VGM1SsbG6tOqRdXFHTjnLIrNaJh7gYE0sYrsRtWVCpEXOzsU6PvTvNRouzSsxWY211VB4JVUZowUtcjIklC2fiSmrcXW5xcHG5vtrYajZbDd5pNrcaq3XidiVL1mP1VF4pgBHfIzhe4vp3fr/ocdIJYAWH4ipGbm0SMloJQL3u4qpN2m0zrZ0e1TplPs+U0fAU13dM7MvDXyauAZbwKa6kyeujpzv1tXX7WmwiH7rDS+Atru+YGMyNGH7FVdhoNjZbc1d3luuN7ZZMYrqoD9SE6fASeIurJQsfIi5r3DN8D3ENsINvcTV2mo1Gvb7kIWy90SCSX4WSHm7DdXgJfMT16/y++o5hmBslIohrstNUaDQaykqt2Wz6lWj17DZsh5fAR1yfMbG33j08fNdbicuAGUKJG4ayFm47/jp+4kofcu/8vnr/8PDwO14lrgN2pFZcaVyWix1UEwx8xdU6v63nSL3truHhu79E3AxYkl5xqwVyl+Y28BXXbUzs6puHh4ffTtwJ2JJecRnhL659TOx+Nad9+/Dw8Ju/SNwHGANxKVDEtXV+337XF6UvKYuytxF3AeZAXAoUcW1jYq/e/U33DA8P349FWRykStxqsD0+QkET1zYm9qVhLMpiI03iTuQZlL9aoYr7kNX5vX/4HoTbmEiPuFqHl0EBzAlVXNuY2Kt3YUAhLlIjrj5QUyRe6BC6uLYxsS/fjYgbEykRt1Jg1OElCCDue61k4c1o88ZEOsQtyTKjDi9BAHFtG4Qg4MZFGsQt6+G2yDzcBhTXb0wMREMKxJ32PIeXBUHE1cfEiMsgOtIQcdUJRhYDNW4EEhcbhMROGsSt5uX8JHGVEcHEdR8TA9GRisXZVNBdmtsgmLhuY2IgSjCrQCGguNggJGYgLoWg4mKDkHiBuBSCiovdxOJFVHEnIluNtRBYXK3z+07iMogEMcWtFOR8ROWvVoKL67tBCGCMkOKWIhmncSe4uP67iQG2CCiuMVDT0QY1gQkhLs6RihHhxK2OR9rhJQgjrvcGIYA1oolbHtEHamJKcUOJi3Ok4kMscc1wG1dNIaS4GBOLDbHEHQt3Di8LQomLzm9siCXudD74seeMCCcuxsTiQrAcd0KWx2IMt+HFxZhYTIi2OBuLpwhmEVJc6Z3o/MYCZhUohBUXY2LxAHEphBYXY2KxAHEphBYXnd9Y4F/cSpzFL5Lw4vqeIwUYwbu4SsthlLgaI22IizGxGOBcXK3DG2vhtoU2xPU/RwowgWtxjQ5vgXglPtoRF53f6OFZ3Kl8/B1egnbExZhY9PArblU/qSzeDi9BW+JiTCxyuBV3Ug+3MXd4CdoTF53fqOFV3Ok8g3N4WdCeuBgTixpuI+5Ep8eeM6JNcTEmFjH85rgFuZB4uO1AXHR+o4VfcSsR7NLcBu2Kiw1CogWzChTaFhdjYpECcSm0L+5D6PxGCMSl0L64GBOLEp7ErY5HcDJkp3QgLsbEIoQjcadG4tqdJgydiIsxsejgRlytwzuSfOG2hU7ERec3OngR1xioiW+nj4B0JC7GxCKDD3G1c3h56PASdCQuzpGKDC7E1c/h5aHDS9CZuOj8RgUH4nIcbjsXF2NiEcGBuKXIzuFlQYfiYkwsInhIFQqRHHvOiE7FRec3GngQtyznOQ23LMTFmFgkcLE4m4xrl+Y26FhcjIlFAmYVKHQurpYs4BwptkBcCgzERec3AiAuhTbFHTxh+wBjYuxJSNypPK9VhFbaFHcgOzBkfYTOL3MSEVdpOSS5O00Y2hY3mztqfoQNQpiThLhah5ffCpiD9sXNZnsHjQ8xJsaaBMTVd6gZIV7gkk7EzWb79VQXY2KsSUDcCZ47vARf+6Y3fXX74mZzeqqrdn6Vei7KYmxIIlUoctzhbWGo3/krPzCGuNlszzH1c96v
m4vtR9mQhLjTI6KE26Fe3b6jxEsULHGz2b7jxvrsJFJdVqCO60efKd8Jn7vcsIubzfYPWRH35EnMLTAA4vpw3FKvpy8cPQ5xs7kBy1uMirEA4vpwJMuOr/p6U1w8EMEAiOvDAENxs9mvg7gMgbg+sIy4X/MNZsTFFiEMgLg+nLC8OzQQjj6ntj2DHzC9fcD7G4LAQFw/+g3xckM+d7nhyDJyA5L08PsQcFkCcf0w6ri54z43uWIXt1+1/uEHtXiL1hkTIK4/A1+RzWa/MmwV1y5unyn9u9/14Ic+jBExNkBcCp0N2RgNX8AaiEuhI3FzA2FzYxAQiEuhE3H7w6cYICAQl0L74rYxUwYCA3EptCtuLvRAGQgDxKXQprjHkdxGC8SlwGBfBRABEJcCxOUTiEsB4vIJxKUAcfkE4lKAuHwCcSlAXD6BuBSiEvd43wBxDQQH4lKIStzebBadtQ6AuBQiEvdE6yPvRwPONRyD7ioQl0JE4h7NZvvtH/fnAop7NPzmJKkE4lJgKu5AT7bniPqnQ85MoT/4MxYwVwXiUmApbl/2yFCf6t1QNttne2EgG2LePITkKQbiUmAo7kC2V3n8UqkmHMvaVT2ePUTc7M1QTw8meCAuDYbi9mQPK7uRHVUfH+6xvdAbNMHVOJpFJQ3i0mAn7nHF2WNa+TZnT1SPZg8TN/vSE3oPvvQBcSmwE/dINmskp8ezOdsLoT084qxIdCUQlwI7cfuzWeOPh+2/7I9le4l7/RkKv0FJ6oC4FNiJ22sVEnrs4vVnjxD3UuhDSQziUmAnbtZMZYccv+pz2Tb2yen6XAHiUmAj7uDAQH822zcwoG20YH8izZnvSoOHcj3KY+1HenN9nmF10Pk53QjEpcBG3AF1j/Levr4+Yh12xNmKyA0MHsvlTvT3Hjve45kQDIXf2z9tQFwKzFKFw9barOW67df+UbX1ezjbmxsyehWuYLQM4lJgJm6fR/Ggz6bnUE7tpw2oyvZlvVdtfd5OdwkQlwIzcbMefV27gwNa1nBYzQSOH9K9Pdbf1zdwwvOTuhOIS4GVuCe8fvHbm2j6cqzX3hAe6sseHjzWUgCDuBCXAitxj3mlpS7XHbO6h7WEoc9xG8SFuBRYiTuQzbp3u0hxB+0nWZ7Qo++gI0WGuBCXAitx+xzzYDZ6iSXYgL3YNWCkxll7nwLiQlwKrMTNOcq1NkgHHSmu+bKjxkB+UrcBcSkwEnfIa21GOjhkpLiHj6jC6y/32u/r9exNdAsQlwIjcQezXk/nDNjKZEeUjvAxPcXVirqm8H32FRuZGHcbEJcCI3EHPJu0R61V12HV0n7dygE1YbCJa+UaJ7xWet0DxKXASNxDnmMxJ6xWsNorO57NqcH5uKavq7jhR3hTB8SlwEjcHq+1mfKS8Wv/ULZPGswNHM32njgxoGexNnGtB3wOh33YJ31AXAqMxPVcm9ktHOrPqfsuHOvNZg/pNvdYVQXrK/SGH+FNGxCXAqN53Ky3ase9Krwqh4w1mW11d8L3M7oDiEuBjbhH/FTr8ysRHNVzDPt67HDXF8MgLpWOxR3M5QalQ35J6aB3/qsUxbRyhO1pnaEcdgSBuDQ6FrdHSU5zXsUwlUP+Ibd3SJKO56znK0Nt2JRWIC6FjsXNZXMnjngvzSQ1hPpVt4715Pr7stYzP+E2bEorEJdCx+Iezh4+mqM8lDvol0lI0uDgoBWwh3q7PlGQIC6dzhdnRw95P65r3hN8uYXNGlUgLoWYDi856l0uc3K4B95KEJdOXKfunAj4+z/ofWkH4lLAcVF8AnEpQFw+gbgUIC6fQFwKEJdPIC4FiMsnEJcCxOUTiEsB4vIJxKUAcfkE4lKAuHwCcSlAXD6BuBQgLp9AXAoQl08gLgWIyycQlwLE5ROISwHi8gnEpQBx+QTiUoC4fAJxKUBcPoG4FCAun0BcChCXTyAuBYjLJxCXAsTlE4hLAeLyCcSlAHH5BOJSgLh8Av4NOPwAAAMB
SURBVHEpQFw+gbgUIC6fQFwKEJdPIC4FiMsnEJcCxOUTiEsB4vIJxKUAcfkE4lKAuHwCcSlAXD6BuBQgLp9AXAoQl08gLgWIyycQlwLE5ROISwHi8gnEpQBx+QTiUoC4fAJxKTz55JNPvux/C0gAiAuEBOICIYG4QEggLhASiAuEBOICIYG4QEggLhASiAuEBOICIYG4QEggLhASiAuEBOICIYG4rlx5/fbtW3uStHfr9u39K253gGSBuG5cO7i1t3cnc/36wWt7tzI3Xe4ACQNxXbh6cEOSpOuZg4Mb0o1MBn9HHIJ/FBdu31Ku7WUydyRpP5O5Td4BkgbiklzLXJU0cV+TpCs3b2o57tX927df3yNuBskAcUn21YCrxNpr1ms3Mrev791EvssLENeT25kD66XrGdXm1zKve90OYgXienKguarxhh59D7Q0AiQNxPXimpri6lwzSgu3bRdBgkBcLxwp7mtKgUHhpj0Mg+SAuF4YKe6esh573aiJ7RsGg2SBuCR7+0qozeix9eZN1WJTXPyNcQH+GQj21JbDjUxmXzKLuhCXN/DPQLCvxNorbxyola8rd9T6F8TlDfwzENzIHFy7euvOtYODa9L1N7SOg03cA+ITQAJAXJL9NzKZm1eka7cymTduaK/uG+K+jskFPoC4gbhh1XHR9OUCiBuIq5mMNmpzJ3OD+x+2K4C4wbitCXsVKS4nQNxgXFWWatKVO5nrIvy0XQDEDci1O5lbNw/egLecAHEDc21vD3Pk3ABxgZBAXCAkEBcICcQFQgJxgZBAXCAkEBcICcQFQgJxgZBAXCAkEBcICcQFQgJxgZBAXCAkEBcICcQFQgJxgZBAXCAkEBcICcQFQgJxgZBAXCAkEBcICcQFQgJxgZBAXCAkEBcICcQFQgJxgZBAXCAkEBcICcQFQgJxgZBAXCAkEBcICcQFQgJxgZBAXCAkEBcICcQFQgJxgZBAXCAkEBcICcQFQgJxgZBAXCAkEBcICcQFQgJxgZBAXCAekiT9P/ngMusBXbenAAAAAElFTkSuQmCC"
height="250"></center>
As the picture above shows, we can use the derivative $f' = \frac{dy}{dx}$ to construct the best linear approximation to a function $f$ around a specific point $x_0$. Specifically, the derivative gives us the _slope_. If the function above were horizontal at $x_0$, you can see that the gradient $\red{f'(x_0)}$ would be zero.
Notice the _crucial_ fact that the gradient $\red{f'(x_0)}$ points in the _direction_ in which $\green{f(x)}$ _increases_, and the magnitude tells us _how quickly_ it increases. Here, the gradient is positive, which tells us the function is increasing to the *right*, and the large magnitude tells us it is increasing relatively quickly.
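We can check this "best linear approximation" claim numerically. The sketch below uses $f(x) = -\cos(x)$ (the same function as the interactive helper code in this section), whose derivative is $f'(x) = \sin(x)$; the point $x_0$ and step $h$ are arbitrary choices:

```python
import numpy as np

# the same function the interactive helper code uses
def f(x):
    return -np.cos(x)

def df(x):
    # derivative of -cos(x)
    return np.sin(x)

x_0 = 0.81
h = 0.01  # a small step away from x_0

# best linear approximation around x_0: f(x_0 + h) ≈ f(x_0) + f'(x_0) * h
linear = f(x_0) + df(x_0) * h
exact = f(x_0 + h)

print(exact - linear)  # tiny: near x_0, the tangent line hugs the curve
```

The mismatch shrinks quadratically as $h$ shrinks, which is exactly what "best linear approximation" means.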
#### What did we just do?
Before we continue, let's make sure we know how this fits into the bigger picture. We just approximated some function (any function) $\green f$ around a point $x_0$ with a straight line. Doing this gave us some important information: the direction in which the function is increasing, $\red{f'(x_0)}$. In other words, it tells us which direction to move $x_0$ in to find a higher value of our function $\green f$ - and gives us an idea of how far to move it.
In optimization, this is useful because we can use our function $\green f$ to *compare* different items, and we can use the direction $\red{f'(x_0)}$ to get *better* items. In this case, $\green f$ is our *metric* and the value $x_0$ is our current *item*. We then follow the direction $\red{f'(x_0)}$ (near $x_0$) to get a *better* item than $x_0$.
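That recipe - evaluate the gradient at the current item, then nudge the item in that direction - is gradient *ascent*. A minimal sketch, again using $f(x) = -\cos(x)$; the step size and iteration count are arbitrary choices:

```python
import numpy as np

def f(x):
    return -np.cos(x)

def grad_f(x):
    # derivative of -cos(x)
    return np.sin(x)

x_0 = 0.81       # our current item
step_size = 0.1  # arbitrary choice of how far to follow the gradient

for _ in range(200):
    x_0 = x_0 + step_size * grad_f(x_0)  # move in the increasing direction

# -cos(x) is maximized at x = pi, and that is where the updates settle
print(x_0, f(x_0))
```

Each update replaces the current item with a slightly better one, so the metric $f$ climbs until the gradient (and hence the update) vanishes at the maximum.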
Below is an interactive version of the graphic. Play around with $x_0$ and build an intuition of how the gradient $\red{f'(x_0)}$ changes with the slope of the function $\green{f(x)}$.
```
#@title Helper functions (RUN ME) (double click to unhide/hide the code)
import numpy as np
import matplotlib.pyplot as plt

def f(x):
    return -np.cos(x)

def tangent_f(x):
    # derivative of -cos(x), i.e. the slope of f at x
    return np.sin(x)

def df(x, x_0):
    # tangent line to f at x_0
    return tangent_f(x_0) * (x - x_0) + f(x_0)

def perpendicular_unit_f(x_0):
    slope_f = tangent_f(x_0)
    y_0 = f(x_0)
    x_1 = slope_f / np.sqrt(2) + x_0
    y_1 = -x_1 / slope_f + y_0 + x_0 / slope_f
    return [[x_0, x_1], [y_0, y_1]]

def interactive_gradient_visual(x_0):
    # change the fontsize for better visibility
    init_size = plt.rcParams["font.size"]    # store initial font size
    plt.rcParams.update({'font.size': 22})   # update the size
    plt.figure(figsize=(12, 8))
    x = np.linspace(-np.pi, 2 * np.pi)
    f_x = f(x)
    # plot f(x)
    plt.plot(x, f_x, label=r"$f(x)$", color="green")
    # add a point showing where x_0 falls on f(x)
    plt.plot(x_0, f(x_0), marker="o", color="black")
    # plot the tangent line to f(x) at x_0
    plt.plot(x, df(x, x_0), linestyle="--", color="cornflowerblue", label=r"$df(x)$")
    # plot the normal vector to the tangent
    perp_unit_vector = perpendicular_unit_f(x_0)
    plt.plot(perp_unit_vector[0], perp_unit_vector[1], color="dimgray")
    # drop a vertical line from x_0
    plt.plot([x_0, x_0], [f(x_0), -3.1], color="silver")
    # plot the positive direction of change vector
    [[x_0, x_1], [y_0, y_1]] = perp_unit_vector
    dx = x_1 - x_0
    dy = 0  # the arrow is drawn horizontally
    arrow = plt.arrow(
        x_0, y_1, dx, dy,
        color="red", label=r"$f'(x_0)$",
        lw=3, head_width=np.abs(x_1 - x_0) / 10, length_includes_head=True
    )
    plt.plot([x_0, x_1], [y_1, y_1], color="red")
    plt.legend(loc="upper left")
    plt.xlim(-3.1, 6.2)
    plt.ylim(-3.1, 3.1)
    plt.xlabel(r"$x_0$")
    plt.show()
    # reset to initial font size
    plt.rcParams.update({'font.size': init_size})

#@title Double click to unhide/hide the code {run: "auto"}
x_0 = 0.81 #@param {type:"slider", min:-3.1, max:6.2, step:0.01}
interactive_gradient_visual(x_0)
```
Again, notice how the gradient $\red{f'(x_0)}$ (the red arrow) always points in the direction where the function $\green{f(x)}$ is increasing. Also notice how the magnitude (length of the arrow) of the gradient $\red{f'(x_0)}$ increases when the slope increases.
### Moving to higher dimensions
In fact, a similar thing happens if we are working with a function whose input is a *vector* rather than a scalar. The same idea applies, but we must replace the multiplication between the *scalar* derivative $\red{ f'(x_0) }$ and the *scalar* difference $(x - x_0)$, with a dot product between the *vector* gradient $\red{\nabla f(\vec x_0)}$ and the _vector_ difference $(\vec x - \vec x_0)$:
$$\blue{d\hspace{-0.3ex}f(\mathbf x)} = f(\mathbf x_0) + \red{\nabla f(\mathbf x_0)} \cdot (\mathbf x - \mathbf x_0)$$
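For example, in two dimensions the approximation becomes a dot product. A sketch using a hypothetical example function $f(x, y) = x^2 + y^2$ (not from the text), whose gradient is the vector $(2x, 2y)$:

```python
import numpy as np

# a hypothetical example function of a 2-vector: f(x, y) = x^2 + y^2
def f(v):
    return np.sum(v ** 2)

def grad_f(v):
    # its gradient is the vector (2x, 2y)
    return 2 * v

x_0 = np.array([1.0, 2.0])
x = np.array([1.1, 2.05])  # a nearby point

# df(x) = f(x_0) + grad f(x_0) . (x - x_0)
linear = f(x_0) + np.dot(grad_f(x_0), x - x_0)
exact = f(x)

print(exact, linear)  # close, because x is close to x_0
```

The scalar slope has simply become a gradient vector, and the scalar multiplication a dot product; everything else about the linear approximation is unchanged.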
<center><img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAArgAAAH0CAMAAADhbK6XAAADAFBMVEUAAAD////GqKrCLjq+Hi68Jja0NkLIQE/RYm3hmKDtw8f03eGoU2OMi4yihrLm4/CalsavrNJ/fK2SjsWWksaem8pua6t3daSMisBubJRfXp+GhbiSksZeXoD6+v7Y2NlARIh9gbhQU2tLWZNAUIx0grJCR1hrfaxgdqUSExVKbqJEaZpRcJtUeKJihq1ffJtCaI5HbZLy9vpagqZiiq5ehqhLaYJRcIlZeZQ5i8lWhqpLc5FiiqpehKGfxuQ4PUEOdr4efsJGcpJijq6Bttzb6vRWoNFejapijqpDYHI7aYNBcIxGdpJLepVHdI1Of5pUgp1XhqFci6Zikq692+xGepY0TlxikqofJytGepJYiqJekqpCd44qNTlARkdLUVL2+vrd4eHKzMzAwsK4urqvsbF0dXXs7e2nqKiYmZlmj4nu9vRWW1ml4MPH7NoFp1RoyZeE1KyfoaAptGtGv4Dh8+jR1NIeezslikIvi0pka2Y1lE0aMCArlkQ1m00xo0Y2o0o8m091oH02sko2rko1nEY0lEUzjEEzhEA6sko2o0Y6rko6qko6pkk6okovezo6kkdFn1I+qko6nkY+pko6mkY+oko6lkYtajVAiUlGkk8+tko+sko8rEY+rko3l0E6kkI+mkY6jkJLplRSnVl+p4KQuJSkxac+okY+nkZQjVS60rxEtko+jkJQslY+ikI3dztWqVpEd0Y8gz4tUS5Mgk5fmWBurG9KukpUulVxtHF+xn5+wX5rmms+Vz6Knor6/vr2+vbk5eSBgoFgs19rrWl6v3h2uXRtu2mCyn4pPChevVZouGN+xnl+wnmCxn53vnCCynqGxYB1wmxzvGp8xHKCxnqGyn5rwV2GynqGxnqKyn5zwmB9xWyDxnSGtnuGynSKynpIZz9+xmReiVCGvnKOynqMynVUeEaIyWySynpsmVZ1pl6SznaSynaSznKIv2p/smWSzm6Pym2SynKWznaRxXKWznKWzm7Wzs768vL+/v76+vr19fWSkpICAgLnkulhAAAACXBIWXMAADAkAAAwJAELuPDgAAAgAElEQVR4nOy9CVhUV5733zxv///zzsyb7h41rcZ2YqMhiUsAlQIXAoJaUotVlFIjFODrbtzXRIl2Ok6zxEhIQCMx4gZGJUZBEbUgdEeqAAGBsUCRWNUsBmRQo4GkSEcz73PuVufec+6tlTX1O88JRVFUVL5+/dzv+Z1zf+XhLncNwHIL110DstzCddeALLdw3TUgyy1cdw3IcgvXXQOy3MJ114Ast3DdNSDLLVx3DchyC9ddA7LcwnXXgCy3cN01IMstXHcNyHIL110DstzCddeALLdw3TUgyy1cdw3IcgvXXQOy3MJ114Ast3DdNSDLLVx3DchyC9ddA7LcwnXXgCy3cN01IMstXHcNyHIL110DstzCddeALLdw3TUgyy1cdw3IcgvXXQOy3MJ114Ast3DdNSDLLVx3DchyC9ddA7LcwnXXgCy3cN01IMstXHcNyHIL110DstzC/UXWnNk8v+uFryNP9c9yC/eXWHNmzeX5XfN/pZ+VW7iDtF6fPXvutIXU7232rODguZOZ3+icWdN4f9MLgweGct3CHZw1JxgUJdzZwbM9ZlsEOXnarMn8v+nX6e/q3+UW7qCtucHB5G9tYXDw2teDgxmXnS0szWmz5iDP9b9yC3fQ1jTaY+cGT/OYHRxMX4/NsQIDc4L5rtz6U7mFO1hrMiPVWcFzPSbPfp3Gg2nChguEbuUF/aHcwh2s9TqNuHOCg+GMa2Ew/5UZ/QphS+4X5RbuYK3ZNOIyCiZrdrDVqHZWcP+nXLdwB2tNo52VUTBZNqjSBm33ebmFO0hrLYO404JnQb/FhazP8PW6VZro+3ILd9DVwmnBwXPnLCQAYfLc
udOCg2dNmzuXWYtgZQaTqaWJyXODg5mXACru938obuEOspozLXjuHI/Zs6YR4pszdxoQ7txpjCqnwcKdDJ5/PXjanFmzJ78ebGGI4P6fK7iFO7hqzixSmMBByd/YwmAW1LIQdy7xmmnBoEFhFvS6af0fct3CHVQ1mb4kW8gIdzZbuLCZvk4u/c4lXhEcbFkxm9v/1yDcwh1UNZvW5UJGoOxrs8mwjKeRvjqLEPvrr0NfcAvXXb1Za4Nplc4ODl5LPprGWk5YCF93kYY7JxiR6ez+vwThFu5gqrlQBkYlWpPZslyIBgYL0Uux2f0/D3MLdzDVLFqDFrlyZIkR7tzgYG6Xo9tx3dWbtTYY6mSkcgHOtdkcVKVzUXud6xauu3qxLFGCxUXnciwW4YK1KOK6UwV39WrNgRB3rofHwrkY8wzmtiowPTjTLIp2C9ddvVkLaeHOIax39lxCqGzhQknXnLmzSUsm8oeF0H6eWcjlWr8rt3AHU1HLZh7TCBedtRCGXaqgXgVirWzyLIolplleNxmx5f5XbuEOpiJjrMlzCeG+PgttxoU7vxYGA33Onkbi8FzoCs2WDrK+LrdwB1NNnhY8e+HCWXMnzw2evXDW62gzLuGm9MNZwbPnzJ41Zzb4nmnQ7nWP1wfArjO3cAdXzZ42F/ybP3n2tLlUsw032LJsOQM9j7PneHi8Pm3utNlrWS9xN5K7q2+Li7jATq1ltHMGACm4hTuIazJ5pALnNzh5ltBxIB4DZOeOW7iDtojOXJy9vm5Fl5NnTbOi7P5QbuEO1poLzq7BnqEgeALTADFct3BdUU+f9cNf1Nzg4IX4ExKEj7LBf0+/K7dwXVDLFgUVL+lvv6g5s+bO5vlHXwgWJk/Df09/K7dwXVErh4/0jF35Yv/6RU2e/TqfPufyr4zNFuaIflNu4bqilnoOGzpspKfnytXXBsYveDZf4PX6NF5J969yC9clVTxkGKjhoz1XvtHPjBdffLbK93y/K7dwXVI/ev1h2NBhYA4B2l06CH5L/bzcwnVNxZGWS9So0V5xP785GH5X/bjcwnVNLYkZNhQew71in1vWH1OywVJu4bqofh41jFNDvIKeczNDT5VbuC6qVTEk47Lm0NExQSvcvtsT5Rauq2rlEK7lkjUyJmjFqsHxW+xP5Rauq+ofi9iUC42RQTE/u7Xr2nIL12X183DEbS01ZHRQ3LJB8hvtF+UWrsvq2SKEcVlz+GivlW7tuqrcwnVdbRmJGC3iuwNlZa3fl1u4rqunQUNRvuUOoF13SuZ8uYXrwnrOquWSK2sjvVYWL306aH7bfVJu4bqwnnmNwLAtbo4Y6RW30r2y5kS5hevKsk65cI0JWrncnZI5WG7hurS8RqBUKzBGeMWMdmvXoXIL16W1ZTTiq9ZqZFD5zwOk/bw/lVu4Lq1VcaOwTCs8h3sFPecOyewrt3BdWw5YLlFDxsT0tz1r/bvcwnVtPfUcIsC0gmPI6KDFWwbTn0WPllu4Lq4Vnoib2l6jRnqtWz2o/jh6rNzCdXE9XWlrloufYN9PsXtlzWq5hevqWuaM5RI1YqTnymL36oRwuYXr8lpsX5aLHWBlrdid8AqUW7gur6VeiIc6VMM941aucCe8POUWrutr3QgefrV/jg6KGyiH4/RyuYXr+nrTacqFa3TQYrd20XILtwcq1gWUC48hXouK3QeMsMst3B6oJUGIbzpbQzxjVv446P6gnCi3cHui1o1yEePCc5TnonXuPWt0uYVrX21ff+rUqc0lWqLKcnNPn0k7k3bmzMdgZOpytdpNWq325mcxL7804g8ut12QksUWuwNeD7dwbaqnL57668k1+VlZKSkpSYkJCQcTwIxPOJAQD8bB+IQDByjtgvHB6TMf54WN9fX1Hes78ZWJr7z88ksuZN5RIOF1X6y5hStUfzt16rPLuW/vSNqxOzGB0OnBY4mJKUkHDpxJS0sDE9QHaR+kpR04EJ8WH58QHx8fn5a2//SZ
GF9OvTLp5ZdfGop4qCM16gWvoJX/EPh1/xLKLVxMbdhwavPJy/mJiYk7EhMPHj1w/MzxM2doTz2ddjAhMTHt49Mfn/54/2l4JIP5wQd798bvjd+l9vXx9fH1DSGGpSYC/S4YgWNY2+eQoUNH/tJXJ9zCZdXaN9dv1urfBTxw8EBCQsKBNEqt5KC1+vGxxKNp+0+nJpMzY//+5IxkemYkZ+xPPUpYrpQYhHZnQHPixFdeeemlESMcRl3imLJf9sqaW7h0bf/81MnLWQmACQ4cP5B2/GBCwgHIZwG7fnya8tfk0/sTEhM/ID02eT8Yp4n/JqcSMzU5ddc42m2lvj7EpD4P8Q2Z4EtM4L6jRiAUa8MYQn3XcM/Ro3+hB4y4hQuuvt489dmJxMSE4weBYtOOg6QgYTfjtB9bFEsMyltPJ8Xvz9ifuj8jNTU5Y1/q/ox9xEd6xswICfGFJ61d5jmKIohrN/tqKHQy5Egvr1/i0f2/eOFeW/pZflJCwoGDQLXHgWoBzx7YfQBKCT5mqRZ47GnCV9N2p6UmU2Nf6r7UVOK/+1JTM/alpuapQ3yRQbov8rzvxJde+oM9nDtkFPz5SK+gn39p2v0lC/en7Z9vPpSQACv2Y4JpTx9NIJwWvvIiWNbCsMBn92WkJiRkpJLjSOqR1Pcz3k99n/54VO07MSQENwntzmC+NoOYM3wjXraDerknQw4ZE/vzL2p14pcq3J+WnMpOSjx48MDR4weOE0x7hhiEx+5MO3P6A4hn4UkzLOWzyTuSU1NTM1LfZ+b7GUfIx3lqxFmh4ePjg3HekFdesuH8MXAiwxDkuVGjFy3+5Wj3FyncVacuf3ICsAHNtGcA1QK3BUz78W7Ia5msIBXwbDIxSYbdR3lt4l7gr2C+f+T9I0csc2+Mb8iMkBD+CaiX/Dr4QEzwWP6KLcw7BPeaESMXxa74ZRxK9osT7qpTX2aDbJZkWtJr084cZ9Ztz6Tt/uBjyGsJpk1NTt0PsWwyybGUx6bF73s/9UgGOfbtfSf+nfj43Qm7E3anx/ggRaS7kFJ9pVIf+jGUQkx8+SWEa9lz6BD0OWIO7383Z+2J+kUJ96cln+WfOHGQ0mwam2upBOGDBJppMyxeS/As6bEMyzLzyN7d8Xvj4xPjwX/3/iV5396/7Mt4//CRw3vzQkJCJkITCJVSpo8vqWLwvNRXCr5Opr7UeoWPr2/AK4LIO4T/q8M9vVYO9jbIX45wn35+MvvEsYPHDhw8cPQA4bXHKa5lstoPTn+ctpvFtckQ0xJ5gYVlj2RkEM4aH7/36O74jIwjzDh8hBrp6hCEbeEBlAoseZyPj49EKkWI12fiC39E6JYZv0eegcbwMYP86P5fhnCfbgBUe+w4xbUgQyDJluZaIq09vf/0B/Ew1+7fDzEt4baExyYng5aE+Hfik/dmHDny/uH338uI3/se+AjU+h4xiMrYFSLEuPSUSgn/BQW/nqSHiRNf4rHWUcid1Ti+Ozpu5ZZBCw2/AOE+XXLq8iHgtQcPHgdsa0kR0iwrY2Ramxa/n+Ha/RauJZkWjL17Cc3u3bsv472MI5a5Lz7tvfcpn4UqJSaEViBf+fpOkNK8S6mXSBss3Osz8ZWXsGc1EGfmYJ63zCGenqPfGJz954NduE+XnDqZfezgwaPHGbIlFJuGWxc7HZ8M8lraayGuzUh9n9JsRvKR9w8feY+ewGHfBzM+mfTZT+GZERMiD5GFyHlEGwL4lq1pX18f0n+Bbik+9p3hM/GVUajvjrBiuUT9ZrRn3F8H4erEoBbuv19bnZ90jPDaA0fJxDYNw7V0hvBBPM22rKw2NTVtb3xCYsJeS3bAoVni4zt7D2cAPGDme5++92mKGpEqS7YTkKdCQsi+Bov3krw7zucVpK9huI19vsO9Yp5bPshSskEs3Gef7yFUi3LtGW63F8G1yfHJ5JpYMtN3sC8j9TS4AksDbEtmtATLkjwL0SyoPyfTjz49/Ol71Hg1JCQi
ZAYxQ6iPlin1hT9n+a4UvJpIHnwp5vX1mRDyMifhFbqzGrvAfstlg0m7g1W4z5Z+lvThh1iuhd2WWdFNTt7/JxbXpqYmp+6N352YkEzmCITDWrgW5dlPD8fvO3wYaPlTaOwgLFdGDE75IF7L8l3KjX1J9YZQXTozXoJ7eUfZs7cN7LccPIfjDErhPn3xs08Yr4Xc9uMzrBThY6gPITkj/jSLa5PjE5OSU5msFlx5WdiW8loLzxJKPfzOPkqxUO2SykIiQixTTs0QqeUxflr4d6Kvj88EwLxSH9+JIQEvW3azYdfP+GvoyJjYQXJTbFcLd8N65Cmy1m9Anuqhevb5pRMnjn1x8BhYH2O5LcS2H0NsS8z4DyxUuzd+R0LaPiKvJa7L2GRLc62FZxmHjd/36XucsSOG8Fp6WgwV8VhcSX1pCvYlr9iIqzbfSQvo/oRRKNFaGcNjggbD0f0uFu76jXz6XL+RT9IuradLv/zixMFjx48dxGS2eLYlMtsP/kxxbXJiQvyfk6l1sQy694Di2ve5XHv4MNtj4//yKbd2iUJgzyV41tcX4V2+GSJlUt0QXybrDZn48ghHLJdCY8/Fcct746fRg+WkcNdv3bh1E/SpgDqFvuaq+sdfs8gUwRrbfgyxLZHZxhNe+8Hu3fEf7LOsjbG5lkoR4OL6a8Y7lsdUnY6RhbAH0CL3OXrMQG1XynJnsM5G5ryT/viHYUOHDkdY1qY5ZLTXwD6kwTnhbt647attGxnlbt+4GXmJpTZt3I4858r6aemTT44dO34QZdszMNue4bAtmPvfSc5I3ZuQcDQZrI4xvV6WvPbwe1Rm+x4rp+Xw7Keffpqx9zD7ibOf7pIwfiuTg0HoNsLKlLN42BfOg8HVGpHxKl7+wzBMvmtjjRrtNYCP7ndKuBs2bly7eePGbdSnazduXYu8xlJrt24U+rKTteRU/vkTlNsKsW0ah22J3q8P4uMTj8YngyyB7kXA5bVHkJyWPUjlclkhMYZ0U7qkMpmMxb1Ck6kJvrDrgjUKwne9XxpuhWmFxqjRnqN/HpgpmVPC3bRxm8emjYzjbrICA+sFDdmZerrhs+xEIkfAsy2nI4HcgcP0I+yNT4xPS03OIPRqYVsW13LqU5zbkrV376dnPz1LeC01dklkclkENaNDZOCjFb9F5wwi26W415foMiMzXv+XHfbcYYTvesY9NwChwSnhAs1u2LaN8tHtG7cir2DX1p6Bha+WnfzkxAmUbY/TbJuGJrd0r23qB/EJCbuTk1P3ZVh6v2C2RfJaDNmyNHx2bwalW3oejWHcVhYtpR6QHoyf2BFCsDF9mUZwL7nGNnbGy86d0zBipOfo4oG2suaMcDdsZFmsNcMFlrsJec7pWnUqG6iWn20Rt7Ww7emjiUc/SI0HiHCE8VvAtgzfonntYTSrtfAsMeLPfsoeb0nkJN3KZFL6kS0jQsb13miCj4Ff+0rplMxXNDHE+4+IldpXQz1jilcMpNUJZ4S7mSXctRutIqwNL7Gzftq++uKJD2m3hXttabql+r/YKw4k36Ylkrt0/wyEy8kSILbF5LV4r6Xr8G5axdTHxBjGbyNlETJ62lhsBp5A264vvfLmIxoLVipecuiEBnh4Llq3esBo1xnhbtq4EfpsPXORxl/bXByJXfsMMMKHLLY9QO0w581tyez2g92J8WSv7Tsk3Vr2jeEzWx6VsryWGofjz5L1KfUxT0LwLfDbSBl7Rtg4Lb5Ls26ID8G7ISG+InClFiJ/zfnzIUfGxK1egvwx98dyRrjbWFC7yYZLr82uZIWf3sy/eOIYoVt7c9vTCYlH6Q6wP+8le23hLIHgWzbdok4L6ZjNtGfPvp9MqZaaR2OAx8pYfktOO4ukX6mUZF7ac8GlGqHdQGzfrn1zeEzMQFhZc1S42zdv3rxx49bNmzfTS2UbWYtm2zdt27Ztu4fHhk3btlkEvYHl0U4VkC1Q7XGoJyHNNrZN252QvI/eR5Ycn5F6
hNo/9pe97xwFYwdZb721Y8eO3e/sjd/7zjvvZPC6Lcdvifrz4Stnz569QhnulbNvqe3lW/4BfBd0OsgI5RL5rohkB9+QkFdccSrvEK/+f2NhR4W7fuu2rRs3bt26bSv1jz9bkxs2btuwfuvG9Zs3bt6w1WKzaze6KFd4ujwuEbDthwzbwkkCnNsySQKV26btTkyD9zXs3pGSkv5W+ls7srOzsnJ1elA1ZDU1GWoMerJ0WVnZWYd2vAXEnLgbkS6i4L2HWZ67Nw/rtw54Lp1ISKVSGeO5M0TkGhvoI5v0R5Re7R9gZa1f35zVCVTYwPLYDTA3bCDkvH7j1q0bPNZvhOB3K28vgz31dMWl9Jycizk5KRcvIkkCN7dlKAHkCPGJCafJfWTvpOxIf+ut7OysB1qt6XbT7aY7TV833WlqYk2mWunRaqqtBSomFLybw7ZnLXXkz2dZlSfB8C09ac7FPcc3wXpaNKFcwLsiKcW7Pj7SGZOcCnaZGuXVn+NdJ4S7iZURbIaFu43w2A0bwXOb2cJ1fg1i1epLSUkfAro9djDxYlJKEuO2BzC5LbROtjcxIX7//lTgsCnZWXr9rYY7DXcavm74uqEJHZxqRUajVp+b9SHQ72GGbSHt7j3CEu7pRajXghmJec5mTwZhrg/BDL4iS8Tr4+P7ykuuuO/EqCDkz77/lBPCZV+bbYbkuYFU9HoiLtuwadNX0Pc4K9xrf81OAkkC02977ERSIkO3BNti18mS4xNTdqQkfpKdrdffun375te3ydFEjq+bvuZqlSxIr60PWy2Trrayyqysd9/e8c7eTymepeZe+jEYZ/8CKJc1IiItg5dnrY1oabQEeK5MNJ/uayCz3Ul/dMHp5yPfQP74+005IdyNLOFCJOuxiXy4CSXarU7GCqs+u3DxBKHbLw7Se8kOHkhKTIToFrPD4XRiSvqFT7Lz9SW3G27fvN1AjjsNtxu+brjT0IR6Lqrdr6HJHq2tTa3GJ/lZHx7csfsdC9cejof8971PT+/i+Cx34LwWnUhJpWGE0Y5ldZT5+PhMesl50o3tv/GC48Jdu5Hlnhgv3YauAWNeZUf931P3Uki3Pc7e33AwKYkIE3DZ7dHElJScS1n6gts3mXH769u3LW6L8i2kWUq3razJFOnB1Girzc16d/fujAxSrBmpV2i/fe/s2SvpEpJfI7AD4V6hyeLeEKlEJosI8RdZ+n2pjOGVUc5GDEPWOfHT6tlyXLicBd9tqJduRJ9yhnGffX4p5QRNCQePAa89fpSm2wMpx1jZLfDb/aeP7khJ/+SSVnu7+no15bU3abflpVuc3+JHa2sbNaFq/OjtnbvfOXz2ytl9RyhueA9EDFcW8XltpI2sy8e90rBomSzEX8rZ10a6LoZd7ZieK5AfQz8px4W7mQ0CmxCVcpRNlOOO+9PS/HQL21LrZHB2ezHhOCu7PZr4Vs4nube+0ZZU3755u/r2beC0xLSbbnFs28r1W9bIfXv37r0ZeynH/U/ivylqrNcyg+LZSHkEM60QLj2kkmiZ3N+Xta+Ncl1nE4aY/rpDzXHhbmMvJmxCuACDuI4z7ptxOSnEOhnFtkTP7VFWdpuQxPjtAcC0+derb5Vqa65Xo34rwLc49eL5lkMN9MUaPduz3n4rcS/w3P+krtd2IR7LjMjIiMhI5Fkb/JgskShEKhPNh/oa6N0TPj6vONXDMHI08qPoH+W4cLeyexM2I60KNOJu2AQ/h5iwLbXky6QTSSc+JNcbjlvoFkoTzqR9fCAxDbjt0fT0C3qg1QKtvrr6NjlYfHsH9luB7BbhWxv9lhmVuUk73vlPynevpKgRl42MjIyUWyb6dcSVcewbIZNIZVKpCO5pYM5q8PEVPvXRSnn2081pjguXQ7DrIQNeD9aB19Iv2LYJ/iYHhPtsz8WLJ1LOnzh24iDac0v7bRo4TTzpaGJeTnZJ9fXq69W39NoCwmsRvxXgW9Rt8QPDtuDzR6z5qLWt7FFr
7c4duw+Tnvsq1mfhifgsbuB8VyKVhYhEzLoa7Lq+Pj4vO8G5Qf2zYcxh4XIJdrtFuORa2XoqddgAAcPajRu/8rCzfjqVc+LEiaQTx1g9t+yuWzK7PZiYnp5/g3TYgsvaapJtbzJse5OmWzv4Fpvd2uS3j8BsNIGPxo927Dhy9sqVoxDlyiNxA/VYvsHlXLDLQiSV0X2/0L41wnVfEvZVgRpe7KhEerQcFu7mjZzVW8vnxFrZ2q2k467dCl2ObUBA2GqtvpQEdJvI9CWQ2e3xA5w9ZQnp6Vn66wX6QuC2JfqaQsprEb8FbHu7ockq37ay+RbNbhG/hSfwWzD11MfH+W/v2Ht213zCQbk+a7fnYrg3RCKTSUVSZmsbm3V9fcYKnLQrPPrnyq/Dwt3EbfSytDVu3rh1+/ZtW7dvBJ0K8O51+9saN+QT6w2JiWTPLeS3aXBvwsGU9CwtUOz1W/qq2yXa0ls027Lo1h6+beXlWxvYFsp2DbT/PmrPTdiljmAxLW7yOCzvoFlXKpJFiqKjQ5iMl826vj6v2HK4I6ZGreuP9w12WLhbuea53vLEZuC2az02bNu4cet69jfZhbif7wGUcOJDAhQs2S17T9nxlJQL2XpCtWAU6PWlBderGb9t4PItD93i/RY3MHzLZVto1pnAR2rUxEgkUhzbOuy5kPdKRDLZ6AhZiJQJeNm0C7IxDMNanyP7Iyw4LFz0Msv6Vkjr2ynhWvXZpYsniE7xxBNEdnsUTRPOJKYkQaqtvl1aq9XeopwWy7dNDvCt/WwLzcpHjx4y44E6QCqVRGPoFhp83iowANdGSqJl/vMB30Zb2JfFur6+fKebC9ZQz37YnOuocDegIa11DrBlkwRTy7Mvngd+e+xEYuJBzJ6y48fPJKbkXMqvLqhmdFui1RZcL9HeYvz2pov5FmXbVjzbMrO1sY7y2/9++N8P/7t8hneA/ziJBPFZpz03UiYLi44WAe8FOyrxrOszEb0/mvUxZDHy0+nzckC420EYuxk1z6/Qp9i1dqvtmcKbe1JOkOu7Hx5LgvaUMWlC2oHEtz7JL6m2uO31qlLtDaDXklJevm3qTb6l2NbQZnHchw/U4tDAwImiMIkUw7f0FPBWoREtkfvPJ5g3OsTSz0DuFaZZ12eiA6br2f96yh0Q7jYQGuBWEjZjnrPn65Za+9ccym2PnTi2A8oSmPtAHk1Jz64uAIPW7Q29llawtobw29ssvxVIb/F+iw6EbXHZLdtvwccnlNsSo0g5VSwOnRoYIBonika8FvZcBzw4OixCJCN7GaSsdgYW6xJrafZxblC/20HpmHDXb0fWycivCO0+x38Ppv72Zg5FCceOfXg88ZglSzhIuu3xpLdy9LDXVl+vqtHeqGbYtrTKGt/ylEv5lh51jRDldqhDxeQQTxSJZsgQwo0kC32eGgKOGxkhksp9abqNhnNeuIfB18fuUxj6Hyw4INz1G7et34o9JUz44svWo8N+zE45b/HbE0l0bwLdmZCYnpdfUF0A++2ty9pC6Aqtulp7HfHbHuBbVnb7CO+3wG0NrbTfPnr4qMg/lPBcMFUzwsb5z4e8lb0aLDQRtyWHZL5oPp3thkRD/bss1vX1tfHuEcwY3d/axBy5ONuwadtmvAY3bOW/Ptu01abtZn/bk3ORzMDITrBEmm+pLCEx71DpzQKW21bX6KuY/IscN/S8fGvj3jJUqXbyLeSyxlrosw61OBQainESkWh+JKYw7GudhWUSmT+peuKcMlb/Ltyv6/OKnTsk+lubmIsPdt7AmxvYqNvPcy7SuiXWyhISqd4Equs2PbuAcVvKb0v0JbCKSf2WlPYu3/L5Lem0xkbSbYlRpBJbPFcsDp0ikcz3EfkjGhTyW37+lUpF85lsF5BuJId1GdP9I4Zl+efwOORn1afl6qP01/JludvxHs150Zc5J85TaQLZm5AEZQmJZNcXe5ToS+BVMia/ra3C8u3XvZTfskejEfr8SQzsuJrQ
eWJVhEQa6S8SzeDYLsK3fANWuyRaBHX4skmXxbq+9uULnv3rZmn96uYlq3OSSN1Se3iPHUxMYLLbpLxsfXVBNYGtcW8AACAASURBVNtvC7XagmrUb4F69X3Ht61sv3343w9rTZByiyZCfksOP/k4aUSkVCTyj3TAc8lJe656vghy4OgQ9p4JmHUBL9g8hvavPb/9R7h/+5xacSDTW7LzNoHy22MpF7JLCq4XXK+G+faGtrQK6qVh9ydU1aB8y5vftrqSb7l+++jho2/rLI+/fxJD6XUeNMQKqSQyMlLq7ztfCnuuXZNQu0ziL4Wpl8x0IzE9DCG+E+3oGutfK7/9RrjXTqZcPM/x24OJCUR2m5iSk1/CdtvqgusFpfoSDjfchvPb0qoe41ur2S2zUkZnCSYTpOPm+VPF4nmEcuGp8JGAZFfq7+8vinbEcynflUr82emuVMY+HxJKdX1fsb1/Iag/wUI/Ee7fPs+n6RY6M+FYIvhvYnpSfgHhtqxRoi29DvvsTaQ/QXsdzW9dz7fc3gTEbYnRZgBeS43HMSTbIlMhlRCbIqLni+b7R9vHuZYcOCJaNJ99lRci43Tv0v264P49Np/AMGpxP2oT6x/C3X4yJ+m8hRPofWU7Pzx4ICn9gr6A67bVBTe0cHKL8duG2zertD3UnwDzbSveb2G3JYYJvj5rDggV44dCSrhuBNCuv2h+NJI02OLDUomIkzWERHPP5LWwru8rtp42NrofHRDSL4T7+QVACV+QfnvsIHMvh4SDiekXSq4XUMPCtwUl+gJIqwjfUvltbVWv9yegXksPA8G35PxmF4tvWUMplchoxJX6+8+PRhNeqwwsQtI1WTQ4O0TiI5FQUyqNplg3JMT394i74qsftYn1A+Fu/zL94kWGE45Z9jkkJqXnlCJeW11QTVGCsN8Cry271cv5LZ/fPnr4qA1ehcgLwDAuPadIpBanlfqLiLU1oYF6bpiI3TcmlUgkkpAQ5VTx1NCp4tfEflNCQiSSceA5oiaOwDAtOoeM7jd3iuh74Z7KufjF+RMnviB7E6B9ZYl52ZDbWkaVvvQWrVVeviXz2xtlDvHtQ3v81jrfUnRrZBj3+0cPFyF8C88oiQT21Rk8a2v8iW+kej6TMkRLJD5KkktCQ2fOE4fOnDdTPFUMpp9SKpFIAev6vmQT6Y7+Gfn59VH1tXDX5udcPGHxW8u+sqS8SyWo1xZUF5QS67t4v0X6b7VVfZDf4vwWXJ/BlHtehGdceijVUpbHzvcRiQQ8F/Hd6DAZ+XGcRK6C1+mmiokx02/ma+T0k0vU0pCQCRNtOn/BC/kJ9lH1sXA/v5BDZ2Af0mcrEXlC3icl1QXXCxC+rbpcytEqD9+S+e2d0n7BtyTbGhjGffT9oxiUbtlDIZFybNVfJJIjXsvHvBLwWolEruL+nZg6byaY4qnQkAPftWkncL9Z+e1T4W7fA+j2Iiu7JdbL0j8pKShAvLa6oFqv5aZiAn4LEtySGizh8tECd7iIb8nxqAz6rEiEsC2S60qiua463wbipfJciUQiDSfeR8zujSA5d2ro1Jmvif3o6TdRMs53og2Zbn85IKQvhft5Nshuz39BZbc03x5Lv6AlVxi4fFsCrTjcZjstyrdkgttgMDl+Ppjr+JYc7W3M467vYzBsy5mq+RI0U5g/H2hXsADlRoeJ/Czvh7AIwbpimnVniqfOnKqQSGww3d/0k5XfvhPu9pM5oO/2PMdvL6Zf0BcW4Py2RFtaLeS3yPkJRI9CXY0VvrXjfDDH+Rah3O8f3QtD1ISOKTLUdSMi58/3hdbWcEMiCYny92YRAtt3KdJ9zcK6U/1mTvQZZz3THd4/jh7tM+F+np0DrsnOU2tlFN8eS7+YX02olsu3BciKAy/fss8Hq2kQ5Fv+88FcyrfkrG0DH7vIkacU+4n9/KaoAgOn+E0BHy0zMEDprVSCOUMikUZzaRcYr/98//ky5GmioiUyv3lisUjMomZrrDtz
6sypSh8fq6br+SPyw+yD6iPhPjt14SKlW2qnA6HblPT8kgKs35ZoSwsF/RY5P4FOcE1aYcJFuJYegucnOOC3jx4+6jI8bPvBZCTrfhjQq5+YHCjjqlSBUeF+gYFSSZhUIpKAKZFGWwg3Mhp05KDEG02zrTKA+5441rVwLvDd1yb6vGAl0x0V2x/C3L4R7rU9OYnnub0JB1NysgsL6Q4aFt8WlCLdi0iGi/ItlSfUVFkjXBzftrqYb7saG43txspao7Hd1NbYCXw3LwrDtZgZJSNQVzYf3PEBDJEkmqLcaOl8//ls4pVKFPOo9wgI5L4f23WJTFds4VzCdf1mjLVy5E2/CHP7RLincnLOXzxvoVvCbxPzLhQWFGL9tvRyCbyfF+O3eL6l8tsya35r6/m3fOcnCPtt+w9Go6G2xmgEnxkhdrinRpiWZ6gkEnbXAnDfMIlECvoXo/0lUNIgCbG8R7i/WKFWx6jDwvlYlxx+bNZ9bdwLwpluTD+4NUQfCPcfl3NyTsB+C1YdTqQ3lxQUFGL5tlSPaPY2mivg+JbcY1Zj6Knzb63wbZvJZLxrAIplONfQ3kUz7vdtRUokveUbaKoLShIdJpFER8si54v8/Um6VcHvIFr0K6L+Rx3Ow7ozSdZlk+5r8nGC+4BH9IPrs94X7obzrLUy8tTb9PTLBTx+W/VlYYH9fsvqUNAJ+i1+CPCtDX7b1mg03DUY21ByMFro4ft7aiS75Z9yiQyXH0RERkdLJGqJOsjLUyQJYX2/+tVf0fWqvzDrwpnuzNem+o0dOxTHt9QcvQX5sfZ29bpwT+WQHTVfQHx7Ii+/8Cbht9Ucv71e8HctqlZ+vkUJF/CtqQbBW5gIePhWwG8F+bbNaDAY2tsgvoXS26729u8tnluEZVqeqcKaLl3qMIlaHRsW97KY+f6wX0H1ajgv67L7F4hMd+Zrr4wR2pHW97DQy8IlO8EuQuntiYMn0i+UFPL4LVhycMBvuT24ZVi/dbRHAZspEG7bajKUoT7LysXajFA+1mEz5RJjkoSvPzdCFBQplfgFhnkGLfb09iMJ91VYuL9axEnDUNZlZbrTp4z5PUq39BjS5yu/vSvczy/lnKdUy/Bteo6WYFsM31ZrSwtu8usV5dubX+PPUDAZsIzrkh4F5hrMaKg1trVxdApP0mWNjRbH7crjY1r8mCmTIFZLlleYREZ+T0Cc12LPAO954phfsUvEeU9WvoBmuq9NnTiGv2Osz08T61XhfpaTcp7Dt0l5+YWFPH5beJnpD0PUi/Vb/j1mehPGcbFs69AeszaDwWAyIXkCNhnrgim3Q83LtJgZqgmdjltJi4gMC5IoLK8P9F8c6yX6H45wY7jviWa6oaxMd+qkMfxnL3j18WlivSjc7V/mMGsO5Llgx5LSL5UUVuP5tkB7GdUrhm+vW3oUMHxLJbgNeh7N2t+Dy+HbNlNtnRGhA4RuH1lc1tQIU66Sl2mhOU8Tygy5FLMnQq2ePo/9PWvUHN3+6lX0fYX6F4DrThnLuzdi+MqfkJ9wb1bvCffzSzlJ51l8eyI9R1/A57dEZ4INfmvjGWH6Wzx+i/JtkxDfsnsUGuuMT4zf8Ca4+HXfNijL7XoQg5AsOkI18FBKJVzHlQRFga+w/v0XcYX7K+R9OayL618Y+8JQlHCJ0cc3newt4f7jsxzSb79gTk64mJ5dWl3Iw7el2pLr1fbxreAZCg0GV/coANXWNbahGhXgW4py26DP0oWzXLZmySGWc5rGooPioK/TGS4iXPTduakumum+NnEsT7owqm/D3F4S7oaTZApm4dvE9JzLhWy/tThugR7aa4aol99vkQTXMsq4kQLisnw9Cji+bTTWGusacQkudrCpocsA0cMToSwX0Sw1wiXw/ghpWGwgV9sYx30VfX9iCmS6M/1m+k0cOwrHuH8YNvI55Mfci9U7wqU6bxm/PXYsJV1fUHgT8G0hyreF2N05NuwxQxJcS7XVoprF8W2r9R6F
RpOhthGf4fIQbhd7tLdBn+cFouxJTMGhsJhudMi6MNxrucJd5D0T+X+QUzjTnTKGp3chqC/3/PaKcD+jehOY7tsTeRcIr2X8FibcwtLLnBPCEP3y+a3APXobyqoQv3WkR8FUV1bXxDn/1orfIuzQZiD49vsuMJ/wUC6qRNaYLqFIVyILDFKhXw/VcOMwtWK0f0DgTPT/FCoWznSnjMHfI21ULPKT7r3qBeH+48sctt9+mJJXSmm2mtFsAe23JZdLbnLOCEP0iiVca/cwq2E0y8+3VhjXVFvW3oYkuEIa5fItRbkPoc+KAlDyRFSIGUoi05XKNWFhmK9qQpUcUgDvG+jv7++tQv9/sOuime5r4/DhwsiVyA97EAl3+yfNF8/DOVhS3qVCqjB8W3q5UNhvIQ0jfMtLuABwtSaEFTCDP8M1GSpNDvTgYsjhUVsdpOLHSJaLaBA/VJLoCIlMowxS4r++CNbt/8yn3l8JtBuIsi43050JZ7rjfo8y7tBhf/Dqu/v89rhwN1BpAuO3Kela2m1vEqplEe7lUkSlDvAtQrigyI4FmAnwfIv12zZDvalNuEeBT6WI33Z932WAP8sLYDMnokDeIZPINKFhnviv+wdBsPA/csv7qxQzAvyt9OoC1oV7F8Zi28VG9N2tIXpauJfTcy5SfEtWXnYpx28hvi35sop7Ai5mYNbMrPAtqVzDLQd7FL4xGIwN+DUzWwgXYQeQ5XbRjPuw6zGLcu3QbahGrAkNDApEnwcjNmwC0x72qpxNtYFKeYB3QECgrZmu38xx2M4Fzz5rE+tZ4W7X51xk+S1ht7x8q9VWV9vMt5weXGyPArsP94bBoR4Fo8HYiDgwr+NyNYq6LTEM0GNzkb99dEtqViEDm3Q0nuvQr4ER4DV+/Pjx6kWvLlqkFnn5azhMKw5XBgQS2sWTLpLp4hfRYvoKFnpUuJ9np19k8a2FbjF8W3i5tJrlt7gEV8hvBQm3yUK5GLKFM1yW35rKDN/w9ygwhIuoV4AcCC03Mt2N5kfmx3n28K0cnF4nlckUynCNSirnI9zF48aPHz/BKywiYnqoxt8rUINkuFHeAUqV0tvfW6B/wRrnjuorWOhB4X516hLbb9MvXLa4LcK3YK0McVqkQ8HmHlxcmWpQvhXqUWg01Bnb8H0KPAOnUfxoYfzW3PX9BZG17Hb6dCVxrq1MLlMqoyy9C+J1auS1xFgTCwxXFBQYIQfKVseBd5/H7VUIDAgMnBfo7e0dOJMn07WcvYD33NF9dEBIzwl31cmclIsQ3yamXzoJ+20hi2/B7nMH/VawR4HdE1ZrR4bbVlNnaOTpUbA9w8Xw7SOCbY3tBN+aH5m7zI8688Q82a1KoZTJZshkshkRwGHRrwfND8GmuF6eE8ZPGB8rmk44bmhgkHcouaIWyqXdgEAV+K+3twqb6UK9C1jO7aNkoceEu+rLnJzzkN/mpOdDjMDl2wJtacH1AjzfYhmXvwcXm+FSVVdna4ZrMhhMKNdaJVxUo3zjoYHxW7O5634Ypy9hulIpp45gViqjMIqmhihocVhYFPp8QNCE8ePHj4tVRZGOGyrypN4fSXDFSqVSJZ6nAsarEsx0sZ7bR7eG6CnhvnmI6b0Ffnsx/UJhIeK3DN+W6AuRkxRQ9fKvmQnyLdykUCZAuJYehSZjWV1bq1AfrpOEC4ax/XvKb81d5id5KpIUwpVK4jRbqUyuQtSIG0r/xbFBi8MCuISrHj9h/HivME1UZEQ48YzXSJou0AxXNUMZDj6KAwICAv3QTFeQc4OWIT/9XqgeEu6Gizm0aoHfJjF2S/FtNYtvSy5Xo/d4wBEupgfXdsIFmq2rs77PrNFgwKcI9me4PG5LKZfx2+/N3zZLFHJZtEQSIldg/+XHjAUaeqwRqWODFsf5/wfzqoCgiQThzgxVyUjH1QR4Qp04aL8CyBeIxwEBAS/7cTNdpncBl+d69sVNJ3tGuJ/lgN5bhm/T0y8X8vBtAS5NsNNv
bclwLZZrhW/rDAYmWuDbZ2ZHlwLKDkx+a+h6ZO5seVCZlfnBzp2HYmQyhVypVIarposxKrUyZLJA/zCvcq8wf1Kdi8MA4XqNDtWER8imk9+r9rcQB8q64lBvhZK671q4t793IP7shdfGjEIod3RftIn1hHC/+jIn5yLEt3n5kGbZfFtdcL1QX4Xc44FPryjh3saco/A1lm/JHoXaRl7Gfdja2lRX1oh6LToQfeI1yj8e1tbq3k7avfPt3NrabwnPvUd1eImjVAqFXKFQRqnwZKuxOC00oiSqBZp5AXGLg2Lj/APDYwHhTohVaUKjGOaI8lLC74Ow7jyxSq5kHgcGBHhTOmZxrh9m969nH8BCDwh31b0cIr2l+DYp73Ihlm9J9ZZeRtwWnyjw+60g4XKrDGFbZp/ZHYOhgb1mhuVbZzLcNkOlLmHn7p07dZUtnUYL4z4y/xDD1qdYNV0hl8sV4SrbWFclVVCUELY4KAgQ7gQ1+LsQHiGj2UMUxvJyDOvO8/MPEDN9un4BASP9LKxLca7fmKFczh0S1/uw4Hrhfn6p+SLEt29dgDVL8y1DuJdLkVNwrfCt3T0K7HXemkaeDLfOUNeGeiuOcXkGolLIb9t0uR/t3PlOVm5lreFbOr9tb2cY19z1/T18j5dYFaWcThgwTLWYIZbJmGeVgHB9g9ZoFgDHpd8rbHEA991R1hXLA1SWPl2xtzfZCQllulPGIJbbB7DgcuF+folYLaP4Fmzi/TuWbwn1Fn6Jvc8Dol/eDJfnHAU+x21tam0wYBm3rswksNcM57c2ZbjtuVkf7dz5p6ysSj3cg/uw6/tH5q6u2m8hz+2KQVVrGarp4UqlXKHEJF/MUEjpR9MJwiX+JigiZNSzgUEif657o6QrDlUpA+jz9wmvneLt7+0H9y5MGsOl3D4Ic10t3M8uWlZ5z3+Rnl5aKMC3JZcLWWfg2rDHDOnBtaFHAWbcptYyDN/WGExojwLvQBSKMm5bbua7b+/8M3BYI550gc+2t38PeW4HzxoYOQgnFauipisUiulRGMfVLNBMl1AEDBbNxnv5E89FyDXkO8Qt1gR4I++Lku48scpboWL1LgQGBnh7W3oXJiJxbu9vQHOtcNeezIH49mL6pb8XCvBtqRY5l5E3URDyW9sJF3hrYw2Hbk11tQ2CZ+FiMwWc39bW6nKzMrOyMjOzKiuNguxA+qzRwrhd5rY822g2VBw1XRmuJOmB48tS4vprCiDc6aJY8FgppxxXFRQQGuWP6STDsW6gQh7I7tP1A9r1o3oXxv6e27PQ6zeddKlwwWrZCYZvL1rCW8pvWXxbqC29Kci3VnsUuOco2EC4wGfLmmC+NRnqTLg+BR6+xfltW8uTyszcTKBZwLCoSrED+KzxWyjLNXfgKZcnSYhSKaPAYLmvRqrQLNAoxo8fHxG6INZfs0ATFSEnvxa2GFy5IZTLx7ozlQGBnLMXZhJNDYAaZiKnOY7obVhwpXCX3CP9luTb9PTLJwX4tgSX3trlt9Z7FFDCBVxb942Fb28Yahr4+3BxGS7Uh9v25IkuMwuMrKzKB0ZE0Wg+ZmFc0mfbas2w5y6yde2BGSqVUqFUKiDvnSEXE4Q7CaxCqEI1ikjKcWPnA84N8EbeA0+64lA/hnWhdTS/SQEB3lNmThnHvVfE8F7exuNC4W64CPyW7mJ86xKXEVhnKJRcLmTf4wHNcDF862iPgmWfGUm5lFK/KTXw9IUJjEcP22r1lVmZf9q586MsXTufRq0Mymdr4VzB/ARLuYjXokMZrlAqVSrysZQgXODSi8NIxgXP+6sJRw4IwKfDCOcSXQwBShXauyAO9B7pHfgCN8716t0w13XC/TyH2MtLksJFIk3A8S3pt/j0ljdRcE2GSyu30kS7rSXDRf0W5du22jJdZlZ80s6PsnIbUcK1luHChEt7rImV5XbtwvR/2TyUihBluEqsmTQOEC54JiA2SqOQRxDvEOtPki7m+gwMDOcSuW4A
ketyehdeE/t5j35hFCfPjfkK0UQPlsuEe4rsTiD9NiX98mcYvmXOCNOWQqcooDpFCNdFGS7Vh3unrrW19Y6hBO0Jw4824xN9VuZuwmFNqEaxOrXJb783mw1wrvD9E5RyMf4qMMRKhTJiQvCE8WKSa8M00yMA9S7w96L7GgJ49gZxe3WpqQwInAnvR2POXRj7wpAhQ+DjHIf0apuYi4T79LOc5osM3+ZdAGzLx7eFBfj01gG/FexRwBEuybZlTTfK2NkClnHbnpRl7nxrR9LbuU8qjbU1BmNtfcXVc1fPnvvo6rlzV69ePXfu3JWrmVev0J+c+5R4dJX47GpF/V2j0dhiNLZ3UeeEwRku7bHt35hhz90llNPaNvwmTBg3bgL52sCgQEWknCBcEfXdKqU/3/sglEsOZYB3KO7chbEjhg4dMuT3IyyduUsRXfRcuUa4RAzG8G1efiG3qqEMt1BfCPstlm95GJf/HAXbMlxKoXW5dRydsjy3rVZ/bufOt956a/fuvwA9nruaWVlhqDUajSZ7zgrrbG8xglFfAaRccbXi6pX6FmNL+7ewx3ZVwv7bdZdDuULuyjMIwlVIyK/GLVYQjusfy7xaHhi4QIOqlp9054kDZ6hg0gW9C0TXAuG3o4YMGUUa79DevOmkS4S7PT9v1668cpJvc/Iu/12Ib0sv49NbvkTBol/cOQr2E25rXVmdAelTAOo1ZmW9vTNpx1s7MnNzK8uMtUYjT4Zr9SwFhBwodmgDSq6vr68/d/VqfX19i/mR2dgF5wpdeTx5lc1jOpHhhkZJiC7cqCCJXGEhXDDEfNdnoTyZLrHyG6Cah/ToThlD8e2IUb8fMmrY0GEjezHMdYVwlxaVX4jLj7uUtyv9xPmc5tLCzziqvQntMdNqC1zAtwjhfm0b4ba23imra3pY8w1st4asrENJOxOysvT6WgPSryCc4eJ1ilItZ83M3PVte7vRUF9fX5FZUX/37nfEc2az+YnaOb/VTAeGKwaJbhSZ3gLHDfCCXhEQGEhmvqhuiYFyLjiLQenP3Y8mnvrKCwzgDh0xZMgfe/M0MRcI9+eiuJN7Tu45+deTxet25a0rFeRb7Ul+v8UTLjbDtd6jgCPcr1vvGGqbmlq/JlfPjB8Bh333oyxdZW0bh28F+xSs+C2eHbgZLj0NXe3AhSuuXq2ob+ky58kRJdkzxCDDHU98RwjxTkFhCk3oYhH0HlGBIwXyYpRxqREoV4q5PbpjWWnuiCEjRyLq6KlyWrjP1l06efKvJ8n6snnlhZMCfFtyuQRzHzPbehRckOE2Guoa2rS1uqysTw5lHQIOa3Dsfg/IsOksBTRTIHrEjNTj7vb2u3frr+RNV4ZHqRz02wWTCMMlHyuixQs0oiCFJiDoP+DXyFXezEobqlxe0p2nVAZCrgt6dP3GcdLc3rvppLPCXVX0JfDaPSdPntxzMm7xjx4rmr9ELs1ozyV0y+u4iHp5/Za1zwxxWozfNjW1NtQ+yD106MMPP3z7w6zcSn0tbq8Zpk8B47c29YXxswPbb81d5kqab8nRLFKFhyvCleHKqChEU9YGQbiT6FcpQsI1Cq8wjVrE+j6l0pt7ni4nXcBxLsjB/Kaw9wDPnDR2KPs+v569BQtOCvfZ4vyTTMXFgfXqNxdfhhUL7TGjzrOzi3CxPQr2EG5jbZkuKyvrww/fSvkkM7eyxkR6LLM/x6rf2s+4qMfy+y3oEWujH5vBfEBRrgp0IiiV01Vi1Fd5B0241IiSKhWiWP9YFfv1a1TeltcguuXPdDWh4ikqNudyG8VGruylO1Q7J9ynsZTfAlaIpbqJrwXtwfptqR65r44dPQr2ZrimWr3uUNaOpJS3P0nQZeaeO2Ri9eFSuyZ5M1xsn4LNOx+Qic1w6WmC/NbcZW6WsP7dDlcoFOHhPPt4OAMiXGrIQiJjg7jnPoeHe2O6xGwi3VA/bz/W+WLjOGfn9tZpYs4J9zmL3+4JYu58tapcy3Zcgm9B
nIDjW0Sxt3GMi7nXAz7DvaOvzM06lJiz81CWrrGmXnfuXGVdU1WZgbPPrMHAyNTZe/YiOkVdVsBvzV3fGx5a/NZs7iI6FjieGqVUKKLCVda4FyZccogVEfOD5nFfp9B4w88huuXp0yVH+BQxxLmhnD1oQ3vpTCanhPtmEMm24LIsCLrv1ZJyJFMovFwq6Ld4wuX3WyTBrSvT52Zn70y6kH1IZ2pqulNTpsvMrawjaKGGk9uCYWjj6VPg7cO1OVNACBfSMuq35kftj80sz23GnoyvCVUpFXJluMDpIGIW4ZLfI5NHcM9c0IQGhqtwXWKI62I4F/TeTPKDOHfSGHZv7vDeuemkM8JdG8T4bX4QC21WlVOaZfaYXa4q4PAt4rQYwsVmuF+zCNek13+0Mycp50NdJX2XhzpD7rlcYx2dKRjK2jBnKdwx2OC3NvAtRqeoz1rx3BbIb81m8+MiAWfVhCsViunYDl2ScP1Yz6nkEUqJivvKecpQ75mc98UoF2VcavpNmWnhXC7m9s6eX2eE+1wclSecbObu3FiezvLbkvwSzn117PNb7prZ7YY7dXp9ljon5cJOnb4Wcl6DLvPjyhooDgObyeA+BWafWZlVvsVmuI6cF8ab4VLT2NbFptz5qIpYilIpZPJwZP9D6Hgu4QLHjVRqpAruOwQoVagPIwMlXOa6bIolX5jKTnOHjeiVlV8nhPviYhpv89D0rjib4Vuwt4xznwf7zgljEW5ZdtYnF3IOZWfptbXsDPdOpS63stbEynBN5B5I3FkKdSb7Mlxez+XoFPVYK377vflbo5nluT8sQt0UGSqCe2FvJgiX464qeUSUZoFcyvVtpYZaP4MGqlwB0hXTpDt13pQX2Pf8Hd4bbWJOCDcun+LbPNxJk0QoRvpt6eVCYb/F6peT4ZZmZSflJH2SnaXX16CE21CWe05XVsfJcBtqS5t4z8NtK7OW4brsvDCLbnF+C/aetbEclzkdxNoQg43r0/kJV0OfwKSQcvxZGaAZacP/AeFbW8weOAAAIABJREFUuH9hCu25Y19gn7MQ0wv3+XVcuMspw91T/ibyNQ8Pj39dR/fgarWc+zzYxbeG7KwLO3MOZefr9TXwPjMowzXoMnVlRiTBbaqtbbDoFD1LoQ7iXp7h6j4FxGup2dYCM25X9w+7UIflG6oohUKpoQl3ygKc44JEl53lqtZoZnqj74gol693Adwzwm+KH8W5nBPFftMLsOCwcFfFFRN+GxfH89dryzrSby9rMXfqtUK4N/T6/OzsQ9mHDmVn6/WGBt4M90aNTqerrMGtmoH7kbH6FLhnKZjqHGFcp/oUUK+lptHMptx7IkQ/QkMF9u4gGS7DuODRdJmc9RWlKjQAoWTcmIcSLpOG+U0RE547ZSx7B1ovHD3qsHBXk4Ybt45vd+ezlfmAcYn4lsO3vD0KBZdBpnUo+1A2QIJSlHBZGW5dmU6nM5iwfQoNhpo2pA+XczVWZsVve75PgZnt7TDjdps7d6FuKDw04ydMmDBhOvc1UZTjahZoIuVwFjEvYIHYG/eOqHYRvrXcMUI8xY9QMNQnRm5A43Ez15XDwo37EvhtUfFa5Ct0/dhcWlB4uZQ+lZHXcauvV5fo9VnZlw5lZ18CgjXYkuHWVOp0lXXoihnptzUG5qZmKN0yp9c02rbXrAf7FOjZbmQxLjjpGdGPlQEIdwLYM8l6XZRMzjCwgnV2uVKl8RboEoMGyrfQvdGmTCE8d+wQds9Cj4e5jgp3CTDcJ0WCncOrLxWS3WC8hFuiBw6bA1ICPV+PAibD/a+aylxdHX+XAsgSOJrFnRfWVNvnfQpMj4LxW8hvwdglRryQGGtU6HNMhjt9gUZDES81wuURls9UcKKrUmoWBKDvgyddHONS90abSZDu1LHsZMFzBSIH15ajwo1bCS7LcHEC/KIvkTuZkaMkHzhsTlHOpex89ApN2G9vl+oydXUCfbimWi3SiYucFUbtUxdkXIzf2rD3
AZmCfQr0mpnxCey3Zv7TQcK84N5ay4AzXIXldOhwWSRMvBLopNEATWigYJcYTAsI3zKcKxYTme6kF9g9C4v4ENJF5ahwg0B3grUlkhfzoHNwiTvr6PX5ly41NzdnZ+dikwXhDPf27a9rKjMrbwn24ZaWmhDVonxLVGOdo4zb2d4O9kFiZouxsw1xWma9F+e3hOd2mWjFUo7bvQi/OuYZq1YrkWcXTCEMF8ppKd9VyiNYryNOuqH8V7lAE8Dj64hyBRLdUDLT5bTm9vQBIQ4Kd2X2yfxY6wBefI88J6wmX38hvSg9PSk/67yuRDhR4KyZNUB+W5ObWYbJcOHRyGQJ1giXOI3Jxgy3zWg0GivBTt6rVzPBTl6whdcAnmT9l3yigtgcebXi6tVzV6/eNbYQ99fh61NgPNfYBfttl9n8AHs6iCpocVBsUBiSBwDCHc9+bnqEQhUaLuPsqFBImV4HeahGhT9lATdwjEueoysmTZfTs9DDsOCgcINO5i+24TDfJbu+/CQ9Pb05J1dvAI6bm6UXWm3A8K2lD7dE91FZA5Lhcgi3rqwNo1kM35J9CugOM3iYjMb6c+eAXCuAMNswO3yFrsTAeAic+W7F1atXKq5erW9vaUe8lu4L6zRSGS7tuOZyHM3OV0cp1epYdRyGcCchvhkul3McV7MgXEK7bFSUZkEA0sfAmy7gGJe5N5pfoHgSO1kYwZs3uaQcE+6Sdfnr/oE8a6lry5Z5xsbGxsYVX9IyiUKV/pAWyRRQ/SJrZsBrb+hO66rwfbiQ997i2C1/hkvRQ1sdhm9NNYar566eu3Ku3mA0ueC+ZgzjEhrOvHruXH0LZ28vGAY243aZsSc9h6kJzg2K9WL1GhCEi7srZbgskns6jkpC34vSnyRdWwfCtxDnhk6d4jduFItyR/domOuYcOOKiv+GPAn2+z63cl3s4pXFbyxbRv51+6mcznCrsrJK7T5HgSJcfaa+xPr9Hu6U1vHsNcMzLvBfA8th6yt05yoq6msx9+5FFMqjU4RrcZlCJ7E7sqKisr7Tsl5mbIcyBWKUY1hWHQ3+G7gu1is2jnDkqECUcKGhlEeolAoF+1mpnPwYqFqgUQYi38NPugjfwvcA9ps0ltMmZuXa3alyTLivcv4yLSsuXhcbWwwEe439ldUXyDs9dGSVIm5rg99W375Zkplb1oD04aKMW6M34faa8Q/gr+DekeBDRS44esZgbMQzrrPnKeD7FLpa2u8C/da3AKftMrAZ19yFo1zaZwPUXkFEvrAYrLFNAoSL7dMlGFc8Xc6+a5o8hNRhQKhG7I3wssDAMS7FuaBn4fe9d59fh4T7jODuZ0uXbSkm6rkttMOiVVRaXWDI1ZUiGS6iVQzjXq/T6XU3sH24nNPCqgw1d/gUi8twKc81VdZX6nT1ZUYTxmPt78VFXdaGTKGzBZwRcre+3sjyW3O3OV2BuiAzROpY9WJl3GJwYhiWcImhkJPuqpqujIISBCV56oJSpVkQiPF1ftJF+BbiXCRZGNmDt4ZwzHEpwRZvQRwWqWXN2iys2/IlCpDfanW5ZdgeBSTDrSmrwu7tFWBcY22lrrIit9bYxmFc/j4Fq46LYwcrvbg04z5uqagAh9s8hjz3nprf+0I1gWFBsUGxciHC1YQqLalCuBK699904uzyeYB2bVw/IwfCt2zOfYFzn9+e2/PrmHAXLVsmdG0G17PybK095ygw3nurTKevEbjfA5Th3qmpaxA4TwHh27aael1lZSW4W6/JiKYJuIEoFKtTxGXxjItmCsQ0mNvb7wLzbaeyhW+LAlAPhIYyVh22QLNgOi/hAseFXBucYU6nw2Ii0VWKF2C7xIQ8FyVcC+dydkMM6TlYcEy4q23F7mfLli9fhzitIOGS2jXodLobDbadpVBXZuI7TQEdJgPQbFldE9WnUNOjfQrwbjOc17L2mhk7iauzlpb6u/V3azu7zOaOmBmI58EjLDZKiHDBqfoRrN0P4nDFdJpp
I2RiTSA4YcwuyuXnXHDOzZSxI1g9C6N77PrMwRx3nU13ZPtp6ZZrHh6LrfItkuGW6LJu8NzvASHchpoarl45a2YM3zYaKjN11KIDvWZWhyYIjvYpCDMuxmu/Z+8162y38G1X/dWKivbOB0XlahFPR8ECzTwvEfBRfsLlOC5Ju3LadRVS8QJwZx6xsK8jnsub6YaKQydw2sQW9dSt+xwU7pbVyFNoLVlNHJi64hLitfyEC/y2Jje3BHd2DTbDrSu7xXd6DYtxGwyVunNlnBwXtJNDUS5fn0JP9uLC4y57B8939RUVd1vM9/N2qUVYDg0jGBgQ7gT4npGw6shTRtn5llihID12uiQqCqS83ravn9Geix9icehY9jkLPZYsONqrEGv1b9Kq1UuprDe2xJ59ZmWZuluYLgV8hnunstbKeWGEOo2V564auC2MlA/XIV6LDkSfeJ0iLmun5za2d3Nyhe7OlqtX73Y+7ijfpUZ9MVC5QLPAT4BwNQsUkbhkQhxFdumGSpTEu/J6OlvxsOPiR6h4CifMHd1DK7+OCncZuj+SVU+XL2ekvewC4ra8hGs4XlmAPysMR7gmbQOy8wHxW4Mus7IOYV1mrxl5tyj+PgX7MgVk2MG4xOoZmx6I2V5fUdFuftK8K2Y+hkaJe+xMR70W57jQv/3hCuJkMplERdwZAn1fwYHwrYVzJ3LugRbDF5Q6Vw43kq8UzMGWvAH/cotKbCTcmky9UB8ul3ANZQgccAm3VnfO0IjPcKm+MFMd6rB9w7hd3ZWI41Kjk9Bud0feq2EzOF4oRLg4xmVGFLEjQk7cjZKvS4zfc3GMS3DuVM6tfnvo1hAOC3eFwK9n1Rb2/snlKOWi6r1ebTikLxDow+VmuFVUdsvruHdyz+qM+F5cOB2znI3Lx7g93ItrGZ0m1G/p+V1L/dW75u4HHXm7YqIt+oEIF1Ubj+PSSiRYVy6ZTq6huYpzJ3I6c4N65NYQju/yXckbLkOUQNW6Kus9Cobc3FtIl4JAhltjaEB6cWHGrTyXa8DvNePs7cWsmqHuixmITlGXta0XF9r70IJ4LWu01Fe0mLs7H9wvigmjaJUwXESVzIjEMi7DDOFyVZRCOl2zwJu3S4zXc/GUOw9ZPxsW1BMHODou3KX4TuGnL65AFf1GnLVEoSZXZ+A9u4bJcC2duKayKtxeM6pMlRUVhgaEay0DPk+BOv/OecZFyMG2/WbwXjNjJ8ZrIWV3dbeABYrurscdzWp1WCSZ4U5HT8mzOK7Qii5IGJRylUwWatMpC5yBY1wwmXtDULNHVn6dOBCkGLcBYtWyZT8hT3pcu1CCOi3EtyX6XANfHy4uw22oq2lAdprR1VZWqSszWe9TYDy3FuOwvXJmGIdxAcsaEZdFRufd+ooWsLb24F5zs1oyYfx4VJHMiJArkefYQxUwXyOXiecJr59RDo26Lm5O4NzptydOE3NCuG9iLPfN5ajdgirOFyJcXa4WWfdFMgUow72lrcHsNSOrrlKnq2uy0qfAJgeTqYfODLObcc3mWozPQldvwHPN3V3t9fV3O8HjPf7qMWq1iM9vF2giI8KR5ziuq5qhCFdKw207ZYHrudgBbiTFus9vnOuTBacOveMeYfNs+VKeePfFZv4MtySXvibjI1xOhltXU4XZa0bKVldZdkewTwHx29aHdwwo1fZYLy6v35K9uMZOxGGxo6ulvr6lu2vPmjVrvrzXXKQO4yGCyEhrjgt6xMLDlVIF9pQFK56LEi4xkAMcBXeDO1TOCHcJZ1XkxTf4/2KtLOQh3Cpd1i3cOgQmw6UIt7QW6cUl/dZUdk6HScTQ0co5M8wgyLiu6VRAvRbjt2DvGeqzLMaFZv3Vy2vWrFnzd/D4XnmeWoTZ6xPJnAfCP0KngPtCSPGrc1bGPAzjhornjRnBvs/vYpcnC04d7PwG65ezVOgQ9WXpiNMSSi3LNGD6cAXuadZQVoXZa0ZckOUauIq1zrfEaOTmCn3GuN3m
lu+wDosd5vysyDV7us3dRLU0l6vDohHGDedXLD0ATWgUUn9+4uD3XBzjakKncFoWhrj8gBCnhPssFnq8BU+3dAXhCPdGVi6ai2H81pLh1mmxZyk01OVe5ew3EzhPgUsPTYYeZ1zEa1HPJdXb3o74LN5vu83dJ9esWROta4eee3Jh16IwmGqtMy7I1KYAPSpnSBzzXBznjhvC7swd7WpYcO4eEKsZy12yhYdumZdeQP22LKsO7cPlEi5zTzPguLU16F4zgLbndCZIscJ7zTCea0C4tq8Y19xtQ65Ad5ABUNjTWV/R2Q3Xg7xdMfPtYVzNAvKMR5W/xDpXYDwXZVzxvJkcyx0WY0Uf9paTd91ZRD1YbrXv8lk5l3ALcnWYdQjuWQpwn8KtshsI24JM9yPdLd41X9xAzsT9ps7mc3EFHRcZ9jOuuaUN9Vp2pkBPLUm4Xd9VXG1nf+3x/fJFYaBzJsoWxl2gERMrvprAiRLk7HJHORfcjQfm3OHcQ+v7VLgeq4l/AZ5h1hyQ+rmD7bdloHcRy7d89zSrqcGdplB3rpJvt5kNGS41DP2AcSkfbW9HvRU7CMNdQxDud3crurn14F55jFoZIY9CdYqOKeQz3oHSaORruIFqF5mcw8SGjXbtNh5n7ywZRHTUCDbcUHWtnEW4uboqHr/ly3DL6tC9Zg0mcPod0qUgeJ4CyritrTrePoXe2G/GPk+BN1fgMO7f14xcs+bv1Oed9RXtCAM/7mjeRa8NC48p5HWZ2FsTIUEc1THO9eXsPxu6CJGEM+WscJcWe7yItCbgayW0elaSWca9HuMhXJpvq0pNyF6zpppKXR2uT4HLt9YYlzhFjGcg+uTRKeqx9vaGUb7a3ojzV3SADHePmfHZlvr6TsR27145fb+5SC3hSXmZoaFOMg8MXKCU2kLFXM9FKddv7Ci25br2PlJO34R68c/LbeyhWHaJIdyyLANypSacKdSUNqBZQmVuLdqnYJVxsfd9MOD51kX7zWxdNyNzBSPitTjG/TsAhVL4uRZyJRie9VeutHd1dtwvUqvnY1Jey5hC9papAjQalVSBeqoDnDvxBXaWO9TLlSu/Tgv3RZv/Hv1UTl+V6XNvcV1WOMO9VVNz5zb3vLA6XZmwYm3pU2ByhTY7GVdApZiBei3quYyn2pYrWAiXqe/u1rdzHfdKC/HgSUdzszpMjiqWGvQdg8G9eEKj5bYkuqh2Off95VruqCBbt4bbUE4L12OdLYBLVPEDgnBLsnRct+X1W7Iv7EZZCbLXzFSZW8daN8OoF8+4GHIAJ45aYVyn9pvZxbjEYUxWM4Una9asGXmSy70t9XdZn9dfyWihH3c+uA86ynj6F+m7nozUaBaEKqRixFPt59xJnDtDDHNlmOu8cHnaGzF1LQ9otCa3BnM9xkO4RIZbUobsNWsqPVeD68Ntwme4vL24vLmC1RQX0SnGZ+1fNyP3mdmSKwDCXdOFQG13y7lOnOPS9bi5PE8tR3WrUVF7IOYR+8+ipLakaNY4l2u5rryPlPPC9Vhn868mtqr6emkmpsNRMMPV1iB7ze7orqJ9YTjt4gaWcVsNbf2FcXlyBba3EoR7EskRwKy4+x3zmGBc7ms6isqDwrhXYKG0E3sTLKyShCOeajfnKjn3P/vDiMUu6yl3gXCf2my5S+9dL82tgvpwUb5FMtzr+v+C/ZZgXFMucpwoTrP2MG5r2zd9u98MPjesHd3tiyVcsxkxXMJ0r3byOi5dD5p3xYj+g8UKNNeSu4k1UpwvW/FcJMvlNOa6EBZcIFyPYpuPKynS4U7QF8pwq/S3kPPCanNN7HQBKTsZl/DgWjzj2uy4CDvYd6YC62QQXK6AZriliJdSs6WihXpcceVKJ/p1cj7u2LUI2vMuplMH6l6pGqn9iS6Xc/3GcCh3aIyrliFcIdwXbbbcdYeweuUj3Nu3a7TIXrM7Oh2+F7fJsT4FfK5gN+MiLus445q7DYjDcgZor9mDIVyqOuvv
kg/qr1Sg4S5UT9LLY0QUNTDdOAGU98olVvf+Ip7LodyJXMsd5apbQ7hCuB7P2XKuDUhyV69D/VYgU7hVWoOeppDJvY8k4rgYroXVjGfcVtMTIcbt+TMVYBW3G4UZ9weScHE+Ss27FSTvXrnyHf0c05vDnp0d93fFqAErUImYhtnFo5DYnehyONdvDCfLHeYp1PxqR7lEuKtW2sTcby73KKrGEC5fhnur9gZymkLDFX68dbBPgZMr2NSngNEp6rIOM273d1Z2+xKRwg94wqVNlwDdiitoEwOmHnc0L1LLp9Mu6k1Tg0IqtEcY77lszkUo9zeerkkWXCJcjzesnGtD1IvLPDx+voe4Le/e3prSW9zzwu7U5t5A1s+QEuBbAcYlcwWeXlwbHNeFjIvNFSCf/Ds2w+XMzrMtBONCz3MH63segOU1sq9RxZCvQmp3osvh3DFDOZQ7fDSiDEfKNcJdZQO5XAOXcE+LEK3yEW5pSQNyXpg+14TwrasYt7XRiD6HqpNPp4jLOsO45k7BXOEkYbjdgo7b3f1dRaetjktW+12wNhw2Hz5lQS63IdFFaQGiXM72s2EuOnrUNcL1WG2VXK6tJlpx1mkF/BbKFBq0Jdy9Zg1NNbkmhG/5HJdvIHzbyuw3q+s3jGvu4uYKsDcShLsH9Vju/K6i/eqVCtbzuMF8vbO2u6v7wb2iIrVoNJ2NaSKVUnsTXXbPwpgR7Cx32ChbbjRmtVwk3GfW2oRXrSY3Uq64h+YImAz3VlkVel5YzTkTZq8ZXrF2ZbisXMGxXlxhx8V4rSDjCucKBOG2ma05bnd3Z4VdjtvdTnY6EGvDYf6UciOiZHynOPF5LotzJw7hWu5IgdO7bC4XCddjmXCvzVdM6+NiAcZl/LZKz91r9nXDnZpMtD8M47iCGa4Q47Ya63gZ1wbCdSnjgkOe+TKFH0CGexLxV+y8Isi43Jyh02h5/KB5lzpMQeyi1EREIq4qPIQpd5gX91yDPhSulTPKl/5IP1r5JW+GyzBuTSl8XhjluCU6dK8Zpk/B/l5cZjRizm1E1MmjUWHHRbwW47kcNRlRfdlFuGTZ57jdRvan9/LK1VJVlHKBQirYE4l6rjDljopFBGJ3uUy4PwoFCy9aTvddcZ/Pb+kMt7q0FD0Tt+lWLprfYvzWKuGijGvZa1bb5tS6GTLsYVwuORi+42HcH+h+RrzHsueVKxXfWWNcyHPbjdz3eNxcHqP210yXOHGfiDEjOFnuH0Y6v/LrMuEKnVH+0wpLzvs0rpqPcUm+rS6rwp0Xpqth3dcMW04ybquprt8wLn+uQGe4NjquXZb7XTvyFFheyylfpJZYS3QR9TKcOwmx3GFBNvfC8pXrhLuE//psC/zLLL6H91uKcQu1VbjzFHS5+PMUUP0KMi6GHKC1tIYankzBoTMVnGJcc5cBz7gNIMPdg3orbgLHrb/bgmgfN8jvN2Lfp+Vc54Ny/HnovAOm3LEjuJTrPCy4Trj8B+a+yTrwZlkzH+MCzy0prWbvNSP7FEy6O+ReMwHG5clwbejFFehXQBTKp1PEZZ1k3G7jtzh9kYT7924bHffKlbvf1X+HPM1b3xqxX7nb0t1t1uU171KLEKfl9VyIcrmHLAwbNtrGLgHecqFwr/HdYGUFe0E4TsBvS0sb2H5L9SkcN6FeK+y4+MHTp0BPUx3Wbx3rxnWOcYkoF2Vcak86zhdx8+yV+u72q4j6sYP4fiP+fc6Bjy27Wx7cy4tRR/G5LB/nzhz7x2HcnoUg/nPmbCoXCtejGG+5qzmL0z9fxvAtmeHq67DnhTVVlt22j3AdY9yHbXV9dW4Y5soJmyucZAjXVsft7m7Bt+Riqx3fTNYJSNnc/m69ubuzozmGb887ol46y0Updwg/Wfa6cJ95Ik95eHj8g7vC92IRlnFv37ylLeGcQU4x7q1c5DwFq37rCOO2tRrwjOvAmQrw2bgOMa7Z2IUyLpHh7sF5Im4SjAt6xL5D/wbgRle3
+TsD5n26us31j4m1tcxc4vMn98DuNaun3ljuCzF2FJdyh3o6dx8pVwoXf0b5aiRtKMLwbfXtm1XaW9jzwppuV9ZgzgtzaZ8Ckytw7zOJKJRHp6jL2um5qI5wucLfLYRru+N2d9YjX+CtdnyXL2G53eZuXSb1qwI3oihSR3PzXY5yacqdhFLuMC+nkgWXCnfVOrS9cRXaUvHGPdRvr98s0d/CnRfW8HXDrUyEbB0jXKuM+6jBgPFbZzsVHGNcdq5A+l6XPRkuzbjmbnNFJ/p3gGd0onvUiFlPPV/50WPmuccdzUVBYf6I06KcO3PsUG6WO8y5W0O4VLgeb6D2jznF8cU8lHFvlpTizgsjHPcjE3piGFKwPoX5VtBzDX3FuBjPRXMFMlKgP7NendSWM3ss14A8QxaZB5u729NqWb+mjuYYtSiQ13PpLBdnuain2V6uFa5HHFemq1Ape3jEIn57q1SLO7+GzHDfhc9T6JleXEu/wjc9wLg4r7XOuOY2I4dxwaLZyDU4P+SZLQAVzFQqYONo6cS/Vz29j838pweW54lf3+P7YHkN9Vu4awHtWBg6arETyYKLhfsGN577GYUHD48tl7iMW1vCf1+zujLsmbiu7VPA9ysg+uTVKeqyzua4YLcvnnAfP+l40NFx715zc/P95uaiC0XNRUWxRIHHzffBuH+v40HHgw56k+9d7JIYtjp5MohOahubuducm4n+Wp80l6slSpzn8lPuaCfaxFws3Gcr2X+JVmH7dK+Vs/22QFuF81uKcTPvoGwr7Lh8A+FbhHHbHpWhfttXjGs2dloY9/GDjnux6hi1V2xs3BvLli1btkTw2mbVEvCa4vJFQXnl5eU5904/sPGuKGRHJeK33ebuq5bHd3d0sj2XnA+aF8WIkB0TFOeO4zLu0GF/cGIbj4uF67H6r6xP38T/6a5j8W1V2S3WXjNOn8I5GwjX+T4FetS29RvGNbe1d5vvdtwrLy8vio1bsWzF755//vnfIn+YArWsuPhfwVevLVu5fF1sUHn5Fx0PrHpvJ88r7tJWDP4t2M2zK+5xR3mMeg2OcychfblOtYm5WrhP2Qu/WMP18FiRDzFuqfYWel6YhXFrKnkyXH6/dYJxH5nqUMb9775g3AfNzeXlccWrly2laevffvf87/4N+bMUqmXF9KLQctJBXly24rniol15zfcEcgaDGeO3IMuFP39XRz1G/p0wP7kPtl4inIucsQBOE3M4WXC1cD2Ww9vPsJdm4Pk8C9+W1iKnKbD6FMrq6DNxe4Nx2feDQPTJq1PUZR1n3AfNeeXrit9Y9iP7+uC3zz///PMLkT9LoVpWXEz9Y7yM1bu9ZNnq4rjy8uYOxFZBGXmaGyxHNJi7zd/lZiKKt4zvOu7HxITR90Lh68sFyYKjsOBy4XrAwP0jX6t7ucVvSzHn18ApbiVun5mw4/INWxi3tQ5lXJscFxkOMG7n/ebyopVv4G/P+fzzv3v++X9HnhYqi3C/wqwNrVq2pTivvPkel36/I9cYEM9taWF9/uBPvJ5L/I6e3EtfFDODUi7Rl4sw7rChDsOC64X7I7SJh8dwPTzeyCf5trqsBnNeGKtPQddkX5+CM4wLuLaxsS8Y9/GD+/fjvixe/iMuhSFqCTBcrKL5i2Zc8JDnRU+Xrij+Mq753gOoSYEnV+iGWnuBsmv3VaJeSw+aGnKK1JIoMsvFWu5IXo0Il+uFC7U3/jtvo/uqPMJvq7UlvH5LMy5vnwK/4zrMuMBnW8sc2/+ATFsZF+xNjFtZvIJPWlQBwn3+/yJPC5aFcfmuNsh6umzLyrh19+89JlX3hPJgruce4Xz++COdoOcSPRo/dNxXq9XzQccCJssdNtTBNrEeEK7lTqnX+P82lQO+vaWtQs8L4/QpNH3Ek+Ha3Ytra65gaO09xu3suN98r3jLMl6fZWoOMNwfkaeFa0Xxc7QsrF8FvbhoHqTSAAAgAElEQVTiuZX3mzueoHvPUMilPLcz
Nxf1WmowCv7ebH5yr6hZLVJiLdfBW0P0gHA9mJvwvMmHuB4eW/JvV1dpb6DnhXH6FL5uyESdVthv+QamTwHHuI9MJj7GRfQrwA7WGfdxx/2i2GLbLk5++9t/A4Rr7+myK4p/poVr42FzX/28rry5o5bussT3K0Cz4t1Oq55LTbC8NnIoyrleDt3ntyeEy3SUC6xFr7pws1pbjZ4XhjJupr0prpOM++hhZc8z7rcd94vWMfxpreb82/NEcTLcf/lnK9/nsbr4DXoNXhAVOD+b5xYX3X+AaczlNPaSq3t/4j0/HfZccjYvjvUajeS5ArsV+asnhOsRR4eHyFcsFVuirUbOU0D7FJoIx7WjT4GfcfH0gDDuo1ZDDzPug3vNsTxN99j6LaBbMCzC/dd/+d//9Otf/xPu1XBtKWb0auchiW+uLErveIL0K6Dc29W5o8VWz31kXufhsWJd7JiRI2DKdeiAkB4R7t+CyI9ClxvF97AnhnH6FCjH7ZVeXGaYGnuOcR93lAc9Z1828Ox5usjVh3/95//9v35NlFXHhYQrZCI89WNcefMDlsdym8xIZ/0oF3VbHs9tJiTxdNm6RZ7DoTDXAVjoEeF6bCH/mIT+sK5dEPRby4lhOpRuXce4bRjGbWur7SHGfdK8KM6G6zBO/dvztOP+7rf//M//9GtL/QvyWu7PwSJcIRPhryU/l5d3dOLXziwzN7XTRs9tY67EXtwSFDN6FMW5MfbfGqJnhOtBWq7gH1Ys1atwHXNfM/jEMN1/2UO4rF5cBxm31WCFcRGN2sK49/PWrcZ3blippaTf/uY3/+f//zWrrJICJNxrgj8LoXq2fGXefcph7yLYSzpr7bt8K8hczy2C/rV5tvSNxTGjiTBX6DQZfPWQcJcT7Y1CjuuxMh9zBjmGcWtqMXzL77h4vsVnuDyM+8jQ5vzZuCzGfdxcHrfC0Z0qYMXsd7/5/36NlFVSgIS73JnjlJ8tX1nezMe45F6Idytt89zHnHRj1YriGK+RDpwm1kPC9SDuCyT4t3zpfSzhss9TaLrddENnT5+CKxi3td0oTLiIRgUd19icV7zMiXsq/kgaLqJb66QACfe5n5Av2lWkdtFTcWhn7czkSXTNHM+9jwL+0jfiFtm957enhLscmL/woQ+xyBnkN+ndvdCZuA1NZxCvxfqt6xiXmyvY5rjIAKr9oaP5/kr7sZZVP5J8iyj3fyGvRGpLMf0jEPzHz7Z6tmJl8/0nOL8lZm5mpy2eew8bIazqN47rEXdNoFWBqJWXcfc1Q+76cFtX5YpeXHsYlzjRBtGm3Yz7w4Pme8VLnb4l3VMqU/g/HOFaJwWPn4upH8Eqh0J+pFavXgkaIrGe+//aO5feNrIzDfNAnOYtRCOLLGcKyM7b+QP8G9q4mKpTN6K5qgURME6A/gmexaD3ynIgRIbaEzdgps2qntbQzECApJm20yIkNnVpSO1IcSNq0yQzqCpequqcKtaNxSrrPIQ70dWS/Orle77zne/cfetQ0bV67tt+ku6AwMCVlwkXfu5YU7Dc+/BfO7iMi+Baw9VexqQHh4x7cXwcPuNea16LfNNB+NdfGJ6btyk3V1i2gzEXLh9ybswUDgB2IvSvUL/V/nz36Q3Ob22eOwh1uHfOyoQLhNKSqAD6aMK19SlMK7mvXGu4p9H14s4fxyEz7s1lS4hGKxqjn//8F7/4+ceggMTcXN5Vu8r07sSxu4V4ZvrZlMrgBue5P2mTbjAPq+deRvKlrE64UFjiuED4Al9TsN9sdqQifuvsuE4Px/vN7PlW/7N3HiLj/n3QqkcwcxtDDlFuNruRd16kKdMRLTD0VE+dxbkAvt66fot67k+Pd946+u3Mcy8jeR5anXCB0GDdk1VtsKxPYdqJ+0eMTh0065hxPcxUMPXhHp+h2sSr1J5x/3rdr/qvSnpkaE8Lc+3iqxaKERVGEX1B5gZ3UWhd3qCeqz7GzN+xeu7fI5hHvlLhsgLlbrlU/b8x
fmu598HoUTjqrqYX15ZxTZ57cRws4972O5Mo7pRxYrjw3GLBouI8JvBSU+FGZLhAsqYf5peda8Rzb353s8xzL6O4L2qFwgUCt6Szo/m1U8Y1JVythrv1yiXjOtRwEZX6ybg/HAXJuFePhIgk4shwY5ZutRfmXQvTxZotNDBKWXtaHkezHMKUN8eTzrTIsHDX73+HPRdhquXeOM2j9cMqhdsW3B0XTJ5i/dZSxdUf+zuhq7hIvnXNuBfHPb8Zt9up86FLX8sp2hpsigWzdjcs2mWMjCtFtUzE/XPC5q69truzs8Rz+0m6dQfHZIJuk5hh+27zFBYZ9/R0+xARKl6xSzLukseiR6F3gmjTNeP2d8vR1CeXUjKEagq1JUvnzUK7w6KibUBQkdzkqOkWrzdKaF1ZPPfum99gHNfkuTchZ+OCVQu3XUGeXKw0XTOu+ayZeuTuuEt6cfEZ1z5TweS5Fwc+Mu71bhVnRiuiOEsKJqzazeYLQwAK2Y8U5WcbxSV5zTuS0xHjj+XKwOK5h789dvJcvZbbQj6Fb1YqXCAvSTPKmXufwmKewpHaw+XblWXc8+MLRJ/4jHs7qPtssA2LVs7NI5+jVLQs1nL5bPYjpfwzLy0N3mi7OPeIF/q3Jn99+1h189zr8Kl7tcIdL7kWCPZxNVxrFXf6OHwZporrO+O+Of7ek+Pe9puy+/e4AgrZLFpC0LOBZbGmO66nngZPyO5ZiGv2TTWGvz1W37p4boAGXBurFS7gsS0Vc4Yd9xqu+bRZt4uoFdFsmF5ca8Y9vzhCVYpk3Ku+wMWwIEMoOLfiFk2hwRBuVJa7NA3B5/1rU0V3x17RNdVyB6Gj1YqFCzruK9o6Lt+a+hTMZ812DmPMuG8ujpfNsTkbVDwenY0e1wabUj5nEa7rO3sGM1oeRVunzebbdP/91tlzK8iH+mTVwoXumzb852gN1+a4i17crYPAvbj+M+4PR28QpzV77s1lNbJVT/QM9SLZP330UTYy4WJGy2MQldbcdW8+PXT03H7YZcGqhQs2Xb9fqoLJuJiEqz9e/f7IwW8jmalwbp1FfnLk4ri3l9VoOr9WhmmlFklU8LyLMWq2pln37u2n106eaz8J4ZuVC5dy/wrrHjLunIudA1fFhjpvZsu4+skzh8dtZ+U7ZKEpLgq7+D4Gn5S9f5bRpDNzXdVW0V0oeDfkT3DlwgW0a8rVp99harh/wd/7YORcDEi+9T1TAakrHDg47ledpuu3lBDma7RIkoK4pCJvRZOukQxuf/sW77lX7hlyKasXbtu1sMD2kYxr61Ow9ijsHIY4b+Yr456f9HB+e9VR1lFH8M+7qXLRim8Qaj6/6bGya0zQ2/71W7zndsL9GFcvXFDF7xROqSzrU7A9uirGcZFMGyTj2u+U7B2gjnvZinGLLBQSB4r5fM6lWdcP0PXfEIuo7F7f/Xj37G+/6WI998pxlKcnYhAu49p/Wf4fh3kKTjc/HG73HDTrmG8DZdw3+p1nlsegxbsuNZPD2H6BsoFcrdRlzQ7piq8DGv6Cwoz2893vv/vu7qetHaznhtv3jUG4QHBbkHIDbMY9xWdc7bGvIh034XtxMRn3vNczO+5ZvymlIyQA0OBwshSrD8qwmqmLmxVuknmAexc8/xL0yFpD2NXqCtf/dotk3B/v+qGaf+IQbnsT++tvMOo49ingMq72p6d2F4qN/rzZ/PH9sWm34UkzjqbFaID4A/F1fTfoQeYhDcYPMhnv9bxG8DE4SrN/c/fT7eND1HPfuteblhCHcIHitgyvYO59QFzW2qdw0t3p4RzX+REk455P6wq62y5rLU4QbR5/YkoyPJbOZBjAZTIZz1sAwYKCDisCvj74+93NVhf13FCbELEId1RxcavJf1pquH9xT7hT9tVvUMVGnXHfTB337LIZ/J8ubt6zvMPz26ZR3nmQeajFN3r6mygrCvbS+wUjbO7whH66eDy5vPrp+23TpJupggd/Cvp54xIumNSd
DYtqoX0K/+d878OspnC4vY84bhRzwyx1hZPzN+cXl/WoOrFjoO3Yw8VkdP1RmYypPsk+/IfACf9wvYeBD26MDeNDRaF1+2P38Vu759bd/tolxCPctuDS+1fx0qeAnjd79bL7yqmGi6g0UMZ9c3Hww5tBJUWyHbHSl8grZxgFEdkcbsXNB1qdq5HpIO89B7r7sRujecWLqg7efvfr722eOwjxRBaPcIE8URx/u6pfI724iMuijnt6enq0dWB1XOdHsIx7fjCoJLwlwUKj5mwPM+oZ0z+5kjEWSM2MY8OQywU0S4Em96cqg7e/+dbquTfB7i3RiUm4o+bYccOJHfio4lrpbvdWmXGvNtOzJNMCpZdfskcZU1l9M2N8g+WMU5uhiL3+3usXZHlRevTdp6q1T6zv6GZLiUm4QJqMBadJl31sn8KSjGvQU1UPvbi+5obN/1x1XDerkwbvqe+QMUdccRYbYMZBB+NaiB0Xzna71fiX/d9vWTz3OrjlOnzB0VMHotM+RMVXxrXVwXrP1JVk3OuO47NnEuHwW2UI84hbZXS9Gi9wmQx2T/cTb024eBh0cdDo/+HTG7PnBt89i024nAAYBwcTrlxPmzn77aw01nXMtjNF4zKuw2xc489u8hsXTXiRragPB2lOvZV/YOjVeBvMZFCVaYurMDvcHO6DpUe/vjZ5bvDRjbEJFzQZQOH3TN99HizhzpV5qHaPzpf24i7xXFO+ve0L8Z7bDYXIeWqBeai56tio4gJQ0UxkYhIuJh+L/CfI67zj8EW1ld3fL/bP/hrYcuMTbqMOAIN/6qngMi62TwHnuFpNbE9Vj50zLjY9OGbcs8smzn4Sysect+sdjL0yJZPZ1F5SNkV9TebmuO1QQ3VF7gXyuulXUu3Pe3R/DHxqMj7hAqWtfTu4H0b9a0+9uNiMO/PdY7V7EEXGPR9cys610KTBcF4H2jGZTBmUM7CTkQHTfKDvCkNt9xcYqkYctx1ucpPLXS2i0vpu5rmBh9rEKFxGW0KKNcyPQ5sh5i3hOjiu/jjqqt+i+dZnxr2qlzFfYEJpSJyzPOzwDzKZTQ6IzUwmUzWSEGeqKtj7G6hwQ8wdgsIUtjWY1XL72Ofg5cQoXP1aCHynaCtcxp2fN9vb7p6Eybi3/RAl8ZgRWTlslXk8Ey6PlMNguHkR4rIvjd69NTz3NuARnjiFK+pV7rGENi9VvPbiOjvuNNse7aj7FwEz7lm/in5pyWTE8DgD8MvD6Y6ZkrHNymqgkdcPo+UnfYR/XP5N91yX3WY34hQumBi6gMivo/CV115cRLNor8Lx45e9IBlXraRFtiInR1P1kDNGxNy0bfnWQv4kJA/P/3DzkTYw5G6ApGtPxCrc0TSJM/ap3fCJxyqus+OaH73u9rdoxrVhzbfn+50Qm5txMuIm2AppIAzFytYd3xeTkDVsb08GDN35w493PwYcahOrcMHs/qSxveXmSYCEi8m4iz8H29vHp9799qJVTcWa7D0lRzamGehTCTNVpZKpmn8TGL8neu0wHv1arAud27u7YBMW4hUumEWpEWcNUc0z55lh3vwWmalwsd/d6r72lnEvW2lICSNODnUlLxZ2olhzhxS2iE15blYclwXhs7vbQMd9YxZubZ6lKMteRPmpr15cNONi+xQuDrpGmcHdc7/YTcEBhxGUIHa2aLSIodd8op8eD7nKdW4CLc9iFi5YNL23JfZX81dz/egSruVxsdd9pn574pJxb+tKTEPwA/MJxXEcE8PJ+BEMfQVlG7vF5EijzgmPgvhG3MLlTGU7uPgex/VwVVxrn4K1jnux/436TfcYdVr9MWgmu1X8PcNyHGRiWTiKXOjE5HufWKzzgerRcQvX2IWYwkjz/RXl2ncv7pKMa6mS9Q7UbvfgDMm415eTJC/KxHc8B6m4vkLWZZvWI23/n2IopCHjAsCbG9nG7GyNXBugGTdADddhFaZ78HG32/3m4MRcS3iS4C4wkYIcZONrrmzD
8Be9fyL5/XpHDVh+EMTnYxcuUCxaEaemy7ZWk3Ft581Ous+evTw4NjLuVSWpJ3PEBic7TEdYESMYwXSpdtmPbiFXFqq0oATzjviFC23t5KzxWzoIVcV1yrfYOu7ey2fdveOz/toG4bsxYqAkc3E/ETBRbMWNXY5yW96P5cpKlRYmYaok8QsXIOPWeK3RXvDQp+DmuPh8e+qUIHpPOmU5pjWPV0YUV5MlJv7QjWsf8Y+4/LyECOUqTdd5Nry7r0G4FNKCKU7giHsSRZ8CqlB8b9jZ7kSPkZIsw3YCJjCKFC+vRbNa3QLKUfy9DP54yxQRSjRNN/nwOXrKGoQLBLRu2pD51moSLnamwufVxZOa2OBlWWbjKJPi+BUF5do6f3satUhyicOxLN1labpShxFH9nUIt425b/I909mLK+Ned9BdTbHEQkmuOcw5XAVtyEk1iWdjq3ZhoSTKaWqAL9COPwDelQW6Xi7Dd5H8DTbWIVwwQYWj7fpu7S/JuBjmyvWecftNxzWEyDSgxMsyx7Erk9OYhcbfwKw/o4i1RjRfg22+GKs06bpS41yPQYRjLcIdYywXwP7/drf2V55xr7yMsGszFAshB3UaTNjsKTL655O0T8lx1HqiLMJ7iofRyFac3atOQVlRyoJSgysvi6xFuNgOJLHzl9d/VtXuK9Rpo8u4Z33Bz7/VqM0wFAchx0Fe1zHHUgzDUKKr9ESRYXSp6h/D6R+nyX9dMRrLiOKg63fhHe14WgPWFA0ZoguYlbAe4Q5x10I0Na89+ko9eO0/4XrMuNf94H1gQ5EpMUyDhRDyHD+zYw1e/+/sFRxv/H+WYiiGEZN5acToXWSyfSdVNcWWpbgka7Ae4YIaRkC1r/R8e6Sqe5h8G0HGPas3k94HFhMNPgLZjhvcRHdZ3sdZ48hYk3BBBV29s5/NagrdrcNXSL4NmXF/OP+qhQko95D3bNgjZV+ysKxUq8I6+0HXJdwGxnI7C2/t7nRfR5xxB/U0TQNbGSIbZndX260VqrSirP1nuS7hAhq9G7ZpruIebHXPEJk6KXZ5xt1vYX5T7h8i57t/a8oI8mWhWhUmCUlbaxMuhw6C4J9aenH/rG4dLPFbzxn3Mh1nIVfL+3e1IGM+RFai6UqVj29vxgtrEy7mcupG33be7ER9rL6KIOOedTD7OveNMT/x2ygwnDUYIG9ZP+sTLoVOO+ogqfb10c5O19lxEXfFPS4G1ZRcZLpC2InX6Xg6DaVZrSrlEBeXrJr1CRc0kUUCjetTeL2387KHSNaiUOfzZtqew26qRouvAlHy3pHQ0HdrJ5BK+HiUNQq3jTRy88/xfQr76lbXzXGdz5udnj9tIb8f94sxJ3sr2kLll4ZkkbckkTUKFyA3Gootpz6Fiz1V3Tuy59vlGfesn6KpoStAZOHS3QEm7t3aSFincBuC/emogmTcxaOnafc11nEdH/d7z+EFy3ENt2n4LJwogqIoMe/WRsI6hQv+ZF/sV3toxjVxvNNVD05QxTpl3ME93uIdUdC5N3Pc4AyXjWLrdz2sVbhIYQE+xWZcc1Whu71z2LM5Lj7jnrXubRFs3KjJ+BrWCMKyQtNNBf/m9LBW4c4G5s4RWw4Z1+y7F93tZ3snS/PtVeue2u24USvjZMlyE6FOK4kfOOWN9QpXtLc3Yiq52Cquur3dPXLLuKcpGoofJSIrSfY17wvIKU2aTsxubSSsV7iAt2381s/cMq5lVXbU3VL3L07xc8PONnGe84HzvsRPrKO7RFauV+gqF9nZ2uSwZuGObJYLr5wyLo7X+zv/ofYwGfeKvnedYCKUZdMBMpHTGgyaUUwwSCZrFi7grHNtRn2/5816e6oxA9eUcVvK/drj/RJKMmSme2NUWaCryiTq4+BJY93CBbbpILuo1zo77izP7ne3t9WLWb59/ijZg0MjhpVlidJ/UVlF0M7WwsaHFwxQ1i5c1mq51TMPCRczU+H44OX29jdaxr1PDeMNTua0gY7Q
OA7ufiveh8XahWuaUa4xGWDyrYPj2ntxTw67L9VdZDTZB8mIgjzP8ZKsb33JKWkwiJD1C5ey9NqIT0LNVLit89pJcvaDLuFqk3MVpSkoilBOV4NBhKxfuMD61N5CvNZHN+5T/XO9YPSJBuudbbQaRArKQp2uC1qDwb1uH0qAcEXLVkHza58zFRZ13LMn5rw8ZnmZY6kPpr4wZPhyhaZpoXzP+zQNEiBcUDf/S0yeom7rLeN+gelNEDlJ5mEj5cs1EUrKwwot2K81vM8kQbhWy+0HzLjPq05ONGY5eQIbSZiD65cxa+zW+ppRfy9IgnCBYC6WX6Jui/VbW2LoL9t0EN9JNW0ObkqKnJ9Afbf2nidZZxIhXGDe+BVuEYW6KlbPuBcdb2MT3rdLmnw5Jrn6HTFwQtN0nYtoAugHSjKEq5jOM04GiN8unalwW/VXFBoyDQglWeL8XYO4WkRWatKVijIJf03ePSAZwgWmcgDso/l2Sca9rAczpzFFQchJsgTXWcAXIScL9cqDijK5T1tfIUmIcCcLyx01Eb9d4rj9MvL5/DFmmAavzQmX+NjOsogcx0m1iaRUNjcr92u3NhISItyxaaNW+U9Eog6K1R+3fbQKFvjLYCjGPO/2HUOVGNHtvKFXPmlTTKOhD3rmIa/9lkz0va84hnd/kCREuEBaFAUmAz+O+7y6mkQ41ubfNziOg8ZD0v7Lc5BtNBh9KjnFiBjaFEUZb2eNKebax2gfq3UWsBTFfDyk9LO1wn1sMIiQpAh32JQ46kv9/8K+e761ZNyBEFOFcyRqY/UZqsGysznkhqLn6DPz9YHkUxqagBlxUb9oQGMSsuRrHFIUFD1+jiI6QzOhJEW4gBMAK3Pa4J9x07vjho638TDSJhhU6aaypiRbyCOvwlPIpUW5iREuqGrOxJY55n3TvZK7yLdnyZ/3MZ4O7y6Hubd2SqlULBaxf2zvOMzncgWTAj3rVnvXlCg3OcJtTDd+odT8HLl5B8P56W0nySFxqN8OXo2wwaCYdSBnfb/SRh6UcguxFjZ8iDHvXeRrJTnCBfO0Cj/b+uNhzzHjzuaGXTUTuvs1hny1QlejP6iYw+s2W7S/W2m4kc3OXizZ3+7KcKPg9ubEkCDhzi+nftE8fX2ws60euTruZwkcnDC9HZxb0dnaQjabKyLYPbKQ3dDMea6/DX8eWsxGkGpWT4KEC5TZT+y5rsyjrrrTdZqM29tN1olIpizQtLZbu9JngVwWo6qczVAL2TwAheLiRfRDXMmlIiwkSbji7KbUyRdTp+3tqY/VfUS6p6df7CZmP1+fhCxMVnlv7ZwCkme1QoDtFRtZi/J867DgK1qsiyQJFyjTxZbWrjDPt/t7O9vqK4vjnj5NxFwwdja8G3nLqtCiq11VGzZDHVrfpYh8wDKG2TSk3EQJd2a5YsdaUzg5UFW1O3feV5/7upA3crRJyGvarc1ns/ZEazfUojVO5LO+61u5DeRVySNRwgWTaV2WRrKBNrFG7epDyc+e/GldtUYKymXt3tq17dailmtPuPrazASaLZaST0NWSJZwx9ObUsv72Cru0Z7aVdXqOq7am01Cltdxb60Je8otWnVZKORz2Vy+MF+bFa3P+wVta8L437yjOotpyArJEi6QDMvlLh17Fb7alTnIxqgeVrtquZyU4+B2y7Ua7rBQKGxk84WFcAsWEeYLQ1DcyIN8CYC847ZEKYBLx07ChAtoPb2KHazjnp6eXuv3+jGcLMFVnzwfNzhteLegJGp/rmBJuUjCBcAqbMvTfqFgvCpf0CtrjpabTUHITZpwJSMHtJCMa/B8cTptDCWZo9pe7+/yw2jWYJDAg4qa5S4WX0jCBSXr2ixneqlkGGkhq63XNrLO/TQbWeRViSNpwp3elKr0sI6LjBlva7MKmej0pe3W0nSVh8k9qGhOuQXUGgtZi+rMGswbqSGnW3axMFN0qWhXfy4Fm2eJEy7UGxXhFS7j9rHLsvcM1CZthlTv
CErabi1M/CRkc8rdQAzX2DdbYJbx9A05S+wtbhQKBZtx54lwA6D32jAt1HHPWi53m75gYK0mUUEWbW0o0/q8+RfImxLJopaLbJrN/XROFnnWL1nCrbE/UbJ21uSc429iSJ5wef3EbweJt18sv9t0RFEcL3Oel/8NbRKyMOHeIW9JMgvLzWHKVrY1Fypca513uiFcsGxTpKGQmzzhAkHb9aftfvu06rUZoM1ATpbc212g8s90Vamlc978LOUW0YRr81OccPOWqsT0haLFp4njBuKdZrn8tTXjfu5zXvOoTRlHwBqMObRSUJ7eW5vig4ozy0VLCvqGr+XlHCLcDXPEnTfUWEq3AbaJYyeBwgVlTgu5Fsd9EvAcgTaxhtP1y+pXLX8YZ2uNWi4u4YKCbfMAedafW7J+tmfurRa9ozadPJIoXH1GuSXktmpuz/tLaMBys1oXFAmmokPaA0YtF5dw7Wszi3BLQ5MlD3WBz4u+loRBds4Coh2GrSwc93W1FuzzsPpxcCGVt4O7oqVcTA3XHgSsW76FrNYBmZ+q0ijqZheOuxBu0d6BlkQSKVztplT5epZxzzq+l1DjBqcdBxc+kHtrETTLxdRwkWZcLRnMRajbdClnCHe6i4YVboE02QSlBkFjNif3bNOP+rR7a5VqtVn+sO/0yGOOQgDM2gyAjfn75TaKoJQv6uux2Sl0rHDTsP+QUOGOaQAeTbvBKh63soYsT9MVmof34Xq6oT0STCkgz/KLCu0wn89rbWHFXD4/m7qQX2TcRSEhh/2dSBjJFC6QIKjofntFL9ftiOPplRwHTzBFfAzNI3p2b66dvf/QtCuRioibVOGCh0C5PT19dVVH3mJGux2c/jBvBw8Gpj3G9XT6TNYlk7wLKajiJle4/KQxOBOwJeEAAAH+SURBVH39xPFiB0Zpalctk3nzc4q5ErYF3PW05HAagU1Vs5RMBEmqcEFl1D/tT5BXaxtrxtnadDUYrJyStlzD1gPyqJgXGJZb2gg4sGl9JFa4rFDpW7rBmOluLZmEjCWv1QWwGi25Wmghly/kTXsZJZz2E0hihQvqzdmkcQrW9LO1RLIuDDeKTkNCi+4eOrRMeyRD78LCKNoV4bOemHt61bIPSosilx0HQeNIi24TLFxQX8/w7g8RzJlKPD4m6a6ZBAuXEB1IkcyB9LQhEeESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOESUgkRLiGVEOES0gcA4P8BNckzHBrWdmoAAAAASUVORK5CYII="
height="250"></center>
As the picture above shows, the gradient vector represents the slope of the *plane* that approximates the surface of $f$ near the point $x_0$. The gradient points in the direction of steepest ascent ("the uphill direction"), and its magnitude tells us the steepness of the plane (so zero gradient means the plane is flat).
**Note**: the gradient points in the direction of *increase* of the function. For gradient descent (discussed next), we will instead be travelling in the direction of *decrease* of the loss. This means that our updates will simply be based on the negative of the gradient.
## Gradient Descent
Gradient descent is the simplest of training algorithms commonly used in deep learning. However, it gives excellent results in many cases, and also forms the basis for many other powerful optimization methods - such as Momentum, RMSProp, and Adam - which we will look at later in this practical. Mathematically we can describe gradient descent as follows:
\begin{equation}
\mathbf{\theta} = \mathbf{\theta} - \eta \times \nabla_\mathbf{\theta} J(\mathbf{\theta})
\end{equation}
where $\mathbf{\theta}$ are the parameters of the model, $\eta$ (eta) is the learning rate, $J(\mathbf{\theta})$ is the loss (also called $\mathcal{L}$), and $\nabla_\mathbf{\theta} J(\mathbf{\theta})$ is the **gradient** of the loss with respect to the parameters (similar to $\red{\nabla f(\mathbf x_0)}$ in the example above). This equation tells us that to update each of the parameters, we scale the gradient for each parameter by the learning rate and subtract it from the corresponding parameter. Or, in pseudo-code:
```
for each epoch:
grad = calc_grad(loss_func, data, params) # Calculate gradient of loss wrt parameters
params = params - learning_rate * grad
```
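To make the pseudo-code above concrete, here is a minimal runnable sketch (not the notebook's own code) that minimises the simple quadratic loss $J(\theta) = (\theta - 3)^2$, whose gradient $2(\theta - 3)$ we compute analytically:

```python
import numpy as np

def calc_grad(theta):
    """Analytic gradient of the loss J(theta) = (theta - 3)**2."""
    return 2.0 * (theta - 3.0)

theta = np.array(0.0)  # initial parameter value
learning_rate = 0.1

for epoch in range(100):
    grad = calc_grad(theta)               # gradient of loss wrt parameters
    theta = theta - learning_rate * grad  # the gradient descent update

print(theta)  # converges towards the minimiser theta = 3
```

Each step moves $\theta$ a fraction of the way towards the minimum; with this learning rate the distance to the minimiser shrinks by a factor of $0.8$ per step.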
You might have noticed that we are sweeping a lot of critical details under the rug here! Firstly we are assuming that we can easily calculate the gradients using some `calc_grad` function, and secondly, we are ignoring the issue of batch size (the number of training data points we use to calculate an estimate of the gradient).
Luckily for us, TensorFlow addresses the first detail thanks to **automatic differentiation** (AD). With AD, calculating the gradients is about as simple as calling a `calc_grad` function, which means that we do not need to worry about the details of *how* to calculate the gradients. We don't need to think much about implementing derivatives for each of our operations, or the backpropagation algorithm, for example. If you want to know more about how this all works, you should check out the [Build your own TensorFlow](https://colab.research.google.com/drive/14GeXkFd5pQKKNIJ7BMswP0ihlYg5nIbS#forceEdit=true&offline=true&sandboxMode=true) tutorial.
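To appreciate what AD spares us from, here is an illustrative numpy sketch (this is *not* how TensorFlow works internally) of a numerical `calc_grad` based on central finite differences. AD instead composes the exact derivatives of each operation, so it is both exact and far cheaper for models with many parameters:

```python
import numpy as np

def calc_grad_numerical(loss_func, params, eps=1e-6):
    """Approximate dloss/dparams by central finite differences.

    Only an illustration: it needs 2 * len(params) loss evaluations
    per step, whereas automatic differentiation gets exact gradients
    in roughly one extra pass over the computation.
    """
    params = np.asarray(params, dtype=float)
    grad = np.zeros_like(params)
    for i in range(params.size):
        bump = np.zeros_like(params)
        bump[i] = eps
        grad[i] = (loss_func(params + bump) - loss_func(params - bump)) / (2 * eps)
    return grad

loss = lambda p: np.sum(p ** 2)  # J(p) = sum(p_i^2), exact gradient is 2p
g = calc_grad_numerical(loss, [1.0, -2.0])
print(g)  # close to [2.0, -4.0]
```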
In practice, batch size is simply a hyper-parameter that we can tune. However, there are three cases that are worth knowing about:
1. Using a batch size of $n$, where $n$ is the number of training examples, is known as **Batch Gradient Descent (BGD)**. In this case, all of the data is used to calculate the gradient at each step. This results in the *most accurate* estimate of the gradient. If the learning rate is not too high, BGD is guaranteed to converge to:
* the global minimum for [convex](https://en.wikipedia.org/wiki/Convex_function) optimisation surfaces
* a local minimum for non-convex surfaces (provided that there are no [saddle points](https://en.wikipedia.org/wiki/Saddle_point)).
However, some downsides of BGD are that it is **not compatible with online learning**, where we get new examples during training, and that it can be slow for large training datasets.
**Exercise:** what will happen if we use BGD with too high a learning rate on a convex optimisation surface? Will it diverge or find a local minimum? *Hint:* try drawing a diagram.
2. Using a batch size of 1, called **Stochastic Gradient Descent (SGD)**. In this case, only a single data point is used to calculate an estimate of the gradient. Thus, the estimate is very noisy, and we are not guaranteed to find a minimum (local or global). However, SGD still performs very well in practice and allows for online learning. It also turns out that having noisy estimates of the gradient acts as a form of **regularisation** which can prevent over-fitting. Finally, because we are performing gradient descent with one example at a time, we need much less memory. Not having enough memory can be a significant issue when using BGD.
3. Using a batch size of $m < n$ is called **Mini-batch Gradient Descent** and is a compromise between batch and stochastic gradient descent. We use $m$ examples to calculate an estimate of the gradient. Thus, we still have *some* noise in the gradient estimate, we can tune the batch size to make good use of memory, and the variance of the gradient estimate is greatly reduced — which leads to better convergence to local or global minima.
In deep learning, we almost always use mini-batch gradient descent. However, it is often referred to simply as SGD.
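One way to see the relationship between these three variants: the full-batch gradient is the average of the per-example (SGD) gradients, so a mini-batch gradient is an unbiased but noisier estimate of it. A small sketch using a least-squares loss with an analytic gradient (all names here are illustrative, not from the notebook):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = np.zeros(d)

def grad_mse(Xb, yb, w):
    """Gradient of the mean squared error 0.5 * mean((Xb @ w - yb)**2)."""
    return Xb.T @ (Xb @ w - yb) / len(yb)

# Batch Gradient Descent: gradient over all n examples ...
full_grad = grad_mse(X, y, w)
# ... equals the average of the n per-example (SGD) gradients.
per_example = np.mean([grad_mse(X[i:i+1], y[i:i+1], w) for i in range(n)], axis=0)
print(np.allclose(full_grad, per_example))  # True

# A mini-batch (m = 10) gradient is a noisy estimate of full_grad.
idx = rng.choice(n, size=10, replace=False)
mini_grad = grad_mse(X[idx], y[idx], w)
```

Averaging over a larger mini-batch drives the estimate towards the full-batch gradient, which is exactly the variance-reduction effect described above.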
### Optional extra reading: choosing a batch size
You might be wondering how one chooses which batch size to use for a given problem.
One approach is to choose as big a batch size as possible. The reason we might want to do this is that a larger batch size means that our model will train more quickly. This speedup is because modern computer hardware, especially GPUs, is designed with parallelism in mind. In other words, by having a larger batch size, we can take better advantage of our hardware. So in practice, we often choose the largest batch size that will fit into memory.
Because the gradient estimation becomes better when more samples are used, using a larger batch size allows you to select a higher learning rate, which means that each iteration makes faster progress. On the other hand, too large a batch size is computationally burdensome, and too high a learning rate can even prevent convergence to a minimum if the descent direction is one of very high curvature. So, in practice we still have to [tune](https://arxiv.org/pdf/1803.09820.pdf) the combination of batch size and learning rate for different tasks and models.
[Here](https://arxiv.org/abs/1606.04838) is an excellent reference on optimisation for machine learning that includes a thoughtful comparison of stochastic and full batch gradient methods. For a more advanced read, [this blogpost](https://blog.janestreet.com/does-batch-size-matter/) discusses batch size choice in the context of recent Bayesian interpretations of SGD.
Let's look at the one dimensional straight-line approximation example from above and see how the batch size affects the calculated gradient. In the plot below, we see the function $\green{f(x)}$, the true tangent line $\blue{df(x)}$, the true gradient $\blue{f'(x_0)}$, the calculated tangent line $\red{\tilde{df}(x)}$ and the calculated gradient $\red{\tilde{f'}(x_0)}$. In this case, we get the calculated tangent line $\red{\tilde{df}(x)}$ and gradient $\red{\tilde{f'}(x_0)}$ by using *batch_size* number of datapoints to calculate an estimate.
Change the values for $x_0$ and *batch_size* and see what you can learn.
```
#@title Helper functions (RUN ME) (double click to unhide/hide the code)
import numpy as np
import matplotlib.pyplot as plt
def f(x):
    return -np.cos(x)

def tangent_f(x):
    return np.sin(x)

def df(x, x_0):
    return tangent_f(x_0) * (x - x_0) + f(x_0)

def perpendicular_unit_f(x_0):
    slope_f = tangent_f(x_0)
    y_0 = f(x_0)
    x_1 = slope_f / np.sqrt(2) + x_0
    y_1 = -x_1 / slope_f + y_0 + x_0 / slope_f
    return [[x_0, x_1], [y_0, y_1]]

def noisy_df(x, x_0, noisy_x_0):
    return tangent_f(noisy_x_0) * (x - x_0) + f(x_0)

def noisy_perpendicular_unit_f(x_0, noisy_x_0):
    slope_f = tangent_f(noisy_x_0)
    y_0 = f(x_0)
    x_1 = slope_f / np.sqrt(2) + x_0
    y_1 = -x_1 / slope_f + y_0 + x_0 / slope_f
    return [[x_0, x_1], [y_0, y_1]]

def interactive_noisy_gradient_visual(x_0, noisy_x_0):
    # change the fontsize for better visibility
    init_size = plt.rcParams["font.size"]  # store initial font size
    plt.rcParams.update({'font.size': 22})  # update the size
    plt.figure(figsize=(12, 8))
    x = np.linspace(-np.pi, 2 * np.pi)
    f_x = f(x)
    y_0 = f(x_0)
    # plot f(x)
    plt.plot(x, f_x, label=r"$f(x)$", color="green")
    # add a point showing where x_0 falls on f(x)
    plt.plot(x_0, f(x_0), marker="o", color="black")
    # plot the tangent line to f(x) at x_0
    plt.plot(x, df(x, x_0), linestyle="--", color="cornflowerblue", label=r"$df(x)$")
    # drop a vertical line from x_0
    plt.plot([x_0, x_0], [f(x_0), -3.1], color="silver")
    # plot the noisy tangent line to f(x) at x_0
    plt.plot(x, noisy_df(x, x_0, noisy_x_0), linestyle="--", color="red", label=r"$\widetilde{df}(x)$")
    # plot the normal vector to the tangent
    [[x_0, x_1], [y_0, y_1]] = perpendicular_unit_f(x_0)
    plt.plot([x_0, x_1], [y_0, y_1], color="dimgray")
    # plot the positive direction of change vector
    dx = x_1 - x_0
    dy = 0  # y_1 - y_1
    arrow = plt.arrow(
        x_0, y_1, dx, dy,
        color="blue", label=r"$f'(x_0)$",
        lw=3, head_width=np.abs(x_1 - x_0)/10, length_includes_head=True
    )
    plt.plot([x_0, x_1], [y_1, y_1], color="blue", label=r"$f'(x_0)$")
    # plot the noisy normal vector to the tangent
    [[noisy_x_0, x_1], [noisy_y_0, y_1]] = noisy_perpendicular_unit_f(x_0, noisy_x_0)
    plt.plot([x_0, x_1], [y_0, y_1], color="dimgray")
    # plot the noisy positive direction of change vector
    dx = x_1 - x_0
    dy = 0  # y_1 - y_1
    arrow = plt.arrow(
        x_0, y_1, dx, dy,
        color="red", label=r"$f'(x_0)$",
        lw=3, head_width=np.abs(x_1 - x_0)/10, length_includes_head=True
    )
    plt.plot([x_0, x_1], [y_1, y_1], color="red", label=r"$\widetilde{f'}(x_0)$")
    plt.legend(loc="upper left")
    plt.xlim(-3.1, 6.2)
    plt.ylim(-3.1, 3.1)
    plt.xlabel(r"$x_0$")
    plt.show()
    # reset to initial font size
    plt.rcParams.update({'font.size': init_size})

def interactive_batch_size_visual(x_0, batch_size):
    np.random.seed(0)
    noisy_x_0 = x_0 + np.mean(np.random.normal(loc=0, scale=0.5, size=batch_size))
    interactive_noisy_gradient_visual(x_0, noisy_x_0)
#@title Double click to unhide/hide the code {run: "auto"}
x_0 = 2.7 #@param {type:"slider", min:-3.1, max:6.2, step:0.01}
batch_size = 65 #@param {type:"slider", min:1, max:256, step:1}
interactive_batch_size_visual(x_0, batch_size)
```
**Exercise:** What are the general relationships you notice between the accuracy of the calculated gradient $\red{\tilde{f'}(x_0)}$ and (a) the batch size and (b) the function $\green{f(x)}$?
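As a rough illustration of part (a), we can mimic the helper above in plain NumPy (this is a sketch, not the notebook's interactive widget): the gradient estimate is $\sin(x_0 + \text{noise})$, where the noise is averaged over a batch, so larger batches should give estimates that cluster more tightly around the true gradient.

```python
import numpy as np

def gradient_estimate(x_0, batch_size, rng):
    # Average the noise over the batch, as in the helper above,
    # then evaluate the derivative of -cos at the perturbed point.
    noisy_x_0 = x_0 + np.mean(rng.normal(loc=0, scale=0.5, size=batch_size))
    return np.sin(noisy_x_0)

rng = np.random.default_rng(0)
for batch_size in [1, 16, 256]:
    estimates = [gradient_estimate(2.7, batch_size, rng) for _ in range(1000)]
    # the spread of the estimates shrinks roughly like 1/sqrt(batch_size)
    print(batch_size, np.std(estimates))
```

The standard deviation of the estimate should fall as the batch size grows, which matches what you see in the plot when you move the *batch_size* slider.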
### Implementing SGD
SGD is incredibly simple to implement, and as you'll see later can perform very well!
```
def SGD_update(params, grads, states, hyper_params):
    # hyper-param typical values: learning_rate=0.01
    # SGD doesn't have any state, however, the algorithms
    # we will look at later do!
    for param, grad in zip(params, grads):
        param.assign_sub(hyper_params['lr'] * grad)  # TF function that updates by subtracting
```
And now we can visualize each step of the SGD optimization of Rosenbrock's banana function:
**Exercise:** Take a look at the hidden code below and make sure that you understand it. Do you see how the code relates to the pseudo-code and equation above? For example, where do you find `calc_grad`? Once you understand the code, you should hide it again to make it easier to change the sliders and see the results.
```
#@title Helper functions (RUN ME) (double click to unhide/hide the code)
def optimize_banana(update_func, params, states, hyper_params, add_gradient_noise=False):
    # plot the loss surface, minimum value and starting point
    X, Y, Z = gen_2d_loss_surface(rosenbrock_banana)
    fig, ax = make_contour_plot(X, Y, Z)
    ax.plot(1, 1, 'r*', ms=30, label='minimum')
    ax.plot(start_x, start_y, 'b*', ms=20, label='start')
    for epoch in range(epochs):
        with tf.GradientTape() as tape:
            # we are trying to minimize the output of the 🍌 func
            if add_gradient_noise:
                # add random noise to the function parameters, this will result in a noisy gradient
                # we only perturb the linear parameter b so in expectation we get the true gradient
                loss = rosenbrock_banana(x, y, b=(20.0 + np.random.normal()))
            else:
                loss = rosenbrock_banana(x, y)
        # calculate the gradients of the loss with respect to the params
        grads = tape.gradient(loss, params)
        # save the old x and y values for the plot
        old_x = params[0].numpy()
        old_y = params[1].numpy()
        # update the parameters using the given update function
        update_func(params, grads, states, hyper_params)
        # plot the change in x and y for each update step
        ax.annotate('', xy=(x.numpy(), y.numpy()),
                    xytext=(old_x, old_y),
                    arrowprops={'arrowstyle': '->', 'color': 'k', 'lw': 1},
                    va='center', ha='center')
    ax.plot(x.numpy(), y.numpy(), 'g*', ms=20, label='end')
    ax.legend()
    fig.show()
#@title Double click to unhide/hide the code {run: "auto"}
start_x = -1 #@param {type:"slider", min:-2, max:2, step:0.1}
start_y = 0.73334 #@param {type:"slider", min:-0.26666, max:1.0666, step:0.1}
learning_rate = 0.015 #@param {type:"slider", min:0, max:0.02, step:0.0005}
epochs = 100 #@param {type:"slider", min:1, max:150, step:1}
x = tf.Variable(start_x, dtype='float32')
y = tf.Variable(start_y, dtype='float32')
params = [x, y]
states = []
hyper_params = {"lr": learning_rate}
optimize_banana(SGD_update, params, states, hyper_params, add_gradient_noise=False)
```
**Remark:** You might have noticed that we are computing the true gradient of the 🍌 function, which has no stochasticity! To make it look more like an actual case of SGD, we added the option to add some noise to the value of $b$ to obtain a stochastic gradient whose expectation is the actual gradient. This is similar to what happens in machine learning when we take random training samples: we can think of the random data points as the random parameters of this loss function.
**Partner Exercise:** Tweak the starting position (x and y), as well as the learning rate and the number of epochs. See if you can get to the global minimum. What do you notice about the behaviour of SGD in the 🍌 valley?
## SGD with Momentum
Hopefully, you've noticed a bit of an issue with SGD. When the gradient oscillates sideways or is very small (e.g. in the 🍌), progress towards the minimum is very slow. One solution to this problem is to add a momentum term to our optimization step:
\begin{align}
\Delta \mathbf{θ} &= \gamma \Delta \mathbf{θ} + \eta \nabla_\mathbf{θ} J(\mathbf{θ}) \\
\mathbf{θ} &= \mathbf{θ} − \Delta \mathbf{θ}
\end{align}
where $\Delta \mathbf{θ}$ is the change in parameters $\mathbf{θ}$ at each step and is made up of a mixture between the gradients at a given step and the change from the previous step. $\gamma$ (gamma) is called the *momentum* term, and $\eta$ is called the learning rate, as before.
The reason that this method is called *momentum* is that we can compare it to SGD as follows:
> Gradient descent is a person walking down a hill. They follow the steepest path downwards; their progress is slow, but steady. Momentum is a heavy ball rolling down the same hill. The added inertia acts both as a smoother and an accelerator, dampening oscillations and causing it to barrel through narrow valleys, small humps and local minima.
In other words, **the momentum term speeds up optimisation if the direction of change stays more or less the same and reduces oscillations when the direction of change goes back and forth**.
We can also describe momentum with some simple pseudo-code:
```
change = 0
for each epoch:
    grad = calc_grad(loss_func, data, params)
    change = momentum * change + learning_rate * grad
    params = params - change
```
### Implementing Momentum
```
def momentum_update(params, grads, states, hyper_params):
    # hyper-param typical values: learning_rate=0.01, momentum=0.9
    changes = states['changes']
    for param, grad in zip(params, grads):
        changes[param].assign(hyper_params['momentum'] * changes[param] +
                              hyper_params['lr'] * grad)
        param.assign_sub(changes[param])
#@title Double click to unhide/hide the code{run: "auto"}
start_x = -1 #@param {type:"slider", min:-2, max:2, step:0.1}
start_y = 0.73334 #@param {type:"slider", min:-0.26666, max:1.0666, step:0.1}
learning_rate = 0.01 #@param {type:"slider", min:0, max:0.02,step:0.0005}
momentum = 0.8 #@param {type:"slider", min:0, max:0.99, step:0.01}
epochs = 100 #@param {type:"slider", min:1, max:150,step:1}
x = tf.Variable(start_x, dtype='float32')
y = tf.Variable(start_y, dtype='float32')
params = [x, y]
changes = {param: tf.Variable(0., dtype='float32') for param in params}
hyper_params = {"lr": learning_rate, "momentum": momentum}
states = {"changes": changes}
optimize_banana(momentum_update, params, states, hyper_params, add_gradient_noise=False)
```
**Partner Exercise:** Play around with the various parameters and see if you can get to the minimum. Compare the performance of momentum in the 🍌 to that of SGD.
**Partner Exercise:** Why do we see oscillations for high enough momentum values?
**Partner Exercise:** Do you see any relationship between the amount of momentum and the learning rate?
## RMSProp (Root Mean Square Propagation)
So far, we have chosen a learning rate $\eta$, and it has been multiplied by the whole gradient. Thus, each element in the gradient vector has had the same learning rate applied to it at every step. However, we can imagine that:
1. Each weight might not need to vary by the same amount.
2. The amount we want to change each parameter might change throughout the optimisation process.
**Exercise:** can you think of examples of when these two cases might apply?
RMSProp is a method that addresses these issues. It can be described with the following formulae:
\begin{align}
\mathbf{v} &= \gamma \mathbf{v} + (1 - \gamma) (\nabla_\mathbf{θ} J(\mathbf{θ}))^2 \\
\mathbf{θ} &= \mathbf{θ} − \frac{\eta}{\sqrt{\mathbf{v}} + \mathbf{\epsilon}} \nabla_\mathbf{θ} J(\mathbf{θ})
\end{align}
where each element of $\mathbf{v}$ is an estimate of the square of the gradient for a specific parameter, calculated using a rolling average, $\gamma$ is a forgetting factor for the rolling average, and $\mathbf{\epsilon}$ (epsilon) is a small number added for numerical stability.
In words, RMSProp scales down the learning rate for each parameter by (the square root of) a rolling average of that parameter's most recent squared gradients. Importantly, **each parameter has its own learning rate, which changes over time**.
We can also describe RMSProp using pseudo-code:
```
average = 0
for each epoch:
    grad = calc_grad(loss_func, data, params)
    average = gamma * average + (1 - gamma) * pow(grad, 2)
    params = params - learning_rate / sqrt(average) * grad
```
### Implementing RMSProp
```
def RMSProp_update(params, grads, states, hyper_params):
    # hyper-param typical values: learning_rate=0.001, gamma=0.9, eps=1e-8
    averages = states['averages']
    for param, grad in zip(params, grads):
        averages[param].assign(hyper_params['gamma']*averages[param] +
                               (1 - hyper_params['gamma'])*tf.math.pow(grad, 2))
        param.assign_sub(hyper_params['lr']/(tf.sqrt(averages[param]) + hyper_params['eps'])*grad)
#@title Double click to unhide/hide the code {run: "auto"}
start_x = -1 #@param {type:"slider", min:-2, max:2, step:0.1}
start_y = 0.73334 #@param {type:"slider", min:-0.26666, max:1.0666, step:0.1}
learning_rate = 0.01 #@param {type:"slider", min:0, max:0.02,step:0.0005}
gamma = 0.63 #@param {type:"slider", min:0.01, max:0.99, step:0.01}
epochs = 100 #@param {type:"slider", min:1, max:150,step:1}
x = tf.Variable(start_x, dtype='float32')
y = tf.Variable(start_y, dtype='float32')
params = [x, y]
averages = {param: tf.Variable(0., dtype='float32') for param in params}
states = {"averages": averages}
hyper_params = {"lr": learning_rate, "gamma": gamma, "eps": 1e-8}
optimize_banana(RMSProp_update, params, states, hyper_params, add_gradient_noise=False)
```
**Partner Exercise:** Play around with the various parameters and see if you can get to the minimum. Compare the performance of RMSProp with momentum and SGD, particularly in the 🍌.
## Adam (Adaptive moment estimation)
Adam combines the ideas of momentum and adaptive learning rates that we have explored above. More specifically, in addition to storing the rolling averages of the *squared* gradients and using them to control the learning rate for each parameter, like RMSProp, it also stores the rolling averages of the gradients themselves and uses them like momentum. Mathematically, we can describe Adam as follows:
\begin{align}
\mathbf m &= β_1 \mathbf{m} + (1 − β_1) \nabla_\mathbf{θ} J(\mathbf{θ}) \\
\mathbf v &= β_2 \mathbf{v} + (1 − β_2) (\nabla_\mathbf{θ} J(\mathbf{θ}))^2 \\
\mathbf{\hat{m}} &= \frac{\mathbf{m}}{1 − β_1^t} \\
\mathbf{\hat{v}} &= \frac{\mathbf{v}}{1 − β_2^t} \\
\mathbf{θ} &= \mathbf{θ} − \frac{\eta}{\sqrt{\mathbf{\hat{v}}} + \mathbf{\epsilon}} \mathbf{\hat{m}}
\end{align}
where $\mathbf{m}$ is a rolling estimate of the gradient, $\mathbf{v}$ is a rolling estimate of the squared gradient, $\mathbf{\hat{m}}$ and $\mathbf{\hat{v}}$ are bias-corrected estimates, and the $\beta_i$ (beta) are mixing factors. As before, $\eta$ and $\epsilon$ are the learning rate and a numerical stability term.
The name Adam comes from the fact that $\mathbf{m}$ and $\mathbf{v}$ are estimates of the first moment (the mean) and the second moment (the uncentered variance) of the gradient. The reason that we need bias corrected versions of the estimates is that they are initialized to be zero vectors, which means that when the training starts, the estimates are biased towards zero. This problem is especially relevant when $β_1$ and $β_2$ are close to 1.
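To see why the bias correction matters, here is a tiny numerical sketch (assuming, purely for illustration, a constant gradient of 1 at every step): the raw rolling estimate starts at zero and only slowly approaches the true mean, while the corrected estimate recovers it immediately.

```python
beta1 = 0.9
g = 1.0   # pretend the gradient is a constant 1 at every step
m = 0.0   # rolling estimate, initialized to zero as in Adam

for t in range(1, 11):
    m = beta1 * m + (1 - beta1) * g
    m_hat = m / (1 - beta1 ** t)  # bias-corrected estimate

# After 10 steps the raw estimate is still biased towards zero,
# but the corrected one equals the true mean of the "gradients".
print(round(m, 4))      # 0.6513, i.e. 1 - 0.9**10
print(round(m_hat, 4))  # 1.0
```

The closer $\beta_1$ is to 1, the longer the raw estimate stays biased, which is exactly the case the correction is designed for.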
The pseudo-code for Adam is slightly longer than the other methods we've looked at but should be reasonably simple to understand. If something doesn't make sense, go back and look at momentum and RMSProp.
```
first_moment = 0
second_moment = 0
t = 0
for each epoch:
    t = t + 1
    grad = calc_grad(loss_func, data, params)
    first_moment = beta1 * first_moment + (1 - beta1) * grad
    second_moment = beta2 * second_moment + (1 - beta2) * pow(grad, 2)
    first_moment_unbiased = first_moment / (1 - pow(beta1, t))
    second_moment_unbiased = second_moment / (1 - pow(beta2, t))
    params = params - (learning_rate / sqrt(second_moment_unbiased)) * first_moment_unbiased
```
### Implementing Adam
Implementing Adam is left as an exercise; we've set up the scaffolding for you, but you'll have to get your hands dirty! So skip this section for now, and come back later!
```
def Adam_update(params, grads, states, hyper_params):
    # hyper-param typical values: learning_rate=0.001, beta1=0.9, beta2=0.999, eps=1e-8
    t = states["t"]
    t.assign_add(1.0)
    first_moments = states["first_moments"]
    second_moments = states["second_moments"]
    for param, grad in zip(params, grads):
        pass  # come back here later!
#@title Double click to unhide/hide the code {run: "auto"}
start_x = -1 #@param {type:"slider", min:-2, max:2, step:0.1}
start_y = 0.73334 #@param {type:"slider", min:-0.26666, max:1.0666, step:0.1}
learning_rate = 0.01 #@param {type:"slider", min:0, max:0.02,step:0.0005}
beta1 = 0.8 #@param {type:"slider", min:0.01, max:0.99, step:0.01}
beta2 = 0.625 #@param {type:"slider", min:0.01, max:0.999, step:0.001}
epochs = 100 #@param {type:"slider", min:1, max:150,step:1}
x = tf.Variable(start_x, dtype='float32')
y = tf.Variable(start_y, dtype='float32')
params = [x, y]
first_moments = {param: tf.Variable(0., dtype='float32')
for param in params}
second_moments = {param: tf.Variable(0., dtype='float32')
for param in params}
t = tf.Variable(0., dtype='float32')
states = {"first_moments": first_moments,
"second_moments": second_moments, "t": t}
hyper_params = {"lr": learning_rate, "beta1": beta1, "beta2": beta2, "eps": 1e-8}
optimize_banana(Adam_update, params, states, hyper_params)
```
## Learning Rate Decay
One of the advantages of methods such as Adam and RMSProp is that the effective learning rate for each parameter is no longer a constant during training. This behaviour is desirable because we often want to reduce the learning rate as we near the global minimum so that we do not overshoot it. Another simple strategy for solving this problem is *learning rate decay*. With learning rate decay, we progressively reduce the learning rate during the training process. For example, we might decay our learning rate as follows:
\begin{equation}
\eta = \eta \times \frac{1}{1 + \delta \times t}
\end{equation}
where $\eta$ is the learning rate, $\delta$ (delta) is the decay rate, and $t$ is the number of the current training epoch. This method will give us a learning rate that looks something like this:
```
initial_learning_rate = 0.01
epochs = 100
decay_rate = initial_learning_rate/epochs
learning_rates = [initial_learning_rate]
for t in range(epochs):
    previous_learning_rate = learning_rates[t]
    learning_rate_at_t = previous_learning_rate * 1/(1 + decay_rate * t)
    learning_rates.append(learning_rate_at_t)
plt.plot(learning_rates)
plt.xlabel("Epoch")
plt.ylabel("LR")
plt.title("LR vs Epoch")
plt.show()
```
In practice, while we do often use the simple decay scheme shown above, it is also common to use more complicated *learning rate schedules* such as exponential decay and step decay:
```
# exponential decay
initial_learning_rate = 0.01
epochs = 100
decay_rate = 0.01
learning_rates = []
for t in range(epochs):
    learning_rate_at_t = initial_learning_rate * np.exp(-decay_rate * t)
    learning_rates.append(learning_rate_at_t)
plt.plot(learning_rates)
plt.xlabel("Epoch")
plt.ylabel("LR")
plt.title("LR vs Epoch")
plt.show()
# step decay
initial_learning_rate = 0.01
epochs = 100
epochs_wait = 10
decay_rate = 0.5
learning_rates = []
for t in range(epochs):
    exponent = np.floor((1 + t) / epochs_wait)  # this part gives us the "steps"
    learning_rate_at_t = initial_learning_rate * np.power(decay_rate, exponent)
    learning_rates.append(learning_rate_at_t)
plt.plot(learning_rates)
plt.xlabel("Epoch")
plt.ylabel("LR")
plt.title("LR vs Epoch")
plt.show()
```
Each of these learning rate decay methods is compatible with the optimisation methods described above. Whether or not to use learning rate decay and, if so, which method to use are themselves hyper-parameters that we can tune.
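As a minimal sketch of how decay slots into a training loop, here is plain-Python gradient descent on a toy quadratic (not the notebook's TensorFlow setup): the learning rate is simply recomputed at the top of each epoch using the inverse time decay formula from above.

```python
# Gradient descent on f(theta) = theta**2 with inverse time decay.
theta = 5.0
initial_lr = 0.1
decay_rate = 0.01

for t in range(100):
    lr = initial_lr / (1 + decay_rate * t)  # decayed learning rate for epoch t
    grad = 2 * theta                        # derivative of theta**2
    theta = theta - lr * grad

print(theta)  # very close to the minimum at theta = 0
```

The same pattern works with any of the update functions above: recompute `hyper_params['lr']` once per epoch before calling the update.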
## Putting it all into practice
Putting all of this into practice is very simple! Keras provides us with a high-level API that makes using any of the optimization methods or learning rate schedules as easy as changing a single line of code. Of course, if you are defining a custom training loop using `tf.GradientTape` then the code we've used above can easily be converted to work for any model and dataset.
One of the advantages of using Keras is that it provides us with reasonable default values for all of the hyper-parameters of the optimization algorithms. Let's use Keras to train a simple MLP on FashionMNIST so that we can compare the optimization algorithms we've looked at in a more realistic setting.
As a quick reminder, FashionMNIST contains 28x28 grayscale images from 10 different types of clothing. Let's take a quick look:
```
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
text_labels = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5, 5, i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    img_index = np.random.randint(0, 50000)
    plt.imshow(train_images[img_index], cmap="gray_r")
    plt.xlabel(text_labels[train_labels[img_index]])
```
Before we train on the data, we want to do a little pre-processing. We won't go into detail about how and why we are doing the pre-processing, but if you want to know more, you should take a look at the *Deep Feedforward Networks* practical.
```
batch_size = 128
train_ds = tf.data.Dataset.from_tensor_slices((train_images, train_labels))
# Divide image values and cast to float so that they end up as a floating point number between 0 and 1
train_ds = train_ds.map(lambda x, y: (tf.cast(x, tf.float32) / 255.0, tf.cast(y, tf.int32)))
# Shuffle the examples.
train_ds = train_ds.shuffle(buffer_size=batch_size * 10)
# Now "chunk" the examples into batches
train_ds = train_ds.batch(batch_size)
test_ds = tf.data.Dataset.from_tensor_slices((test_images, test_labels))
test_ds = test_ds.map(lambda x, y: (tf.cast(x, tf.float32) / 255.0, tf.cast(y, tf.int32)))
test_ds = test_ds.batch(batch_size)
```
Now let's define a simple MLP:
```
def build_mlp():
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    return model
model = build_mlp()
model.summary()
```
And finally, let's train the MLP using a few different optimisers and compare the results:
```
#@title Helper functions (double click to unhide/hide the code)
def make_loss_plots(losses):
    plt.close()
    for label, loss_vals in losses.items():
        plt.plot(loss_vals, label=label)
    plt.xlabel("Epoch")
    plt.ylabel("Loss")
    plt.title("Loss vs Epoch")
    plt.legend()
    plt.show()
losses = {}
tf.random.set_seed(0)
# SGD
model = build_mlp()
model.compile(optimizer='sgd',
# we can use the string shortcut if we
# don't want to change the hyper-parameters
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
sgd_hist = model.fit(train_ds, epochs=5,
validation_data=test_ds)
losses['SGD'] = sgd_hist.history["val_loss"]
make_loss_plots(losses)
# Momentum
model = build_mlp()
model.compile(optimizer=tf.keras.optimizers.SGD(lr=0.01, momentum=0.9),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
momentum_hist = model.fit(train_ds, epochs=5,
validation_data=test_ds)
losses['Momentum'] = momentum_hist.history["val_loss"]
make_loss_plots(losses)
# RMSProp
model = build_mlp()
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=0.001, rho=0.9),
# rho is the symbol used for the forget factor in Keras
# we used gamma (γ) in this practical
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
rmsprop_hist = model.fit(train_ds, epochs=5,
validation_data=test_ds)
losses['RMSProp'] = rmsprop_hist.history["val_loss"]
make_loss_plots(losses)
# Adam
# UNCOMMENT THIS ONCE YOU HAVE IMPLEMENTED ADAM!
# model = build_mlp()
# model.compile(optimizer=tf.keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999),
# loss='sparse_categorical_crossentropy',
# metrics=['accuracy'])
# adam_hist = model.fit(train_ds, epochs=5,
# validation_data=test_ds)
# losses['Adam'] = adam_hist.history["val_loss"]
# make_loss_plots(losses)
# SGD with learning rate decay
model = build_mlp()
model.compile(optimizer=tf.keras.optimizers.SGD(lr=0.5, momentum=0.0, decay=0.01),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
sgd_decay_hist = model.fit(train_ds, epochs=5,
validation_data=test_ds)
losses['SGD with decay'] = sgd_decay_hist.history["val_loss"]
make_loss_plots(losses)
# Momentum with learning rate decay
model = build_mlp()
model.compile(optimizer=tf.keras.optimizers.SGD(lr=0.5, momentum=0.9, decay=0.01),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
sgd_decay_hist = model.fit(train_ds, epochs=5,
validation_data=test_ds)
losses['Momentum with decay'] = sgd_decay_hist.history["val_loss"]
make_loss_plots(losses)
```
**Exercise:** Play with the parameters of the various optimisers and see how that affects this particular problem.
**Note:** There is an element of randomness each time we train our models; this means that we should be careful when making judgements about which methods are best for this problem.
## Optional extra reading: second-order methods
All of the optimization methods we've looked at in this practical are what we call first-order methods: they calculate a straight-line estimate of the gradient and take a step in that direction. However, this straight-line estimate throws away some useful information about the curvature of our loss surface. In other words, a first-order method tells us whether our loss is increasing or decreasing in a given direction, but a second-order method also tells us how quickly that rate of change is itself changing. With this extra information, second-order methods can find a minimum in fewer steps and have less trouble getting stuck in saddle points.
Unfortunately, the benefits of second-order methods come at a cost. As you might have guessed from the names, a first-order method calculates the first derivative of our loss function, while a second-order method also calculates the second derivative. Calculating these second derivatives (the [Hessian matrix](https://en.wikipedia.org/wiki/Hessian_matrix)) is computationally expensive, which means that second-order methods are often slower in practice. For this reason, we usually do not see second-order methods in deep learning. However, fast approximations for second-order methods are an active area of research!
Examples of second-order methods include [Newton's method](https://en.wikipedia.org/wiki/Newton%27s_method_in_optimization) and [BFGS](https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm).
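For intuition, here is a hedged one-dimensional sketch of Newton's method applied to the $-\cos(x)$ function used earlier in this practical: instead of a fixed learning rate, the gradient $\sin(x)$ is divided by the curvature $\cos(x)$, so the step size adapts automatically.

```python
import numpy as np

x = 0.5  # start near the minimum of f(x) = -cos(x) at x = 0
for step in range(10):
    # Newton update: x - f'(x) / f''(x), with f'(x) = sin(x), f''(x) = cos(x)
    x = x - np.sin(x) / np.cos(x)

print(x)  # converges to 0 in just a few steps
```

Note that this only behaves well near the minimum where the curvature is positive; starting far away (where $\cos(x) < 0$) can send Newton's method towards a maximum instead, which is one reason these methods need care in practice.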
## Conclusion:
### What are optimisers, and how do we use them?
Optimisers are algorithms that try to find the best item when compared to other items using a given method of comparison (or metric). In deep learning, we use them to try to find the best values for our weights and biases, using a loss function as our metric.
### What method should you use?
There are **no hard rules** for which methods you should use. It will always depend on your particular model and dataset. However, some guidelines are worth keeping in mind:
1. Adam typically works very well in a large number of settings and is usually a good first choice.
2. RMSProp can often outperform Adam for RNNs as well as in RL. If you are working in either of these domains, then it might be worth trying RMSProp.
3. SGD and SGD with momentum often work just as well as more sophisticated methods like Adam. Don't think that they aren't worth trying out just because they are simple.
## Tasks
1. **[All]** Combine the implementations for momentum and RMSProp to implement Adam. Experiment with the parameters of Adam and compare it to the previous methods we have looked at.
2. **[All]** Augment any of the optimization methods with learning rate decay. You can choose any of the learning rate decay methods. Play around with the parameters. Compare the performance of the optimization method with and without learning rate decay.
3. **[Optional, Intermediate]** Implement the other two learning rate decay methods and compare all three decay methods with one another.
4. **[Optional, Advanced]** Implement Nesterov Momentum. You can read more about it [here](http://ruder.io/optimizing-gradient-descent/).
5. **[Optional, Advanced]** Implement Nadam. You can read more about it [here](http://ruder.io/optimizing-gradient-descent/) (it also works particularly well in deep RL).
**IMPORTANT: Please fill out the exit ticket form before you leave the practical: https://forms.gle/J4i5wehZPUdggCc29**
## Extra resources
* A [blog post](http://fa.bianp.net/teaching/2018/eecs227at/gradient_descent.html) from Fabian Pedregosa on gradient descent [**Highly Recommended**].
* [Distil.pub post](https://distill.pub/2017/momentum/) by Gabriel Goh on why momentum works [**Highly Recommended**].
* [Sebastian Ruder's blog](http://ruder.io/optimizing-gradient-descent/) on gradient descent algorithms.
* Deep Dive into Deep Learning chapter on [Optimization Algorithms](http://d2l.ai/chapter_optimization/index.html).
* Keras optimizer [docs](https://keras.io/optimizers/).
# Example of False Positives
Early COVID-19 tests had very high false positive rates, which is often manageable when tests are used along with other indicators of infection, but can produce spurious results when a very large number of tests are administered. In this notebook, we'll use the sensitivity and specificity parameters from a recently published paper, [Development and Clinical Application of A Rapid IgM-IgG Combined Antibody Test for SARS-CoV-2 Infection Diagnosis](https://pubmed.ncbi.nlm.nih.gov/32104917/), which reports:
> The overall testing sensitivity was 88.66% and specificity was 90.63%
Sensitivity and specificity are a bit hard to understand, but are well explained on their [Wikipedia page](https://en.wikipedia.org/wiki/Sensitivity_and_specificity). The important part to understand is the table in the [worked example](https://en.wikipedia.org/wiki/Sensitivity_and_specificity#Worked_example). When a test is administered, there are four possible outcomes. The test can return a positive result, which can be a true positive or a false positive, or it can return a negative result, which is a true negative or a false negative. If you organize those possibilities by the true condition (does the patient have the virus or not):
* Patient has virus
* True Positive ($\mathit{TP}$)
* False negative ($\mathit{FN}$)
* Patient does not have virus
* True Negative ($\mathit{TN}$)
* False Positive. ($\mathit{FP}$)
In the wikipedia table:
* The number of people who do have the virus is $\mathit{TP}+\mathit{FN}$
* The number of people who do not have the virus is $\mathit{TN}+\mathit{FP}$
The values of Sensitivity and Specificity are defined as:
$$\begin{array}{ll}
Sn = \frac{\mathit{TP}}{\mathit{TP} + \mathit{FN}} & \text{True positives outcomes divided by all positive conditions} \tag{1}\label{eq1}\\
Sp = \frac{\mathit{TN}}{\mathit{FP} + \mathit{TN}} & \text{True negatives outcomes divided by all negative conditions}\\
\end{array}$$
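Plugging in some hypothetical counts makes equation (1) concrete (these numbers are made up purely for illustration; they are not from the paper):

```python
# Hypothetical confusion-matrix counts, chosen only to illustrate equation (1)
TP, FN = 90, 10    # 100 tested people who do have the virus
TN, FP = 820, 80   # 900 tested people who do not

Sn = TP / (TP + FN)  # sensitivity: true positives / all positive conditions
Sp = TN / (FP + TN)  # specificity: true negatives / all negative conditions
print(Sn)            # 0.9
print(round(Sp, 3))  # 0.911
```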
We want to know the number of false positives($\mathit{FP}$) given the number of positive conditions ($\mathit{TP}+\mathit{FN}$) and the total number of tests. To compute these, we need to have some more information about the number of people tested, and how common the disease is:
* Total test population $P$, the number of people being tested, which equals $\mathit{TP}+\mathit{FP}+\mathit{FN}+\mathit{TN}$
* The prevalence $p$, the population rate of positive condition.
We can do a little math to get:
$$\begin{array}{ll}
\mathit{TP} = Pp\mathit{Sn} & \text{}\\
\mathit{FP} = P(1-p)(1-\mathit{Sp}) & \text{}\\
\mathit{TN} = P(1-p)\mathit{Sp} & \text{}\\
\mathit{FN} = Pp(1-\mathit{Sn}) & \text{}\\
\end{array}$$
You can see examples of these equations worked out in the third line in the red and green cells of the [Worked Example](https://en.wikipedia.org/wiki/Sensitivity_and_specificity#Worked_example) on the Sensitivity and Specificity Wikipedia page.
One of the interesting questions when test results are reported is "What percentage of the positive results are true positives?" This is a particularly important question for the COVID-19 pandemic because there are a lot of reports that most people with the virus are asymptomatic. Are they really asymptomatic, or just false positives?
The metric we're interested in here is the portion of positive results that are true positives, which we'll denote $\mathit{TPR}$ (note that this quantity is more commonly called the *positive predictive value*, or precision):
$$\mathit{TPR} = \frac{\mathit{TP} }{ \mathit{TP} +\mathit{FP} } $$
Which expands to:
$$\mathit{TPR} = \frac{p\mathit{Sn} }{ p\mathit{Sn} + (1-p)(1-\mathit{Sp}) } $$
It is important to note that $\mathit{TPR}$ is not dependent on $P$, the size of the population being tested. It depends only on the quality parameters of the test, $\mathit{Sn}$ and $\mathit{Sp}$, and the prevalence, $p$. For a given test, only the prevalence will change over time.
```
Sp = .9063
Sn = .8866
def p_vs_tpr(Sp, Sn):
    for p in np.power(10, np.linspace(-7, np.log10(.5), num=100)):  # range from 1 per 10m to 50%
        tpr = (p*Sn) / ((p*Sn) + (1-p)*(1-Sp))
        yield (p, tpr)
df = pd.DataFrame(list(p_vs_tpr(Sp, Sn)), columns='p tpr'.split())
df.head()
ax = df.plot(x='p',y='tpr', figsize=(10,10))
#ax.set_xscale('log')
#ax.set_yscale('log')
# Find the row with a TPR closest to 50%
loss_min_idx = (df['tpr']-.5).abs().idxmin()
df.iloc[loss_min_idx]
def gen_data():
    for Sp in np.linspace(.9, 1, num=20, endpoint=False):
        for Sn in np.linspace(.88, 1, num=20, endpoint=False):
            df = pd.DataFrame(list(p_vs_tpr(Sp, Sn)), columns='p tpr'.split())
            # Find the row with a TPR closest to 50%
            loss_min_idx = (df['tpr']-.5).abs().idxmin()
            p = df.iloc[loss_min_idx].p
            yield (Sp, Sn, p)
df = pd.DataFrame(list(gen_data()), columns='Sp Sn p'.split())
import seaborn as sns; sns.set()
t = df.pivot_table('p', 'Sp', 'Sn')
ax = sns.heatmap(t)
```
Now we can calculate these values for the situation in the US when it first hit 100 cases, on March 3.
```
# What was the date that the US first hit 100 cases?
import metapack as mp
pkg = mp.open_package('http://library.metatab.org/jhu.edu-covid19-1.zip')
df = pkg.resource('confirmed').dataframe()
df[df.location == 'US'].date_100.unique()
```
Assume that on March 3, using the test described in the article linked above, the US Government was able to test the whole US population, and that the actual number of cases was 10× the reported number. So we have:
* $\mathit{Sp} = 0.9063$
* $\mathit{Sn} = 0.8866$
* $P = 329 \times 10^6$
* $p = 1000 / P$
```
# In machine learning, the chart of the TP, FP, TN, FN values
# is called a confusion matrix.
def calc_cm(p, P, Sp, Sn):
    TP = P * p * Sn
    FP = P * (1 - p) * (1 - Sp)
    TN = P * (1 - p) * Sp
    FN = P * p * (1 - Sn)
    return (TP, FP, TN, FN)
recorded_cases = 100
actual_cases = recorded_cases * 10
P = 329e6  # whole US population
p = actual_cases / P
"p={} TP={} FP={} TN={} FN={}".format(p, *calc_cm(p, P, Sp, Sn))
```
So, in this case, of 1,000 infections in the whole US, the testing would have caught 886 of them, while producing 31 million false positives.
```
import pandas as pd
import numpy as np
def gen_data(p, Sp, Sn):
    tests = np.linspace(1e6, 250e6, num=101)
    for tn in tests:
        yield (tn, p) + tuple(int(e) for e in calc_cm(p, tn, Sp, Sn))
df = pd.DataFrame(list(gen_data(p,Sp, Sn)), columns='tests p TP FP TN FN'.split())
df['p_ratio'] = (df.p*df.tests) / df.TP
df
```
# Scientific analysis in Python

- collection of packages and functions based on NumPy
- the goal is to provide functionality similar to that available in MATLAB, IDL, R...
- heavily integrated with NumPy and matplotlib
## Organized in Sub-packages
### need to be individually imported
| Sub-package | Description |
| --- | --- |
| `cluster` | Clustering algorithms |
| `constants` | Physical and mathematical constants |
| `fftpack` | Fast Fourier Transform routines |
| `integrate` | Integration and ordinary differential equation solvers |
| `interpolate` | Interpolation and smoothing splines |
| `io` | Input and output |
| `linalg` | Linear algebra |
| `ndimage` | N-dimensional image processing |
| `odr` | Orthogonal distance regression |
| `optimize` | Optimization and root-finding routines |
| `signal` | Signal processing |
| `sparse` | Sparse matrices and associated routines |
| `spatial` | Spatial data structures and algorithms |
| `special` | Special functions |
| `stats` | Statistical distributions and functions |
DO:
- `from scipy import ndimage`

DON'T:
- `import scipy`
- `from scipy import *`
## Interpolation
### 1d Interpolation
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
x = np.linspace(0, 10, num=11, endpoint=True)
y = np.cos(-x**2/9.0)
f = interp1d(x, y)
f2 = interp1d(x, y, kind='cubic')
f3 = interp1d(x, y, kind='next')
xnew = np.linspace(0, 10, num=100, endpoint=True)
plt.plot(x, y, 'o')
plt.plot(xnew, f(xnew), '-')
plt.plot(xnew, f2(xnew), '--')
plt.plot(xnew, f3(xnew), ':')
plt.legend(['data', 'linear', 'cubic', 'next'], loc='best')
plt.show()
```
### 2d Interpolation
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata
def func(x, y):
    return x*(1-x)*np.cos(4*np.pi*x) * np.sin(4*np.pi*y**2)**2
grid_x, grid_y = np.mgrid[0:1:100j, 0:1:200j]
points = np.random.rand(1000, 2)
values = func(points[:,0], points[:,1])
grid_z0 = griddata(points, values, (grid_x, grid_y), method='nearest')
grid_z1 = griddata(points, values, (grid_x, grid_y), method='linear')
grid_z2 = griddata(points, values, (grid_x, grid_y), method='cubic')
plt.subplot(221)
plt.imshow(func(grid_x, grid_y).T, extent=(0,1,0,1), origin='lower')
plt.plot(points[:,0], points[:,1], 'k.', ms=1)
plt.title('Original')
plt.subplot(222)
plt.imshow(grid_z0.T, extent=(0,1,0,1), origin='lower')
plt.title('Nearest')
plt.subplot(223)
plt.imshow(grid_z1.T, extent=(0,1,0,1), origin='lower')
plt.title('Linear')
plt.subplot(224)
plt.imshow(grid_z2.T, extent=(0,1,0,1), origin='lower')
plt.title('Cubic')
plt.gcf().set_size_inches(6, 6)
plt.show()
```
## Integration
### Numerical Integration
Suppose we want to integrate a function $f(x)$ between the limits $a$ and $b$:
$\int_{a}^{b} f(x)\, dx$
Say, for a concrete example, $\int_{0}^{\pi/2} \sin(x)\, dx$
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad
# quad integrates the function using adaptive Gaussian quadrature from the Fortran QUADPACK library
res, err = quad(np.sin, 0, np.pi/2)
print(res)
print(err)
```
### Integrating ordinary differential equations (ODE)
Let's use the example: $\frac{dy}{dt} = -2y$
We want to integrate this equation between $t=0$ and $t=4$ with initial condition $y(t=0)=1$
```
from scipy.integrate import odeint
def calc_derivative(ypos, time):
    return -2 * ypos
time_vec = np.linspace(0, 4, 40)
y = odeint(calc_derivative, y0=1, t=time_vec)
plt.plot(time_vec,y)
plt.show()
```
## Optimization
### Curve fitting
```
import numpy as np
import matplotlib.pyplot as plt
from scipy import optimize
x_data = np.linspace(-5, 5, num=50)
y_data = 2.9 * np.sin(1.5 * x_data) + np.random.normal(size=50)
# We know that the data lies on a sine wave, but not the amplitudes or the period.
plt.plot(x_data,y_data,"o")
plt.show()
# Set up a function with free parameters to fit to the data
def test_func(x, a, b):
    return a * np.sin(b * x)
params, params_covariance = optimize.curve_fit(test_func, x_data, y_data, p0=[2, 2])
print(params)
plt.plot(x_data,y_data,"o")
plt.plot(x_data,test_func(x_data,params[0],params[1]),"r-")
plt.show()
```
### Finding the minimum
```
import numpy as np
import matplotlib.pyplot as plt
from scipy import optimize
def f(x):
    return x**2 + 10*np.sin(x)
x = np.arange(-10, 10, 0.1)
plt.plot(x, f(x))
plt.show()
result = optimize.minimize(f, x0=0)
# Uses by default the Broyden–Fletcher–Goldfarb–Shanno algorithm but another algorithm can be defined.
# result is a compound object that contains all the information of the minimization attempt
print(result)
plt.plot(x, f(x))
plt.plot(result.x, f(result.x),"ro")
plt.show()
```
### Orbit fitting with SciPy


## Fourier Transforms
```
import numpy as np
from scipy import fftpack
from matplotlib import pyplot as plt
time_step = 0.02
period = 5.
time_vec = np.arange(0, 20, time_step)
sig = (np.sin(2 * np.pi / period * time_vec) + 0.5 * np.random.randn(time_vec.size))
plt.plot(time_vec, sig, label='Original signal')
plt.show()
# The FFT of the signal
sig_fft = fftpack.fft(sig)
# And the power (sig_fft is of complex dtype)
power = np.abs(sig_fft)
# The corresponding frequencies
sample_freq = fftpack.fftfreq(sig.size, d=time_step)
# Plot the FFT power
plt.figure(figsize=(6, 5))
plt.plot(sample_freq, power)
plt.xlabel('Frequency [Hz]')
plt.ylabel('power')
# Find the peak frequency: we can focus on only the positive frequencies
pos_mask = np.where(sample_freq > 0)
freqs = sample_freq[pos_mask]
peak_freq = freqs[power[pos_mask].argmax()]
high_freq_fft = sig_fft.copy()
high_freq_fft[np.abs(sample_freq) > peak_freq] = 0
filtered_sig = fftpack.ifft(high_freq_fft).real  # ifft returns complex; the imaginary part is numerical noise
plt.figure(figsize=(6, 5))
plt.plot(time_vec, sig, label='Original signal')
plt.plot(time_vec, filtered_sig, linewidth=3, label='Filtered signal')
plt.xlabel('Time [s]')
plt.ylabel('Amplitude')
plt.legend(loc='best')
plt.show()
```
## Image processing
```
import astropy.io.fits as fits
import matplotlib.pyplot as plt
from scipy import ndimage as nd
from scipy.ndimage import gaussian_filter
fits_file = fits.open("./data/exohost.fits")
data_array = fits_file[0].data
plt.imshow(data_array, origin='lower', cmap=plt.cm.gist_heat , vmin=-10, vmax=200)
plt.show()
```
### High-pass filter
```
low_frequ = gaussian_filter(data_array, sigma=10, truncate=6.1)
high_pass_filtered_image = data_array - low_frequ
plt.imshow(high_pass_filtered_image, origin='lower', cmap=plt.cm.gist_heat , vmin=-10, vmax=200)
plt.show()
```
### Remove radial average
```
import numpy as np

rot_angles = np.arange(0, 180, 5)
rotated_images = []
for angle in rot_angles:
    rotated = nd.rotate(data_array, angle, axes=(-1, -2), reshape=False, output=None, order=3, mode='constant', cval=0.0, prefilter=True)
    diff = data_array - rotated
    rotated_images.append(diff)
rotated_images = np.array(rotated_images)
final_subtracted_result = np.median(rotated_images, axis=0)
plt.imshow(final_subtracted_result, origin='lower', cmap=plt.cm.gist_heat , vmin=-10, vmax=200)
plt.show()
```

- flexible and expressive data structures with NumPy and SciPy under the hood (so fast!)
- deals well with heterogeneous data sets and in particular labeled data
- well suited for large data sets including missing data
- very strong for ordered or un-ordered time series
- easy in- and output interfaces with multiple data formats, including Excel tables
### Think Excel for Python!
## Creating and manipulating pandas data frame
```
import pandas as pd
import numpy as np
# We want to record different measurements taken at different dates
dates = pd.date_range('20130101', periods=6)
print(dates)
# Lets now create a data frame object that has several columns and is indexed by the dates we created
df = pd.DataFrame(np.random.randn(6,4), index=dates, columns=list('ABCD'))
print(df)
# data frames can have completely different data types in each column
df2 = pd.DataFrame({ 'A' : 1.,
'B' : pd.Timestamp('20130102'),
'C' : pd.Series(1,index=list(range(4)),dtype='float32'),
'D' : np.array([3] * 4,dtype='int32'),
'E' : pd.Categorical(["test","train","test","train"]),
'F' : 'foo' })
print(df2)
#print(df2.dtypes)
# You can select columns by labels
print(df2["A"])
# We can perform statistical operations
print(df2.mean(numeric_only=True))  # mixed dtypes: average only the numeric columns
# We can apply numpy functions
print(df)
print(df.apply(np.cumsum))
# We can easily plot data
import matplotlib.pyplot as plt
df.plot()
plt.show()
```
```
%matplotlib inline
```
# Set theory and proofs
Mathematics, as it is taught at school, is usually taught by example. Pupils are asked to learn the various mathematical "laws" by rote. For example, we all know from school that $$3 + 5 = 8$$ and $$\frac{d}{dx} x^2 = 2x.$$ We rarely pause and ask, why is this the case, and what are we really doing?
In fact, a lot is going on in these equations. For example, we would have to spend a lot of time explaining what $dx$ "really" is in $$\frac{d}{dx} x^2 = 2x.$$ We would either have to introduce the notion of an **infinitesimal** or resort to Silvanus Thompson's explanation from *Calculus Made Easy* (1910):
<center>
<em>$dx$ means a little bit of $x$.</em>
</center>
This explanation may be sufficient if we want to *do* mathematics rather than *understand* it. Thompson uses the Ancient Simian Proverb as an epigraph to his work:
<center>
<em>What one fool can do, another can.</em>
</center>
<p/>
<center>
<img src="images/silvanus-thompson.png" width="150" alt="Silvanus Thompson">
</center>
What are we doing, what is mathematics anyway? **"Mathematics"** is not a "-logy", unlike many of the sciences. The word itself comes from the Ancient Greek μαθηματικός (mathēmatikós, "fond of learning"). Thus mathematics is... a form of learning? This definition is as abstract as mathematics itself...
It was probably G. H. Hardy in *A Mathematician's Apology* (1940) who defined mathematics as the "study of patterns":
<center>
<em>A mathematician, like a painter or a poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas.</em>
</center>
<p/>
<center>
<img src="images/g-h-hardy-quote-mathematical-reality.png" width="400" alt="G. H. Hardy">
</center>
Lynn Arthur Steen seconds him in *The Science of Patterns* (Science, 1988):
<center>
<em>Mathematics is often defined as the science of space and number, as the discipline rooted in geometry and arithmetic. Although the diversity of modern mathematics has always exceeded this definition, it was not until the recent resonance of computers and mathematics that a more apt definition became fully evident. Mathematics is the science of patterns. The mathematician seeks patterns in number, in space, in science, in computers, and in imagination.</em>
</center>
<p/>
<center>
<img src="images/lynn-steen-quote-science-of-patterns.png" width="400" alt="Lynn Arthur Steen">
</center>
According to this definition, mathematics is a *study of patterns*. And patterns are more general than numbers. Therefore, a number cannot be the most basic "unit" of study in mathematics. What is this most basic "unit"? What abstraction could we introduce to study patterns in all their various forms, including numbers?
In 1874, Georg Cantor introduced just such an abstraction in *On a Property of the Collection of All Real Algebraic Numbers*: the *set*. It has proved to be a very fruitful abstraction, allowing us, among other things, to formalise the notion of *infinity*.
<p/>
<center>
<img src="images/georg-cantor.png" alt="Georg Cantor">
</center>
## Sets
A **set** is (arguably) the most fundamental object in mathematics. It is a collection of distinct objects. For example, we could talk about the set of numbers from one up to ten, inclusive. We could give this set a name, say, $S$, and write it like so: $$S = \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10\}.$$ We refer to objects in a particular set as its **elements**. We have just *defined* a particular set by listing its elements. Thus 3 is an element of $S$, and we write $$3 \in S.$$ On the other hand, 11 is not an element of $S$, and we write $$11 \notin S.$$ Neither, in fact, is Joe: $$\text{Joe} \notin S.$$
## Finite and infinite sets
This particular set, $$S = \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10\},$$ is **finite**. Consider the set of all **natural numbers**: $$\mathbb{N} = \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, ...\}.$$ There is no way to list all of its elements, of course, but at least, if we had an infinite amount of time, we could enumerate (list) them: "one", "two", "three", and so on. We call such sets **countably infinite** or **denumerable**.
## Equivalence of sets
Two sets, $A$ and $B$ are said to be **equivalent** or **equinumerous** (we write $A \sim B$) if a one-to-one correspondence can be set up between all their elements. For example, the sets $A = \{1, 2, 3\}$ and $B = \{a, b, c\}$ are equivalent:
$$
1 \mapsto a, \\
2 \mapsto b, \\
3 \mapsto c.$$
This is not the only such one-to-one correspondence; for example, we could use this one:
$$
1 \mapsto b, \\
2 \mapsto a, \\
3 \mapsto c.
$$
On the other hand, $A = \{1, 2\}$ and $B = \{a, b, c\}$ are not equivalent: we need a *one-to-one* correspondence between *all* elements of $A$ and all the elements of $B$, but $A$ has fewer elements than $B$.
Finite sets are equivalent if and only if (or, to use Paul Halmos's abbreviation, **iff**) they have the same number of elements. In fact, equivalence is a generalisation of this notion (that the sets have the same number of elements) to sets with infinitely many elements.
Thus every countably infinite (denumerable) set is equivalent to the set of natural numbers, $\mathbb{N}$: enumerating a set or listing its elements is the same as finding a one-to-one correspondence between the elements of this set and natural numbers.
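To make the idea of a one-to-one correspondence concrete, here is a small illustrative sketch in Python (the name `naturals_to_evens` is ours, purely for illustration): it pairs each natural number $n$ with the even number $2n$, exhibiting a correspondence between $\mathbb{N}$ and one of its proper subsets. Of course, we can only ever inspect a finite prefix of it.

```python
from itertools import count, islice

# Pair each natural number n with the even number 2n: a one-to-one
# correspondence between N and the even numbers.
def naturals_to_evens():
    for n in count(1):
        yield (n, 2 * n)

# The correspondence is infinite; inspect its first five pairs
print(list(islice(naturals_to_evens(), 5)))
# → [(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)]
```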
Are positive rational numbers (i.e. fractions $\frac{p}{q}$, $p, q \in \mathbb{N}$, $q \neq 0$) denumerable? What do you think?
First, note that positive rationals can be arranged in a table:
<table>
<tr><td>$p$:</td> <td></td> <td>$1$</td> <td></td> <td>$2$</td> <td></td> <td>$3$</td> <td></td> <td>$4$</td> <td></td> <td>$5$</td> <td></td> <td>$\ldots$</td> </tr>
<tr><td>$q$</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr><td>$1$</td> <td></td> <td>$\frac{1}{1}$</td> <td></td> <td>$\frac{2}{1}$</td> <td></td> <td>$\frac{3}{1}$</td> <td></td> <td>$\frac{4}{1}$</td> <td></td> <td>$\frac{5}{1}$</td> <td></td> <td>$\ldots$</td> </tr>
<tr><td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr><td>$2$</td> <td></td> <td>$\frac{1}{2}$</td> <td></td> <td>$\frac{2}{2}$</td> <td></td> <td>$\frac{3}{2}$</td> <td></td> <td>$\frac{4}{2}$</td> <td></td> <td>$\frac{5}{2}$</td> <td></td> <td>$\ldots$</td> </tr>
<tr><td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr><td>$3$</td> <td></td> <td>$\frac{1}{3}$</td> <td></td> <td>$\frac{2}{3}$</td> <td></td> <td>$\frac{3}{3}$</td> <td></td> <td>$\frac{4}{3}$</td> <td></td> <td>$\frac{5}{3}$</td> <td></td> <td>$\ldots$</td> </tr>
<tr><td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr><td>$4$</td> <td></td> <td>$\frac{1}{4}$</td> <td></td> <td>$\frac{2}{4}$</td> <td></td> <td>$\frac{3}{4}$</td> <td></td> <td>$\frac{4}{4}$</td> <td></td> <td>$\frac{5}{4}$</td> <td></td> <td>$\ldots$</td> </tr>
<tr><td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr><td>$5$</td> <td></td> <td>$\frac{1}{5}$</td> <td></td> <td>$\frac{2}{5}$</td> <td></td> <td>$\frac{3}{5}$</td> <td></td> <td>$\frac{4}{5}$</td> <td></td> <td>$\frac{5}{5}$</td> <td></td> <td>$\ldots$</td> </tr>
<tr><td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr><td>$\vdots$</td> <td></td> <td>$\vdots$</td> <td></td> <td>$\vdots$</td> <td></td> <td>$\vdots$</td> <td></td> <td>$\vdots$</td> <td></td> <td>$\vdots$</td> <td></td> <td>$\ddots$</td> </tr>
</table>
Not all entries in this table are distinct, for example, $\frac{1}{1} = \frac{2}{2} = \frac{3}{3} = \frac{4}{4} = \frac{5}{5}$, $\frac{2}{4} = \frac{1}{2}$, etc. Let us erase all rational numbers that have non-trivial common factors in the numerator and denominator, while keeping the first occurrence. All rational numbers in our table are now unique; if we continue the table indefinitely, it will include all positive rational numbers:
<table>
<tr><td>$p$:</td> <td></td> <td>$1$</td> <td></td> <td>$2$</td> <td></td> <td>$3$</td> <td></td> <td>$4$</td> <td></td> <td>$5$</td> <td></td> <td>$\ldots$</td> </tr>
<tr><td>$q$</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr><td>$1$</td> <td></td> <td>$\frac{1}{1}$</td> <td></td> <td>$\frac{2}{1}$</td> <td></td> <td>$\frac{3}{1}$</td> <td></td> <td>$\frac{4}{1}$</td> <td></td> <td>$\frac{5}{1}$</td> <td></td> <td>$\ldots$</td> </tr>
<tr><td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr><td>$2$</td> <td></td> <td>$\frac{1}{2}$</td> <td></td> <td></td> <td></td> <td>$\frac{3}{2}$</td> <td></td> <td></td> <td></td> <td>$\frac{5}{2}$</td> <td></td> <td>$\ldots$</td> </tr>
<tr><td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr><td>$3$</td> <td></td> <td>$\frac{1}{3}$</td> <td></td> <td>$\frac{2}{3}$</td> <td></td> <td></td> <td></td> <td>$\frac{4}{3}$</td> <td></td> <td>$\frac{5}{3}$</td> <td></td> <td>$\ldots$</td> </tr>
<tr><td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr><td>$4$</td> <td></td> <td>$\frac{1}{4}$</td> <td></td> <td></td> <td></td> <td>$\frac{3}{4}$</td> <td></td> <td></td> <td></td> <td>$\frac{5}{4}$</td> <td></td> <td>$\ldots$</td> </tr>
<tr><td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr><td>$5$</td> <td></td> <td>$\frac{1}{5}$</td> <td></td> <td>$\frac{2}{5}$</td> <td></td> <td>$\frac{3}{5}$</td> <td></td> <td>$\frac{4}{5}$</td> <td></td> <td></td> <td></td> <td>$\ldots$</td> </tr>
<tr><td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr><td>$\vdots$</td> <td></td> <td>$\vdots$</td> <td></td> <td>$\vdots$</td> <td></td> <td>$\vdots$</td> <td></td> <td>$\vdots$</td> <td></td> <td>$\vdots$</td> <td></td> <td>$\ddots$</td> </tr>
</table>
Finally, we associate these numbers with the natural numbers $1, 2, 3, \ldots$. We start in the top-left corner of the table and then follow the arrows:
<table>
<tr><td>$p$:</td> <td></td> <td>$1$</td> <td></td> <td>$2$</td> <td></td> <td>$3$</td> <td></td> <td>$4$</td> <td></td> <td>$5$</td> <td></td> <td>$\ldots$</td> </tr>
<tr><td>$q$</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr><td>$1$</td> <td></td> <td>$\frac{1}{1}$</td> <td></td> <td>$\frac{2}{1}$</td> <td>$\rightarrow$</td> <td>$\frac{3}{1}$</td> <td></td> <td>$\frac{4}{1}$</td> <td>$\rightarrow$</td> <td>$\frac{5}{1}$</td> <td></td> <td>$\ldots$</td> </tr>
<tr><td></td> <td></td> <td>$\downarrow$</td> <td>$\nearrow$</td> <td></td> <td>$\swarrow$</td> <td></td> <td>$\nearrow$</td> <td></td> <td>$\swarrow$</td> <td></td> <td></td> <td></td> </tr>
<tr><td>$2$</td> <td></td> <td>$\frac{1}{2}$</td> <td></td> <td>$\swarrow$</td> <td></td> <td>$\frac{3}{2}$</td> <td></td> <td>$\swarrow$</td> <td></td> <td>$\frac{5}{2}$</td> <td></td> <td>$\ldots$</td> </tr>
<tr><td></td> <td></td> <td></td> <td>$\swarrow$</td> <td></td> <td>$\nearrow$</td> <td></td> <td>$\swarrow$</td> <td></td> <td>$\nearrow$</td> <td></td> <td></td> <td></td> </tr>
<tr><td>$3$</td> <td></td> <td>$\frac{1}{3}$</td> <td></td> <td>$\frac{2}{3}$</td> <td></td> <td>$\swarrow$</td> <td></td> <td>$\frac{4}{3}$</td> <td></td> <td>$\frac{5}{3}$</td> <td></td> <td>$\ldots$</td> </tr>
<tr><td></td> <td></td> <td>$\downarrow$</td> <td>$\nearrow$</td> <td></td> <td>$\swarrow$</td> <td></td> <td>$\nearrow$</td> <td></td> <td>$\swarrow$</td> <td></td> <td></td> <td></td> </tr>
<tr><td>$4$</td> <td></td> <td>$\frac{1}{4}$</td> <td></td> <td>$\swarrow$</td> <td></td> <td>$\frac{3}{4}$</td> <td></td> <td>$\swarrow$</td> <td></td> <td>$\frac{5}{4}$</td> <td></td> <td>$\ldots$</td> </tr>
<tr><td></td> <td></td> <td></td> <td>$\swarrow$</td> <td></td> <td>$\nearrow$</td> <td></td> <td>$\swarrow$</td> <td></td> <td>$\nearrow$</td> <td></td> <td></td> <td></td> </tr>
<tr><td>$5$</td> <td></td> <td>$\frac{1}{5}$</td> <td></td> <td>$\frac{2}{5}$</td> <td></td> <td>$\frac{3}{5}$</td> <td></td> <td>$\frac{4}{5}$</td> <td></td> <td>$\swarrow$</td> <td></td> <td>$\ldots$</td> </tr>
<tr><td></td> <td></td> <td>$\downarrow$</td> <td>$\nearrow$</td> <td></td> <td>$\swarrow$</td> <td></td> <td>$\nearrow$</td> <td></td> <td>$\swarrow$</td> <td></td> <td></td> <td></td> </tr>
<tr><td>$\vdots$</td> <td></td> <td>$\vdots$</td> <td></td> <td>$\vdots$</td> <td></td> <td>$\vdots$</td> <td></td> <td>$\vdots$</td> <td></td> <td>$\vdots$</td> <td></td> <td>$\ddots$</td> </tr>
</table>
It is easy to see that all *negative* rational numbers are also denumerable: we simply put the minus sign before the fractions in the tables above.
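The diagonal walk through the table can itself be coded up. The sketch below (the function name `positive_rationals` is ours) enumerates the positive rationals along anti-diagonals $p + q = s$; the exact zigzag order differs from the arrows above, but any path that visits every cell works. `Fraction` reduces to lowest terms automatically, which lets us skip repeats:

```python
from fractions import Fraction
from itertools import count, islice

def positive_rationals():
    seen = set()
    for s in count(2):              # walk anti-diagonals p + q = s
        for p in range(1, s):
            q = s - p
            f = Fraction(p, q)      # automatically in lowest terms
            if f not in seen:       # skip repeats like 2/4 = 1/2
                seen.add(f)
                yield f

# The first ten rationals in this enumeration
print([str(f) for f in islice(positive_rationals(), 10)])
# → ['1', '1/2', '2', '1/3', '3', '1/4', '2/3', '3/2', '4', '1/5']
```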
What about all **real numbers**, including positive and negative rationals, such as $\frac{1}{2}$ and $-\frac{1}{2}$, zero, and irrationals, such as $\sqrt{2}$, $\pi = 3.1415926535\ldots$ and $e = 2.7182818284\ldots$? Clearly, this set (denoted $\mathbb{R}$) is also infinite. Can we enumerate it?
We know that all real numbers can be written as decimal fractions, e.g. $\pi = 3.1415926535...$. Suppose we can enumerate all real numbers between 0 and 1, inclusive:
$$
a_1 = 0.a_{11}a_{12}a_{13}a_{14}a_{15}\ldots, \\
a_2 = 0.a_{21}a_{22}a_{23}a_{24}a_{25}\ldots, \\
a_3 = 0.a_{31}a_{32}a_{33}a_{34}a_{35}\ldots, \\
\vdots \\
a_k = 0.a_{k1}a_{k2}a_{k3}a_{k4}a_{k5}\ldots, \\
\vdots \\
$$
Here $a_{11}, a_{12}, a_{13}, a_{14}, a_{15}, \ldots, a_{21}, a_{22}, a_{23}, \ldots$ are all decimal digits, $0, 1, 2, 3, \ldots 9$.
If indeed it is possible to enumerate all real numbers between 0 and 1, then all of them appear on our list. But consider the number $b$ which differs from $a_1$ in the first digit (so that digit is anything but $a_{11}$), from $a_2$ in the second digit (so that digit is anything but $a_{22}$), from $a_3$ in the third digit (so that digit is anything but $a_{33}$), and so on. We have highlighted these digits below:
$$
a_1 = 0.\mathbf{a_{11}}a_{12}a_{13}a_{14}a_{15}\ldots, \\
a_2 = 0.a_{21}\mathbf{a_{22}}a_{23}a_{24}a_{25}\ldots, \\
a_3 = 0.a_{31}a_{32}\mathbf{a_{33}}a_{34}a_{35}\ldots, \\
\vdots \\
a_k = 0.a_{k1}a_{k2}a_{k3}a_{k4}a_{k5}\ldots \mathbf{a_{kk}}\ldots, \\
\vdots \\
$$
By construction, $b$ differs from *all* numbers on our list, therefore $b$ is *not* on our list. So our attempt to enumerate all numbers between 0 and 1 (let alone *all* real numbers!) has failed.
We have just shown that real numbers are not enumerable using the so-called **Cantor's diagonal slash argument**. It is a "diagonal slash" for obvious reasons, while it's "Cantor's" because the aforementioned Georg Cantor discovered it.
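The diagonal construction is easy to act out on a finite prefix of a purported enumeration. In this illustrative sketch (the list of digit strings is made up, and we gloss over the subtlety of numbers with two decimal representations, such as $0.0999\ldots = 0.1$), we change the $k$-th digit of the $k$-th number, so the resulting $b$ differs from every number on the list:

```python
# A purported (finite prefix of an) enumeration of reals in [0, 1],
# written as strings of decimal digits
listing = [
    "1415926535",
    "7182818284",
    "4142135623",
    "0000000000",
]

def diagonal_counterexample(listing):
    # Change the k-th digit of the k-th number (here: add 1 mod 10)
    digits = [str((int(row[k]) + 1) % 10) for k, row in enumerate(listing)]
    return "0." + "".join(digits)

b = diagonal_counterexample(listing)
print(b)  # → 0.2251

# By construction, b differs from every row in its diagonal digit
for k, row in enumerate(listing):
    assert b[2 + k] != row[k]
```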
Incidentally, Cantor's set theory, which has become the foundation of modern mathematics, was initially rejected by many prominent mathematicians. Leopold Kronecker, for instance, said:
<center>
<em>I don't know what predominates in Cantor's theory — philosophy or theology, but I am sure that there is no mathematics there.</em>
</center>
Thus the set $\mathbb{R}$, just like the set $\mathbb{N}$, is infinite, but it is even "more" infinite than $\mathbb{N}$: it is **uncountable** or **uncountably infinite**: we can't even enumerate it! In set theory this idea of *different kinds of infinity* generalises further to the notion of **cardinality**.
## Subsets and supersets
Notice that all elements of our example set $$S = \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10\}$$ are also elements of $\mathbb{N}$. We say that $S$ is a **subset** of $\mathbb{N}$ and write $$S \subseteq \mathbb{N}.$$ Of course, $$\mathbb{N} \nsubseteq S.$$
We could, equivalently, say that $\mathbb{N}$ is a **superset** of $S$ and write $$\mathbb{N} \supseteq S.$$
Similarly, $$\mathbb{N} \subseteq \mathbb{R},$$ which is the same thing as $$\mathbb{R} \supseteq \mathbb{N},$$ and $$\mathbb{R} \nsubseteq \mathbb{N},$$ which is the same thing as $$\mathbb{N} \nsupseteq \mathbb{R}.$$
Since $S$ is a subset of $\mathbb{N}$, we could use the following "syntactic sugar" to define $S$:
$$S = \{x \, | \, x \in \mathbb{N}, x \leq 10\}.$$
We read "$|$" as "such that". Thus, instead of listing all elements of a set, we could define it by mentioning a particular **property** of its elements, such as $x \leq 10$.
We can write $$|S| = 10$$ to indicate that $S$ has exactly 10 elements.
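Incidentally, Python's set comprehensions mirror this set-builder notation quite closely. A small sketch, restricting the search to a finite range since we cannot iterate over all of $\mathbb{N}$:

```python
# S = {x | x in N, x <= 10}, with N restricted to a finite range
S = {x for x in range(1, 101) if x <= 10}
print(sorted(S))  # → [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(len(S))     # → 10, i.e. |S| = 10
```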
## Equality of sets
Any two sets $A$ and $B$ are equal, $A = B$, iff (if and only if) $A \subseteq B$ *and* $B \subseteq A$.
We shall add that we consider only distinct objects when talking about elements of sets. Repeats are not allowed. Thus $\{2, 2\}$ is really the same set as $\{2\}$ and it is deemed to contain exactly one element.
Nor do we care about the order of elements in a set: $\{1, 2, 3\}$ and $\{3, 2, 1\}$, for instance, are deemed to be equal.
A set containing a single element, such as $\{5\}$, is called a **singleton** set. We distinguish sets from their elements: for $A = \{1, 2, 3, 4, 5\}$, we have $5 \in A$, whereas $\{5\} \notin A$. This is because $\{5\}$ is *not* the *number five*, it is a *set containing* the number five.
In the programming language Python there is a data structure called `set`, which works according to principles that mimic those of a mathematical set.
```
A = set([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
A
set([2, 3, 4]).issubset(A)
A.issubset(set([2, 3, 4]))
set([2, 3, 4]) == A
set([1, 2, 3]) == set([3, 2, 1])
set([2, 2]) == set([2])
set([2, 2])
len(set([2, 2]))
B = set([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1, 1, 1])
A.issubset(B)
B.issubset(A)
A == B
len(B)
set(['foo', 'bar', 'baz', 1, 2, 3, 4, 5, 1./7.])
```
## Proof by contradiction
When we were asked whether the set of positive rational numbers was denumerable, we simply enumerated its elements. We **proved** the statement "the set of rational numbers is denumerable" by actually finding — constructing — the requisite enumeration. Such a proof is known as a **constructive proof**: the existence of a mathematical object (in this case, the one-to-one correspondence between the natural numbers and the positive rational numbers) is demonstrated by creating or providing a method for creating the object.
**Cantor's diagonal slash argument** is an example of a different kind of proof — **proof by contradiction** or, in Latin, ***reductio ad absurdum***.
In *A Mathematician's Apology*, G. H. Hardy described proof by contradiction as "one of a mathematician's finest weapons", saying
<center>
<em>It is a far finer gambit than any chess gambit: a chess player may offer the sacrifice of a pawn or even a piece, but a mathematician offers the game.</em>
</center>
Here is another example. This one is due to Euclid (c. 300 BC).
<p/>
<center>
<img src="images/euclid.png" alt="Euclid">
</center>
Recall that a prime is a natural number greater than 1 that cannot be formed by multiplying two smaller natural numbers. The first few primes are 2, 3, 5, 7, 11, 13, 17, 19, 23. Indeed, we can quickly come up with a list of primes using Python list comprehensions (in practice there are *far* more efficient algorithms for finding primes):
```
[x for x in range(2, 100) if all(x % y != 0 for y in range(2, x))]
```
How do you prove that there are infinitely many primes, i.e. the set of all primes is infinite (obviously, countably infinite, since it is a subset of natural numbers, a countably infinite set)?
Since we are talking about proofs by contradiction, it may be sensible to assume that that's how we shall proceed. Assume *for a contradiction* that $P$, the set of *all* primes, is finite. Say, there exist exactly $n$ primes, $n \in \mathbb{N}$:
$$P = \{p_1, p_2, p_3, \ldots, p_n\}.$$
Multiplying the elements of $P$ together, we obtain another number. Let us add 1 to that number to obtain
$$a = p_1 p_2 p_3 \ldots p_n + 1.$$
Clearly this number is greater than any of the elements of $P$, so it is not in $P$. Since, by our assumption, $P$ contains all primes, $a$ is not a prime. Then there exists some prime in $P$, say, $p_k$, $1 \leq k \leq n$, that divides $a$. Then,
$$a = p_k \cdot b, \quad b \in \mathbb{N}.$$
But then
$$
\begin{align}
1 &= a - p_1 p_2 p_3 \ldots p_n \\
&= p_k \cdot b - p_1 \ldots p_{k-1} p_k p_{k+1} \ldots p_n \\
&= p_k (b - p_1 \ldots p_{k-1} p_{k+1} \ldots p_n).
\end{align}
$$
In other words, $p_k$ divides 1. Since no natural number other than 1 divides 1, we have a contradiction. Our assumption that $P$ is finite must have been wrong. There are therefore infinitely many primes. **Q.E.D.** (which signifies the end of the proof and is an initialism of the Latin phrase *quod erat demonstrandum*, "what was to be demonstrated").
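The construction in this proof is easy to explore numerically. Note a common misreading: $a = p_1 p_2 \cdots p_n + 1$ need not itself be prime; the proof only needs it to have a prime factor outside $P$. A small sketch (the helper name `smallest_prime_factor` is ours):

```python
def smallest_prime_factor(n):
    # Trial division; fine for small n
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

for primes in ([2, 3], [2, 3, 5], [2, 3, 5, 7, 11, 13]):
    a = 1
    for p in primes:
        a *= p
    a += 1
    f = smallest_prime_factor(a)
    assert f not in primes   # a's prime factors lie outside the list
    print(primes, "->", a, "has prime factor", f)
# For [2, 3, 5, 7, 11, 13]: a = 30031 = 59 * 509, which is not prime
```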
Instead of Q.E.D., people sometimes put the **Halmos symbol**, □, at the end of the proof.
Here is yet another example: prove that $\sqrt{2}$ is not a rational number, i.e. that $\sqrt{2}$ is irrational.
*Assume for a contradiction* that $\sqrt{2}$ is rational. Then we can write it as $\sqrt{2} = \frac{p}{q}$, where $p$ and $q$ are natural numbers, $q \neq 0$. Further, we can assume that $\frac{p}{q}$ is a fraction in lowest terms, i.e. there is no prime that divides both $p$ and $q$. (Thus $\frac{3}{9}$ is not in lowest terms, since $3$ divides both $3$ and $9$, whereas $\frac{1}{3}$ is.)
Since $\sqrt{2} = \frac{p}{q}$, on squaring both sides, we obtain $2 = \frac{p^2}{q^2}$, hence $p^2 = 2q^2$. Since the right-hand side is even, the left-hand side must be even. Therefore $p$ is even, say $p = 2a$ for some $a \in \mathbb{N}$. But then $p^2 = 4a^2$, so $4a^2 = 2q^2$, whence $q^2 = 2a^2$, thus $q$ is even.
This contradicts our assumption that $\frac{p}{q}$ is a fraction in lowest terms. We have reached the contradiction. Therefore $\sqrt{2}$ cannot be written as a fraction in the form $\frac{p}{q}$. In other words, $\sqrt{2}$ is irrational. □
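A brute-force search is no substitute for the proof, but it is a reassuring sanity check: $p^2 = 2q^2$ has no solutions among small natural numbers (the search bounds below are arbitrary):

```python
# Look for natural numbers p, q with p^2 == 2 * q^2,
# i.e. sqrt(2) == p / q; the proof says none exist
solutions = [(p, q)
             for q in range(1, 500)
             for p in range(1, 1000)
             if p * p == 2 * q * q]
print(solutions)  # → []
```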
That $\sqrt{2}$ is irrational was discovered by Pythagoras and his followers, i.e. the Pythagoreans, in the 6th century BC. Prior to this discovery, people believed that all numbers were rational, i.e. could be expressed as simple fractions.
<p/>
<center>
<img src="images/pythagoras.png" width="300" alt="Pythagoras">
</center>
The discovery of irrational numbers is said to have been shocking to the Pythagoreans (as it violated their mystical worldview). They kept this discovery secret. Hippasus of Metapontum divulged this secret and is supposed to have drowned at sea, apparently as a punishment from the gods for divulging it.
<p/>
<center>
<img src="images/hippasus-of-metapontum.png" width="300" alt="Hippasus of Metapontum">
</center>
## Russell's paradox and Zermelo-Fraenkel set theory
Our treatment of set theory here is informal; rigorous set theory is built from axioms. Refer to *Naïve Set Theory* by Paul Halmos for a more detailed overview.
<p/>
<center>
<img src="images/paul-halmos.png" width="150" alt="Paul Halmos">
</center>
Now, suppose that we have a set containing all sets that are not elements of themselves. Symbolically, let $$R = \{x \, | \, x \notin x\}.$$ Is $R$ an element of itself?
If it were, $R \in R$, then we could substitute it for $x$ in "$x \, | \, x \notin x$", and so $R \notin R$.
If it weren't, then $R \notin R$, and, by definition, since $R$ is not an element of itself, it is in $R$, $R \in R$.
Thus we have a paradox: $R \in R \Leftrightarrow R \notin R$.
This particular paradox was discovered by Bertrand Russell in 1901 and bears his name, so it is called **Russell's paradox**.
<p/>
<center>
<img src="images/bertrand-russell.png" width="300" alt="Bertrand Russell">
</center>
This paradox confounded the so-called **naïve set theorists**, who did not know how to deal with it. Eventually, in 1908, Ernst Zermelo proposed an axiomatisation of set theory that avoided the paradoxes of naïve set theory, which was later elaborated by Abraham Fraenkel, Thoralf Skolem, and Zermelo himself. The result became known as **Zermelo-Fraenkel set theory** (**ZF**; with the axiom of choice added, **ZFC**). It is ZFC that remains the canonical axiomatic set theory to this day.
We won't go into the details of how Russell's paradox is resolved in ZFC, but we shall prefer to talk about **collections** or **families** of sets, rather than sets of sets.
## Union and intersection
The **union** of two sets $A$ and $B$ is the set of elements which are in $A$, in $B$, or in both $A$ and $B$: $$A \cup B = \{x \, | \, x \in A \text{ or } x \in B\}.$$
We can check that, in Python,
```
A = set([3, 5, 7, 9])
B = set([1, 2, 3])
A.union(B)
```
The **intersection** of two sets $A$ and $B$ is the set of elements which are in $A$ *and* in $B$: $$A \cap B = \{x \, | \, x \in A \text{ and } x \in B\}.$$
In Python,
```
A = set([3, 5, 7, 9])
B = set([1, 2, 3])
A.intersection(B)
```
**Venn diagrams** are helpful in visualising sets, including set unions and intersections (which are, of course, themselves sets).
The Python package `matplotlib_venn` is helpful in constructing them. It can be installed using `pip install matplotlib-venn`.
```
from matplotlib_venn import venn2

# First diagram: highlight the intersection, A ∩ B (the '11' region only).
v = venn2(subsets=(2, 2, 1))
v.get_label_by_id('01').set_text('')
v.get_patch_by_id('01').set_linewidth(2)
v.get_patch_by_id('01').set_edgecolor('black')
v.get_patch_by_id('01').set_facecolor('white')
v.get_label_by_id('10').set_text('')
v.get_patch_by_id('10').set_linewidth(2)
v.get_patch_by_id('10').set_edgecolor('black')
v.get_patch_by_id('10').set_facecolor('white')
v.get_label_by_id('11').set_text(r'$A \cap B$')
v.get_patch_by_id('11').set_linewidth(2)
v.get_patch_by_id('11').set_edgecolor('black')
v.get_patch_by_id('11').set_facecolor('#ff0000')

# Second diagram: highlight the union, A ∪ B (all three regions).
v = venn2(subsets=(2, 2, 1))
v.get_label_by_id('01').set_text('')
v.get_patch_by_id('01').set_linewidth(2)
v.get_patch_by_id('01').set_edgecolor('black')
v.get_patch_by_id('01').set_facecolor('#ff0000')
v.get_label_by_id('10').set_text('')
v.get_patch_by_id('10').set_linewidth(2)
v.get_patch_by_id('10').set_edgecolor('black')
v.get_patch_by_id('10').set_facecolor('#ff0000')
v.get_label_by_id('11').set_text(r'$A \cup B$')
v.get_patch_by_id('11').set_linewidth(2)
v.get_patch_by_id('11').set_edgecolor('black')
v.get_patch_by_id('11').set_facecolor('#ff0000')
```
John Venn (1834 - 1923) was an English logician and philosopher. He introduced the diagrams that would later bear his name in an 1880 paper entitled *On the Diagrammatic and Mechanical Representation of Propositions and Reasonings*.
<p/>
<center>
<img src="images/john-venn.png" width="200" alt="John Venn">
</center>
There is a stained glass window at Gonville and Caius College, Cambridge, where Venn studied and worked, commemorating Venn and the Venn diagram.
<p/>
<center>
<img src="images/john-venn-stained-glass.png" width="200" alt="A stained glass window honouring John Venn">
</center>
The union and intersection may be extended to more than two sets. For example, let $G$ be the set of Greek uppercase letter glyphs,
<font size="1">
$$G = \{A, B, \Gamma, \Delta, E, Z, H, \Theta, I, K, \Lambda, M, N, \Xi, O, \Pi, P, \Sigma, T, Y, \Phi, X, \Psi, \Omega\},$$
</font>
$E$ be the set of English uppercase letter glyphs,
<font size="1">
$$E = \{A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z\},$$
</font>
and $R$ be the set of Russian uppercase letter glyphs,
<font size="1">
$$R = \{\text{А, Б, В, Г, Д, Е, Ё, Ж, З, И, Й, К, Л, М, Н, О, П, Р, С, Т, У, Ф, Х, Ц, Ч, Ш, Щ, Ъ, Ы, Ь, Э, Ю, Я}\}.$$
</font>
Then their intersection is given by
<font size="1">
$$G \cap E \cap R = \{A, B, E, H, K, M, O, P, T, X, Y\}.$$
</font>
<p/>
<center>
<img src="images/sets-of-uppercase-letter-glyphs.png" width="300" alt="Sets of uppercase letter glyphs">
</center>
We can also take unions and intersections of infinitely many sets. Let $N_2$ denote the set of positive multiples of 2,
$$N_2 = \{2, 4, 6, 8, 10, 12, \ldots\},$$
$N_3$ denote the set of positive multiples of 3,
$$N_3 = \{3, 6, 9, 12, 15, 18, \ldots\},$$
and so on. Then we can write
$$\mathbb{N} = \{1\} \cup \bigcup_{i=2}^{\infty} N_i.$$
Since here we have taken a union of countably many sets (we can enumerate $N_2, N_3, N_4, \ldots$), we refer to $\bigcup_{i=2}^{\infty} N_i$ as a **countable union**.
It is also possible to define uncountable, or **arbitrary**, unions and intersections.
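We can check the decomposition $\mathbb{N} = \{1\} \cup \bigcup_{i=2}^{\infty} N_i$ on a finite initial segment in Python (a sketch: everything is truncated at an assumed bound of 100, since we cannot enumerate infinite sets):

```
# Truncate every set at an assumed bound to get finite approximations.
limit = 100
union = {1}
for i in range(2, limit + 1):
    union |= set(range(i, limit + 1, i))  # positive multiples of i, up to limit

assert union == set(range(1, limit + 1))  # {1, ..., 100} is recovered
```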
Unions and intersections are connected by the following relations:
$$
(A \cup B) \cap C = (A \cap C) \cup (B \cap C), \\
(A \cap B) \cup C = (A \cup C) \cap (B \cup C).
$$
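These distributive laws are easy to spot-check in Python on small example sets (a sanity check on particular sets, not a proof):

```
A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}

# (A ∪ B) ∩ C = (A ∩ C) ∪ (B ∩ C)
assert (A | B) & C == (A & C) | (B & C)
# (A ∩ B) ∪ C = (A ∪ C) ∩ (B ∪ C)
assert (A & B) | C == (A | C) & (B | C)
```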
## Set difference, De Morgan's laws
Another important operation on sets is the **set difference**,
$$A \setminus B = \{x \, | \, x \in A \text{ and } x \notin B\}.$$
If we assume that all the sets that we are considering are subsets of some large set $\Omega$, we may write $A^{\complement}$, $A'$, or $\overline{A}$ instead of $\Omega \setminus A$ and refer to $\Omega \setminus A$ as the **complement** of $A$ (in $\Omega$).
The following relations, known as **De Morgan's laws**, named after the 19th-century British mathematician Augustus De Morgan, play an important part in **set theory**:
$$\overline{A \cup B} = \overline{A} \cap \overline{B},$$
and
$$\overline{A \cap B} = \overline{A} \cup \overline{B}.$$
They can be generalised to arbitrary unions and intersections of infinitely many sets.
Here is an exercise for you: *prove De Morgan's laws*.
Assume that an element belongs to the set on the left-hand side and show that it also belongs to the set on the right-hand side (so the set on the left-hand side is a subset of the set on the right-hand side). Then assume that an element belongs to the set on the right-hand side and show that it also belongs to the set on the left-hand side (so the set on the right-hand side is a subset of the set on the left-hand side). If two sets are subsets of each other, then they are equal.
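Before (or after) attempting the proof, we can spot-check De Morgan's laws in Python, taking complements within an assumed universe `Omega` via the set difference operator `-`:

```
Omega = set(range(10))
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

def complement(S):
    # Complement taken within the universe Omega, using set difference.
    return Omega - S

assert complement(A | B) == complement(A) & complement(B)
assert complement(A & B) == complement(A) | complement(B)
```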
## Cartesian products
The Cartesian product of two sets $A$ and $B$, written $A \times B$, is the set of all ordered pairs $(a, b)$ where $a \in A$ and $b \in B$.
For example, the Cartesian product of the sets $A = \{\text{foo}, \text{bar}, \text{baz}\}$ and $B = \{3, 12\}$ is the set
$$A \times B = \{(\text{foo}, 3), (\text{bar}, 3), (\text{baz}, 3), (\text{foo}, 12), (\text{bar}, 12), (\text{baz}, 12)\}.$$
As another example, consider the 2-dimensional plane, the set of pairs $(x, y)$ with $x \in \mathbb{R}$, $y \in \mathbb{R}$.
More generally, Cartesian products can be defined for $n \in \mathbb{N}$ sets. For example, the 3-dimensional space, the set of triples $(x, y, z)$, $x \in \mathbb{R}$, $y \in \mathbb{R}$, $z \in \mathbb{R}$, is a 3-fold Cartesian product $\mathbb{R}^3 = \mathbb{R} \times \mathbb{R} \times \mathbb{R}$.
<p/>
<center>
<img src="images/rene-descartes.png" width="200" alt="Rene Descartes">
</center>
## Functions
A **binary relation** *R* between a set $X$ (the **set of departure**) and a set $Y$ (the **set of destination** or **codomain**) is specified by its **graph**, $G$, which is a set of ordered pairs $(x, y)$, a subset of the Cartesian product $X \times Y$. The binary relation is also known as a **mapping** or **correspondence**.
The statement $(x, y) \in G$ is read as **"$x$ is $R$-related to $y$"** and is denoted $xRy$ or $R(x, y)$. The order is important: $xRy$ does not necessarily imply $yRx$ for a particular binary relation $R$.
If $R$ is a binary relation between $X$ and $Y$, then the set $\{y \in Y \,|\, xRy \text{ for some } x \in X\}$ is called the **image** or **range**, of $R$. The set $\{x \in X \,|\, xRy \text{ for some } y \in Y\}$ is called the **domain** of $R$.
A **function** $f$ from a set $X$ to a set $Y$ (we sometimes write $f: X \rightarrow Y$) is a special case of a binary relation: it is defined by a set $G \subseteq X \times Y$ of ordered pairs $(x, y)$ such that, for *each* element $x \in X$ there corresponds *one, and only one,* pair $(x, y) \in G$, and we write $f(x) = y$ or $f: x \mapsto y$.
Suppose that $A = \{1, 2\}$, $B = \{a, b, c\}$.
Is the binary relation $f: A \rightarrow B$ defined by $G_f = \{(1, b), (2, a)\}$ a function?
<table>
<tr><td>*x*</td><td>*f(x)*</td></tr>
<tr><td>$1$</td><td>$b$</td></tr>
<tr><td>$2$</td><td>$a$</td></tr>
</table>
Indeed, it is a function: to *each* element of $A$, $f$ maps *exactly one* element of $B$.
Suppose that $A = \{1, 2\}$, $B = \{a, b, c\}$, as before.
Is the binary relation $g: A \rightarrow B$ defined by $G_g = \{(1, b), (1, a), (2, a)\}$ a function?
<table>
<tr><td>*x*</td><td>*g(x)*</td></tr>
<tr><td>$1$</td><td>$b$</td></tr>
<tr><td>$1$</td><td>$a$</td></tr>
<tr><td>$2$</td><td>$a$</td></tr>
</table>
No, it isn't: $1 \in A$ is mapped to two distinct elements of $B$, namely $a$ and $b$, so $g$ is not a function by definition.
Suppose that $A = \{1, 2\}$, $B = \{a, b, c\}$, as before.
Is the binary relation $h: A \rightarrow B$ defined by $G_h = \{(2, a)\}$ a function?
<table>
<tr><td>*x*</td><td>*h(x)*</td></tr>
<tr><td>$2$</td><td>$a$</td></tr>
</table>
No, it isn't: $h$ doesn't map $1 \in A$ to any element of $B$.
Suppose that $A = \{1, 2\}$, $B = \{a, b, c\}$, as before.
Is the binary relation $\alpha: A \rightarrow B$ defined by $G_{\alpha} = \{(1, a), (2, a)\}$ a function?
<table>
<tr><td>*x*</td><td>*$\alpha$(x)*</td></tr>
<tr><td>$1$</td><td>$a$</td></tr>
<tr><td>$2$</td><td>$a$</td></tr>
</table>
Indeed it is: to *each* element of $A$ there corresponds *exactly one* element of $B$.
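The defining property — *each* element of the domain appears as a first component exactly once — is mechanical enough to check in Python. Here `is_function` is a small helper written for this check, applied to the four graphs above:

```
def is_function(G, X):
    # G is a set of ordered pairs; a function on X must pair each x in X
    # with exactly one value.
    return all(sum(1 for (a, _) in G if a == x) == 1 for x in X)

A = {1, 2}
assert is_function({(1, 'b'), (2, 'a')}, A)                # f: a function
assert not is_function({(1, 'b'), (1, 'a'), (2, 'a')}, A)  # g: 1 is paired twice
assert not is_function({(2, 'a')}, A)                      # h: 1 is paired with nothing
assert is_function({(1, 'a'), (2, 'a')}, A)                # alpha: a function
```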
Now, consider the binary relation $s: \mathbb{R} \rightarrow \mathbb{R}$ defined, for all $x \in \mathbb{R}$, by $s(x) = x^2$ or, in another notation, $s: x \mapsto x^2$:
```
import numpy as np
import matplotlib.pyplot as plt
xs = np.linspace(-10., 10., 50)
plt.plot(xs, [x*x for x in xs]);
plt.xlabel('x')
plt.ylabel('s(x)');
```
Is this a function?
Indeed, $s$ is a function: to *each* $x \in \mathbb{R}$ there corresponds one, and only one (thus exactly one) $s(x) \in \mathbb{R}$.
What about the binary relation $r: \mathbb{R} \rightarrow \mathbb{R}$ defined, for all $x \in \mathbb{R}$, by $r(x) = \sqrt{x}$?
First of all, this definition doesn't quite make sense. $\sqrt{\cdot}$ is defined only for nonnegative real numbers.
Suppose we redefine $r: \mathbb{R} \rightarrow \mathbb{R}$: for all $x \geq 0$, $r(x) = \sqrt{x}$.
Is this now a function?
It's not a function $r: \mathbb{R} \rightarrow \mathbb{R}$, since it does not define a value $r(x)$ for *all* $x \in \mathbb{R}$, only for those $x$ that are nonnegative.
How could we fix the definition of $r$ so that it is a function?
One way to do it is to define it as $r: \mathbb{R}_{\geq 0} \rightarrow \mathbb{R}$, where $\mathbb{R}_{\geq 0}$ is the set of *nonnegative* real numbers.
```
xs = np.linspace(-10., 10., 50)
xs = [x for x in xs if x >= 0]
plt.plot(xs, [np.sqrt(x) for x in xs]);
plt.xlabel('x')
plt.ylabel('r(x)');
```
## Image and inverse image
Let $f: X \rightarrow Y$ be a function and $A \subseteq X$. Then the **image** $f[A]$ of $A$ under $f$ is the set $$f[A] = \{y \in Y \,|\, y = f(x) \text{ for some } x \in A\}.$$
This is consistent with the definition of the **image** of a function given above: the image of the function $f$ is the image $f[X]$ of the entire set $X$.
Let $f: X \rightarrow Y$ be a function and $B \subseteq Y$. Then the **preimage** or **inverse image** of $B$ under $f$ is the set $$f^{-1}[B] = \{x \in X \,|\, f(x) \in B\}.$$
For example, for $f: \mathbb{R} \rightarrow \mathbb{R}$, $f: x \mapsto x^2$, the inverse image of the singleton set $\{4\}$ is the set $\{-2, 2\}$. The inverse image of the set $\{4, 9\}$ is the set $\{-3, -2, 2, 3\}$.
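Both notions are straightforward to compute in Python for functions on finite domains (here `image` and `preimage` are helpers written for illustration; we restrict $x \mapsto x^2$ to the finite domain $\{-10, \ldots, 10\}$):

```
def image(f, A):
    # f[A]: the set of values f takes on A
    return {f(x) for x in A}

def preimage(f, domain, B):
    # f^{-1}[B]: the elements of the domain that f sends into B
    return {x for x in domain if f(x) in B}

def square(x):
    return x * x

domain = range(-10, 11)
assert preimage(square, domain, {4}) == {-2, 2}
assert preimage(square, domain, {4, 9}) == {-3, -2, 2, 3}
assert image(square, {-3, -2, 2, 3}) == {4, 9}
```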
The inverse image of the union of two sets is equal to the union of their inverse images:
$$f^{-1}[A \cup B] = f^{-1}[A] \cup f^{-1}[B].$$
Exercise: How would you prove this?
The inverse image of the intersection of two sets is equal to the intersection of their inverse images:
$$f^{-1}[A \cap B] = f^{-1}[A] \cap f^{-1}[B].$$
The image of the union of two sets is equal to the union of their images:
$$f[A \cup B] = f[A] \cup f[B].$$
Is the image of the intersection of two sets equal to the intersection of their images:
$$f[A \cap B] \overset{?}{=} f[A] \cap f[B].$$
The image of the intersection of two sets is, in general, *not* equal to the intersection of their images:
$$f[A \cap B] \neq f[A] \cap f[B].$$
To see this, consider $f: \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$, defined by $f: (x, y) \mapsto x$, a **projection** onto the $x$-axis.
Define $A = \{(x, 0) \,|\, 0 \leq x \leq 1\}$ and $B = \{(x, 1) \,|\, 0 \leq x \leq 1\}$.
The two sets do not intersect, or, in other words, their intersection is the so-called **empty set**: $A \cap B = \{\} = \emptyset$.
However, the images of the two sets coincide: $f[A] = f[B] = \{x \,|\, 0 \leq x \leq 1\}$.
We have just disproven the general identity
$$f[A \cap B] = f[A] \cap f[B]$$
by producing a particular **counterexample** — this is yet another proof method, very common in mathematics. □
## One-to-one, onto, bijections, and inverse functions
A function $f: X \rightarrow Y$ is **injective** or **one-to-one** if each element $y \in Y$ of its codomain is mapped to by at most one argument $x \in X$. We call **injective** functions **injections**.
It is **surjective** or **onto** if each possible element $y \in Y$ is mapped to by at least one argument $x \in X$. We call **surjective** functions **surjections**.
It is **bijective** if it is both injective and surjective (one-to-one and onto). We call such functions **bijections** or **one-to-one correspondences**.
A function $f: X \rightarrow Y$ is **invertible** if there exists a function $g: Y \rightarrow X$ such that $g(f(x)) = x$ for all $x \in X$ and $f(g(y)) = y$ for all $y \in Y$. We call $g$ an inverse of $f$ and sometimes denote it by $f^{-1}$.
One can check that a function is invertible iff it is a bijection.
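For finite sets, these properties can be checked mechanically. Here is a sketch in Python, using a dictionary's `get` method as the function (`is_injective` and the other checkers are helpers written here, not library functions):

```
def is_injective(f, X):
    # one-to-one: distinct arguments give distinct values
    return len({f(x) for x in X}) == len(set(X))

def is_surjective(f, X, Y):
    # onto: every element of Y is hit by some argument
    return {f(x) for x in X} == set(Y)

def is_bijective(f, X, Y):
    return is_injective(f, X) and is_surjective(f, X, Y)

A, B = {1, 2}, {'a', 'b', 'c'}
f = {1: 'b', 2: 'a'}.get      # the function f from the examples below
alpha = {1: 'a', 2: 'a'}.get  # the function alpha from the examples below

assert is_injective(f, A) and not is_surjective(f, A, B)
assert not is_injective(alpha, A) and not is_surjective(alpha, A, B)

C, D = {1, 2}, {'a', 'b'}
beta = {1: 'a', 2: 'b'}.get
assert is_bijective(beta, C, D)
```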
Suppose that $A = \{1, 2\}$, $B = \{a, b, c\}$.
Is the function $f: A \rightarrow B$ defined by $G_f = \{(1, b), (2, a)\}$ one-to-one, onto, a bijection, invertible?
<table>
<tr><td>*x*</td><td>*f(x)*</td></tr>
<tr><td>$1$</td><td>$b$</td></tr>
<tr><td>$2$</td><td>$a$</td></tr>
</table>
* It is one-to-one, since for each element in its image, $f[A]$, there corresponds a single element of $A$: to $a$, there corresponds 2, and to $b$, there corresponds 1.
* It is *not* onto, since no element of $A$ is mapped to $c \in B$.
* Since the function is one-to-one but *not* onto, it is not a bijection.
* Therefore it is not invertible. Indeed we could not define the inverse function $f^{-1}: B \rightarrow A$ since we wouldn't be able to map $c \in B$ to anything — it wouldn't be a function.
As before, $A = \{1, 2\}$, $B = \{a, b, c\}$.
Is the function $\alpha: A \rightarrow B$ defined by $G_{\alpha} = \{(1, a), (2, a)\}$ one-to-one, onto, a bijection, invertible?
<table>
<tr><td>*x*</td><td>*$\alpha$(x)*</td></tr>
<tr><td>$1$</td><td>$a$</td></tr>
<tr><td>$2$</td><td>$a$</td></tr>
</table>
* It is *not* one-to-one, since to $a \in \alpha[A]$ there correspond two elements of $A$, 1 and 2: $\alpha(1) = \alpha(2) = a$.
* It is *not* onto, since neither $b \in B$ nor $c \in B$ is the image of any element of $A$.
* Since the function is neither one-to-one nor onto, it is not a bijection.
* Therefore it is not invertible.
In fact, is there *any* bijection (and therefore *any* invertible function) between $A = \{1, 2\}$ and $B = \{a, b, c\}$?
Since $|A| \neq |B|$, there isn't!
But between $C = \{1, 2\}$ and $D = \{a, b\}$ there are two bijections. One is
$\beta: C \rightarrow D$ defined by $G_{\beta} = \{(1, a), (2, b)\}$:
<table>
<tr><td>*x*</td><td>*$\beta$(x)*</td></tr>
<tr><td>$1$</td><td>$a$</td></tr>
<tr><td>$2$</td><td>$b$</td></tr>
</table>
Its inverse is $\beta^{-1}: D \rightarrow C$ defined by $G_{\beta^{-1}} = \{(a, 1), (b, 2)\}$:
<table>
<tr><td>*y*</td><td>*$\beta^{-1}$(y)*</td></tr>
<tr><td>$a$</td><td>$1$</td></tr>
<tr><td>$b$</td><td>$2$</td></tr>
</table>
The other bijection between $C = \{1, 2\}$ and $D = \{a, b\}$ is $\gamma: C \rightarrow D$ defined by $G_{\gamma} = \{(1, b), (2, a)\}$:
<table>
<tr><td>*x*</td><td>*$\gamma$(x)*</td></tr>
<tr><td>$1$</td><td>$b$</td></tr>
<tr><td>$2$</td><td>$a$</td></tr>
</table>
Its inverse is $\gamma^{-1}: D \rightarrow C$ defined by $G_{\gamma^{-1}} = \{(a, 2), (b, 1)\}$:
<table>
<tr><td>*y*</td><td>*$\gamma^{-1}$(y)*</td></tr>
<tr><td>$a$</td><td>$2$</td></tr>
<tr><td>$b$</td><td>$1$</td></tr>
</table>
Since $\gamma$ is a bijection, so is $\gamma^{-1}$, and it is therefore invertible; the inverse of $\gamma^{-1}$ is $\gamma$.
## How natural numbers can be constructed from sets
We have mentioned that sets are the "most general" mathematical objects. But then we informally introduced other objects, such as ordered pairs. At first sight, ordered pairs are different from sets. Why do we then say that sets are the "most general" mathematical objects?
In fact, ordered pairs can be expressed as sets. There are several ways to do this; one of them is **Kuratowski's definition**, proposed in 1921 by Kazimierz Kuratowski:
$$(a, b) = \{\{a\}, \{a, b\}\}.$$
This definition can be used even when the two elements of the pair are identical:
$$(a, a) = \{\{a\}, \{a, a\}\} = \{\{a\}, \{a\}\} = \{\{a\}\}.$$
Triples can be defined as nested pairs:
$$(a, b, c) = (a, (b, c)),$$
and so on.
But what about *numbers*? Surely there is no way to use sets to define numbers?
In fact, we can define natural numbers and zero as follows:
$$
0 = \{\} = \emptyset, \\
1 = \{0\} = \{\emptyset\}, \\
2 = \{0, 1\} = \{\emptyset, \{\emptyset\}\}, \\
3 = \{0, 1, 2\} = \{\emptyset, \{\emptyset\}, \{\emptyset, \{\emptyset\}\}\}, \\
\vdots
$$
This definition, due to John von Neumann, is carried out within Zermelo-Fraenkel (ZF) set theory. We can then use the **Dedekind-Peano axioms** to define arithmetic for natural numbers in terms of set theory.
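Both constructions — Kuratowski pairs and the von Neumann naturals — can be mimicked in Python using `frozenset` (which, unlike `set`, is hashable and can therefore be nested). This is only a finite sketch of the set-theoretic definitions:

```
def pair(a, b):
    # Kuratowski's definition of an ordered pair as a set: (a, b) = {{a}, {a, b}}
    return frozenset({frozenset({a}), frozenset({a, b})})

assert pair(1, 2) != pair(2, 1)                   # order matters
assert pair(1, 1) == frozenset({frozenset({1})})  # (a, a) collapses to {{a}}

def von_neumann(n):
    # The von Neumann construction: n = {0, 1, ..., n-1}, starting from 0 = {}
    result = frozenset()
    for _ in range(n):
        result = frozenset(result | {result})
    return result

zero, one, two = von_neumann(0), von_neumann(1), von_neumann(2)
assert zero == frozenset()
assert one == frozenset({zero})
assert two == frozenset({zero, one})
assert len(von_neumann(4)) == 4  # each natural number has itself many elements
```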
Having constructed $\mathbb{N}$, we can then construct the integers $\mathbb{Z}$, the rationals $\mathbb{Q}$, and the reals $\mathbb{R}$, building on the foundation of set theory.
Proceeding onwards, we obtain... the rest of mathematics!
```
%matplotlib inline
from d2l import torch as d2l
import torch
```
https://pytorch.org/docs/stable/random.html
https://blog.csdn.net/sinat_37145472/article/details/94755405
https://numpy.org/doc/stable/reference/random/generated/numpy.random.normal.html
https://pytorch.org/docs/master/generated/torch.normal.html
http://preview.d2l.ai/d2l-en/master/chapter_optimization/sgd.html?highlight=random%20normal
```
# X = torch.normal(size=(1000, 2))            # TypeError: mean and std are required
# X = torch.normal(mean=0.0, size=(1000, 2))  # TypeError: std is still missing
X = torch.normal(0.0, 1, (1000, 2))           # works: mean=0.0, std=1, shape (1000, 2)
```
http://preview.d2l.ai/d2l-en/master/chapter_preliminaries/linear-algebra.html
```
A = torch.tensor([[1, 2], [-0.1, 0.5]])
b = torch.tensor([1, 2])
data = X.dot(A) + b
```
The `tensor.dot()` method was updated; it can only perform dot products on 1-D tensors.
https://blog.csdn.net/qq_36852840/article/details/105224662
This function does not broadcast.
https://pytorch.org/docs/stable/generated/torch.dot.html
https://stackoom.com/question/3ErAp/%E5%A6%82%E4%BD%95%E5%9C%A8PyTorch%E4%B8%AD%E5%B0%86%E7%9F%A9%E9%98%B5%E4%B9%98%E4%BB%A5%E5%90%91%E9%87%8F
```
# data = dot(X, A) + b        # NameError: the NumPy-style bare dot is not defined here
# data = torch.dot(X, A) + b  # RuntimeError: torch.dot only accepts 1-D tensors
data = torch.mm(X, A) + b     # works: 2-D matrix multiplication, then broadcast add
```
The original NumPy version, for reference:
```
X = np.random.normal(size=(1000, 2))
A = np.array([[1, 2], [-0.1, 0.5]])
b = np.array([1, 2])
data = X.dot(A) + b
```
```
data
len(data)
d2l.set_figsize()
d2l.plt.scatter(data[:100, 0].asnumpy(), data[:100, 1].asnumpy());
print(f'The covariance matrix is\n{np.dot(A.T, A)}')
d2l.set_figsize()
d2l.plt.scatter(data[:100, 0].numpy(), data[:100, 1].numpy());
d2l.plt.scatter(data[:100, 0].numpy(), data[:100, 1].numpy())
d2l.plt.scatter(data[:100, 0].numpy(), data[:100, 1].numpy());
d2l.plt.scatter(data[:100, 0].numpy(), data[:100, 1].numpy())
d2l.plt.scatter(data[:100, 0].numpy(), data[:100, 1].numpy());
print(f'The covariance matrix is\n{torch.mm(A.T, A)}')
batch_size = 8
data_iter = d2l.load_array((data,), batch_size)
net_G = nn.Sequential()
net_G.add(nn.Dense(2))
import torch.nn as nn
```
https://pytorch.org/docs/master/generated/torch.nn.Sequential.html
```
net_G = nn.Sequential()
net_G.add(nn.Dense(2))
```
https://discuss.pytorch.org/t/add-sequential-model-to-sequential/71765
```
net_G..add_module(nn.Dense(2))
net_G.add_module(nn.Dense(2))
```
https://pytorch.org/hub/pytorch_vision_densenet/
https://pypi.org/project/densenet-pytorch/
https://mxnet.incubator.apache.org/versions/1.6/api/python/docs/api/gluon/nn/index.html
https://discuss.pytorch.org/t/pytorch-equivalent-of-keras/29412
PyTorch deep learning in practice, GAN chapter
```
net_G.add_module(nn.Linear(2))
net_G.add_module(nn.Linear(2, n_out))
net_G.add_module(nn.Linear(2, 1))
```
.add_module expects a name and module as the arguments.
https://discuss.pytorch.org/t/add-module-missing-1-required-positional-argument-module/73999
```
net_G.add_module("Linear", nn.Linear(2, 1))
```
TODO:net_G.add_module(nn.Linear(2,
```
net_D = nn.Sequential()
https://pytorch.org/docs/master/generated/torch.nn.Sequential.html
https://pytorch.org/docs/master/generated/torch.nn.Tanh.html
net_D.add_module(nn.Tanh(5, 3),
nn.Tanh(3, 1),
nn.Tanh(1))
net_D.add_module("Tanh1",nn.Tanh(5, 3))
net_D.add_module("Tanh2",nn.Tanh(3, 1))
net_D.add_module("Tanh3",nn.Tanh(1, 1))
net_D.add_module("Tanh1",nn.Tanh(5))
net_D.add_module("Tanh2",nn.Tanh(3))
net_D.add_module("Tanh3",nn.Tanh(1))
net_D.add_module(nn.Tanh(5))
net_D.add_module(nn.Tanh(3))
net_D.add_module(nn.Tanh(1))
nn.Tanh(5)
```
Two arguments are passed when the class is instantiated: `self` and the input. Hence the error saying one argument was expected but two were given: https://blog.csdn.net/york1996/article/details/81875736
```
nn.Tanh()
```
https://discuss.pytorch.org/t/torch-tanh-vs-torch-nn-functional-tanh/15897/2
```
nn.functional.tanh(5)
nn.tanh()(5)
```
https://gist.github.com/zhanghang1989/3d646f71d60c17048cf8ad582393ac6c
```
net_D.add_module("Tanh1",nn.Linear(5, nonlinearity='tanh')
net_D.add_module("Tanh2",nn.Linear(3, nonlinearity='tanh')
net_D.add_module("Tanh3",nn.Linear(1, nonlinearity='tanh')
net_D.add_module("Tanh1",nn.Linear(5, nonlinearity='tanh')
net_D.add_module("Tanh2",nn.Linear(3, nonlinearity='tanh')
net_D.add_module("Tanh3",nn.Linear(1, nonlinearity='tanh')
net_D.add_module("Tanh1",nn.Linear(5, nonlinearity='tanh'))
net_D.add_module("Tanh2",nn.Linear(3, nonlinearity='tanh'))
net_D.add_module("Tanh3",nn.Linear(1, nonlinearity='tanh'))
```
https://docs.dgl.ai/en/0.4.x/api/python/nn.pytorch.html
```
net_D.add_module("Tanh1",nn.Linear(5, activation='tanh'))
net_D.add_module("Tanh2",nn.Linear(3, activation='tanh'))
net_D.add_module("Tanh3",nn.Linear(1, activation='tanh'))
```
https://discuss.pytorch.org/t/cant-recover-keras-results-using-pytorch/40035
```
net_D.add_module(torch.tanh(nn.Linear(5, 3))
net_D.add_module(torch.tanh(nn.Linear(5, 3)))
```
*Deep Learning with PyTorch* (Subramanian), DCGAN
```
net_D.add_module("Linear1t", nn.Linear(5, 3))
net_D.add_module("1t", nn.Tanh())
net_D.add_module("Linear2t", nn.Linear(3, 1))
net_D.add_module("2t", nn.Tanh())
net_D.add_module("Linear3t", nn.Linear(1, 1))
net_D.add_module("3t", nn.Tanh())
```
The original MXNet Gluon version, for reference:
```
net_D.add(nn.Dense(5, activation='tanh'),
          nn.Dense(3, activation='tanh'),
          nn.Dense(1))
```
```
#@save
def update_D(X, Z, net_D, net_G, loss, trainer_D):
    """Update discriminator."""
    batch_size = X.shape[0]
    ones = np.ones((batch_size,), ctx=X.ctx)
    zeros = np.zeros((batch_size,), ctx=X.ctx)
    with autograd.record():
        real_Y = net_D(X)
        fake_X = net_G(Z)
        # Do not need to compute gradient for `net_G`, detach it from
        # computing gradients.
        fake_Y = net_D(fake_X.detach())
        loss_D = (loss(real_Y, ones) + loss(fake_Y, zeros)) / 2
    loss_D.backward()
    trainer_D.step(batch_size)
    return float(loss_D.sum())
```
```
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
#
# load data and inspect
#
df = pd.read_csv('../Datasets/austin_weather.csv')
#
print(df.head())
print(df.tail())
df = df.loc[:, ['Date', 'TempAvgF']]
df.head()
#
# add some useful columns
#
df.loc[:, 'Year'] = df.loc[:, 'Date'].str.slice(0, 4).astype('int')
df.loc[:, 'Month'] = df.loc[:, 'Date'].str.slice(5, 7).astype('int')
df.loc[:, 'Day'] = df.loc[:, 'Date'].str.slice(8, 10).astype('int')
#
print(df.head())
print(df.tail())
#
# set a 20 day window then use that to smooth
# temperature in a new column
window = 20
df['20_d_mov_avg'] = df.TempAvgF.rolling(window).mean()
print(df.head())
print(df.tail())
#
# now let's slice exactly one year on the
# calendar start and end dates
# we see from the previous output that
# 2014 is the first year with complete data,
# however it will still have NaN values for
# the moving average, so we'll use 2015
#
df_one_year = df.loc[df.Year == 2015, :].reset_index()
df_one_year['Day_of_Year'] = df_one_year.index + 1
print(df_one_year.head())
print(df_one_year.tail())
fig = plt.figure(figsize=(10, 7))
ax = fig.add_axes([1, 1, 1, 1]);
#
# Raw data
#
ax.scatter(df_one_year.Day_of_Year,
df_one_year.TempAvgF,
label = 'Raw Data', c = 'k')
#
# Moving averages
#
ax.plot(df_one_year.Day_of_Year,
df_one_year['20_d_mov_avg'],
c = 'r',
linestyle = '--',
label = f'{window} day moving average')
#
ax.set_title('Air Temperature Measurements',
fontsize = 16)
ax.set_xlabel('Day',
fontsize = 14)
ax.set_ylabel('Temperature ($^\circ$F)',
fontsize = 14)
ax.set_xticks(range(df_one_year.Day_of_Year.min(),
df_one_year.Day_of_Year.max(),
30))
ax.tick_params(labelsize = 12)
ax.legend(fontsize = 12)
plt.show()
#
# fit a linear model
#
linear_model = LinearRegression(fit_intercept = True)
linear_model.fit(df_one_year['Day_of_Year'].values.reshape((-1, 1)),
df_one_year.TempAvgF)
print('model slope: ', linear_model.coef_)
print('model intercept: ', linear_model.intercept_)
print('model r squared: ',
linear_model.score(df_one_year['Day_of_Year'].values.reshape((-1, 1)),
df_one_year.TempAvgF))
#
# make predictions using the training data
#
y_pred = linear_model.predict(df_one_year['Day_of_Year'].values.
reshape((-1, 1)))
x_pred = df_one_year.Day_of_Year
fig = plt.figure(figsize=(10, 7))
ax = fig.add_axes([1, 1, 1, 1]);
#
# Raw data
#
ax.scatter(df_one_year.Day_of_Year,
df_one_year.TempAvgF,
label = 'Raw Data', c = 'k')
#
# Moving averages
#
ax.plot(df_one_year.Day_of_Year,
df_one_year['20_d_mov_avg'],
c = 'r',
linestyle = '--',
label = f'{window} day moving average')
#
# linear model
#
ax.plot(x_pred, y_pred,
c = "blue",
linestyle = '-.',
label = 'linear model')
#
ax.set_title('Air Temperature Measurements',
fontsize = 16)
ax.set_xlabel('Day',
fontsize = 14)
ax.set_ylabel('Temperature ($^\circ$F)',
fontsize = 14)
ax.set_xticks(range(df_one_year.Day_of_Year.min(),
df_one_year.Day_of_Year.max(),
30))
ax.tick_params(labelsize = 12)
ax.legend(fontsize = 12)
plt.show()
```
```
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import pickle
import brian2 as br
from time import time
from Reservoir.reservoir import Reservoir
import torchvision.datasets as datasets
from sklearn.preprocessing import normalize as norm
from sklearn.model_selection import train_test_split
def tr_res(image):
    return image.resize((10, 10))
dat = datasets.MNIST(root='./data', train=True, download=True, transform=tr_res)
y=pd.Series(y)
y=np.array(y.replace({11:0,13:1,211:2,321:3,2212:4}))
particle_name={0:'electron',1:'muon',2:'pion',3:'kaon',4:'proton'}
def show_images(X, y, n):
    fig, axes = plt.subplots(int(np.floor(n/4)), 4, figsize=(14, int(np.floor(n/4))*8))
    # sampling examples from data
    for i in range(n):
        sample_id = np.random.choice([i for i in range(len(os.listdir(directory)))])
        # plotting image with particle name (subplot indices must be ints):
        plt.subplot(int(np.floor(n/4)), 4, i+1)
        plt.imshow(X[i], cmap='gray')
        plt.title(f'{particle_name[y[i]]}')
        plt.colorbar(label='intensity',
                     fraction=0.4, orientation='horizontal')
        plt.axis('off')
    plt.tight_layout()
    plt.grid('off')
    plt.show()
pd.Series([particle_name[i] for i in y]).value_counts()
X.shape
X_two_c = X[y <= 1]
y_two_c = y[y <= 1]
X_two_c = X_two_c.reshape((X_two_c.shape[0], 1, X_two_c.shape[1], X_two_c.shape[2]))
x_train,x_val,y_train,y_val=train_test_split(X,y,test_size=0.3,shuffle=True)
import torch
from torch import dtype, max_pool2d, nn
class CNN_classifier(nn.Module):
    def __init__(self, output_size, device="cpu"):
        super(CNN_classifier, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten()
        )
        self.proj = nn.Sequential(
            nn.Linear(100, 25), nn.ReLU(),
            nn.Linear(25, 25), nn.ReLU(),
            # NLLLoss (used below) expects log-probabilities, so use LogSoftmax
            nn.Linear(25, output_size), nn.LogSoftmax(dim=1)
        )

    def forward(self, im):
        x = self.conv(im)
        return self.proj(x)


class Dataset(torch.utils.data.Dataset):
    'Characterizes a dataset for PyTorch'
    def __init__(self, x, labels):
        'Initialization'
        self.labels = labels
        self.x = x

    def __len__(self):
        'Denotes the total number of samples'
        return len(self.x)

    def __getitem__(self, index):
        'Generates one sample of data'
        # Load data and get label
        X = self.x[index]
        y = self.labels[index]
        return X, y
# Parameters
params = {'batch_size': 64,
'shuffle': True}
use_cuda = torch.cuda.is_available()
device = torch.device("cuda:0" if use_cuda else "cpu")
training_set = Dataset(x_train, y_train)
training_generator = torch.utils.data.DataLoader(training_set, **params)
validation_set = Dataset(x_val, y_val)
validation_generator = torch.utils.data.DataLoader(validation_set, **params)
params = {'batch_size': 64,
'shuffle': True}
training_generator = torch.utils.data.DataLoader(dat, **params)
lr = 0.0001
max_epochs = 300
print(f"Using {device}")
model = CNN_classifier(output_size=2)
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
crit = nn.NLLLoss()
train_losses = []
val_losses = []
t0 = time()
for epoch in range(max_epochs):
    train_losses_epoch = []
    val_losses_epoch = []
    # Training
    for local_batch, local_labels in training_generator:
        # Transfer to GPU
        local_batch, local_labels = local_batch.to(device), local_labels.to(device)
        pred = model(local_batch.float())
        loss = crit(pred.float(), local_labels)
        model.zero_grad()
        loss.backward()
        optimizer.step()
        train_losses_epoch.append(loss.item())
    '''with torch.set_grad_enabled(False):
        for local_batch, local_labels in validation_generator:
            # Transfer to GPU
            local_batch, local_labels = local_batch.to(device), local_labels.to(device)
            pred = model(local_batch.float())
            loss = crit(pred.float(), local_labels)
            val_losses_epoch.append(loss.item())'''
    train_losses.append(np.mean(train_losses_epoch))
    # val_losses.append(np.mean(val_losses_epoch))
    if (epoch + 1) % 10 == 0:
        # The validation loop is commented out, so only the train loss is reported.
        print(f"Epoch {epoch+1}/{max_epochs}, train loss {train_losses[-1]}, "
              f"time {(time() - t0)/60} min, ETA {(time() - t0)*max_epochs/(epoch+1)/60} min")
```
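A note on the loss pairing above: PyTorch's `nn.NLLLoss` expects log-probabilities, so the classification head should end in `LogSoftmax` rather than plain `Softmax`; the `LogSoftmax` + `NLLLoss` combination is numerically the same as cross-entropy on raw logits. A NumPy-only sketch of that identity, using hypothetical logits (no PyTorch required):

```python
import numpy as np

def log_softmax(logits):
    # Subtract the row max for numerical stability before exponentiating.
    shifted = logits - logits.max(axis=1, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))

def nll_loss(log_probs, targets):
    # Mean negative log-probability of the target class per sample.
    return -log_probs[np.arange(len(targets)), targets].mean()

# Hypothetical logits for 3 samples and 2 classes.
logits = np.array([[2.0, 0.5], [0.1, 1.2], [-1.0, 3.0]])
targets = np.array([0, 1, 1])

# log(softmax(x)) computed naively equals the stable log_softmax above.
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
assert np.allclose(np.log(probs), log_softmax(logits))
```

Taking `nll_loss(log_softmax(logits), targets)` then gives exactly the cross-entropy of the raw logits, which is why `LogSoftmax` (not `Softmax`) must feed `NLLLoss`.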
```
from music21 import converter,instrument,note,chord,stream
import os
import numpy as np
```
### Dataset test
```
def get_notes(filename):
res_list_in = []
res_list_tar = []
notes_in = []
notes_tar = []
notes_to_parse_in = None
notes_to_parse_tar = None
file =converter.parse(filename)
s2 = instrument.partitionByInstrument(file)
index_list = [0] * len(s2.parts)
count_flag = 0
for index,part in enumerate(s2.parts):
index_list[index] = str(part)
if (any('Part>' in x for i,x in enumerate(index_list)) and count_flag < 2):
index = [i for i,x in enumerate(index_list) if ('Part>' in x)]
notes_to_parse_tar = s2.parts[index[0]].recurse()
notes_tar = extract(notes_to_parse_tar)
count_flag+=1
if (any('Electric Bass' in x for i,x in enumerate(index_list)) and count_flag < 2):
index = [i for i,x in enumerate(index_list) if ('Electric Bass' in x)]
notes_to_parse_in = s2.parts[index[0]].recurse()
notes_in = extract(notes_to_parse_in)
count_flag+=1
if (any('Acoustic Bass' in x for i,x in enumerate(index_list)) and count_flag < 2):
index = [i for i,x in enumerate(index_list) if ('Acoustic Bass' in x)]
notes_to_parse_in = s2.parts[index[0]].recurse()
notes_in = extract(notes_to_parse_in)
count_flag+=1
if (any('Bass' in x for i,x in enumerate(index_list)) and count_flag < 2):
index = [i for i,x in enumerate(index_list) if ('Bass' in x)]
notes_to_parse_in = s2.parts[index[0]].recurse()
notes_in = extract(notes_to_parse_in)
if (count_flag == 2):
res_dict_in = dict((element[0],[]) for element in notes_in )
res_dict_tar = dict((element[0],[]) for element in notes_tar)
for value in notes_in:
res_dict_in[value[0]].append(value[1])
for value in notes_tar:
res_dict_tar[value[0]].append(value[1])
for key,value in res_dict_in.items():
if float(key) > 30 and float(key) < 100:
res_list_in.append(value)
for key,value in res_dict_tar.items():
if float(key) > 30 and float(key) < 100:
res_list_tar.append(value)
new_list_in = [item for sublist in res_list_in for item in sublist]
new_list_tar = [item for sublist in res_list_tar for item in sublist]
return np.array(new_list_in),np.array(new_list_tar)
def extract(parsed_notes):
notes_tar = []
for element in parsed_notes:
if isinstance(element, note.Note):
notes_tar.append([element.offset,str(element.pitch)])
if isinstance(element, chord.Chord):
notes_tar.append([element.offset,'.'.join(str(n) for n in element.normalOrder)])
return notes_tar
path = r"/Users/rishitdholakia/Downloads/Drum_beat_generation/Jazz Midi/"
files = [file for file in os.listdir(path) if file.endswith(".mid")]
input_list = []
target_list = []
input,target = get_notes(path+files[40])
input_list.append(input)
target_list.append(target)
input_list = []
target_list = []
count = 0
files = [file for file in os.listdir(path) if file.endswith(".mid")]
for file in files:
try:
count = count+1
print(count)
input,target = get_notes(path+file)
input_list.append(input)
target_list.append(target)
except Exception:  # skip files that fail to parse
continue
#### Test
with open('input.npy', 'wb') as f:
np.save(f,input_list)
with open('input.npy', 'rb') as f:
a = np.load(f,allow_pickle= True)
with open('target.npy', 'wb') as f:
np.save(f, target_list)
```
### Convert notes to MIDI
```
def convert_to_midi(prediction_output):
offset = 30
output_notes = []
# create note and chord objects based on the values generated by the model
for pattern in prediction_output:
# pattern is a chord
if ('.' in pattern) or pattern.isdigit():
notes_in_chord = pattern.split('.')
notes = []
for current_note in notes_in_chord:
cn=int(current_note)
new_note = note.Note(cn)
notes.append(new_note)
new_chord = chord.Chord(notes)
new_chord.offset = offset
output_notes.append(new_chord)
# pattern is a note
else:
new_note = note.Note(pattern)
new_note.offset = offset
output_notes.append(new_note)
# increase offset each iteration so that notes do not stack
offset += 1
midi_stream = stream.Stream(output_notes)
midi_stream.write('midi', fp='test_drums.mid')
convert_to_midi(input_list[0])  # e.g. the notes extracted from the first parsed file
s = converter.parse('test_drums.mid')
```
##### Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Credit Card Interest
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Credit_Card_Interest.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Credit_Card_Interest.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Let’s imagine that you would like to estimate the interest rate on your credit card one year from now. Suppose the current prime rate is 2% and your credit card company charges you 10% plus prime. Given the strength of the current economy, you believe that the Federal Reserve is more likely to raise interest rates than not. The Fed will meet eight times in the next twelve months and will either raise the federal funds rate by 0.25% or leave it at the previous level.
We use the binomial distribution to model your credit card’s interest rate at the end of the twelve-month period. Specifically, we’ll use the TensorFlow Probability Binomial distribution class with the following parameters: total_count = 8 (number of trials or meetings), probs = {.6, .7, .8, .9}, for our range of estimates about the probability of the Fed raising the federal funds rate by 0.25% at each meeting.
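As a quick sanity check of that model, the same probabilities can be computed with the standard library alone (illustrative values only; the TFP version below vectorizes this over the full range of `probs` estimates):

```python
from math import comb

def binomial_pmf(k, n, p):
    # Probability of exactly k rate hikes in n meetings.
    return comb(n, k) * p**k * (1 - p)**(n - k)

n_meetings = 8
p_hike = 0.6  # one of the estimates {.6, .7, .8, .9}

# Interest rate after k hikes: prime (2%) + 10% margin + 0.25% per hike.
distribution = {2.0 + 10.0 + 0.25 * k: binomial_pmf(k, n_meetings, p_hike)
                for k in range(n_meetings + 1)}

# The probabilities over all possible outcomes sum to 1.
assert abs(sum(distribution.values()) - 1.0) < 1e-9
```

With `p = 0.6` the most likely outcome is 5 hikes, i.e. a 13.25% rate, matching the peak of the corresponding bar chart further below.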
### Dependencies & Prerequisites
```
#@title TensorFlow Probability Installation settings { display-mode: "form" }
TFP_Installation = "Stable TFP" #@param ["Most Recent TFP", "Stable TFP", "Stable TFP-GPU", "Most Recent TFP-GPU", "TFP Already Installed"]
if TFP_Installation == "Most Recent TFP":
!pip install -q tfp-nightly
print("Most recent TFP version installed")
elif TFP_Installation == "Stable TFP":
!pip install -q --upgrade tensorflow-probability
print("Up-to-date, stable TFP version installed")
elif TFP_Installation == "Stable TFP-GPU":
!pip install -q --upgrade tensorflow-probability-gpu
print("Up-to-date, stable TFP-GPU version installed")
print("(make sure GPU is properly configured)")
elif TFP_Installation == "Most Recent TFP-GPU":
!pip install -q tfp-nightly-gpu
print("Most recent TFP-GPU version installed")
print("(make sure GPU is properly configured)")
elif TFP_Installation == "TFP Already Installed":
print("TFP already installed in this environment")
else:
print("Installation Error: Please select a viable TFP installation option.")
#@title Imports and Global Variables (make sure to run this cell) { display-mode: "form" }
from __future__ import absolute_import, division, print_function
warning_status = "ignore" #@param ["ignore", "always", "module", "once", "default", "error"]
import warnings
warnings.filterwarnings(warning_status)
with warnings.catch_warnings():
warnings.filterwarnings(warning_status, category=DeprecationWarning)
warnings.filterwarnings(warning_status, category=UserWarning)
import numpy as np
import os
matplotlib_style = 'fivethirtyeight' #@param ['fivethirtyeight', 'bmh', 'ggplot', 'seaborn', 'default', 'Solarize_Light2', 'classic', 'dark_background', 'seaborn-colorblind', 'seaborn-notebook']
import matplotlib.pyplot as plt; plt.style.use(matplotlib_style)
import matplotlib.axes as axes;
from matplotlib.patches import Ellipse
%matplotlib inline
import seaborn as sns; sns.set_context('notebook')
notebook_screen_res = 'png' #@param ['retina', 'png', 'jpeg', 'svg', 'pdf']
%config InlineBackend.figure_format = notebook_screen_res
import tensorflow as tf
# Eager Execution
use_tf_eager = True #@param {type:"boolean"}
# Use try/except so we can easily re-execute the whole notebook.
if use_tf_eager:
try:
tf.compat.v1.enable_eager_execution()
except:
reset_session()
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
def default_session_options(enable_gpu_ram_resizing=True,
enable_xla=False):
"""Creates default options for Graph-mode session."""
config = tf.ConfigProto()
config.log_device_placement = True
if enable_gpu_ram_resizing:
# `allow_growth=True` makes it possible to connect multiple
# colabs to your GPU. Otherwise the colab malloc's all GPU ram.
config.gpu_options.allow_growth = True
if enable_xla:
# Enable on XLA. https://www.tensorflow.org/performance/xla/.
config.graph_options.optimizer_options.global_jit_level = (
tf.OptimizerOptions.ON_1)
return config
def reset_session(options=None):
"""Creates a new global, interactive session in Graph-mode."""
if tf.executing_eagerly():
return
global sess
try:
tf.reset_default_graph()
sess.close()
except:
pass
if options is None:
options = default_session_options()
sess = tf.InteractiveSession(config=options)
def evaluate(tensors):
"""Evaluates Tensor or EagerTensor to Numpy `ndarray`s.
Args:
tensors: Object of `Tensor` or EagerTensor`s; can be `list`, `tuple`,
`namedtuple` or combinations thereof.
Returns:
ndarrays: Object with same structure as `tensors` except with `Tensor` or
`EagerTensor`s replaced by Numpy `ndarray`s.
"""
if tf.executing_eagerly():
return tf.contrib.framework.nest.pack_sequence_as(
tensors,
[t.numpy() if tf.contrib.framework.is_tensor(t) else t
for t in tf.contrib.framework.nest.flatten(tensors)])
return sess.run(tensors)
class _TFColor(object):
"""Enum of colors used in TF docs."""
red = '#F15854'
blue = '#5DA5DA'
orange = '#FAA43A'
green = '#60BD68'
pink = '#F17CB0'
brown = '#B2912F'
purple = '#B276B2'
yellow = '#DECF3F'
gray = '#4D4D4D'
def __getitem__(self, i):
return [
self.red,
self.orange,
self.green,
self.blue,
self.pink,
self.brown,
self.purple,
self.yellow,
self.gray,
][i % 9]
TFColor = _TFColor()
```
### Compute Probabilities
Compute the probabilities of possible credit card interest rates in 12 months.
```
# First we encode our assumptions.
num_times_fed_meets_per_year = 8.
possible_fed_increases = tf.range(
start=0.,
limit=num_times_fed_meets_per_year + 1)
possible_cc_interest_rates = 2. + 10. + 0.25 * possible_fed_increases
prob_fed_raises_rates = tf.constant([0.6, 0.7, 0.8, 0.9]) # Wild guesses.
# Now we use TFP to compute probabilities in a vectorized manner.
# Pad a dim so we broadcast fed probs against CC interest rates.
prob_fed_raises_rates = prob_fed_raises_rates[..., tf.newaxis]
prob_cc_interest_rate = tfd.Binomial(
total_count=num_times_fed_meets_per_year,
probs=prob_fed_raises_rates).prob(possible_fed_increases)
```
### Execute TF Code
```
# Convert from TF to numpy.
[
possible_cc_interest_rates_,
prob_cc_interest_rate_,
prob_fed_raises_rates_,
] = evaluate([
possible_cc_interest_rates,
prob_cc_interest_rate,
prob_fed_raises_rates,
])
```
### Visualize Results
```
plt.figure(figsize=(14, 9))
for i, pf in enumerate(prob_fed_raises_rates_):
plt.subplot(2, 2, i+1)
plt.bar(possible_cc_interest_rates_,
prob_cc_interest_rate_[i],
color=TFColor[i],
width=0.23,
label="$p = {:.1f}$".format(pf[0]),
alpha=0.6,
edgecolor=TFColor[i],
lw="3")
plt.xticks(possible_cc_interest_rates_ + 0.125, possible_cc_interest_rates_)
plt.xlim(12, 14.25)
plt.ylim(0, 0.5)
plt.ylabel("Probability of cc interest rate")
plt.xlabel("Credit card interest rate (%)")
plt.title("Credit card interest rates: "
"prob_fed_raises_rates = {:.1f}".format(pf[0]));
plt.suptitle("Estimates of credit card interest rates in 12 months.",
fontsize="x-large",
y=1.02)
plt.tight_layout()
```
## Dependencies
```
import warnings, glob
from tensorflow.keras import Sequential, Model
from cassava_scripts import *
seed = 0
seed_everything(seed)
warnings.filterwarnings('ignore')
```
### Hardware configuration
```
# TPU or GPU detection
# Detect hardware, return appropriate distribution strategy
strategy, tpu = set_up_strategy()
AUTO = tf.data.experimental.AUTOTUNE
REPLICAS = strategy.num_replicas_in_sync
print(f'REPLICAS: {REPLICAS}')
```
# Model parameters
```
BATCH_SIZE = 8 * REPLICAS
HEIGHT = 380
WIDTH = 380
CHANNELS = 3
N_CLASSES = 5
TTA_STEPS = 0 # Do TTA if > 0
```
# Augmentation
```
def data_augment(image, label):
p_spatial = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_pixel_1 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_pixel_2 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_pixel_3 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_crop = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# Flips
image = tf.image.random_flip_left_right(image)
image = tf.image.random_flip_up_down(image)
if p_spatial > .75:
image = tf.image.transpose(image)
# Rotates
if p_rotate > .75:
image = tf.image.rot90(image, k=3) # rotate 270º
elif p_rotate > .5:
image = tf.image.rot90(image, k=2) # rotate 180º
elif p_rotate > .25:
image = tf.image.rot90(image, k=1) # rotate 90º
# # Pixel-level transforms
# if p_pixel_1 >= .4:
# image = tf.image.random_saturation(image, lower=.7, upper=1.3)
# if p_pixel_2 >= .4:
# image = tf.image.random_contrast(image, lower=.8, upper=1.2)
# if p_pixel_3 >= .4:
# image = tf.image.random_brightness(image, max_delta=.1)
# Crops
if p_crop > .7:
if p_crop > .9:
image = tf.image.central_crop(image, central_fraction=.7)
elif p_crop > .8:
image = tf.image.central_crop(image, central_fraction=.8)
else:
image = tf.image.central_crop(image, central_fraction=.9)
elif p_crop > .4:
crop_size = tf.random.uniform([], int(HEIGHT*.8), HEIGHT, dtype=tf.int32)
image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
# # Crops
# if p_crop > .6:
# if p_crop > .9:
# image = tf.image.central_crop(image, central_fraction=.5)
# elif p_crop > .8:
# image = tf.image.central_crop(image, central_fraction=.6)
# elif p_crop > .7:
# image = tf.image.central_crop(image, central_fraction=.7)
# else:
# image = tf.image.central_crop(image, central_fraction=.8)
# elif p_crop > .3:
# crop_size = tf.random.uniform([], int(HEIGHT*.6), HEIGHT, dtype=tf.int32)
# image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
return image, label
```
## Auxiliary functions
```
# Datasets utility functions
def resize_image(image, label):
image = tf.image.resize(image, [HEIGHT, WIDTH])
image = tf.reshape(image, [HEIGHT, WIDTH, CHANNELS])
return image, label
def process_path(file_path):
name = get_name(file_path)
img = tf.io.read_file(file_path)
img = decode_image(img)
# img, _ = scale_image(img, None)
# img = center_crop(img, HEIGHT, WIDTH)
return img, name
def get_dataset(files_path, shuffled=False, tta=False, extension='jpg'):
dataset = tf.data.Dataset.list_files(f'{files_path}*{extension}', shuffle=shuffled)
dataset = dataset.map(process_path, num_parallel_calls=AUTO)
if tta:
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.map(resize_image, num_parallel_calls=AUTO)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTO)
return dataset
```
# Load data
```
database_base_path = '/kaggle/input/cassava-leaf-disease-classification/'
submission = pd.read_csv(f'{database_base_path}sample_submission.csv')
display(submission.head())
TEST_FILENAMES = tf.io.gfile.glob(f'{database_base_path}test_tfrecords/ld_test*.tfrec')
NUM_TEST_IMAGES = count_data_items(TEST_FILENAMES)
print(f'GCS: test: {NUM_TEST_IMAGES}')
model_path_list = glob.glob('/kaggle/input/134-cassava-leaf-effnetb4-bs-32-380x380/*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep='\n')
```
# Model
```
def model_fn(input_shape, N_CLASSES):
inputs = L.Input(shape=input_shape, name='input_image')
base_model = tf.keras.applications.EfficientNetB4(input_tensor=inputs,
include_top=False,
weights=None)
x = L.GlobalAveragePooling2D()(base_model.output)
x = L.Dropout(.5)(x)
output = L.Dense(N_CLASSES, activation='softmax', name='output')(x)
model = Model(inputs=inputs, outputs=output)
return model
with strategy.scope():
model = model_fn((None, None, CHANNELS), N_CLASSES)
model.summary()
```
# Test set predictions
```
files_path = f'{database_base_path}test_images/'
test_size = len(os.listdir(files_path))
test_preds = np.zeros((test_size, N_CLASSES))
for model_path in model_path_list:
print(model_path)
K.clear_session()
model.load_weights(model_path)
if TTA_STEPS > 0:
test_ds = get_dataset(files_path, tta=True).repeat()
ct_steps = TTA_STEPS * (test_size // BATCH_SIZE + 1)  # steps must be an integer
preds = model.predict(test_ds, steps=ct_steps, verbose=1)[:(test_size * TTA_STEPS)]
preds = np.mean(preds.reshape(test_size, TTA_STEPS, N_CLASSES, order='F'), axis=1)
test_preds += preds / len(model_path_list)
else:
test_ds = get_dataset(files_path, tta=False)
x_test = test_ds.map(lambda image, image_name: image)
test_preds += model.predict(x_test) / len(model_path_list)
test_preds = np.argmax(test_preds, axis=-1)
test_names_ds = get_dataset(files_path)
image_names = [img_name.numpy().decode('utf-8') for img, img_name in iter(test_names_ds.unbatch())]
submission = pd.DataFrame({'image_id': image_names, 'label': test_preds})
submission.to_csv('submission.csv', index=False)
display(submission.head())
```
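The TTA branch in the prediction loop above relies on a subtle detail: predicting over the repeated dataset yields rows in pass-major order (all images from pass 1, then all from pass 2, ...), and the Fortran-order reshape regroups the passes per image before averaging. A small NumPy sketch of that regrouping, using synthetic values rather than real model output:

```python
import numpy as np

test_size, tta_steps, n_classes = 3, 2, 4

# Fake predictions in pass-major order: rows 0..2 are pass 1, rows 3..5 pass 2.
# Encode (pass, image) into the values as 10*pass + image for easy checking.
preds = np.array([[10 * p + i] * n_classes
                  for p in range(tta_steps)
                  for i in range(test_size)], dtype=float)

# order='F' collects the passes of each image along axis 1.
grouped = preds.reshape(test_size, tta_steps, n_classes, order='F')

# Image 0 now holds its pass-1 and pass-2 rows: values 0 and 10.
assert grouped[0, :, 0].tolist() == [0.0, 10.0]

# Averaging over axis 1 yields one prediction per image.
averaged = grouped.mean(axis=1)
assert averaged.shape == (test_size, n_classes)
```

With the default C-order reshape, axis 1 would instead mix predictions from different images, so `order='F'` is load-bearing here.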
```
%load_ext autoreload
%autoreload 2
%load_ext tensorboard
```
## Imports
```
from pathlib import Path
import shutil
from tqdm import tqdm
import tensorflow as tf
from tensorflow.keras.callbacks import (
ReduceLROnPlateau,
EarlyStopping,
ModelCheckpoint,
TensorBoard
)
import sys
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report, confusion_matrix
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
from utils.model import make_model, freeze_all_vgg, unfreeze_last_vgg
from utils.data import train_test_valid_split, filter_binary_labels, optimize_dataset, create_split, delete_folder
```
## Image files management -> train, test, valid splits
```
DATASET_SOURCE_PATH = Path(r'../data/dataset')
SPLITS_DESTINATION_PATH = Path(r'../data')
UNDERSAMPLE_RATIO = 0.8 # controls the under sampling for majority class. Use 'None' to disable
TEST_SIZE = 0.15
VALID_SIZE = 0.15
X_train, X_test, X_valid = train_test_valid_split(DATASET_SOURCE_PATH, test_size=TEST_SIZE, valid_size=VALID_SIZE, undersample_ratio=UNDERSAMPLE_RATIO)
splits = [('train', X_train), ('test', X_test), ('valid', X_valid)]
for split in splits:
destination_path = Path(SPLITS_DESTINATION_PATH) / split[0]
delete_folder(destination_path)
create_split(split[1], destination_path)
```
## Dataset loading
```
IMG_HEIGHT = 224
IMG_WIDTH = 224
BATCH_SIZE = 64
SEED = None
train_path = SPLITS_DESTINATION_PATH / 'train'
train_ds = tf.keras.preprocessing.image_dataset_from_directory(train_path, image_size=(IMG_HEIGHT, IMG_WIDTH),\
batch_size=BATCH_SIZE, shuffle=True, \
label_mode='categorical', seed=SEED)
valid_path = SPLITS_DESTINATION_PATH / 'valid'
valid_ds = tf.keras.preprocessing.image_dataset_from_directory(valid_path, image_size=(IMG_HEIGHT, IMG_WIDTH),\
batch_size=BATCH_SIZE, shuffle=True, \
label_mode='categorical', seed=SEED)
class_names = train_ds.class_names
assert class_names == valid_ds.class_names
AUTOTUNE = tf.data.AUTOTUNE
if len(class_names) == 2: # take the one-hot-encoded matrix of labels and convert to a vector if binary classification
train_ds = train_ds.map(filter_binary_labels, num_parallel_calls=AUTOTUNE)
valid_ds = valid_ds.map(filter_binary_labels, num_parallel_calls=AUTOTUNE)
train_ds = optimize_dataset(train_ds)
valid_ds = optimize_dataset(valid_ds)
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
label_idx = labels.numpy()[i][0] if len(class_names) == 2 else np.argmax(labels.numpy()[i], axis=0)
plt.title(class_names[label_idx])
plt.axis("off")
```
## Model
```
N_HIDDEN = 512
BASE_LR = 0.001
model = make_model(n_classes=len(class_names), n_hidden=N_HIDDEN, img_height=IMG_HEIGHT, img_width=IMG_WIDTH)
freeze_all_vgg(model)
loss = tf.keras.losses.CategoricalCrossentropy() if len(class_names) > 2 else tf.keras.losses.BinaryCrossentropy()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=BASE_LR),
loss=loss, metrics=['accuracy'])
model.summary()
```
#### Classifier initial training
```
# TODO - delete last trained model
LOG_PATH = Path(r'../models/vgg16/logs')
CHECKPOINTS_PATH = Path(r'../models/vgg16/checkpoints')
BASE_EPOCHS = 30
tb = TensorBoard(log_dir=LOG_PATH)
checkpoint = ModelCheckpoint(CHECKPOINTS_PATH / 'train_{epoch}.tf', verbose=1, save_weights_only=True,
save_best_only=True, monitor='val_loss')
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=4, verbose=1)
early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=15, verbose=1)
history = model.fit(train_ds, epochs=BASE_EPOCHS, validation_data=valid_ds, callbacks=[tb, checkpoint, reduce_lr,
early_stopping])
```
#### Fine tuning
```
FINE_TUNE_AT_LAYER = 15
FINE_TUNING_EPOCHS = 30
FINE_TUNING_LR = 0.001
FINAL_MODEL_NAME = 'trained_weights'
FINAL_MODEL_SAVE_PATH = CHECKPOINTS_PATH / FINAL_MODEL_NAME
unfreeze_last_vgg(model, which_freeze=FINE_TUNE_AT_LAYER)
total_epochs = BASE_EPOCHS + FINE_TUNING_EPOCHS
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=FINE_TUNING_LR),
loss=loss, metrics=['accuracy'])
history = model.fit(train_ds, epochs=total_epochs, validation_data=valid_ds, callbacks=[tb, checkpoint, reduce_lr, early_stopping], \
initial_epoch=history.epoch[-1])
model.save(FINAL_MODEL_SAVE_PATH, save_format='h5')
```
#### Model evaluation
```
# TODO - load model if not trained
test_path = Path(r'../data/test')
test_ds = tf.keras.preprocessing.image_dataset_from_directory(test_path, image_size=(IMG_HEIGHT, IMG_WIDTH), \
batch_size=BATCH_SIZE, shuffle=False, \
label_mode='categorical')
assert class_names == test_ds.class_names
if len(class_names) == 2: # take the one-hot-encoded matrix of labels and convert to a vector if binary classification
test_ds = test_ds.map(filter_binary_labels, num_parallel_calls=AUTOTUNE)
test_ds = optimize_dataset(test_ds)
metrics = model.evaluate(test_ds)
print('Loss: {} --------- Accuracy: {}%'.format(metrics[0], np.round(metrics[1]*100, 2)))
y_pred = model.predict(test_ds)
y_true = tf.concat([y for x, y in test_ds], axis=0)
if len(class_names) == 2: # uses a threshold for the predictions if binary classification problem
y_pred[y_pred >= 0.5] = 1
y_pred[y_pred < 0.5] = 0
y_true = y_true.numpy()
else: # uses argmax if not binary classification
y_pred = np.argmax(y_pred, axis=1)
y_true = np.argmax(y_true.numpy(), axis=1)
print(classification_report(y_true, y_pred, target_names=class_names, digits=2))
pred_labels = [('PRED_' + class_name) for class_name in class_names]
real_labels = [('REAL_' + class_name) for class_name in class_names]
pd.DataFrame(confusion_matrix(y_true, y_pred), columns=pred_labels, index=real_labels)
```
# Feature Analysis Using TensorFlow Data Validation and Facets
## Learning Objectives
1. Use TFRecords to load record-oriented binary format data
2. Use TFDV to generate statistics and Facets to visualize the data
3. Use the TFDV widget to answer questions
4. Analyze label distribution for subset groups
## Introduction
Bias can manifest in any part of a typical machine learning pipeline, from an unrepresentative dataset, to learned model representations, to the way in which the results are presented to the user. Errors that result from this bias can disproportionately impact some users more than others.
[TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) (TFDV) is one tool you can use to analyze your data for potential problems, such as missing values and data imbalances, that can lead to fairness disparities. TFDV analyzes training and serving data to compute descriptive statistics, infer a schema, and detect data anomalies. [Facets Overview](https://pair-code.github.io/facets/) provides a succinct visualization of these statistics for easy browsing. Both TFDV and Facets are part of the [Fairness Indicators](https://www.tensorflow.org/tfx/guide/fairness_indicators) tooling.
In this notebook, we use TFDV to compute descriptive statistics that provide a quick overview of the data in terms of the features that are present and the shapes of their value distributions. We use Facets Overview to visualize these statistics using the Civil Comments dataset.
Each learning objective will correspond to a __#TODO__ in the [student lab notebook](../labs/bias_tfdv_facets.ipynb) -- try to complete that notebook first before reviewing this solution notebook.
## Set up environment variables and load necessary libraries
We will start by importing the necessary dependencies for the libraries we'll be using in this exercise. First, run the cell below to install Fairness Indicators.
**NOTE:** You can ignore the "pip" being invoked by an old script wrapper, as it will not affect the lab's functionality.
```
!pip3 install fairness-indicators --user
```
<strong>Restart the kernel</strong> after you do a pip3 install (click on the <strong>Restart the kernel</strong> button above).
Next, import all the dependencies we'll use in this exercise, which include Fairness Indicators, TensorFlow Data Validation (tfdv), and the What-If tool (WIT) Facets Overview.
```
import sys, os
import warnings
warnings.filterwarnings('ignore')
#os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # Ignore deprecation warnings
import tempfile
import apache_beam as beam
import numpy as np
import pandas as pd
from datetime import datetime
import tensorflow_hub as hub
import tensorflow as tf
import tensorflow_model_analysis as tfma
import tensorflow_data_validation as tfdv
from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators
from tensorflow_model_analysis.addons.fairness.view import widget_view
from witwidget.notebook.visualization import WitConfigBuilder
from witwidget.notebook.visualization import WitWidget
print(tf.version.VERSION)
print(tf)  # Shows the imported TensorFlow module and the path it was loaded from.
```
### About the Civil Comments dataset
Click below to learn more about the Civil Comments dataset, and how we've preprocessed it for this exercise.
The Civil Comments dataset comprises approximately 2 million public comments that were submitted to the Civil Comments platform. [Jigsaw](https://jigsaw.google.com/) sponsored the effort to compile and annotate these comments for ongoing [research](https://arxiv.org/abs/1903.04561); they've also hosted competitions on [Kaggle](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification) to help classify toxic comments as well as minimize unintended model bias.
#### Features
Within the Civil Comments data, a subset of comments are tagged with a variety of identity attributes pertaining to gender, sexual orientation, religion, race, and ethnicity. Each identity annotation column contains a value that represents the percentage of annotators who categorized a comment as containing references to that identity. Multiple identities may be present in a comment.
**NOTE:** These identity attributes are intended *for evaluation purposes only*, to assess how well a classifier trained solely on the comment text performs on different tag sets.
To collect these identity labels, each comment was reviewed by up to 10 annotators, who were asked to indicate all identities that were mentioned in the comment. For example, annotators were posed the question: "What genders are mentioned in the comment?", and asked to choose all of the following categories that were applicable.
* Male
* Female
* Transgender
* Other gender
* No gender mentioned
**NOTE:** *We recognize the limitations of the categories used in the original dataset, and acknowledge that these terms do not encompass the full range of vocabulary used in describing gender.*
Jigsaw used these ratings to generate an aggregate score for each identity attribute representing the percentage of raters who said the identity was mentioned in the comment. For example, if 10 annotators reviewed a comment, and 6 said that the comment mentioned the identity "female" and 0 said that the comment mentioned the identity "male," the comment would receive a `female` score of `0.6` and a `male` score of `0.0`.
**NOTE:** For the purposes of annotation, a comment was considered to "mention" gender if it contained a comment about gender issues (e.g., a discussion about feminism, wage gap between men and women, transgender rights, etc.), gendered language, or gendered insults. Use of "he," "she," or gendered names (e.g., Donald, Margaret) did not require a gender label.
#### Label
Each comment was rated by up to 10 annotators for toxicity, who each classified it with one of the following ratings.
* Very Toxic
* Toxic
* Hard to Say
* Not Toxic
Again, Jigsaw used these ratings to generate an aggregate toxicity "score" for each comment (ranging from `0.0` to `1.0`) to serve as the [label](https://developers.google.com/machine-learning/glossary?utm_source=Colab&utm_medium=fi-colab&utm_campaign=fi-practicum&utm_content=glossary&utm_term=label#label), representing the fraction of annotators who labeled the comment either "Very Toxic" or "Toxic." For example, if 10 annotators rated a comment, and 3 of them labeled it "Very Toxic" and 5 of them labeled it "Toxic", the comment would receive a toxicity score of `0.8`.
**NOTE:** For more information on the Civil Comments labeling schema, see the [Data](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data) section of the Jigsaw Untended Bias in Toxicity Classification Kaggle competition.
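That aggregation rule is straightforward to express in code. Below is a hypothetical helper (not the actual Jigsaw pipeline) that reproduces the worked example of 3 "Very Toxic" and 5 "Toxic" ratings out of 10:

```python
def toxicity_score(ratings):
    # Fraction of annotators who chose "Very Toxic" or "Toxic".
    toxic = {"Very Toxic", "Toxic"}
    return sum(1 for r in ratings if r in toxic) / len(ratings)

ratings = ["Very Toxic"] * 3 + ["Toxic"] * 5 + ["Not Toxic"] * 2
assert toxicity_score(ratings) == 0.8
```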
### Preprocessing the data
For the purposes of this exercise, we converted toxicity and identity columns to booleans in order to work with our neural net and metrics calculations. In the preprocessed dataset, we considered any value ≥ 0.5 as True (i.e., a comment is considered toxic if 50% or more crowd raters labeled it as toxic).
For identity labels, the threshold 0.5 was chosen and the identities were grouped together by their categories. For example, if one comment has `{ male: 0.3, female: 1.0, transgender: 0.0, heterosexual: 0.8, homosexual_gay_or_lesbian: 1.0 }`, after processing, the data will be `{ gender: [female], sexual_orientation: [heterosexual, homosexual_gay_or_lesbian] }`.
**NOTE:** Missing identity fields were converted to False.
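A sketch of that grouping step, reproducing the example above. The helper and its category mapping are hypothetical and cover only the identities in the example; the real preprocessing uses a much fuller mapping:

```python
# Hypothetical category mapping, limited to the identities in the example.
IDENTITY_CATEGORIES = {
    "gender": ["male", "female", "transgender"],
    "sexual_orientation": ["heterosexual", "homosexual_gay_or_lesbian"],
}

def group_identities(raw, threshold=0.5):
    # Keep identities whose annotator fraction meets the threshold,
    # grouped under their category; missing fields count as 0.0 (False).
    return {category: [term for term in terms if raw.get(term, 0.0) >= threshold]
            for category, terms in IDENTITY_CATEGORIES.items()}

raw = {"male": 0.3, "female": 1.0, "transgender": 0.0,
       "heterosexual": 0.8, "homosexual_gay_or_lesbian": 1.0}
assert group_identities(raw) == {
    "gender": ["female"],
    "sexual_orientation": ["heterosexual", "homosexual_gay_or_lesbian"],
}
```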
### Use TFRecords to load record-oriented binary format data
-------------------------------------------------------------------------------------------------------
The [TFRecord format](https://www.tensorflow.org/tutorials/load_data/tfrecord) is a simple [Protobuf](https://developers.google.com/protocol-buffers)-based format for storing a sequence of binary records. It lets you and your machine learning models handle arbitrarily large datasets over the network because it:
1. Splits up large files into 100-200MB chunks
2. Stores the results as serialized binary messages for faster ingestion
If you already have a dataset in TFRecord format, you can use the tf.keras.utils functions for accessing the data (as you will below!). If you want to practice creating your own TFRecord datasets you can do so outside of this lab by [viewing the documentation here](https://www.tensorflow.org/tutorials/load_data/tfrecord).
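As a loose, stdlib-only analogy for what "record-oriented binary format" means, each record can be stored as a length prefix followed by its serialized payload (TFRecord additionally stores CRC checksums and uses protobuf messages as payloads — this sketch is only an illustration, not the TFRecord wire format):

```python
import struct

def write_records(path, records):
    """Write each payload as an 8-byte little-endian length prefix + bytes."""
    with open(path, "wb") as f:
        for payload in records:
            f.write(struct.pack("<Q", len(payload)))
            f.write(payload)

def read_records(path):
    """Read back the sequence of length-prefixed binary records."""
    records = []
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if not header:
                break
            (length,) = struct.unpack("<Q", header)
            records.append(f.read(length))
    return records

write_records("demo.records", [b"first", b"second"])
print(read_records("demo.records"))  # [b'first', b'second']
```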
#### TODO 1: Use the utility functions tf.keras to download and import our datasets
Run the following cell to download and import the training and validation preprocessed datasets.
```
download_original_data = False #@param {type:"boolean"}
# TODO 1
if download_original_data:
train_tf_file = tf.keras.utils.get_file('train_tf.tfrecord',
'https://storage.googleapis.com/civil_comments_dataset/train_tf.tfrecord')
validate_tf_file = tf.keras.utils.get_file('validate_tf.tfrecord',
'https://storage.googleapis.com/civil_comments_dataset/validate_tf.tfrecord')
# The identity terms list will be grouped together by their categories
# (see 'IDENTITY_COLUMNS') on threshold 0.5. Only the identity term column,
# text column and label column will be kept after processing.
train_tf_file = util.convert_comments_data(train_tf_file)
validate_tf_file = util.convert_comments_data(validate_tf_file)
# TODO 1a
else:
train_tf_file = tf.keras.utils.get_file('train_tf_processed.tfrecord',
'https://storage.googleapis.com/civil_comments_dataset/train_tf_processed.tfrecord')
validate_tf_file = tf.keras.utils.get_file('validate_tf_processed.tfrecord',
'https://storage.googleapis.com/civil_comments_dataset/validate_tf_processed.tfrecord')
```
### Use TFDV to generate statistics and Facets to visualize the data
TensorFlow Data Validation supports data stored in a TFRecord file or a CSV input format, with extensibility for other common formats. You can find the available data decoders [here](https://github.com/tensorflow/data-validation/tree/master/tensorflow_data_validation/coders). In addition, TFDV provides the [tfdv.generate_statistics_from_dataframe](https://www.tensorflow.org/tfx/data_validation/api_docs/python/tfdv/generate_statistics_from_dataframe) utility function for users with in-memory data represented as a pandas DataFrame.
In addition to computing a default set of data statistics, TFDV can also compute statistics for semantic domains (e.g., images, text). To enable computation of semantic domain statistics, pass a `tfdv.StatsOptions` object with `enable_semantic_domain_stats` set to True to `tfdv.generate_statistics_from_tfrecord`. Before we train the model, let's do a quick audit of our training data using [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started), so we can better understand our data distribution.
#### TODO 2: Use TFDV to get quick statistics on your dataset
The following cell may take 2–3 minutes to run. **NOTE:** Please ignore the deprecation warnings.
```
# TODO 2
# The computation of statistics using TFDV. The returned value is a DatasetFeatureStatisticsList protocol buffer.
stats = tfdv.generate_statistics_from_tfrecord(data_location=train_tf_file)
# TODO 2a
# A visualization of the statistics using Facets Overview.
tfdv.visualize_statistics(stats)
```
### TODO 3: Use the TensorFlow Data Validation widget above to answer the following questions.
#### **1. How many total examples are in the training dataset?**
#### Solution
See the solution below.
**There are 1.08 million total examples in the training dataset.**
The count column tells us how many examples there are for a given feature. Each feature (`sexual_orientation`, `comment_text`, `gender`, etc.) has 1.08 million examples. The missing column tells us what percentage of examples are missing that feature.

Each feature is missing from 0% of examples, so we know that the per-feature example count of 1.08 million is also the total number of examples in the dataset.
#### **2. How many unique values are there for the `gender` feature? What are they, and what are the frequencies of each of these values?**
**NOTE #1:** `gender` and the other identity features (`sexual_orientation`, `religion`, `disability`, and `race`) are included in this dataset for evaluation purposes only, so we can assess model performance on different identity slices. The only feature we will use for model training is `comment_text`.
**NOTE #2:** *We recognize the limitations of the categories used in the original dataset, and acknowledge that these terms do not encompass the full range of vocabulary used in describing gender.*
#### Solution
See the solution below.
The **unique** column of the **Categorical Features** table tells us that there are 4 unique values for the `gender` feature.
To view the 4 values and their frequencies, we can click on the **SHOW RAW DATA** button:

The raw data table shows that there are 32,208 examples with a gender value of `female`, 26,758 examples with a value of `male`, 1,551 examples with a value of `transgender`, and 4 examples with a value of `other gender`.
**NOTE:** As described [earlier](#scrollTo=J3R2QWkru1WN), a `gender` feature can contain zero or more of these 4 values, depending on the content of the comment. For example, a comment containing the text "I am a transgender man" will have both `transgender` and `male` as `gender` values, whereas a comment that does not reference gender at all will have an empty/false `gender` value.
#### **3. What percentage of total examples are labeled toxic? Overall, is this a class-balanced dataset (relatively even split of examples between positive and negative classes) or a class-imbalanced dataset (majority of examples are in one class)?**
**NOTE:** In this dataset, a `toxicity` value of `0` signifies "not toxic," and a `toxicity` value of `1` signifies "toxic."
#### Solution
See the solution below.
**7.98 percent of examples are toxic.**
Under **Numeric Features**, we can see the distribution of values for the `toxicity` feature. 92.02% of examples have a value of 0 (which signifies "non-toxic"), so 7.98% of examples are toxic.

This is a [**class-imbalanced dataset**](https://developers.google.com/machine-learning/glossary?utm_source=Colab&utm_medium=fi-colab&utm_campaign=fi-practicum&utm_content=glossary&utm_term=class-imbalanced-dataset#class-imbalanced-dataset), as the overwhelming majority of examples (over 90%) are classified as nontoxic.
Notice that there is one numeric feature (`toxicity`) and six categorical features.
### TODO 4: Analyze label distribution for subset groups
Run the following code to analyze the label distribution for the subset of examples that contain a `gender` value.
**NOTE:** *The cell should run for just a few minutes.*
```
#@title Calculate label distribution for gender-related examples
raw_dataset = tf.data.TFRecordDataset(train_tf_file)
toxic_gender_examples = 0
nontoxic_gender_examples = 0
# TODO 4
# There are 1,082,924 examples in the dataset
for raw_record in raw_dataset.take(1082924):
example = tf.train.Example()
example.ParseFromString(raw_record.numpy())
if str(example.features.feature["gender"].bytes_list.value) != "[]":
if str(example.features.feature["toxicity"].float_list.value) == "[1.0]":
toxic_gender_examples += 1
else:
nontoxic_gender_examples += 1
# TODO 4a
print("Toxic Gender Examples: %s" % toxic_gender_examples)
print("Nontoxic Gender Examples: %s" % nontoxic_gender_examples)
```
#### **What percentage of `gender` examples are labeled toxic? Compare this percentage to the percentage of total examples that are labeled toxic from #3 above. What, if any, fairness concerns can you identify based on this comparison?**
#### Solution
One possible solution follows.
There are 7,189 gender-related examples that are labeled toxic, which represent 14.7% of all gender-related examples.
The percentage of gender-related examples that are toxic (14.7%) is nearly double the percentage of toxic examples overall (7.98%). In other words, in our dataset, gender-related comments are almost two times more likely than comments overall to be labeled as toxic.
This skew suggests that a model trained on this dataset might learn a correlation between gender-related content and toxicity. This raises fairness considerations, as the model might be more likely to classify nontoxic comments as toxic if they contain gender terminology, which could lead to [disparate impact](https://developers.google.com/machine-learning/glossary?utm_source=Colab&utm_medium=fi-colab&utm_campaign=fi-practicum&utm_content=glossary&utm_term=disparate-impact#disparate-impact) for gender subgroups.
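As a quick sanity check on the arithmetic (the nontoxic count below is inferred from the 14.7% figure, since only the toxic count is stated here; the cell above prints the exact value):

```python
toxic_gender = 7189
nontoxic_gender = 41716  # inferred from the 14.7% figure; run the cell above for the exact count

pct_gender_toxic = 100 * toxic_gender / (toxic_gender + nontoxic_gender)
print(round(pct_gender_toxic, 1))         # 14.7
print(round(pct_gender_toxic / 7.98, 2))  # roughly 1.84x the overall toxic rate
```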
Copyright 2021 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# PCA: Principal Component Analysis
## Objectives and ideas
High dimensions generate a large variety of problems (curse of dimensionality, difficulty visualizing/representing the data...), and thus many techniques have been developed to reduce the dimensionality while keeping as much information as possible. PCA is one of these methods.
It works by trying to find a new base of lower dimension onto which the data can be projected and in which the data are as spread out as possible. The idea is that if the data are all in the same place, it is hard to distinguish one datapoint from another, whereas if they are spread out, it is easier to differentiate them. In more technical terms, this method aims at finding a new base composed of the directions of **highest variance** for the given data.
To compute the new base, the idea is to model the data with a multivariate normal distribution (this is a strong prior). From it, we infer a covariance matrix. Its eigenvectors are going to be the axes of highest variance, and the corresponding eigenvalues are the variances of the data along those axes. If you want a better understanding of what the eigenvectors of the covariance matrix are, please read this [quora answer](https://www.quora.com/What-is-an-eigenvector-of-a-covariance-matrix). The eigenvectors are going to form the new base. To reduce the dimensionality, one just needs to remove the eigenvectors with the smallest eigenvalues (which also gives information about how much information is lost in the process).
Now, you may have noticed one thing. Since the new base vectors are linear combinations of the initial base vectors (that is to say, the features), the new base will most certainly lose its semantic meaning ($0.7 \times age + 0.4 \times weight$ is a variable rather hard to interpret).
## Implementation steps
- generate the dataset
- compute the covariance matrix from the data
- diagonalize the latter
- find the eigenvectors and sort them according to their eigenvalues
- plot the eigenvectors and the data projected on different sub-spaces
Note that most of the code is not dimension agnostic. You will need to modify some variables/part of the code to change the dimensionality.
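The steps above can be sketched in a dimension-agnostic way with plain NumPy (a minimal sketch, not the notebook's implementation; the 3-D covariance matrix used to generate data is made up for illustration):

```python
import numpy as np

def pca(data, n_components):
    """Project `data` (n_samples x n_features) onto its top principal axes."""
    centered = data - data.mean(axis=0)
    cov = np.cov(centered.T)                         # covariance matrix
    eigenvalues, eigenvectors = np.linalg.eigh(cov)  # eigh: cov is symmetric
    order = eigenvalues.argsort()[::-1]              # sort by decreasing variance
    eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]
    basis = eigenvectors[:, :n_components]           # drop smallest-variance axes
    return centered @ basis, eigenvalues

rng = np.random.default_rng(0)
data = rng.multivariate_normal([0, 0, 0],
                               [[3, 1, 0], [1, 2, 0], [0, 0, 0.1]], size=500)
projected, variances = pca(data, n_components=2)
print(projected.shape)                # (500, 2)
print(variances[0] >= variances[-1])  # True: sorted by decreasing variance
```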
```
# to display interactive plots within the notebook
%matplotlib notebook
# to define the size of the plotted images
from pylab import rcParams
rcParams['figure.figsize'] = (10, 8)
import matplotlib.pyplot as plt
import numpy as np
from fct import generate_multivariate, normalize, plot_3d, plot_2d, Arrow3D
```
## Generate the data
We generate random variables using a multivariate Gaussian distribution. This corresponds to the perfect case (the data actually follow a multivariate normal).
```
data = generate_multivariate(size=500, dimension=3)
plot_3d(data)
```
## Calculate the covariance matrix and get the eigen values/vectors
RQ: It is interesting to note that the obtained covariance matrix (`cov` in the code) is not necessarily the same as the one used to generate the data. This is because we use a finite number of datapoints to infer the matrix. If this number stretches towards infinity, the computed covariance matrix and the original one will be identical.
```
# build the covariance matrix from the randomly generated data
cov = np.cov(data.T)
# get its eigen values and vectors
eigenvalues, eigenvectors = np.linalg.eig(cov)
# sorting the eigenvalues
idx = eigenvalues.argsort()[::-1]
eigenvalues = eigenvalues[idx]
eigenvectors = eigenvectors[:,idx]
```
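The remark above can be checked empirically (a quick self-contained sketch with a made-up true covariance matrix; the exact error values depend on the random seed):

```python
import numpy as np

rng = np.random.default_rng(42)
true_cov = np.array([[2.0, 0.5],
                     [0.5, 1.0]])

errors = {}
for size in (50, 5000, 500000):
    sample = rng.multivariate_normal([0.0, 0.0], true_cov, size=size)
    # max absolute entry-wise deviation of the estimated covariance matrix
    errors[size] = np.abs(np.cov(sample.T) - true_cov).max()
    print(size, errors[size])
```

The error shrinks as the sample size grows, illustrating that the estimated matrix converges to the true one.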
## Plot the eigen vectors on the data
This part is not dimension agnostic, if you change the dimension when generating the data, you will need to change this part of the code.
```
fig = plt.figure()
# RQ: fig.gca(projection='3d') was removed in Matplotlib 3.6;
# add_subplot is the equivalent replacement
ax = fig.add_subplot(projection='3d')
data_t = data.T
maxi = max([max(abs(el)) for el in data])
ax.set_xlim([-maxi, maxi])
ax.set_ylim([-maxi, maxi])
ax.set_zlim([-maxi, maxi])
ax.scatter(data_t[0], data_t[1], data_t[2], alpha=0.2)
plt.title('Data with vectors of the new base.')
for vector in eigenvectors.T:  # eigenvectors are the *columns* of the matrix
# vectors are made bigger to better visualize them
vector_plt = 2 * vector
a = Arrow3D([0, vector_plt[0]],[0, vector_plt[1]],[0, vector_plt[2]],
mutation_scale=20, lw=1, arrowstyle="-|>", color="r")
ax.add_artist(a)
```
Now let's check that the eigen vectors form indeed a base
```
# The new vectors might not seem orthogonal because of scaling issues so
# here is a proof.
# Note that sometimes, as computers tend to have issues with
# floating numbers, you might not get 0.0 but a very very
# small number (10^-16 for instance).
v1, v2, v3 = eigenvectors.T  # unpack the columns (the eigenvectors), not the rows
print(sum(v1 * v2))
print(sum(v1 * v3))
print(sum(v2 * v3))
```
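The same check can be written as one matrix product (a small self-contained sketch with a stand-in symmetric matrix):

```python
import numpy as np

# For a symmetric matrix with distinct eigenvalues, np.linalg.eig returns
# orthonormal eigenvectors as columns, so V.T @ V is the identity
# (up to floating-point error).
cov = np.array([[2.0, 0.5, 0.0],
                [0.5, 1.0, 0.3],
                [0.0, 0.3, 1.5]])
_, V = np.linalg.eig(cov)
print(np.allclose(V.T @ V, np.eye(3)))  # True
```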
## Projection of the data on the different planes
To project we use the following formula:
$\mathbf{x} \cdot \mathbf{y} = \sum_{i} x_i y_i$
This part of the code could be greatly improved by using matrix multiplication.
```
def projection(data, vectors):
"""Return the dataset projected on the two vectors given."""
v1, v2 = vectors
data_projected = []
for datapoint in data:
# we use a scalar product to project on the new base (v1, v2)
# RQ: the multiplication datapoint * v is only possible
# because datapoint is a ndarray.
new_data = []
new_data.append(sum(datapoint * v1))
new_data.append(sum(datapoint * v2))
data_projected.append(np.array(new_data))
return data_projected
def plot_projection(data, vectors, title=''):
data_projected = projection(data, vectors)
fig = plt.figure()
maxi = max([max(abs(el)) for el in data])
plot_2d(data_projected, color='b', alpha=1, maxi=maxi, fig=fig, title=title)
plot_projection(data, [v1, v2], title='two best vectors')
plot_projection(data, [v1, v3], title='best and worst vectors')
plot_projection(data, [v2, v3], title='two worst vectors')
```
The scale used to represent the three projections is the same.
We can notice that the datapoints are more spread out in the first plot than in the last one.
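As mentioned earlier, the projection loop can be replaced with a single matrix multiplication (a self-contained sketch with stand-in data and base vectors):

```python
import numpy as np

# Vectorized equivalent of the projection loop: stack the two base vectors
# as columns and multiply once. Shapes: (n, 3) @ (3, 2) -> (n, 2).
rng = np.random.default_rng(1)
data = rng.normal(size=(500, 3))
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])

projected = data @ np.column_stack([v1, v2])
print(projected.shape)  # (500, 2)

# Matches the loop-and-scalar-product version row by row:
loop_version = np.array([[d @ v1, d @ v2] for d in data])
print(np.allclose(projected, loop_version))  # True
```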
[Watson Knowledge Studio](https://www.ibm.com/marketplace/cloud/supervised-machine-learning/us/en-us) is a tool from IBM that allows you to train a natural language model on your own custom domain. Documents from a domain are uploaded to the service and annotated using a custom system of entities and relationships. The end result is a model that can be exported to the Alchemy API.
For the Voice of the Customer use case, demonstrated by the demo in this repository, the files for training a customized Alchemy model through WKS can be found under the ``data`` folder. In order to reproduce the steps on your own account, follow step 2 of this notebook to acquire a WKS account. After you have signed up, follow the steps:
1. Upload the types.json file under the ``Type System`` > ``Entity Types`` tab.
2. Upload the dictionary.zip file under the ``Dictionaries`` tab.
3. Upload the documents.zip file under the ``Documents`` tab.
4. Create a ``Machine Learning`` annotator under the ``Annotator Component`` tab.
5. Once the annotator has finished training, click on the Detail button and then ``Take Snapshot``.
6. After a Snapshot of the annotator is generated, click on ``Deploy`` and provide your Alchemy API key.
Following the above steps will allow you to replicate the customized Alchemy model created for this use case. In case you want to create a customized Alchemy model for your own domain data, follow the steps below, which detail how to create entity types, relations, and ground truth, and how to train the model.
# 1. Making sure you have the data you need
The first step for creating your customized natural language model is to make sure you have the data that you will use to train it. You need text that is typical of the data that you are going to be using for this application. For this example, Amazon Product Reviews are used as the data for training the WKS model.
Generally, you would want to have as many examples of text as feasible for you to annotate. It is recommended that you have several hundred examples. For this sample application 100 reviews were annotated.
# 2. Signing up for WKS
The next step is to go to Watson Knowledge Studio and create an account. One option is to create a trial account. Click on the "Free 30-Day Trial" button on the webpage provided in the previous step. After submitting the form to create an account, a service instance will be provisioned. This step usually takes a few minutes, but you should receive an email once it is completed.
# 3. Annotation step
Once you receive the confirmation email, you are ready for the next step. If you are annotating alone, go ahead and start the service up. If you plan on annotating as a team, you need to expand the option box and use the "Add new user:" option. If you want them to only annotate, leave the box unchecked, but if you want a team member to help configure anything about the documents, the type system, or the annotator, check the "Make administrator" button.
Once you have started the service, click on the URL that is presented; it will lead you to the instance of WKS that you just started. From there, you are ready to start your project. Create a new project. You will need to give it a name, but you do not need to worry about any of the other options.
## Creating entities and relationships
From there, you’ll be taken to your project dashboard. The first screen that you see is the type system management screen. Here you can see all of the entities and relationships (none yet) that you have defined for this project. At this point, you need to start getting a little creative. For each type of data that you want to process at a later point, you need to define an entity. For instance, if you want information about the features of a product, information on customer service, and information on the defects people find, so you would create ``Feature``, ``Customer_Service``, and ``Defect`` entities.
You will most likely not need to define subtypes or mentions for the entities, but if you would like to get more sophisticated, you can. ``Roles`` are the ways that entities can act like other entities. For instance, a camera is a product, but it can also be a feature of something, so if somebody is talking about their phone's camera, it could be a product that has the role of a feature. ``Subtypes`` are classes that are a more specific type of another class; for example, ``online customer serv`` is a subtype of ``customer service``. For more information on these, please visit the [WKS documentation](https://www.ibm.com/watson/developercloud/doc/wks/wks_overview.shtml) on Type Systems.
## Uploading your data
In order to upload the data to be annotated, click the "Documents" link in the top bar of the page. This is where you can upload the .csv files containing your data. Click import, find the files, and import them into WKS. It seems counter-intuitive, but you cannot use those as they are; your texts need to be divided into document sets. Click the “Create Sets” button; from there you can define everything about the sets you are about to create. ``Overlap`` is the percentage of documents that are shared between each set. If there is more than one annotator assigned to your project, this is used to measure how much agreement there is between the different annotators. If you are annotating by yourself, it may be useful to see how consistent you are, but it is recommended that you set it low so you are not annotating the same thing again and again. We have found that it is almost always better to create lots of little sets, rather than one large one. The machine learning component will not accept documents from sets that aren't completely annotated, so small sets will let you add to the machine learning component more frequently.
## Creating dictionaries
Now that you have the documents all set up, you can add dictionaries if you want. Dictionaries are not necessary, but if you find yourself annotating the same word over and over again, they can help. Dictionaries will pre-annotate case-sensitive string matches to those in the entry. If you would like to use them, you can create one and add entries to it. The ``surface`` form of a word is the inflected version of a word that may show up, while the ``lemma`` is the base form of the word. "Running" is one surface form that can result from the lemma "Run".
It is important to note that, if you would like to use dictionaries, you need to run the dictionary pre-annotator on your document sets before you start annotating. For more information on that please visit the WKS documentation on Dictionaries.
## Annotating documents
Now is finally the time to annotate. Follow the "Human Annotation" link in the top bar to arrive at the annotation page. From there, you can check on the status of annotation tasks as well as configure hotkeys and some cosmetics of the editor. Click on the "Add Task" button and add all of the document sets that you would like to annotate. Once you're all set up, click on one of the document sets, then on one of the documents, to start annotating.
Anytime in your text that some word is mentioned that fits into one of the entity types that you defined, click on the word and then click (or press the hotkey of) the type on the side bar. Once you have combed through the entire text picking out entity types, you can annotate relationships (in case you defined any). Select the ``relationships`` view on the left side then click the first entity in the relationship, then the second, then click (or press the hotkey of) the relationship type in the sidebar.
Now you can annotate something called ``coreference``. Coreference occurs when different expressions refer to the same thing. For instance, in the sentence "Joe went to the store because he was out of milk", both ``Joe`` and ``he`` are coreferents; they refer to the same entity. So, anytime that the exact same thing is mentioned more than once (by different referring expressions or pronouns), you can mark it as a coreference chain. In order to do this, click each word that is referring to the same thing, and then when you're all done, double click the last word. There are also options to combine or delete chains on the right side.
## Creating the ground truth for training
Once you are finished annotating, it is time to accept those annotations to the ``Ground Truth``. Navigate back to the screen where you can check the status of your annotation task and accept the document sets. If there was any overlap, there will most likely be conflicts between those documents. You can use the provided tool to check those by going through and accepting the correct annotations. It is important to be careful on this step. If one document set is accepted, and then another one is accepted at a later time, the second will overwrite the first. If you have a high document overlap, that would result in the second set overwriting everything in the first set with no chance to resolve conflicts.
After that last step, you should have a ground truth all set up, which means you are ready to let the machine start learning. Navigate to "Annotator Component" using the top bar. Click the button to create an annotator and select the option to create the machine learning annotator. Then, select all of the document sets that you have annotated, and define a training/testing split (the default is usually good). Click the option to train and evaluate, then come back in a couple of minutes to see how you did.
# 4. Publishing your model to Alchemy
Go into the detail view of the machine annotator and click on the "Take Snapshot" button. It will take a few minutes in order to save the snapshot. When it completes, click "Deploy". Take your API key from the Alchemy Language API in Bluemix (which you should have already created under the ``Advanced`` plan), and insert into the correct field. You should now have the model-ID of a machine learning powered natural language processing model that you can plug into Alchemy Language to use instead of the available standard model.
# Secure Data Science on AWS
The most common security considerations for building secure data science projects in the cloud touch the areas of compute and network isolation, authentication and authorization, data encryption, artifact management, auditability, monitoring and governance.
```
import boto3
region = boto3.Session().region_name
session = boto3.session.Session()
ec2 = boto3.Session().client(service_name='ec2', region_name=region)
sm = boto3.Session().client(service_name='sagemaker', region_name=region)
```
## Retrieve the Notebook Instance Name
```
import json
notebook_instance_name = None
try:
with open('/opt/ml/metadata/resource-metadata.json') as notebook_info:
data = json.load(notebook_info)
resource_arn = data['ResourceArn']
region = resource_arn.split(':')[3]
notebook_instance_name = data['ResourceName']
print('Notebook Instance Name: {}'.format(notebook_instance_name))
except:
print('+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++')
print('[ERROR]: COULD NOT RETRIEVE THE NOTEBOOK INSTANCE METADATA.')
print('+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++')
```
# Compute and Network Isolation
This SageMaker notebook instance has been set up **without** Internet access. The notebook instance runs within a VPC without Internet connectivity but still maintains access to specific AWS services such as Elastic Container Registry and Amazon S3. Access to a shared services VPC has also been configured to allow connectivity to a centralized repository of Python packages.
```
response = sm.describe_notebook_instance(
NotebookInstanceName=notebook_instance_name
)
print(response)
```
## Review The Following Settings
```
print('SubnetId: {}'.format(response['SubnetId']))
print('SecurityGroups: {}'.format(response['SecurityGroups']))
print('IAM Role: {}'.format(response['RoleArn']))
print('NetworkInterfaceId: {}'.format(response['NetworkInterfaceId']))
print('DirectInternetAccess: {}'.format(response['DirectInternetAccess']))
```
## Verify That Internet Access Is Disabled
Expected result:
You should see a timeout without a path to the Internet or a proxy server.
```Failed to connect to aws.amazon.com port 443: Connection timed out```
```
!curl https://www.datascienceonaws.com/
```
By removing public internet access in this way, we have created a secure environment where all the dependencies are installed, but the notebook now has no way to access the internet, and internet traffic cannot reach the notebook either.
# Authentication and Authorization
SageMaker notebooks need to be assigned a role for accessing AWS services. Fine grained access control over which services a SageMaker notebook is allowed to access can be provided using Identity and Access Management (IAM).
To control access at a user level, data scientists should typically not be allowed to create notebooks, provision or delete infrastructure. In some cases, even console access can be removed by creating PreSigned URLs, that directly launch a hosted Jupyter environment for data scientists to use from their laptops.
Moreover, admins can use resource [tags for attribute-based access control (ABAC)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html) to ensure that different teams of data scientists, with the same high-level IAM role, have different access rights to AWS services, such as only allowing read/write access to specific S3 buckets which match tag criteria.
For customers with even more stringent data and code segregation requirements, admins can provision different accounts for individual teams and manage the billing from these accounts in a centralized Organizational Unit.
## Review IAM Role and Region For This Notebook Instance
```
import sagemaker
sess = sagemaker.Session()
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
sm = boto3.Session().client(service_name='sagemaker', region_name=region)
print("IAM Role: {}".format (role))
print("Region: {}".format(region))
```
## TODO: List IAM Role and Policies
## Grant `Least Privilege` for IAM Roles and Policies
IAM roles and policies help you control access to AWS resources. You create policies which define permissions, and attach the policies to IAM users, groups of users, or roles.
Policies types include identity-based and resource-based policies among others. Identity-based policies are tied to an identity, such as IAM users or roles. In contrast, resource-based policies are attached to a resource such as an Amazon S3 bucket.
Here is a sample policy attached to the SageMaker notebook instance IAM role. This policy restricts the IAM Role to one specific S3 bucket.
## TODO: Attach this policy to IAM Role
```
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:ListObject"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::sagemaker-us-east-1-123456789-secure",
"arn:aws:s3:::sagemaker-us-east-1-123456789-secure/*"
]
}
]
}
```
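One way to attach an inline policy like this to the notebook's role is `iam.put_role_policy`. Below is a hedged sketch: the policy name is a placeholder, and the actual call is commented out so the cell is safe to run without IAM write permissions. (The sketch uses `s3:ListBucket`, the valid IAM action name for listing a bucket's contents.)

```python
import json

# Least-privilege inline policy restricting S3 access to one bucket.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Effect": "Allow",
        "Resource": [
            "arn:aws:s3:::sagemaker-us-east-1-123456789-secure",
            "arn:aws:s3:::sagemaker-us-east-1-123456789-secure/*",
        ],
    }],
}

# Uncomment to attach the policy (requires iam:PutRolePolicy permission):
# iam = boto3.client("iam")
# iam.put_role_policy(
#     RoleName=role.split("/")[-1],    # role ARN from sagemaker.get_execution_role()
#     PolicyName="S3LeastPrivilege",   # placeholder policy name
#     PolicyDocument=json.dumps(least_privilege_policy),
# )
print(json.dumps(least_privilege_policy, indent=2))
```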
## Let's try to copy data to this S3 bucket
```
!echo s3://$bucket-secure/
!aws s3 cp security.ipynb s3://$bucket-secure/
```
## Let's try to copy data over to a different S3 bucket!
```
!aws s3 cp ./security.ipynb s3://$bucket/
```
# Train the Model
## Train Without a VPC Configured
To test the networking controls, run the cell below. Here you will first attempt to train the model without an associated network configuration. You should see that the training job is stopped around the same time the "Downloading - Downloading input data" message is emitted.
#### Detective control explained
The training job was terminated by an AWS Lambda function that was executed in response to a CloudWatch Event that was triggered when the training job was created.
To learn more about how the detective control does this, assume the role of the Data Science Administrator and review the code of the [AWS Lambda function SagemakerTrainingJobVPCEnforcer](https://console.aws.amazon.com/lambda/home?#/functions/SagemakerTrainingJobVPCEnforcer?tab=configuration).
You can also review the [CloudWatch Event rule SagemakerTrainingJobVPCEnforcementRule](https://console.aws.amazon.com/cloudwatch/home?#rules:name=SagemakerTrainingJobVPCEnforcementRule) and take note of the event which triggers execution of the Lambda function.
---
```
from sagemaker.amazon.amazon_estimator import get_image_uri
image = get_image_uri(boto3.Session().region_name, 'xgboost', '0.90-1')
s3_input_train = sagemaker.s3_input(s3_data='s3://sagemaker-workshop-cloudformation-{}/quickstart/train_data.csv'.format (region), content_type='csv')
s3_input_test = sagemaker.s3_input(s3_data='s3://sagemaker-workshop-cloudformation-{}/quickstart/test_data.csv'.format (region), content_type='csv')
print ("Training data at: {}".format (s3_input_train.config['DataSource']['S3DataSource']['S3Uri']))
print ("Test data at: {}".format (s3_input_test.config['DataSource']['S3DataSource']['S3Uri']))
xgb = sagemaker.estimator.Estimator(
image,
role,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
train_max_run=3600,
output_path='s3://{}/{}/models'.format(output_bucket, prefix),
sagemaker_session=sess,
train_use_spot_instances=True,
train_max_wait=3600,
encrypt_inter_container_traffic=False
)
xgb.set_hyperparameters(
max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
verbosity=0,
objective='binary:logistic',
num_round=100)
xgb.fit(inputs={'train': s3_input_train})
```
# Train with VPC
This time, provide the training job with the network settings that were defined above. You shouldn't see the **Client Error** from before!
```
s3_input_train = sagemaker.s3_input(s3_data='s3://{}/{}/'.format(data_bucket, traindataprefix), content_type='csv')
s3_input_test = sagemaker.s3_input(s3_data='s3://{}/{}/'.format(data_bucket, testdataprefix), content_type='csv')
print ("Training data at: {}".format (s3_input_train.config['DataSource']['S3DataSource']['S3Uri']))
print ("Test data at: {}".format (s3_input_test.config['DataSource']['S3DataSource']['S3Uri']))
preprocessing_trial_component = tracker.trial_component
trial_name = f"cc-fraud-training-job-{int(time.time())}"
cc_trial = Trial.create(
trial_name=trial_name,
experiment_name=cc_experiment.experiment_name,
sagemaker_boto_client=sm)
cc_trial.add_trial_component(preprocessing_trial_component)
cc_training_job_name = "cc-training-job-{}".format(int(time.time()))
xgb = sagemaker.estimator.Estimator(
image,
role,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
train_max_run=3600,
output_path='s3://{}/{}/models'.format(output_bucket, prefix),
sagemaker_session=sess,
train_use_spot_instances=True,
train_max_wait=3600,
subnets=subnets,
security_group_ids=sec_groups,
train_volume_kms_key=cmk_id,
encrypt_inter_container_traffic=False
)
xgb.set_hyperparameters(
max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
verbosity=0,
objective='binary:logistic',
num_round=100)
xgb.fit(
inputs={'train': s3_input_train},
job_name=cc_training_job_name,
experiment_config={
"TrialName":
cc_trial.trial_name, #log training job in Trials for lineage
"TrialComponentDisplayName": "Training",
},
wait=True,
)
```
# Encryption
To ensure that the processed data is encrypted at rest on the processing cluster, we provide a customer managed key via the volume_kms_key parameter below. This instructs Amazon SageMaker to encrypt the EBS volumes used during the processing job with the specified key. Since the data stored in our Amazon S3 buckets is already encrypted, data is encrypted at rest at all times.
Amazon SageMaker always uses TLS-encrypted tunnels, so data is also encrypted in transit when traveling to or from Amazon S3.
```
## Use SageMaker Processing with SKLearn. -- combine data into train and test at this stage if possible.
from sagemaker.sklearn.processing import SKLearnProcessor
sklearn_processor = SKLearnProcessor(
framework_version='0.20.0',
role=role,
instance_type='ml.c4.xlarge',
instance_count=1,
network_config=network_config, # attach SageMaker resources to your VPC
volume_kms_key=cmk_id # encrypt the EBS volume attached to SageMaker Processing instance
)
from sagemaker.processing import ProcessingInput, ProcessingOutput
sklearn_processor.run(
code=codeupload,
inputs=[
ProcessingInput(
source=raw_data_location,
destination='/opt/ml/processing/input'
)
],
outputs=[
ProcessingOutput(
output_name='train_data',
source='/opt/ml/processing/train',
destination=train_data_location),
ProcessingOutput(
output_name='test_data',
source='/opt/ml/processing/test',
destination=test_data_location),
ProcessingOutput(
output_name='train_data_headers',
source='/opt/ml/processing/train_headers',
destination=train_header_location)
],
arguments=['--train-test-split-ratio', '0.2'])
preprocessing_job_description = sklearn_processor.jobs[-1].describe()
output_config = preprocessing_job_description['ProcessingOutputConfig']
for output in output_config['Outputs']:
if output['OutputName'] == 'train_data':
preprocessed_training_data = output['S3Output']['S3Uri']
if output['OutputName'] == 'test_data':
preprocessed_test_data = output['S3Output']['S3Uri']
```
# Model development and Training
```
# Store the values used in this notebook for use in the second demo notebook:
trial_name = trial_name
experiment_name = cc_experiment.experiment_name
training_job_name = cc_training_job_name
%store trial_name
%store experiment_name
%store training_job_name
```
### We have data about users who hit our site: whether they converted or not, as well as some of their characteristics such as their country, the marketing channel, their age, whether they are repeat users, and the number of pages visited during that session (as a proxy for site activity/time spent on site).
### Your project is to:
#### (1) Predict conversion rate
#### (2) Come up with recommendations for the product team and the marketing team to improve conversion rate.
### Load the packages to be used
```
import pandas as pd
pd.set_option("display.max_columns", 10)
pd.set_option("display.width", 350)
import matplotlib.pyplot as plt
from matplotlib import rcParams
rcParams.update({"figure.autolayout": True})
import seaborn as sns
sns.set(style = "white")
sns.set(style = "whitegrid", color_codes = True)
import numpy as np
from sklearn.metrics import confusion_matrix, auc, roc_curve, classification_report
import h2o
from h2o.frame import H2OFrame
from h2o.estimators.random_forest import H2ORandomForestEstimator
from h2o.grid.grid_search import H2OGridSearch
```
### Read in the data set
```
#### Read file
dat0 = pd.read_csv("../Datasets/conversion_data.csv")
print(dat0.head())
```
### Look into data set
#### Inspect the data to look for weird behavior/wrong data
```
print(dat0.describe())
print(dat0.groupby(["country"]).size())
print(dat0.groupby(["source"]).size())
```
Everything seems reasonable except for the max age of $123$ years.
```
print(sorted(dat0["age"].unique(), reverse = True))
```
Those $123$ and $111$ values seem unrealistic.
```
print(dat0[dat0["age"] > 80])
```
It is just $2$ users. We may remove them directly.
```
dat = dat0[dat0["age"] < 110]
```
### Visualization
#### By country
```
grp_country = dat[["country", "converted"]].groupby("country").mean().reset_index()
plt.figure(figsize = [12, 6])
sns.barplot(x = "country", y = "converted", data = grp_country, palette = "PuBuGn")
plt.title("Conversion Rate by Country", fontsize = 16)
plt.xlabel("Country", fontsize = 12)
plt.ylabel("Conversion Rate", fontsize = 12)
```
It looks like Chinese users convert at a much lower rate than users from other countries.
#### By marketing channels
```
grp_source = dat[["source", "converted"]].groupby("source").mean().reset_index()
plt.figure(figsize = [12, 6])
sns.barplot(x = "source", y = "converted", data = grp_source, palette = "PuBuGn")
plt.title("Conversion Rate by Source", fontsize = 16)
plt.xlabel("Source", fontsize = 12)
plt.ylabel("Conversion Rate", fontsize = 12)
```
#### By new vs. returning users
```
grp_newuser = dat[["new_user", "converted"]].groupby("new_user").mean().reset_index()
plt.figure(figsize = [12, 6])
sns.barplot(x = "new_user", y = "converted", data = grp_newuser, palette = "PuBuGn")
plt.title("Conversion Rate by User Type", fontsize = 16)
plt.xlabel("New User", fontsize = 12)
plt.ylabel("Conversion Rate", fontsize = 12)
```
#### By total pages visited
```
dat[["total_pages_visited", "converted"]].groupby("total_pages_visited").mean().plot()
plt.title("Conversion Rate by Total Pages Visited", fontsize = 16)
plt.xlabel("Total Pages Visited", fontsize = 12)
plt.ylabel("Conversion Rate", fontsize = 12)
```
#### By age
```
dat[["age", "converted"]].groupby("age").mean().plot()
plt.title("Conversion Rate by Age", fontsize = 16)
plt.xlabel("Age", fontsize = 12)
plt.ylabel("Conversion Rate", fontsize = 12)
```
### Machine Learning Model
```
#### Initialize H2O cluster
h2o.init()
h2o.remove_all()
```
#### Convert the data set to the h2o frame and convert all categorical variables to factors
```
dat_h2o = H2OFrame(dat)
dat_h2o["converted"] = dat_h2o["converted"].asfactor()
dat_h2o["country"] = dat_h2o["country"].asfactor()
dat_h2o["source"] = dat_h2o["source"].asfactor()
dat_h2o["new_user"] = dat_h2o["new_user"].asfactor()
dat_h2o.summary()
```
#### Split data set to the training set and testing set
```
index = dat_h2o["converted"].stratified_split(test_frac = 0.34, seed = 2019)
train_dat = dat_h2o[index == "train"]
test_dat = dat_h2o[index == "test"]
X0 = ["country", "source", "new_user", "age", "total_pages_visited"]
Y = "converted"
```
#### Build a random forest model
```
RF0 = H2ORandomForestEstimator(balance_classes = False, ntrees = 100, max_depth = 20,
mtries = -1, seed = 2019, score_each_iteration = True)
RF0.train(x = X0, y = Y, training_frame = train_dat)
```
#### Examine if overfitting (though it's unlikely for a random forest)
```
train_true = train_dat.as_data_frame()["converted"].values
test_true = test_dat.as_data_frame()["converted"].values
train_pred = RF0.predict(train_dat).as_data_frame()["predict"].values
test_pred = RF0.predict(test_dat).as_data_frame()["predict"].values
train_confusion = confusion_matrix(train_true, train_pred)
print(train_confusion/train_confusion.astype(float).sum(axis = 1, keepdims = True))
test_confusion = confusion_matrix(test_true, test_pred)
print(test_confusion/test_confusion.astype(float).sum(axis = 1, keepdims = True))
train_fpr, train_tpr, _ = roc_curve(train_true, train_pred)
test_fpr, test_tpr, _ = roc_curve(test_true, test_pred)
train_auc = np.round(auc(train_fpr, train_tpr), 3)
test_auc = np.round(auc(test_fpr, test_tpr), 3)
print(classification_report(y_true=test_true, y_pred = (test_pred > 0.5).astype(int)))
```
#### Variable importance
```
RF0.varimp_plot()
```
From the above plot, the most important feature is the total page visits, which makes sense. Generally, users who viewed more pages have a higher willingness to pay for the product/service. But it's not very actionable, since we can do practically nothing about that feature. Let's remove it and redo the random forest.
#### Remove the total_pages_visited variable and redo the analysis
```
X = ["country", "source", "new_user", "age"]
Y = "converted"
RF = H2ORandomForestEstimator(balance_classes = False, ntrees = 100, max_depth = 20,
mtries = -1, seed = 2019, score_each_iteration = True)
RF.train(x = X, y = Y, training_frame = train_dat)
train_true = train_dat.as_data_frame()["converted"].values
test_true = test_dat.as_data_frame()["converted"].values
train_pred = RF.predict(train_dat).as_data_frame()["predict"].values
test_pred = RF.predict(test_dat).as_data_frame()["predict"].values
train_confusion = confusion_matrix(train_true, train_pred)
print(train_confusion/train_confusion.astype(float).sum(axis = 1, keepdims = True))
test_confusion = confusion_matrix(test_true, test_pred)
print(test_confusion/test_confusion.astype(float).sum(axis = 1, keepdims = True))
train_fpr, train_tpr, _ = roc_curve(train_true, train_pred)
test_fpr, test_tpr, _ = roc_curve(test_true, test_pred)
train_auc = np.round(auc(train_fpr, train_tpr), 3)
test_auc = np.round(auc(test_fpr, test_tpr), 3)
print (classification_report(y_true=test_true, y_pred = (test_pred > 0.5).astype(int)))
RF.varimp_plot()
RF.partial_plot(data = train_dat, cols = ["new_user", "country", "age", "source"], plot = True)
```
In terms of model performance, the new model is worse than the previous one, but it yields a valuable insight: the conversion rate among returning users was quite high.
The project asks for insights on how to improve the conversion rate, so model performance is less important here. (The accuracy of the refitted model is even worse than a random guess; in any case, we should never evaluate a classifier by accuracy on data with an unbalanced outcome variable.)
For how to compare classifiers and decide on a cut-off, please see my solution using R.
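As a self-contained illustration (with toy scores, not the conversion data), choosing a cut-off can be done by scanning candidate thresholds and keeping the one that maximizes a criterion such as F1:

```python
# Toy example of threshold selection: scan cut-offs on predicted scores
# and pick the one with the best F1. The labels and scores below are
# invented for the demo.

def f1_at_threshold(y_true, scores, thr):
    pred = [1 if s >= thr else 0 for s in scores]
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, y_true))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, y_true))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, y_true))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = [0, 0, 0, 1, 0, 1, 1, 1]
scores = [0.1, 0.2, 0.3, 0.35, 0.4, 0.6, 0.7, 0.9]
best_thr = max((t / 100 for t in range(1, 100)),
               key=lambda t: f1_at_threshold(y_true, scores, t))
```

The same scan can of course be done against any metric (cost-weighted error, Youden's J, and so on) depending on the business objective.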
```
import sys
sys.path.append("../codes/")
from Readfiles import getFnames
from DCdata import readReservoirDC
from pymatsolver import PardisoSolver
%pylab inline
from SimPEG.EM.Static import DC
from SimPEG import EM
from SimPEG import Mesh
from SimPEG.Survey import Data
def removeRxsfromDC(survey, inds, DClow=-np.inf, DChigh=np.inf, surveyType="2D"):
srcList = survey.srcList
srcListNew = []
dobs = survey.dobs
dobs[inds] = np.nan
data = Data(survey, survey.dobs)
rxData = []
for iSrc, src in enumerate(srcList):
rx = src.rxList[0]
data_temp = data[src, rx]
rxinds = np.isnan(data_temp) | (np.logical_or(DClow>data_temp, DChigh<data_temp))
nrxact_temp = rxinds.sum()
nrx_temp = len(rxinds)
rxlocM = rx.locs[0]
rxlocN = rx.locs[1]
srcloc = src.loc
rxData.append(data_temp[~rxinds])
# All Rxs are active
if nrxact_temp == 0:
if surveyType == "2D":
rxNew = DC.Rx.Dipole_ky(rxlocM, rxlocN)
else:
rxNew = DC.Rx.Dipole(rxlocM, rxlocN)
srcNew = DC.Src.Dipole([rxNew], srcloc[0], srcloc[1])
srcListNew.append(srcNew)
# All Rxs are nan then remove src
elif nrx_temp == nrxact_temp:
print ("Remove %i-th Src") % (iSrc)
# Some Rxs are not active
else:
if surveyType == "2D":
rxNew = DC.Rx.Dipole_ky(rxlocM[~rxinds,:], rxlocN[~rxinds,:])
else:
rxNew = DC.Rx.Dipole(rxlocM[~rxinds,:], rxlocN[~rxinds,:])
srcNew = DC.Src.Dipole([rxNew], srcloc[0], srcloc[1])
srcListNew.append(srcNew)
if surveyType == "2D":
surveyNew = DC.Survey_ky(srcListNew)
else:
surveyNew = DC.Survey(srcListNew)
surveyNew.dobs = np.hstack(rxData)
return surveyNew
#EM.Static.Utils.StaticUtils.plot_pseudoSection?
fname = "../data/ChungCheonDC/20151130000000.apr"
survey = readReservoirDC(fname)
dobsAppres = survey.dobs
fig, ax = plt.subplots(1,1, figsize = (10, 2))
dat = EM.Static.Utils.StaticUtils.plot_pseudoSection(survey, ax)
cb = dat[2]
cb.set_label("Apparent resistivity (ohm-m)")
geom = np.hstack(dat[3])
dobsDC = dobsAppres * geom
plt.plot(abs(dobsAppres))
np.argwhere(dobsAppres>160.)
# print dobsAppres[6:]
# surveyNew = removeRxsfromDC(survey, [346], DClow=40, DChigh=145, surveyType="2D")
surveyNew = removeRxsfromDC(survey, [330], surveyType="2D")
surveyNew = removeRxsfromDC(survey, [338], surveyType="2D")
surveyNew = removeRxsfromDC(survey, [346], surveyType="2D")
surveyNew = removeRxsfromDC(survey, [351], surveyType="2D")
surveyNew = removeRxsfromDC(survey, [358], surveyType="2D")
surveyNew = removeRxsfromDC(survey, [339], surveyType="2D")
fig, ax = plt.subplots(1,1, figsize = (10, 2))
dat = EM.Static.Utils.StaticUtils.plot_pseudoSection(surveyNew, ax, dataType='volt', sameratio=False)
cb = dat[2]
cb.set_label("Apparent resistivity (ohm-m)")
geom = np.hstack(dat[3])
dobsDC = surveyNew.dobs * geom
surveyNew.dobs = dobsDC
fig, ax = plt.subplots(1,1, figsize = (10, 2))
dat = EM.Static.Utils.StaticUtils.plot_pseudoSection(surveyNew, ax, dataType='appResistivity', sameratio=False)
plt.plot(abs(dobsDC))
# problem = DC.Problem2D_CC(mesh)
cs = 2.5
npad = 6
hx = [(cs,npad, -1.3),(cs,160),(cs,npad, 1.3)]
hy = [(cs,npad, -1.3),(cs,20)]
mesh = Mesh.TensorMesh([hx, hy])
mesh = Mesh.TensorMesh([hx, hy],x0=[-mesh.hx[:6].sum()-0.25, -mesh.hy.sum()])
def from3Dto2Dsurvey(survey):
srcLists2D = []
nSrc = len(survey.srcList)
for iSrc in range (nSrc):
src = survey.srcList[iSrc]
locsM = np.c_[src.rxList[0].locs[0][:,0], np.ones_like(src.rxList[0].locs[0][:,0])*-0.75]
locsN = np.c_[src.rxList[0].locs[1][:,0], np.ones_like(src.rxList[0].locs[1][:,0])*-0.75]
rx = DC.Rx.Dipole_ky(locsM, locsN)
locA = np.r_[src.loc[0][0], -0.75]
locB = np.r_[src.loc[1][0], -0.75]
src = DC.Src.Dipole([rx], locA, locB)
srcLists2D.append(src)
survey2D = DC.Survey_ky(srcLists2D)
return survey2D
from SimPEG import (Mesh, Maps, Utils, DataMisfit, Regularization,
Optimization, Inversion, InvProblem, Directives)
# from pymatsolver import MumpsSolver
mapping = Maps.ExpMap(mesh)
survey2D = from3Dto2Dsurvey(surveyNew)
problem = DC.Problem2D_N(mesh, sigmaMap=mapping)
# Old statement
# problem = DC.Problem2D_N(mesh, mapping=mapping)
problem.pair(survey2D)
problem.Solver = PardisoSolver
m0 = np.ones(mesh.nC)*np.log(1e-2)
from ipywidgets import interact
nSrc = len(survey2D.srcList)
def foo(isrc):
figsize(10, 5)
mesh.plotImage(np.ones(mesh.nC)*np.nan, gridOpts={"color":"k", "alpha":0.5}, grid=True)
# isrc=0
src = survey2D.srcList[isrc]
plt.plot(src.loc[0][0], src.loc[0][1], 'bo')
plt.plot(src.loc[1][0], src.loc[1][1], 'ro')
locsM = src.rxList[0].locs[0]
locsN = src.rxList[0].locs[1]
plt.plot(locsM[:,0], locsM[:,1], 'ko')
plt.plot(locsN[:,0], locsN[:,1], 'go')
plt.gca().set_aspect('equal', adjustable='box')
interact(foo, isrc=(0, nSrc-1, 1))
pred = survey2D.dpred(m0)
# data_anal = []
# nSrc = len(survey.srcList)
# for isrc in range(nSrc):
# src = survey.srcList[isrc]
# locA = src.loc[0]
# locB = src.loc[1]
# locsM = src.rxList[0].locs[0]
# locsN = src.rxList[0].locs[1]
# rxloc=[locsM, locsN]
# a = EM.Analytics.DCAnalyticHalf(locA, rxloc, 1e-3, earth_type="halfspace")
# b = EM.Analytics.DCAnalyticHalf(locB, rxloc, 1e-3, earth_type="halfspace")
# data_anal.append(a-b)
# data_anal = np.hstack(data_anal)
survey.dobs = pred
fig, ax = plt.subplots(1,1, figsize = (10, 2))
dat = EM.Static.Utils.StaticUtils.plot_pseudoSection(surveyNew, ax, dataType='appResistivity', sameratio=False, scale="linear", clim=(0, 200))
out = hist(np.log10(abs(dobsDC)), bins = 100)
weight = 1./abs(mesh.gridCC[:,1])**1.5
mesh.plotImage(np.log10(weight))
survey2D.dobs = dobsDC
survey2D.eps = 10**(-2.3)
survey2D.std = 0.02
dmisfit = DataMisfit.l2_DataMisfit(survey2D)
regmap = Maps.IdentityMap(nP=int(mesh.nC))
reg = Regularization.Simple(mesh,mapping=regmap,cell_weights=weight)
opt = Optimization.ProjectedGNCG(maxIter=10)
opt.upper = np.log(1e0)
opt.lower = np.log(1./300)
invProb = InvProblem.BaseInvProblem(dmisfit, reg, opt)
# Create an inversion object
beta = Directives.BetaSchedule(coolingFactor=5, coolingRate=2)
betaest = Directives.BetaEstimate_ByEig(beta0_ratio=1e0)
target = Directives.TargetMisfit()
inv = Inversion.BaseInversion(invProb, directiveList=[beta, betaest, target])
problem.counter = opt.counter = Utils.Counter()
opt.LSshorten = 0.5
opt.remember('xc')
mopt = inv.run(m0)
xc = opt.recall("xc")
fig, ax = plt.subplots(1,1, figsize = (10, 1.2))
iteration = 4
sigma = mapping*xc[iteration]
dat = mesh.plotImage(1./sigma, grid=False, ax=ax, pcolorOpts={"cmap":"jet"}, clim=(0, 200))
ax.set_ylim(-30, 0)
ax.set_xlim(-10, 290)
plt.colorbar(dat[0])
print(np.log10(sigma).min(), np.log10(sigma).max())
1./sigma.max()
1./sigma.min()
surveyNew.dobs = invProb.dpred
fig, ax = plt.subplots(1,1, figsize = (10, 2))
#dat = EM.Static.Utils.StaticUtils.plot_pseudoSection(surveyNew, ax, dtype='appr', sameratio=False, clim=(40, 170))
dat = EM.Static.Utils.StaticUtils.plot_pseudoSection(surveyNew, ax, dataType='appResistivity', sameratio=False, clim=(40, 170))
surveyNew.dobs = dobsDC
fig, ax = plt.subplots(1,1, figsize = (10, 2))
dat = EM.Static.Utils.StaticUtils.plot_pseudoSection(surveyNew, ax, dataType='appResistivity', sameratio=False, clim=(40, 170))
surveyNew.dobs = abs(dmisfit.Wd*(dobsDC-invProb.dpred))
fig, ax = plt.subplots(1,1, figsize = (10, 2))
dat = EM.Static.Utils.StaticUtils.plot_pseudoSection(surveyNew, ax, dataType='volt', sameratio=False, clim=(0, 2))
# sigma = np.ones(mesh.nC)
modelname = "sigma1130Oct.npy"
np.save(modelname, sigma)
```
```
import bert
from bert import run_classifier
from bert import optimization
from bert import tokenization
from bert import modeling
import numpy as np
import json
import tensorflow as tf
import itertools
from unidecode import unidecode
import re
import sentencepiece as spm
# !git clone https://github.com/huseinzol05/Malaya-Dataset.git
# Change to your local Malaya-Dataset
import glob
files = glob.glob('../Malaya-Dataset/emotion/translated*')
files
texts, labels = [], []
for file in files:
with open(file) as fopen:
dataset = fopen.readlines()
print(len(dataset))
texts.extend(dataset)
labels.extend([file.split('/')[-1].split('-')[1]] * len(dataset))
files = glob.glob('../Malaya-Dataset/emotion/*malaysia.json')
files
import json
for file in files:
with open(file) as fopen:
dataset = json.load(fopen)
print(len(dataset))
texts.extend(dataset)
labels.extend([file.split('/')[-1].split('-')[0]] * len(dataset))
len(texts), len(labels)
np.unique(labels)
from sklearn.preprocessing import LabelEncoder
unique_labels = np.unique(labels).tolist()
labels = LabelEncoder().fit_transform(labels)
unique_labels
labels = labels.tolist()
from prepro_utils import preprocess_text, encode_ids, encode_pieces
sp_model = spm.SentencePieceProcessor()
sp_model.Load('sp10m.cased.v4.model')
with open('sp10m.cased.v4.vocab') as fopen:
v = fopen.read().split('\n')[:-1]
v = [i.split('\t') for i in v]
v = {i[0]: i[1] for i in v}
class Tokenizer:
def __init__(self, v):
self.vocab = v
def tokenize(self, string):
return encode_pieces(sp_model, string, return_unicode=False, sample=False)
def convert_tokens_to_ids(self, tokens):
return [sp_model.PieceToId(piece) for piece in tokens]
def convert_ids_to_tokens(self, ids):
return [sp_model.IdToPiece(i) for i in ids]
tokenizer = Tokenizer(v)
BERT_INIT_CHKPNT = 'pretraining_output3/model.ckpt-1000000'
BERT_CONFIG = 'checkpoint/small_config.json'
MAX_SEQ_LENGTH = 100
tokenizer.tokenize(texts[1])
list(v.keys())[:10]
from tqdm import tqdm
input_ids, input_masks, segment_ids = [], [], []
for text in tqdm(texts):
tokens_a = tokenizer.tokenize(text)
if len(tokens_a) > MAX_SEQ_LENGTH - 2:
tokens_a = tokens_a[:(MAX_SEQ_LENGTH - 2)]
tokens = ["<cls>"] + tokens_a + ["<sep>"]
segment_id = [0] * len(tokens)
input_id = tokenizer.convert_tokens_to_ids(tokens)
input_mask = [1] * len(input_id)
padding = [0] * (MAX_SEQ_LENGTH - len(input_id))
input_id += padding
input_mask += padding
segment_id += padding
input_ids.append(input_id)
input_masks.append(input_mask)
segment_ids.append(segment_id)
bert_config = modeling.BertConfig.from_json_file(BERT_CONFIG)
epoch = 10
batch_size = 60
warmup_proportion = 0.1
num_train_steps = int(len(texts) / batch_size * epoch)
num_warmup_steps = int(num_train_steps * warmup_proportion)
class Model:
def __init__(
self,
dimension_output,
learning_rate = 2e-5,
):
self.X = tf.placeholder(tf.int32, [None, None])
self.segment_ids = tf.placeholder(tf.int32, [None, None])
self.input_masks = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.int32, [None])
model = modeling.BertModel(
config=bert_config,
is_training=True,
input_ids=self.X,
input_mask=self.input_masks,
token_type_ids=self.segment_ids,
use_one_hot_embeddings=False)
output_layer = model.get_pooled_output()
self.logits = tf.layers.dense(output_layer, dimension_output)
self.cost = tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(
logits = self.logits, labels = self.Y
)
)
self.optimizer = optimization.create_optimizer(self.cost, learning_rate,
num_train_steps, num_warmup_steps, False)
correct_pred = tf.equal(
tf.argmax(self.logits, 1, output_type = tf.int32), self.Y
)
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
dimension_output = np.unique(labels).shape[0]
learning_rate = 2e-5
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model(
dimension_output,
learning_rate
)
sess.run(tf.global_variables_initializer())
var_lists = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope = 'bert')
saver = tf.train.Saver(var_list = var_lists)
saver.restore(sess, BERT_INIT_CHKPNT)
from sklearn.model_selection import train_test_split
train_input_ids, test_input_ids, train_input_masks, test_input_masks, train_segment_ids, test_segment_ids, train_Y, test_Y = train_test_split(
input_ids, input_masks, segment_ids, labels, test_size = 0.2
)
import time
EARLY_STOPPING, CURRENT_CHECKPOINT, CURRENT_ACC, EPOCH = 3, 0, 0, 0
while True:
lasttime = time.time()
if CURRENT_CHECKPOINT == EARLY_STOPPING:
print('break epoch:%d\n' % (EPOCH))
break
train_acc, train_loss, test_acc, test_loss = 0, 0, 0, 0
pbar = tqdm(
range(0, len(train_input_ids), batch_size), desc = 'train minibatch loop'
)
for i in pbar:
index = min(i + batch_size, len(train_input_ids))
batch_x = train_input_ids[i: index]
batch_masks = train_input_masks[i: index]
batch_segment = train_segment_ids[i: index]
batch_y = train_Y[i: index]
acc, cost, _ = sess.run(
[model.accuracy, model.cost, model.optimizer],
feed_dict = {
model.Y: batch_y,
model.X: batch_x,
model.segment_ids: batch_segment,
model.input_masks: batch_masks
},
)
assert not np.isnan(cost)
train_loss += cost
train_acc += acc
pbar.set_postfix(cost = cost, accuracy = acc)
pbar = tqdm(range(0, len(test_input_ids), batch_size), desc = 'test minibatch loop')
for i in pbar:
index = min(i + batch_size, len(test_input_ids))
batch_x = test_input_ids[i: index]
batch_masks = test_input_masks[i: index]
batch_segment = test_segment_ids[i: index]
batch_y = test_Y[i: index]
acc, cost = sess.run(
[model.accuracy, model.cost],
feed_dict = {
model.Y: batch_y,
model.X: batch_x,
model.segment_ids: batch_segment,
model.input_masks: batch_masks
},
)
test_loss += cost
test_acc += acc
pbar.set_postfix(cost = cost, accuracy = acc)
train_loss /= len(train_input_ids) / batch_size
train_acc /= len(train_input_ids) / batch_size
test_loss /= len(test_input_ids) / batch_size
test_acc /= len(test_input_ids) / batch_size
if test_acc > CURRENT_ACC:
print(
'epoch: %d, pass acc: %f, current acc: %f'
% (EPOCH, CURRENT_ACC, test_acc)
)
CURRENT_ACC = test_acc
CURRENT_CHECKPOINT = 0
else:
CURRENT_CHECKPOINT += 1
print('time taken:', time.time() - lasttime)
print(
'epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\n'
% (EPOCH, train_loss, train_acc, test_loss, test_acc)
)
EPOCH += 1
real_Y, predict_Y = [], []
pbar = tqdm(
range(0, len(test_input_ids), batch_size), desc = 'validation minibatch loop'
)
for i in pbar:
index = min(i + batch_size, len(test_input_ids))
batch_x = test_input_ids[i: index]
batch_masks = test_input_masks[i: index]
batch_segment = test_segment_ids[i: index]
batch_y = test_Y[i: index]
predict_Y += np.argmax(sess.run(model.logits,
feed_dict = {
model.Y: batch_y,
model.X: batch_x,
model.segment_ids: batch_segment,
model.input_masks: batch_masks
},
), 1, ).tolist()
real_Y += batch_y
from sklearn import metrics
print(
metrics.classification_report(
real_Y, predict_Y, target_names = unique_labels,digits=5
)
)
```
# Simple KernelPCA
This is a code template for simple KPCA (Kernel PCA) in Python, a non-linear technique for dimensionality reduction through the use of kernels.
### Required Packages
```
import warnings
import itertools
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
from sklearn.decomposition import KernelPCA
from sklearn.preprocessing import LabelEncoder
from numpy.linalg import eigh
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features required for model training.
```
#x_values
features=[]
#y_value
target= ''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and use the `head` function to display the first few rows.
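A tiny self-contained version of the same read-and-peek pattern, using an in-memory CSV instead of a real file path (the column names here are made up for the demo):

```python
# Read a small CSV from memory and peek at the first rows; this mirrors
# pd.read_csv(file_path) followed by df.head() on the real dataset.
import io
import pandas as pd

csv_text = "country,age,converted\nUS,25,0\nUK,31,1\nDE,28,0\n"
demo = pd.read_csv(io.StringIO(csv_text))
print(demo.head())
```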
```
df = pd.read_csv(file_path)
df.head()
```
### Feature Selections
It is the process of reducing the number of input variables when developing a predictive model, both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since the majority of the machine learning models in the Sklearn library don't handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below has functions that remove null values if any exist, and encode string classes in the dataset to integer classes.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
def EncodeY(df):
if len(df.unique())<=2:
return df
else:
un_EncodedT=np.sort(pd.unique(df), axis=-1, kind='mergesort')
df=LabelEncoder().fit_transform(df)
EncodedT=[xi for xi in range(len(un_EncodedT))]
print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT))
return df
```
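As a quick toy check (standalone, with made-up values) of the fill behaviour these helpers rely on: numeric nulls get the column mean, categorical nulls get the mode.

```python
# Demonstrate the two fillna strategies used by NullClearner above.
import numpy as np
import pandas as pd

num = pd.Series([1.0, np.nan, 3.0])
cat = pd.Series(["a", None, "a"])

num_filled = num.fillna(num.mean())     # mean of [1.0, 3.0] is 2.0
cat_filled = cat.fillna(cat.mode()[0])  # most frequent value is "a"
```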
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=EncodeY(NullClearner(Y))
X.head()
```
### Choosing the number of components
We have to estimate how many components are needed to describe the data. This can be determined by looking at the cumulative explained variance ratio as a function of the number of components.
This curve quantifies how much of the total variance is contained within the first N components.
### Explained Variance
Explained variance refers to the variance explained by each of the principal components (eigenvectors). It can be represented as a function of ratio of related eigenvalue and sum of eigenvalues of all eigenvectors.
The function below returns a list with the values of explained variance and also plots cumulative explained variance
```
def explained_variance_plot(X):
cov_matrix = np.cov(X, rowvar=False) #this function returns the co-variance matrix for the features
egnvalues, egnvectors = eigh(cov_matrix) #eigen decomposition is done here to fetch eigen-values and eigen-vectors
total_egnvalues = sum(egnvalues)
var_exp = [(i/total_egnvalues) for i in sorted(egnvalues, reverse=True)]
plt.plot(np.cumsum(var_exp))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
return var_exp
var_exp=explained_variance_plot(X)
```
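As a standalone sanity check of the same computation on toy data: the explained-variance ratios are the eigenvalues of the covariance matrix divided by their sum, so they must sum to one.

```python
# Eigen-decompose a toy covariance matrix and compute explained-variance
# ratios, mirroring the function above without the plotting.
import numpy as np

rng = np.random.RandomState(0)
X_demo = rng.randn(50, 4)                 # 50 samples, 4 features
cov = np.cov(X_demo, rowvar=False)        # 4x4 covariance matrix
eigenvalues = np.linalg.eigh(cov)[0]      # eigenvalues in ascending order
var_exp_demo = sorted(eigenvalues / eigenvalues.sum(), reverse=True)
```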
#### Scree plot
The scree plot helps you to determine the optimal number of components. The eigenvalue of each component in the initial solution is plotted. Generally, you want to extract the components on the steep slope. The components on the shallow slope contribute little to the solution.
```
plt.plot(var_exp, 'ro-', linewidth=2)
plt.title('Scree Plot')
plt.xlabel('Principal Component')
plt.ylabel('Proportion of Variance Explained')
plt.show()
```
### Model
Kernel PCA in python is a non-linear technique for dimensionality reduction through the use of Kernel.
Kernel PCA uses a kernel function to project the dataset into a higher-dimensional feature space, where it becomes linearly separable.
Refer [API](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.KernelPCA.html) for the parameters
```
X_embedded = KernelPCA(n_components=3,random_state=24,kernel = 'linear').fit_transform(X)
```
#### Output Dataframe
```
finalDf = pd.DataFrame(data = X_embedded)
finalDf.columns=['comp1','comp2','comp3']
finalDf['Y']=Y
finalDf.head()
```
#### Creator: Akshar Nerkar , Github: [Profile](https://github.com/Akshar777)
# IBM Streams database sample application
This sample demonstrates creating a Streams Python application to connect to a Db2 database, performing some analytics, and viewing the results.
In this notebook, you'll see examples of how to:
1. [Setup your database connection](#setup)
2. [Create the application](#create)
3. [Submit the application](#submit)
4. [Connect to the running application to view data](#view)
# Overview
**About the sample**
This application simulates data tuples that are inserted as rows into a Db2 database table.
**How it works**
The Python application created in this notebook is submitted to the IBM Streams service for execution. Once the application is running in the service, you can connect to it from the notebook to retrieve the results.

### Documentation
- [Streams Python development guide](https://ibmstreams.github.io/streamsx.documentation/docs/latest/python/)
- [Streams Python API](https://streamsxtopology.readthedocs.io/)
## <a name="setup"> </a> 1. Setup
### 1.1 Add credentials for the IBM Streams service
In order to submit a Streams application you need to provide the name of the Streams instance.
1. From the navigation menu, click **Services > Instances**.
2. Update the value of `streams_instance_name` in the cell below according to your Streams instance name.
```
from icpd_core import icpd_util
streams_instance_name = "sample-streams" ## Change this to Streams instance
try:
cfg=icpd_util.get_service_instance_details(name=streams_instance_name, instance_type="streams")
except TypeError:
cfg=icpd_util.get_service_instance_details(name=streams_instance_name)
```
### 1.2 Import the `streamsx.database` package and verify the package version
```
import streamsx.database as db
import streamsx.topology.context
print("INFO: streamsx package version: " + streamsx.topology.context.__version__)
print("INFO: streamsx.database package version: " + db.__version__)
```
### <a name="credentials"> </a> 1.3 Configure the connection to Db2 Warehouse
We need Db2 credentials to connect to a Db2 database.
Select one of the following options:
* OPTION 1: Use external connection configured in Cloud Pak for Data
* OPTION 2: Use credentials from IBM cloud Db2 Warehouse service
* OPTION 3: Custom connection, for example when running this notebook outside Cloud Pak for Data, or when your database is not located in IBM Cloud or Cloud Pak for Data
### 1.3.1 OPTION 1: Use a configured external connection
Perform the steps [Connecting to data source](https://www.ibm.com/support/producthub/icpdata/docs/content/SSQNUZ_current/cpd/access/connect-data-sources.html)
and create an external configuration for your Db2 connection.
List the connections with the cell below:
```
ext_connections = icpd_util.get_connections('external')
print (ext_connections)
```
Change the `connection_name` and run the cell below
```
connection_name = 'auto-dashdb'
db2credentials = icpd_util.get_connection(connection_name, conn_class='external')
print (db2credentials)
```
### 1.3.2 OPTION 2: Use credentials from IBM cloud Db2 Warehouse service
1. Create a Db2 Warehouse service on IBM Cloud (you need an IBM account): https://console.bluemix.net/catalog/?search=db2
2. Create a service credential for the Db2 service on IBM Cloud.
3. Copy the credentials to the clipboard.
4. Uncomment and run the cell below.
5. Paste the credentials into the Db2 Warehouse credentials prompt.
```
#import getpass
#db2_service_credentials=getpass.getpass('Db2 Warehouse credentials:')
#import json
#db2credentials = json.loads(db2_service_credentials)
```
### 1.3.3 OPTION 3: Custom connection
If you want to use another Db2 database, you can create a dict with the following attributes:

    {
        "username": "your-db-user-name",
        "password": "your-db-password",
        "jdbcurl": "jdbc:db2://your-db2-hostname:50000/your-database-name"
    }
```
#db2credentials = {
# "username": "your-db-user-name",
# "password": "your-db-password",
# "jdbcurl": "jdbc:db2://your-db2-hostname:50000/your-database-name"
# }
```
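Whichever option you use, a quick sanity check on the resulting dict can catch typos before the Streams job fails at run time. This helper is a sketch and not part of the `streamsx.database` package; the required key names follow the dict shown above:

```python
# Hypothetical helper (not part of streamsx.database): verify a Db2
# credentials dict before passing it to JDBCStatement.
REQUIRED_KEYS = {"username", "password", "jdbcurl"}

def validate_credentials(creds):
    missing = REQUIRED_KEYS - set(creds)
    if missing:
        raise ValueError("credentials missing keys: %s" % sorted(missing))
    if not creds["jdbcurl"].startswith("jdbc:db2://"):
        raise ValueError("jdbcurl should look like jdbc:db2://host:port/database")
    return True
```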
# <a name="create"> </a> 2. Create the application
All Streams applications start with a Topology object, so start by creating one:
```
#Imports
from streamsx.topology.topology import *
from streamsx.topology.context import *
from streamsx.topology.schema import StreamSchema
import streamsx.database as db
# create a Topology object
topo = Topology(name="DatabaseSample", namespace="sample")
```
### How to use the streamsx.database package
The streamsx.database package is the Python wrapper for the [streamsx.jdbc](https://ibmstreams.github.io/streamsx.jdbc/doc/spldoc/html) toolkit.
To interact with a Db2 database from Streams, you pass a SQL statement to the `streamsx.database.JDBCStatement` class.
`JDBCStatement` is the main class of `streamsx.database` package.
It executes a SQL statement and produces a [`Stream`](https://streamsxtopology.readthedocs.io/en/stable/streamsx.topology.topology.html#stream) of the results.
It needs at least two parameters: the first is the input `Stream`, and the second is the database credentials in JSON format.
There are two ways to execute SQL statements using `db.JDBCStatement`:
- Pass a `Stream` containing the statements to execute. This is used for statements like creating or dropping tables.
- Set the `sql` parameter to the SQL statement, with the input `Stream` containing the data you want to send to Db2. Use this when inserting data.
This application will show both ways. It executes SQL statements that:
- Drop the Db2 table, if it exists.
- Create a new table in a Db2 database.
- Insert some rows into the table.
- Select all rows from a table.
### Define the SQL statements and table name
```
table_name = 'RUN_SAMPLE_DEMO'
# SQL statements
sql_drop = 'DROP TABLE ' + table_name
sql_create = 'CREATE TABLE ' + table_name + ' (ID INT, NAME CHAR(30), AGE INT)'
sql_insert = 'INSERT INTO ' + table_name + ' (ID, NAME, AGE) VALUES (? , ?, ?)'
sql_select = 'SELECT * FROM ' + table_name
```
## <a name="drop"> </a> 2.1. Create the table
In the following step, `topo.source` creates a `Stream` containing the two SQL statements to drop and create the table.
`db.JDBCStatement` executes the two statements in the input stream: it drops the table (if present) and creates a new one.
```
from streamsx.topology.schema import CommonSchema, StreamSchema
# The crt_table is a Stream containing the two SQL statements: sql_drop and sql_create
crt_table = topo.source([sql_drop, sql_create]).as_string()
# drop the table if exist and create a new table in database
crt_table.map(db.JDBCStatement(credentials=db2credentials), name='CREATE_TABLE', schema=CommonSchema.String)
```
## <a name="insert"> </a> 2.2. Insert streaming data into the table
Next, we generate a stream of data and insert it into the table we created.
The function `generate_data()` generates some data with schema `(ID INTEGER, NAME STRING, AGE INTEGER)` that will be inserted into the database.
Before it can be inserted in the database, we have to change the [schema](https://streamsxtopology.readthedocs.io/en/stable/streamsx.topology.schema.html) of the input data `Stream` to the [StreamsSchema](https://streamsxtopology.readthedocs.io/en/stable/streamsx.topology.schema.html#streamsx.topology.schema.StreamSchema) type, which is the format accepted by the `JDBCStatement` class. See the [list of mappings from Python types to StreamSchema types](https://streamsxtopology.readthedocs.io/en/stable/streamsx.topology.schema.html#streamsx.topology.schema.StreamSchema)
The `gen_data` `Stream` contains the data produced by the `generate_data()` function.
We again use `db.JDBCStatement` but in the following step, it uses `gen_data` as input stream and the predefined `sql_insert` variable as the SQL statement.
```
import random
import time
# generates some data with schema (ID, NAME, AGE)
def generate_data():
counter = 0
while True:
#yield a random id, name and age
counter = counter +1
yield {"NAME": "Name_" + str(random.randint(0,500)), "ID": counter, "AGE": random.randint(10,99)}
time.sleep(0.10)
# convert it to SPL schema for the database operator JDBCStatement
tuple_schema = StreamSchema("tuple<int64 ID, rstring NAME, int32 AGE>")
# Generates data for a stream of three attributes. Each attribute maps to a column using the same name of the Db2 database table.
gen_data = topo.source(generate_data, name="GeneratedData").map(lambda tpl: (tpl["ID"], tpl["NAME"], tpl["AGE"]),
schema=tuple_schema)
# insert generated rows into table
config_insert = {
"sql": sql_insert,
"sql_params": 'ID, NAME, AGE'
}
insertResults = gen_data.map(db.JDBCStatement(db2credentials, **config_insert), name='INSERT', schema=tuple_schema)
```
## <a name="select"> </a> 2.3. Retrieve data from the table
In this step the `JDBCStatement` runs the SQL statement `"SELECT * FROM RUN_SAMPLE_DEMO"` and returns the results with the tuple schema `tuple<int64 ID, rstring NAME, int32 AGE>`.
```
# select all rows from table
config_select = {
"sql": sql_select
}
selectResults= gen_data.map(db.JDBCStatement(db2credentials, **config_select), name='SELECT', schema='tuple<int64 ID, rstring NAME, int32 AGE>')
selectResults.print()
# create a view to check retrieving data from a table
selectView = selectResults.view(name="selectRecords", description="Sample of selected records")
```
# <a name="submit"> </a> 3. Submit the application
A running Streams application is called a *job*. This next cell submits the application for execution and prints the resulting job id.
```
from streamsx.topology import context
# Disable SSL certificate verification if necessary
cfg[context.ConfigParams.SSL_VERIFY] = False
# submit the topology 'topo'
submission_result = context.submit ("DISTRIBUTED", topo, config = cfg)
# The submission_result object contains information about the running application, or job
if submission_result.job:
streams_job = submission_result.job
print ("JobId: ", streams_job.id , "\nJob name: ", streams_job.name)
```
# <a name="view"> </a> 4. Use the View to access data from the job
Now that the job is started, use the View object you created earlier to start retrieving data from the database table.
```
# Connect to the view and display the selected data
queue = selectView.start_data_fetch()
try:
for val in range(20):
print(queue.get(timeout=60))
finally:
selectView.stop_data_fetch()
```
## 5. See job status
The tools available to monitor the running application depend on the version of Streams and your development environment.
- **If you are using a Cloud Pak for Data 3.5 project:** When you submit the `Topology`, you create a new <i>job run</i>. The job represents the application and the job run represents a single instance of the running application.
1. Open your project and click on the **Jobs** tab. This will show a list of the project's jobs.
1. Under the **Job name** column, find your job based on the `Job Name` [printed when you submitted the job](#launch). This will list all of the job runs for that job.
1. Click the **Run name** to open the job run. The run name will be the same as the `Job Name` printed above.
1. This will open the Job Details page.
1. To open the Job Graph, click the **Streams job graph** link.
1. To download logs, click the **Logs** tab and click **Create snapshot**, then download the snapshot.
- **For all other development environments and versions of Streams**, [see this page for more information](http://ibmstreams.github.io/streamsx.documentation/docs/spl/quick-start/qs-4).
# <a name="cancel"></a> 6. Cancel the job
The Streams job is running in the Streams service. You can cancel it within the notebook or delete it from **Projects** > **Jobs**.
```
# cancel the job directly using the Job object
streams_job.cancel()
```
## Summary
We created an application that connects to a Db2 database, drops and re-creates a table, inserts rows into it, and reads the rows back.
After submitting the application to the Streams service, we used the view to check the retrieved rows.
You can also check the contents of the test table from the Db2 console with the following command:

    db2 "SELECT * FROM RUN_SAMPLE_DEMO"
### Next steps
Learn more about the [Python API from the documentation](https://streamsxtopology.readthedocs.io/en/stable/index.html/). You can also visit the [Streams community for more resources](https://ibm.biz/streams-articles).
# Import Libraries and Load Data
```
# import sklearn
!pip install plydata
!pip install plotnine
!pip install hyperopt
import pandas as pd
import numpy as np
from plydata import *
from plotnine import *
from hyperopt import hp, tpe, fmin
from joblib import dump,load
import pickle
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split, cross_validate
from sklearn.preprocessing import normalize
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import MinMaxScaler
from google.colab import drive
drive.mount('/content/drive')
# Import the scored data.
all_scored = pd.read_csv('/content/drive/My Drive/5_datascience/2_context_chatbot/data/graded/all_graded_with_scores.csv')
```
# Preprocessing: Check for missing data and split the data
```
# Drop any rows where the label is NA.
all_scored_full = all_scored[all_scored['NCJ_Rating'].notnull()] # Ask Jacobson about this.
# Split the data into features and labels.
X = all_scored_full >> select('-input','-output_list', '-NCJ_Rating')
X = X.reindex(sorted(X.columns), axis=1)
y = all_scored_full['NCJ_Rating']
print(X.shape)
print(y.shape)
# Split into an 80/20 training-test set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size= .2)
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)
pickle.dump(scaler, open('/content/drive/My Drive/5_datascience/2_context_chatbot/data/models/scaler.p', "wb"))
```
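Note that the scaler is fit on the training split only; held-out data must be transformed with the training min/max (and can therefore land outside [0, 1]). A stdlib sketch of the idea on a hypothetical 1-D feature:

```python
# Hypothetical feature values: fit the scaling on train, apply it to test.
train = [2.0, 4.0, 10.0]
test = [6.0, 12.0]

lo, hi = min(train), max(train)

def scale(v):
    # min-max scaling with the *training* statistics
    return (v - lo) / (hi - lo)

train_scaled = [scale(v) for v in train]
test_scaled = [scale(v) for v in test]   # can fall outside [0, 1]
print(test_scaled)  # [0.5, 1.25]
```

This is why the fitted scaler is pickled above: the same `lo`/`hi` must be reused at prediction time.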
# Training the Models
## Using sklearn defaults and 5 fold cross validation
```
scores = ['neg_mean_squared_error']
rf = RandomForestRegressor(verbose=0)
rf_cv_result = cross_validate(rf, X_train_scaled, y_train, cv=5, scoring=scores, return_estimator=True)
rf_cv_result
# Dump the random forest classifier as a saved model
dump(rf_cv_result['estimator'][4], '/content/drive/My Drive/5_datascience/2_context_chatbot/data/models/default_rf.joblib')
xgb = GradientBoostingRegressor()
xgb_cv_result = cross_validate(xgb, X_train, y_train, cv=5, scoring=scores, return_estimator=True)
xgb_cv_result
lr = LinearRegression()
lr_cv_result = cross_validate(lr, X_train, y_train, cv=3, scoring=scores)
lr_cv_result
```
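Under the hood, `cv=5` partitions the row indices into five held-out folds and trains on the remaining rows each time. A stdlib sketch of the index bookkeeping (hypothetical 10-row dataset, contiguous folds without shuffling):

```python
def kfold_indices(n, k):
    # Split indices 0..n-1 into k contiguous folds; the first n % k
    # folds get one extra element, matching sklearn's KFold sizing.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

print(kfold_indices(10, 5))  # [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
```

Each fold serves once as the validation set, which is why `cross_validate` above returns five scores (and five fitted estimators when `return_estimator=True`).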
## Bayesian Hyperparameter Optimization
```
def objective_function(params):
"""Objective function to minimize: -MSE"""
model_name = params['model'] # Gets the model name
del params['model'] # Deletes model name, rest of params are hyperparameters
# Initialize model with parameters
if model_name == "RandomForestRegressor":
model = RandomForestRegressor(**params)
elif model_name == "GradientBoostingRegressor":
model = GradientBoostingRegressor(**params)
elif model_name == "LinearRegression":
model = LinearRegression(**params)
scores = ['neg_mean_squared_error','max_error']
cv_scores = cross_validate(model, X, y, cv=5, scoring = scores, )
cv_neg_MSE = np.mean(cv_scores['test_neg_mean_squared_error'])
return -cv_neg_MSE
def get_space(model):
model_name = type(model).__name__
print(model_name)
if model_name == "RandomForestRegressor":
space_dict = {'model' : model_name,
# NOTE: range(1, 1000000000) would materialize a billion options inside
# hp.choice and hang; use bounded search spaces instead.
'max_depth': hp.choice('max_depth', range(1, 50)),
'max_features': hp.choice('max_features', range(1, X.shape[1] + 1)),
'n_estimators': hp.choice('n_estimators', range(1, 500))}
elif model_name == "GradientBoostingRegressor":
space_dict = {'model' : model_name,
'max_depth': hp.choice('max_depth', range(1,50)),
'n_estimators': hp.choice('n_estimators', range(1,100))}
return space_dict
rf = RandomForestRegressor()
lr = LinearRegression()
xgb = GradientBoostingRegressor()
model_list = [rf]
for model in model_list:
space = get_space(model)
best_h = fmin(fn=objective_function, space=space, algo=tpe.suggest, max_evals=10)
np.isinf(X_train).sum()
pd.DataFrame(X_train).isna().sum().sum()
np.isinf(y_train).sum()
pd.DataFrame(y_train).isna().sum().sum()
```
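For intuition about what `fmin` with `tpe.suggest` improves on, compare plain random search. A minimal stdlib sketch on a hypothetical 1-D quadratic objective (not the notebook's cross-validation objective):

```python
import random

def toy_objective(x):
    # Hypothetical objective with its minimum at x = 3
    return (x - 3) ** 2

random.seed(0)
best_x, best_val = None, float("inf")
for _ in range(200):
    x = random.uniform(-10, 10)   # sample uniformly, ignoring past results
    val = toy_objective(x)
    if val < best_val:
        best_x, best_val = x, val
print(best_x, best_val)
```

TPE instead concentrates its evaluation budget near previously good points, which matters when every evaluation is a full 5-fold cross-validation run.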
# Test Set
```
default_rf = load('/content/drive/My Drive/5_datascience/2_context_chatbot/data/models/default_rf.joblib')
rf_pred = default_rf.predict(X_test)
mean_squared_error(y_test, rf_pred)
import seaborn as sns
sns.distplot(rf_pred)
sns.distplot(y_test)
import scipy
test_obj = pd.DataFrame([y_test, rf_pred])
test_obj
pd.Series(rf_pred).isna().sum()
dataset=pd.DataFrame({"y_test":y_test,"rf_pred":rf_pred},columns=["y_test","rf_pred"])
dataset
ss = dataset.loc[dataset['y_test'] >= 5,]
ss
from scipy import stats
stats.spearmanr(ss) # implement a spearmanr with a threshold for the minimum NCJ rating.
stats.spearmanr(dataset)
```
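`stats.spearmanr` computes Pearson correlation on the ranks of the data; for tie-free data it reduces to the closed form below. A stdlib sketch on hypothetical vectors (not the notebook's predictions):

```python
def rank(xs):
    # 1-based ranks, assuming no ties
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman(x, y):
    # rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), valid when there are no ties
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

print(spearman([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0 (perfectly monotone)
```

Because it only uses ranks, Spearman's rho is well suited to checking whether higher predicted scores correspond to higher NCJ ratings, regardless of scale.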
Need to run in root folder.
# Manual data processing if issues
### Note: used this before in top folder.
- May have issues with imports.
- Can move back to top folder if using.
```
from numpy.core.fromnumeric import product
from scipy.sparse import data
import torch
import torch.nn.functional as F
from torch_scatter import scatter
from torch_geometric.data import InMemoryDataset, DataLoader # , Data
from torch_geometric.data.data import Data
from rdkit import Chem
from rdkit.Chem.rdchem import HybridizationType
from rdkit.Chem.rdchem import BondType as BT
from tqdm import tqdm
def process_geometry_file(geometry_file, list = None):
""" Code mostly lifted from QM9 dataset creation https://pytorch-geometric.readthedocs.io/en/latest/_modules/torch_geometric/datasets/qm9.html
Transforms molecules to their atom features and adjacency lists.
"""
types = {'H': 0, 'C': 1, 'N': 2, 'O': 3, 'F': 4}
bonds = {BT.SINGLE: 0, BT.DOUBLE: 1, BT.TRIPLE: 2, BT.AROMATIC: 3}
limit = 100
data_list = list if list else []
full_path = r'data' + geometry_file
geometries = Chem.SDMolSupplier(full_path, removeHs=False, sanitize=False)
# get atom and edge features for each geometry
for i, mol in enumerate(tqdm(geometries)):
# temp soln cos of split edge memory issues
if i == limit:
break
N = mol.GetNumAtoms()
# get atom positions as matrix w shape [num_nodes, num_dimensions] = [num_atoms, 3]
atom_data = geometries.GetItemText(i).split('\n')[4:4 + N]
atom_positions = [[float(x) for x in line.split()[:3]] for line in atom_data]
atom_positions = torch.tensor(atom_positions, dtype=torch.float)
# all the features
type_idx = []
atomic_number = []
aromatic = []
sp = []
sp2 = []
sp3 = []
num_hs = []
# atom/node features
for atom in mol.GetAtoms():
type_idx.append(types[atom.GetSymbol()])
atomic_number.append(atom.GetAtomicNum())
aromatic.append(1 if atom.GetIsAromatic() else 0)
hybridisation = atom.GetHybridization()
sp.append(1 if hybridisation == HybridizationType.SP else 0)
sp2.append(1 if hybridisation == HybridizationType.SP2 else 0)
sp3.append(1 if hybridisation == HybridizationType.SP3 else 0)
# !!! should do the features that lucky does: whether bonded, 3d_rbf
# bond/edge features
row, col, edge_type = [], [], []
for bond in mol.GetBonds():
start, end = bond.GetBeginAtomIdx(), bond.GetEndAtomIdx()
row += [start, end]
col += [end, start]
# edge type for each bond type; *2 because both ways
edge_type += 2 * [bonds[bond.GetBondType()]]
# edge_index is graph connectivity in COO format with shape [2, num_edges]
edge_index = torch.tensor([row, col], dtype=torch.long)
edge_type = torch.tensor(edge_type, dtype=torch.long)
# edge_attr is edge feature matrix with shape [num_edges, num_edge_features]
edge_attr = F.one_hot(edge_type, num_classes=len(bonds)).to(torch.float)
# order edges based on combined ascending order
perm = (edge_index[0] * N + edge_index[1]).argsort() # TODO
edge_index = edge_index[:, perm]
edge_type = edge_type[perm]
edge_attr = edge_attr[perm]
row, col = edge_index
z = torch.tensor(atomic_number, dtype=torch.long)
hs = (z == 1).to(torch.float) # hydrogens
num_hs = scatter(hs[row], col, dim_size=N).tolist() # scatter helps with one-hot
x1 = F.one_hot(torch.tensor(type_idx), num_classes=len(types))
x2 = torch.tensor([atomic_number, aromatic, sp, sp2, sp3, num_hs], dtype=torch.float).t().contiguous()
x = torch.cat([x1.to(torch.float), x2], dim=-1)
data = Data(x=x, z=z, pos=atom_positions, edge_index=edge_index, edge_attr=edge_attr, idx=i)
data_list.append(data)
return data_list
# concat train r and test r
reactants = []
reactants = process_geometry_file('/raw/train_reactants.sdf', reactants)
reactants = process_geometry_file('/raw/test_reactants.sdf', reactants)
# concat train ts and test ts
ts = []
ts = process_geometry_file('/raw/train_ts.sdf', ts)
ts = process_geometry_file('/raw/test_ts.sdf', ts)
# concat train p and test p
products = []
products = process_geometry_file('/raw/train_products.sdf', products)
products = process_geometry_file('/raw/test_products.sdf', products)
assert len(reactants) == len(ts) == len(products)
print(type(reactants[0]), type(ts[0]), type(products[0]))
class ReactionTriple(Data):
def __init__(self, r = None, ts = None, p = None):
super(ReactionTriple, self).__init__()
self.r = r
self.ts = ts
self.p = p
def __inc__(self, key, value):
if key == 'r':
return self.r.edge_index.size(0)
elif key == 'ts':
return self.ts.edge_index.size(0)
elif key == 'p':
return self.p.edge_index.size(0)
else:
return super().__inc__(key, value)
class OtherReactionTriple(Data):
# seeing if this works
def __init__(self, r, ts, p):
super(OtherReactionTriple, self).__init__()
# initial checks
if r and ts and p:
assert r.idx == ts.idx == p.idx, \
"The IDs of each mol don't match. Are you sure your data processing is correct?"
assert len(r.z) == len(ts.z) == len(p.z), \
"The mols have different number of atoms."
self.idx = r.idx
self.num_atoms = len(r.z)
# reactant
self.edge_attr_r = r.edge_attr
self.edge_index_r = r.edge_index
self.pos_r = r.pos
self.x_r = r.x
# ts
self.edge_attr_ts = ts.edge_attr
self.edge_index_ts = ts.edge_index
self.pos_ts = ts.pos
self.x_ts = ts.x
# product
self.edge_attr_p = p.edge_attr
self.edge_index_p = p.edge_index
self.pos_p = p.pos
self.x_p = p.x
else:
NameError("Reactant, TS, or Product not defined for this reaction.")
def __inc__(self, key, value):
if key == 'edge_index_r' or key == 'edge_attr_r':
return self.x_r.size(0)
if key == 'edge_index_ts' or key == 'edge_attr_ts':
return self.x_ts.size(0)
if key == 'edge_index_p' or key == 'edge_attr_p':
return self.x_p.size(0)
else:
return super().__inc__(key, value)
def __cat_dim__(self, key, item):
# NOTE: automatically figures out .x and .pos
if key == 'edge_attr_r' or key == 'edge_attr_ts' or key == 'edge_attr_p':
return 0
if key == 'edge_index_r' or key == 'edge_index_ts' or key == 'edge_index_p':
return 1
else:
return super().__cat_dim__(key, item)
rxns = []
for rxn_id in range(len(reactants)):
rxn = OtherReactionTriple(reactants[rxn_id], ts[rxn_id], products[rxn_id])
rxns.append(rxn)
to_follow = ['edge_index_r', 'edge_index_ts', 'edge_index_p', 'edge_attr_r', 'edge_attr_ts', 'edge_attr_p',
'pos_r', 'pos_ts', 'pos_p', 'x_r', 'x_ts', 'x_p']
loader = DataLoader(rxns, batch_size = 2, follow_batch = to_follow)
batch = next(iter(loader))
```
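The `edge_index` tensors built above use COO connectivity: column k holds edge k's (source, target) pair, and each undirected bond appears in both directions. A stdlib sketch of the layout on a hypothetical 3-atom chain, converting back to a dense adjacency:

```python
def coo_to_dense(edge_index, n):
    # edge_index = [rows, cols]; each column is one directed edge
    adj = [[0] * n for _ in range(n)]
    for r, c in zip(edge_index[0], edge_index[1]):
        adj[r][c] = 1
    return adj

edge_index = [[0, 1, 1, 2], [1, 0, 2, 1]]  # bonds 0-1 and 1-2, both directions
print(coo_to_dense(edge_index, 3))  # [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
```

Storing both directions is why the code above appends `2 * [bonds[...]]` per bond, and why the dense matrix comes out symmetric.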
## Data functions
```
def edge2adj(z, edge_index, sigmoid = True):
value = (z[edge_index[0]] * z[edge_index[1]]).sum(dim = 1)
return torch.sigmoid(value) if sigmoid else value
# their model
# so they take their nodes, edges, edge_attr and actual adj
# adj_pred, z = model(nodes, edges, edge_attr)
# bce, kl = loss(adj_pred, adj_gt)
from torch_geometric.utils import to_dense_adj
node_fs = mol_graph.x
edge_index = mol_graph.edge_index
edge_attr = mol_graph.edge_attr
num_nodes = len(mol_graph.z)
latent_dim = 3
max_num_nodes = 21
def sparse_to_dense_adj(num_nodes, edge_index):
# edge_index is sparse_adj matrix (given in coo format for graph connectivity)
sparse_adj = torch.cat([edge_index[0].unsqueeze(0), edge_index[1].unsqueeze(0)])
# the values we put in at each tuple; that's why length of sparse_adj
ones = torch.ones(sparse_adj.size(1))
# FloatTensor() creates sparse coo tensor in torch format, then to_dense()
dense_adj = torch.sparse.FloatTensor(sparse_adj, ones, torch.Size([num_nodes, num_nodes])).to_dense() # to_dense adds the zeroes needed
return dense_adj
adj_egnn = sparse_to_dense_adj(num_nodes, edge_index)
# with edge_attr, we get a [1, num_nodes, num_nodes] for each edge_type
adj_pyg = to_dense_adj(edge_index, edge_attr = edge_attr, max_num_nodes = num_nodes)
# get_dense_graph(): returns self.nodes, self.edges_dense, self.edge_attr_dense, self.adj
# adj = sparse2dense(n_nodes, self.edges); adjust for loops
# compare sparse2dense (egnn) vs to_dense_adj (pyg)
# adj_egnn.shape
# (adj_pyg == adj_egnn).all()
# gcn = GCNConv(num_nodes, latent_dim)
# z = gcn(node_fs, edge_index)
# adj_pred = adj_pred * (1 - torch.eye(num_nodes).to(self.device)) # removes self_loops
# * is hadamard product
# coords always same, maybe node and edge features too? need to pad adj matrix
# dataset dims
elements = "HCNO"
num_elements = len(elements)
max_n_atoms = max([r.GetNumAtoms() for r,ts,p in data])
num_coords = 3
num_bond_fs = 4  # assumption: one-hot bond type (single, double, triple, aromatic), matching the bonds dict above
# want to pad exist features
def prepare_batch(batch_mols):
# initialise batch
batch_size = len(batch_mols)
atom_fs = torch.zeros((batch_size, max_n_atoms, num_elements + 1), dtype = torch.float32) # num_atoms, max_num_atoms,
bond_fs = torch.zeros((batch_size, max_n_atoms, max_n_atoms, num_bond_fs), dtype = torch.float32)
sizes = torch.zeros(batch_size, dtype = torch.float32)
coords = torch.zeros((batch_size, max_n_atoms, num_coords), dtype = torch.float32)
pass
from typing import List
def pad_sequence(sequences: List[torch.Tensor], max_length: int, padding_value=0) -> torch.Tensor:
# assuming trailing dimensions and type of all the Tensors
# in sequences are same and fetching those from sequences[0]
max_size = sequences[0].size()
trailing_dims = max_size[1:]
out_dims = (len(sequences), max_length) + trailing_dims
out_tensor = sequences[0].data.new(*out_dims).fill_(padding_value) # type: ignore
for i, tensor in enumerate(sequences):
length = tensor.size(0)
# use index notation to prevent duplicate references to the tensor
out_tensor[i, :length, ...] = tensor
return out_tensor
```
# ts_gen processing
## Testing
```
from rdkit import Chem
import numpy as np
import torch
from torch_geometric.data import DataLoader
from torch_geometric.data.data import Data
import tqdm
class TSGenData(Data):
# seeing if this works
def __init__(self, x = None, pos = None, edge_attr = None, idx = None):
super(TSGenData, self).__init__(x = x, pos = pos, edge_attr = edge_attr)
self.idx = idx
def __inc__(self, key, value):
if key == 'edge_attr':
return self.x.size(0)
else:
return super().__inc__(key, value)
def __cat_dim__(self, key, item):
# NOTE: automatically figures out .x and .pos
if key == 'edge_attr':
return (0, 1) # since N x N x edge_attr
else:
return super().__cat_dim__(key, item)
# constants
MAX_D = 10.
COORD_DIM = 3
ELEM_TYPES = {'H': 0, 'C': 1, 'N': 2, 'O': 3, 'F': 4}
NUM_EDGE_ATTR = 3
TEMP_MOLS_LIMIT = 10
def process():
# reactants
r_train = Chem.SDMolSupplier('data/raw/train_reactants.sdf', removeHs = False, sanitize = False)
r_test = Chem.SDMolSupplier('data/raw/test_reactants.sdf', removeHs = False, sanitize = False)
rs = []
for mol in r_train:
rs.append(mol)
for mol in r_test:
rs.append(mol)
# transition states
ts_train = Chem.SDMolSupplier('data/raw/train_ts.sdf', removeHs = False, sanitize = False)
ts_test = Chem.SDMolSupplier('data/raw/test_ts.sdf', removeHs = False, sanitize = False)
tss = []
for mol in ts_train:
tss.append(mol)
for mol in ts_test:
tss.append(mol)
# products
p_train = Chem.SDMolSupplier('data/raw/train_products.sdf', removeHs = False, sanitize = False)
p_test = Chem.SDMolSupplier('data/raw/test_products.sdf', removeHs = False, sanitize = False)
ps = []
for mol in p_train:
ps.append(mol)
for mol in p_test:
ps.append(mol)
assert len(rs) == len(tss) == len(ps), f"Lengths of reactants ({len(rs)}), transition states \
({len(tss)}), products ({len(ps)}) don't match."
geometries = list(zip(rs, tss, ps))
data_list = process_geometries(geometries)
return data_list
# torch.save(self.collate(data_list), self.processed_paths[0])
def process_geometries(geometries):
"""Process all geometries in same manner as ts_gen."""
data_list = []
for rxn_id, rxn in enumerate(geometries):
if rxn_id == TEMP_MOLS_LIMIT:
break
r, ts, p = rxn
num_atoms = r.GetNumAtoms()
# dist matrices
D = (Chem.GetDistanceMatrix(r) + Chem.GetDistanceMatrix(p)) / 2
D[D > MAX_D] = MAX_D
D_3D_rbf = np.exp(-((Chem.Get3DDistanceMatrix(r) + Chem.Get3DDistanceMatrix(p)) / 2))
# node feats, edge attr init
type_ids, atomic_ns = [], [] # TODO: init of vec N
edge_attr = torch.zeros(num_atoms, num_atoms, NUM_EDGE_ATTR)
# ts ground truth coords
ts_gt_pos = torch.zeros((num_atoms, COORD_DIM))
ts_conf = ts.GetConformer()
for i in range(num_atoms):
# node feats
atom = r.GetAtomWithIdx(i)
type_ids.append(ELEM_TYPES[atom.GetSymbol()])
atomic_ns.append(atom.GetAtomicNum() / 10.)
# ts coordinates: atom positions as matrix w shape [num_atoms, 3]
pos = ts_conf.GetAtomPosition(i)
ts_gt_pos[i] = torch.tensor([pos.x, pos.y, pos.z])
# edge attrs
for j in range(num_atoms):
if D[i][j] == 1: # if stays bonded
edge_attr[i][j][0] = 1 # bonded?
if r.GetBondBetweenAtoms(i, j).GetIsAromatic():
edge_attr[i][j][1] = 1 # aromatic?
edge_attr[i][j][2] = D_3D_rbf[i][j] # 3d rbf
node_feats = torch.tensor([type_ids, atomic_ns], dtype = torch.float).t().contiguous()
atomic_ns = torch.tensor(atomic_ns, dtype = torch.long)
# edge_attr = torch.tensor([bonded, aromatic, rbf], dtype = torch.float).t().contiguous()
data = TSGenData(x = node_feats, pos = ts_gt_pos, edge_attr = edge_attr, idx = rxn_id)
data_list.append(data)
return data_list
data_list = process()
from torch import Tensor
from itertools import product
def collate(data_list):
keys = data_list[0].keys
data = data_list[0].__class__()
for key in keys:
data[key] = []
slices = {key: [0] for key in keys}
for item, key in product(data_list, keys):
data[key].append(item[key])
if isinstance(item[key], Tensor) and (item[key].dim() == 1 or item[key].dim() == 2):
cat_dim = item.__cat_dim__(key, item[key])
cat_dim = 0 if cat_dim is None else cat_dim
s = slices[key][-1] + item[key].size(cat_dim)
elif isinstance(item[key], Tensor) and (item[key].dim() > 2):
cat_dims = item.__cat_dim__(key, item[key])
# print(cat_dims)
s = slices[key][-1]
for cat_dim in cat_dims:
s += item[key].size(cat_dim)
else:
s = slices[key][-1] + 1
slices[key].append(s)
# print(slices)
if hasattr(data_list[0], '__num_nodes__'):
data.__num_nodes__ = []
for item in data_list:
data.__num_nodes__.append(item.num_nodes)
for key in keys:
item = data_list[0][key]
if isinstance(item, Tensor) and len(data_list) > 1:
if item.dim() == 1 or item.dim() == 2:
cat_dim = data.__cat_dim__(key, item)
cat_dim = 0 if cat_dim is None else cat_dim
data[key] = torch.cat(data[key], dim=cat_dim)
elif item.dim() > 2:
print(item.dim())
cat_dim = data.__cat_dim__(key, item)
# size = torch.tensor(item.sizes())[torch.tensor(cat_dim)]
# print(len(data[key]))
data[key] = torch.stack(data[key])
# data[key] = torch.cat(data[key], dim = 0)
# data[key] = torch.cat(data[key], dim = 1)
continue
else:
data[key] = torch.stack(data[key])
elif isinstance(item, Tensor): # Don't duplicate attributes...
data[key] = data[key][0]
elif isinstance(item, int) or isinstance(item, float):
data[key] = torch.tensor(data[key])
slices[key] = torch.tensor(slices[key], dtype=torch.long)
return data, slices
collate(data_list)
import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence
a = torch.ones(2, 2, 10)
b = torch.ones(4, 4, 10)
c = torch.ones(6, 6, 10)
pack_padded_sequence(torch.cat([a, b, c]), [2, 4, 6])
loader = DataLoader(data_list, batch_size = 5)
```
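The `D_3D_rbf` entry computed in `process_geometries` is the exponential of the negated mean of the reactant and product 3-D distances. A stdlib sketch of that edge feature on hypothetical distances:

```python
import math

def rbf_edge_feature(d_reactant, d_product):
    # exp(-mean distance), mirroring the D_3D_rbf computation above
    return math.exp(-(d_reactant + d_product) / 2)

print(round(rbf_edge_feature(1.0, 3.0), 4))  # exp(-2) ≈ 0.1353
```

The feature decays smoothly toward 0 for distant atom pairs and approaches 1 for close ones, giving the model a soft, differentiable notion of proximity.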
## Masks
```
# you would need to do this for each batch
import torch
def sequence_mask(sizes, max_size = 21, dtype = torch.bool):
row_vector = torch.arange(0, max_size, 1)
matrix = torch.unsqueeze(sizes, dim = -1)
mask = row_vector < matrix
mask.type(dtype)
return mask
sizes = torch.tensor([10, 12, 20, 18]) # num_atoms in each graph
max_size = 21
mask = sequence_mask(sizes, max_size)
mask_n = torch.unsqueeze(mask, 2)
mask_v = torch.unsqueeze(mask_n, 1) * torch.unsqueeze(mask_n, 2)
mask_n.shape, mask_v.shape
```
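For reference, the same masking idea in plain Python: row i of the node mask is True for the first sizes[i] slots and False for the padding slots (hypothetical graph sizes):

```python
def sequence_mask_py(sizes, max_size):
    # One row per graph; True marks real atoms, False marks padding
    return [[j < n for j in range(max_size)] for n in sizes]

mask = sequence_mask_py([2, 4], max_size=5)
print(mask[0])  # [True, True, False, False, False]
```

The torch version above builds the same boolean matrix with broadcasting (`row_vector < matrix`), then unsqueezes it into the node mask `mask_n` and the pairwise mask `mask_v`.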
## Final ts_gen code from .py file
```
from ts_vae.data_processors.ts_gen_processor import TSGenDataset
from torch_geometric.data import DataLoader
import numpy as np
from ts_vae.utils import remove_files
remove_files()
rxns = TSGenDataset(r'data')
tt_split = 0.8
num_rxns = len(rxns)
num_train = int(np.floor(tt_split * num_rxns))
batch_size = 5
train_loader = DataLoader(rxns[: num_train], batch_size = batch_size)
test_loader = DataLoader(rxns[num_train: ], batch_size = batch_size)
train_loader.dataset[0].edge_attr.size(2)
rxns[0]
batch = next(iter(train_loader))
batch
from experiments.building_on_mit.meta_eval.meta_eval import ablation_experiment
# from ts_vae.utils import remove_files
# remove_files()
# have to use batch_size = 1 right now
train_log, test_log = ablation_experiment(0.8, 1, 2, 2)
```
# Redoing with GraphDataLoader
- GraphDataLoader: takes collate_fn given by GraphCollater()
- GraphCollater -> Collater for ABC
- GraphBatch
<br/><br/>
- CustomDataLoader, CustomBatch, CustomCollater
- Then create my own collate() and Batch.from_data_list() funcs
- CustomDataLoader is super simple, the main logic would be in CustomCollater which defines the collate() func for the DataLoader
<br/><br/>
- All I need to do is create a DataLoader (which I have), then overwrite the collate() and Batch.from_data_list() funcs
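The plan above can be sketched with stock PyTorch before any TSGen-specific classes exist. This toy version (tensor items are an assumption for brevity) shows that all batching logic lives in the `collate_fn` callable handed to the standard `DataLoader`:

```python
import torch
from torch.utils.data import DataLoader

class ToyCollater:
    """Callable that turns a list of dataset items into one batch tensor."""
    def __call__(self, batch):
        return torch.stack(batch, dim=0)

data = [torch.full((3,), float(i)) for i in range(10)]
loader = DataLoader(data, batch_size=4, collate_fn=ToyCollater())
print(next(iter(loader)).shape)  # torch.Size([4, 3])
```

The DataLoader itself stays trivial; swapping in a richer collater (one that builds a graph batch instead of stacking) is the only change needed.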
## Initial
```
data_list[0].__dict__.keys()
from torch_geometric.data import Data, Batch
from collections.abc import Mapping, Sequence
class TSGenBatch(TSGenData): # Data
def __init__(self, batch = None, ptr = None, **kwargs):
super(Batch, self).__init__(**kwargs)
for key, item in kwargs.items():
if key == 'num_nodes':
self.__num_nodes__ = item
else:
self[key] = item
self.batch = batch
self.ptr = ptr
self.__data_class__ = TSGenData # Data
self.__slices__ = None
self.__cumsum__ = None
self.__cat_dims__ = None
self.__num_nodes_list__ = None
self.__num_graphs__ = None
@classmethod
def from_data_list(cls, data_list, follow_batch = [], exclude_keys = []):
# construct batch from TSGenData objects
# get relevant graph keys
keys = list(set(data_list[0].keys) - set(exclude_keys))
assert 'batch' not in keys and 'ptr' not in keys
batch = cls()
for key in data_list[0].__dict__.keys():
# no batch for those intrinsic class fs
if key[:2] != '__' and key[-2:] != '__':
batch[key] = None
batch.__num_graphs__ = len(data_list)
batch.__data_class__ = data_list[0].__class__
# init all keys for the batch
for key in keys + ['batch']:
batch[key] = []
batch['ptr'] = [0] # pointer to this batch
device = None
slices = {key: [0] for key in keys}
cumsum = {key: [0] for key in keys}
cat_dims = {}
num_nodes_list = []
for i, data in enumerate(data_list):
for key in keys:
item = data[key]
# increase values by cumsum value
cum = cumsum[key][-1]
class TSGenCollater(object):
def __init__(self, follow_batch, exclude_keys):
self.follow_batch = follow_batch
self.exclude_keys = exclude_keys
def collate(self, batch):
# dgl: collate(self, items): items is list of data points or tuples; elems in list same length
# pyg: collate(self, batch)
elem = batch[0]
if isinstance(elem, TSGenData):
return Batch.from_data_list(batch, self.follow_batch, self.exclude_keys)
if isinstance(elem, Data):
return Batch.from_data_list(batch, self.follow_batch, self.exclude_keys)
elif isinstance(elem, torch.Tensor):
return default_collate(batch)
elif isinstance(elem, float):
return torch.tensor(batch, dtype=torch.float)
elif isinstance(elem, int):
return torch.tensor(batch)
elif isinstance(elem, str):
return batch
elif isinstance(elem, Mapping):
return {key: self.collate([d[key] for d in batch]) for key in elem}
elif isinstance(elem, tuple) and hasattr(elem, '_fields'):
return type(elem)(*(self.collate(samples) for samples in zip(*batch)))
elif isinstance(elem, Sequence) and not isinstance(elem, str):
return [self.collate(samples) for samples in zip(*batch)]
raise TypeError('DataLoader found invalid type: {}'.format(type(elem)))
def __call__(self, batch):
return self.collate(batch)
class TSGenDataLoader(torch.utils.data.DataLoader):
def __init__(self, dataset, batch_size = 1, shuffle = False, \
follow_batch = [], exclude_keys = [], **kwargs):
if "collate_fn" in kwargs:
del kwargs["collate_fn"]
self.follow_batch = follow_batch
self.exclude_keys = exclude_keys
super(TSGenDataLoader, self).__init__(dataset, batch_size, shuffle, \
collate_fn = TSGenCollater(follow_batch, exclude_keys), **kwargs)
```
## Collate function
```
import torch
# specific collate_fn in DataLoader
def collate_fn(batch):
batch = {key: batch_stack([graph[key] for graph in batch]) for key in batch[0].keys()}
batch = {key: drop_z}
```
# Welcome to ExKaldi
In this section, we will further process the Kaldi decoding lattice and score the results.
```
import exkaldi
import os
dataDir = "librispeech_dummy"
```
Load the lattice file (generated in 09_decode_back_HMM-GMM_and_WFST).
```
latFile = os.path.join(dataDir, "exp", "train_delta", "decode_test", "test.lat")
lat = exkaldi.decode.wfst.load_lat(latFile)
lat
```
To keep things simple and straightforward, we get the 1-best result from the lattice. A word-ID table and an HMM model are necessary.
The word-ID table can be a __words.txt__ file (if decoding at the word level), a __phones.txt__ file (if decoding at the phone level), or an ExKaldi __ListTable__ object.
Ideally, a __LexiconBank__ object is also available, because you can get both "words" and "phones" from it.
```
wordsFile = os.path.join(dataDir, "exp", "words.txt")
hmmFile = os.path.join(dataDir, "exp", "train_delta", "final.mdl")
result = lat.get_1best(symbolTable=wordsFile, hmm=hmmFile, lmwt=1, acwt=0.5)
result.subset(nHead=1)
```
___result___ is an ExKaldi __Transcription__ object.
The decoding result is in int-ID format. If you want it in text format, try this:
```
textResult = exkaldi.hmm.transcription_from_int(result, wordsFile)
textResult.subset(nHead=1)
```
Just for convenience, we restore the lexicons.
```
lexFile = os.path.join(dataDir, "exp", "lexicons.lex")
lexicons = exkaldi.load_lex(lexFile)
del textResult
```
Besides the __transcription_from_int__ function, we can transform a transcription by using the __Transcription__ object's own method, like this:
```
word2id = lexicons("words")
oovID = word2id[lexicons("oov")]
id2word = word2id.reverse()
textResult = result.convert(symbolTable=id2word, unkSymbol=oovID)
textResult.subset(nHead=1)
del result
```
Now we can score the decoding result. Typically, you would compute the WER (word error rate).
```
refFile = os.path.join(dataDir, "test", "text")
score = exkaldi.decode.score.wer(ref=refFile, hyp=textResult, mode="present")
score
```
Or, sometimes, compute the edit-distance score.
```
score = exkaldi.decode.score.edit_distance(ref=refFile, hyp=textResult, mode="present")
score
```
Then compute the word-level accuracy.
```
1 - score.editDistance/score.words
```
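For reference, the WER that `exkaldi.decode.score.wer` reports can be sketched in pure Python as word-level edit distance divided by the reference length (the standard definition; ExKaldi's exact normalisation may differ):

```python
def wer(ref_words, hyp_words):
    """Word error rate (%) via Levenshtein distance over word lists."""
    n, m = len(ref_words), len(hyp_words)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref_words[i - 1] == hyp_words[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * d[n][m] / n

print(wer("the cat sat".split(), "the cat sat down".split()))  # one insertion over three words
```

Because insertions count against the reference length, WER can exceed 100%, which is why a WER of 134.37 is possible.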
We tested this and got a WER of 134.37, with a word accuracy of 27.6%.
Further processing of the lattice is supported, for example to add a penalty or to scale it.
Here is an example that tries different language-model weights (LMWT) and penalties. (Instead of a text-format result, we use an int-format reference file.)
```
refInt = exkaldi.hmm.transcription_to_int(refFile, lexicons("words"), unkSymbol=lexicons("oov"))
refIntFile = os.path.join(dataDir, "exp", "train_delta", "decode_test", "text.int")
refInt.save(refIntFile)
refInt.subset(nHead=1)
for penalty in [0., 0.5, 1.0]:
for LMWT in range(10, 15):
newLat = lat.add_penalty(penalty)
result = newLat.get_1best(lexicons("words"), hmmFile, lmwt=LMWT, acwt=0.5)
score = exkaldi.decode.score.wer(ref=refInt, hyp=result, mode="present")
print(f"Penalty {penalty}, LMWT {LMWT}: WER {score.WER}")
```
From the lattice, you can get the phone-level result.
```
phoneResult = lat.get_1best(lexicons("phones"), hmmFile, lmwt=1, acwt=0.5, phoneLevel=True)
phoneResult = exkaldi.hmm.transcription_from_int(phoneResult, lexicons("phones"))
phoneResult.subset(nHead=1)
```
N-best results can also be extracted from the lattice.
```
result = lat.get_nbest(
n=3,
symbolTable=lexicons("words"),
hmm=hmmFile,
acwt=0.5,
phoneLevel=False,
requireCost=False,
)
for re in result:
print(re.name, type(re))
```
___result___ is a list of N-best __Transcription__ objects. If ___requireCost___ is True, the LM score and AM score are returned simultaneously.
```
result = lat.get_nbest(
n=3,
symbolTable=lexicons("words"),
hmm=hmmFile,
acwt=0.5,
phoneLevel=False,
requireCost=True,
)
for re in result[0]:
print(re.name, type(re))
for re in result[1]:
print(re.name, type(re))
for re in result[2]:
print(re.name, type(re))
```
Importantly, alignments can also be returned.
```
result = lat.get_nbest(
n=3,
symbolTable=lexicons("words"),
hmm=hmmFile,
acwt=0.5,
phoneLevel=False,
requireCost=False,
requireAli=True,
)
for re in result[1]:
print(re.name, type(re))
```
We will not train __LDA+MLLT__ and __SAT__ in this tutorial. If you need a tutorial about them, please see the `examples` directory, where we provide some complete recipes, for example for the __TIMIT__ corpus.
## CNN Architecture links
[AlexNet](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf)
[VGGNet](https://arxiv.org/pdf/1409.1556.pdf)
[ResNet](https://arxiv.org/pdf/1512.03385v1.pdf)
Keras [documentation](https://keras.io/applications) for accessing some famous CNN architectures
Detailed treatment of the [vanishing gradients](http://neuralnetworksanddeeplearning.com/chap5.html) problem
Github [repository](https://github.com/jcjohnson/cnn-benchmarks) of CNN architecture benchmarks
[ImageNet Large Scale Visual Recognition Competition (ILSVRC)](http://www.image-net.org/challenges/LSVRC/) website
# Optional Resources
If you would like to know more about interpreting CNNs and convolutional layers in particular, you are encouraged to check out these resources:
- Here's a [section](http://cs231n.github.io/understanding-cnn) from the Stanford's CS231n course on visualizing what CNNs learn.
- Check out this [demonstration](https://aiexperiments.withgoogle.com/what-neural-nets-see) of a cool [OpenFrameworks](http://openframeworks.cc/) app that visualizes CNNs in real-time, from user-supplied video!
- Here's a [demonstration](https://www.youtube.com/watch?v=AgkfIQ4IGaM&t=78s) of another visualization tool for CNNs. If you'd like to learn more about how these visualizations are made, check out this [video](https://www.youtube.com/watch?v=ghEmQSxT6tw&t=5s).
- Read this [Keras blog post](https://blog.keras.io/how-convolutional-neural-networks-see-the-world.html) on visualizing how CNNs see the world. In this post, you can find an accessible introduction to Deep Dreams, along with code for writing your own deep dreams in Keras. When you've read that:
- Also check out this [music video](https://www.youtube.com/watch?v=XatXy6ZhKZw) that makes use of Deep Dreams (look at 3:15-3:40)!
- Create your own Deep Dreams (without writing any code!) using this [website](https://deepdreamgenerator.com/).
- If you'd like to read more about interpretability of CNNs,
- here's an [article](https://blog.openai.com/adversarial-example-research/) that details some dangers from using deep learning models (that are not yet interpretable) in real-world applications.
- there's a lot of active research in this area. [These authors](https://arxiv.org/abs/1611.03530) recently made a step in the right direction.
- Matthew Zeiler and Rob Fergus' [deep visualization toolbox](https://www.youtube.com/watch?v=ghEmQSxT6tw), which lets us visualize what each layer in a CNN focuses on
```
## Setting up tweepy authentication
from tweepy import OAuthHandler
from tweepy import API
# Consumer key authentication
auth = OAuthHandler(consumer_key, consumer_secret)
# Access key authentication
auth.set_access_token(access_token, access_token_secret)
# Set up the API with the authentication handler
api = API(auth)
## Collecting data on keywords
from tweepy import Stream
# Set up words to track
keywords_to_track = ["#rstats", "#python"]
# Instantiate the SListener object
listen = SListener(api)
# Instantiate the Stream object
stream = Stream(auth, listen)
# Begin collecting data
stream.filter(track = keywords_to_track)
## Loading and accessing tweets
# Load JSON
import json
# Convert from JSON to Python object
tweet = json.loads(tweet_json)
# Print tweet text
print(tweet['text'])
# Print tweet id
print(tweet['id'])
## Accessing user data
# Print user handle
print(tweet['user']['screen_name'])
# Print user follower count
print(tweet['user']['followers_count'])
# Print user location
print(tweet['user']['location'])
# Print user description
print(tweet['user']['description'])
## Accessing retweet data
# Print the text of the tweet
print(rt['text'])
# RT @hannawallach: ICYMI: NIPS/ICML/ICLR are looking for a
# full-time programmer to run the conferences' submission/review processes. More in…
# Print the text of tweet which has been retweeted
print(rt['retweeted_status']['text'])
# ICYMI: NIPS/ICML/ICLR are looking for a full-time programmer
# to run the conferences' submission/review processes. M… https://t.co/aB9Y5tTyHT
# Print the user handle of the tweet
print(rt['user']['screen_name'])
# alexhanna
# Print the user handle of the tweet which has been retweeted
print(rt['retweeted_status']['user']['screen_name'])
# hannawallach
```
```
## Tweet Items and Tweet Flattening
# Print the tweet text
print(quoted_tweet['text'])
# Print the quoted tweet text
print(quoted_tweet['quoted_status']['text'])
# Print the quoted tweet's extended (140+) text
print(quoted_tweet['quoted_status']['extended_tweet']['full_text'])
# Print the quoted user location
print(quoted_tweet['quoted_status']['user']['location'])
____________________________________________________________
# Store the user screen_name in 'user-screen_name'
quoted_tweet['user-screen_name'] = quoted_tweet['quoted_status']['user']['screen_name']
# Store the quoted_status text in 'quoted_status-text'
quoted_tweet['quoted_status-text'] = quoted_tweet['quoted_status']['text']
# Store the quoted tweet's extended (140+) text in
# 'quoted_status-extended_tweet-full_text'
quoted_tweet['quoted_status-extended_tweet-full_text'] = quoted_tweet['quoted_status']['extended_tweet']['full_text']
## A tweet flattening function
import json
def flatten_tweets(tweets_json):
""" Flattens out tweet dictionaries so relevant JSON
is in a top-level dictionary."""
tweets_list = []
# Iterate through each tweet
for tweet in tweets_json:
tweet_obj = json.loads(tweet)
# Store the user screen name in 'user-screen_name'
tweet_obj['user-screen_name'] = tweet_obj['user']['screen_name']
# Check if this is a 140+ character tweet
if 'extended_tweet' in tweet_obj:
# Store the extended tweet text in 'extended_tweet-full_text'
tweet_obj['extended_tweet-full_text'] = tweet_obj['extended_tweet']['full_text']
if 'retweeted_status' in tweet_obj:
# Store the retweet user screen name in 'retweeted_status-user-screen_name'
tweet_obj['retweeted_status-user-screen_name'] = tweet_obj['retweeted_status']['user']['screen_name']
# Store the retweet text in 'retweeted_status-text'
tweet_obj['retweeted_status-text'] = tweet_obj['retweeted_status']['text']
tweets_list.append(tweet_obj)
return tweets_list
## Loading tweets into a DataFrame
# Import pandas
import pandas as pd
# Flatten the tweets and store in `tweets`
tweets = flatten_tweets(data_science_json)
# Create a DataFrame from `tweets`
ds_tweets = pd.DataFrame(tweets)
# Print out the first 5 tweets from this dataset
print(ds_tweets['text'].values[0:5])
```
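The promotion trick used by `flatten_tweets` can be seen on a toy nested dict with no Twitter data at all (the helper name `promote` is made up for this sketch):

```python
def promote(d, path):
    """Copy a nested field up to a top-level, dash-joined key."""
    node = d
    for key in path:
        if key not in node:
            return  # field absent in this tweet; nothing to promote
        node = node[key]
    d['-'.join(path)] = node

tweet = {'user': {'screen_name': 'alice'},
         'retweeted_status': {'text': 'hello'}}
promote(tweet, ['user', 'screen_name'])
promote(tweet, ['retweeted_status', 'text'])
print(tweet['user-screen_name'], tweet['retweeted_status-text'])  # alice hello
```

Flattening this way is what lets the nested fields become ordinary DataFrame columns in the next step.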
```
## Finding keywords
import numpy as np
# Flatten the tweets and store them
flat_tweets = flatten_tweets(data_science_json)
# Convert to DataFrame
ds_tweets = pd.DataFrame(flat_tweets)
# Find mentions of #python in 'text'
python = ds_tweets['text'].str.contains('#python', case=False)
# Print proportion of tweets mentioning #python
print("Proportion of #python tweets:", np.sum(python) / ds_tweets.shape[0] )
## Looking for text in all the wrong places
def check_word_in_tweet(word, data):
"""Checks if a word is in a Twitter dataset's text.
Checks text and extended tweet (140+ character tweets) for tweets,
retweets and quoted tweets.
Returns a logical pandas Series.
"""
contains_column = data['text'].str.contains(word, case = False)
contains_column |= data['extended_tweet-full_text'].str.contains(word, case = False)
contains_column |= data['quoted_status-text'].str.contains(word, case = False)
contains_column |= data['quoted_status-extended_tweet-full_text'].str.contains(word, case = False)
contains_column |= data['retweeted_status-text'].str.contains(word, case = False)
contains_column |= data['retweeted_status-extended_tweet-full_text'].str.contains(word, case = False)
return contains_column
## Comparing #python to #rstats
# Find mentions of #python in all text fields
python = check_word_in_tweet("#python", ds_tweets)
# Find mentions of #rstats in all text fields
rstats = check_word_in_tweet("#rstats", ds_tweets)
# Print proportion of tweets mentioning #python
print("Proportion of #python tweets:", np.sum(python) / ds_tweets.shape[0])
# Print proportion of tweets mentioning #rstats
print("Proportion of #rstats tweets:", np.sum(rstats) / ds_tweets.shape[0])
# <script.py> output:
# Proportion of #python tweets: 0.5733333333333334
# Proportion of #rstats tweets: 0.4693333333333333
## Creating time series data frame
# Print created_at to see the original format of datetime in Twitter data
print(ds_tweets['created_at'].head())
# Convert the created_at column to np.datetime object
ds_tweets['created_at'] = pd.to_datetime(ds_tweets['created_at'])
# Print created_at to see new format
print(ds_tweets['created_at'].head())
# Set the index of ds_tweets to created_at
ds_tweets = ds_tweets.set_index('created_at')
## Generating mean frequency
# Create a python column
ds_tweets['python'] = check_word_in_tweet('#python', ds_tweets)
# Create an rstats column
ds_tweets['rstats'] = check_word_in_tweet('#rstats', ds_tweets)
## Plotting mean frequency
import matplotlib.pyplot as plt
# Average of python column by day
mean_python = ds_tweets['python'].resample('1 d').mean()
# Average of rstats column by day
mean_rstats = ds_tweets['rstats'].resample('1 d').mean()
# Plot mean python by day(green)/mean rstats by day(blue)
plt.plot(mean_python.index.day, mean_python, color = 'green')
plt.plot(mean_rstats.index.day, mean_rstats, color = 'blue')
# Add labels and show
plt.xlabel('Day'); plt.ylabel('Frequency')
plt.title('Language mentions over time')
plt.legend(('#python', '#rstats'))
plt.show()
## Loading VADER
# Load SentimentIntensityAnalyzer
from nltk.sentiment.vader import SentimentIntensityAnalyzer
# Instantiate new SentimentIntensityAnalyzer
sid = SentimentIntensityAnalyzer()
# Generate sentiment scores
sentiment_scores = ds_tweets['text'].apply(sid.polarity_scores)
## Calculating sentiment scores
# Isolate the compound polarity score from each VADER result
sentiment = sentiment_scores.apply(lambda s: s['compound'])
# Print out the text of a positive tweet
print(ds_tweets[sentiment > .6]['text'].values[0])
# Print out the text of a negative tweet
print(ds_tweets[sentiment < -.6]['text'].values[0])
# Generate average sentiment scores for #python
sentiment_py = sentiment[check_word_in_tweet('#python', ds_tweets)].resample('1 d').mean()
# Generate average sentiment scores for #rstats
sentiment_r = sentiment[check_word_in_tweet('#rstats', ds_tweets)].resample('1 d').mean()
## Plotting sentiment scores
# Import matplotlib
import matplotlib.pyplot as plt
# Plot average #python sentiment per day
plt.plot(sentiment_py.index.day, sentiment_py, color = 'green')
# Plot average #rstats sentiment per day
plt.plot(sentiment_r.index.day, sentiment_r, color = 'blue')
plt.xlabel('Day')
plt.ylabel('Sentiment')
plt.title('Sentiment of data science languages')
plt.legend(('#python', '#rstats'))
plt.show()
```
```
## Creating retweet network
# Import networkx
import networkx as nx
# Create retweet network from edgelist
G_rt = nx.from_pandas_edgelist(
sotu_retweets,
source = 'user-screen_name',
target = 'retweeted_status-user-screen_name',
create_using = nx.DiGraph())
# Print the number of nodes
print('Nodes in RT network:', len(G_rt.nodes()))
# Print the number of edges
print('Edges in RT network:', len(G_rt.edges()))
## Creating reply network
# Import networkx
import networkx as nx
# Create reply network from edgelist
G_reply = nx.from_pandas_edgelist(
sotu_replies,
source = 'user-screen_name',
target = 'in_reply_to_screen_name',
create_using = nx.DiGraph())
# Print the number of nodes
print('Nodes in reply network:', len(G_reply.nodes()))
# Print the number of edges
print('Edges in reply network:', len(G_reply.edges()))
## Visualizing retweet network
# Create random layout positions
pos = nx.random_layout(G_rt)
# Create size list
sizes = [x[1] for x in G_rt.degree()]
# Draw the network
nx.draw_networkx(G_rt, pos,
with_labels = False,
node_size = sizes,
width = 0.1, alpha = 0.7,
arrowsize = 2, linewidths = 0)
# Turn axis off and show
plt.axis('off'); plt.show()
## In-degree centrality
# Generate in-degree centrality for retweets
rt_centrality = nx.in_degree_centrality(G_rt)
# Generate in-degree centrality for replies
reply_centrality = nx.in_degree_centrality(G_reply)
# Store centralities in DataFrame
rt = pd.DataFrame(list(rt_centrality.items()), columns = column_names)
reply = pd.DataFrame(list(reply_centrality.items()), columns = column_names)
# Print first five results in descending order of centrality
print(rt.sort_values('degree_centrality', ascending = False).head())
# Print first five results in descending order of centrality
print(reply.sort_values('degree_centrality', ascending = False).head())
## Betweenness Centrality
# Generate betweenness centrality for retweets
rt_centrality = nx.betweenness_centrality(G_rt)
# Generate betweenness centrality for replies
reply_centrality = nx.betweenness_centrality(G_reply)
# Store centralities in data frames
rt = pd.DataFrame(rt_centrality.items(), columns = column_names)
reply = pd.DataFrame(reply_centrality.items(), columns = column_names)
# Print first five results in descending order of centrality
print(rt.sort_values('betweenness_centrality', ascending = False).head())
# Print first five results in descending order of centrality
print(reply.sort_values('betweenness_centrality', ascending = False).head())
## Ratios
# Calculate in-degrees and store in DataFrame
degree_rt = pd.DataFrame(list(G_rt.in_degree()), columns = column_names)
degree_reply = pd.DataFrame(list(G_reply.in_degree()), columns = column_names)
# Merge the two DataFrames on screen name
ratio = degree_rt.merge(degree_reply, on = 'screen_name', suffixes = ('_rt', '_reply'))
# Calculate the ratio
ratio['ratio'] = ratio['degree_reply'] / ratio['degree_rt']
# Exclude any tweets with less than 5 retweets
ratio = ratio[ratio['degree_rt'] >= 5]
# Print out first five with highest ratio
print(ratio.sort_values('ratio', ascending = False).head())
```
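In-degree centrality is just a node's in-degree divided by (n - 1); a dependency-free check on a toy edge list (node names made up) mirrors what `nx.in_degree_centrality` computes above:

```python
edges = [('a', 'c'), ('b', 'c'), ('c', 'd')]
nodes = {endpoint for edge in edges for endpoint in edge}
indegree = {v: sum(1 for _, dst in edges if dst == v) for v in nodes}
centrality = {v: deg / (len(nodes) - 1) for v, deg in indegree.items()}
print(round(centrality['c'], 3))  # 0.667
```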
```
## Accessing user-defined location
# Print out the location of a single tweet
print(tweet_json['user']['location'])
# Flatten and load the SOTU tweets into a dataframe
tweets_sotu = pd.DataFrame(flatten_tweets(tweets_sotu_json))
# Print out top five user-defined locations
print(tweets_sotu['user-location'].value_counts().head())
## Accessing bounding box
def getBoundingBox(place):
""" Returns the bounding box coordinates."""
return place['bounding_box']['coordinates']
# Apply the function which gets bounding box coordinates
bounding_boxes = tweets_sotu['place'].apply(getBoundingBox)
# Print out the first bounding box coordinates
print(bounding_boxes.values[0])
# <script.py> output:
# [[[-94.043628, 28.855128], [-94.043628, 33.019544], [-88.758389, 33.019544], [-88.758389, 28.855128]]]
## Calculating the centroid
def calculateCentroid(place):
""" Calculates the centroid from a bounding box."""
# Obtain the coordinates from the bounding box.
coordinates = place['bounding_box']['coordinates'][0]
longs = np.unique( [x[0] for x in coordinates] )
lats = np.unique( [x[1] for x in coordinates] )
if len(longs) == 1 and len(lats) == 1:
# return a single coordinate
return (longs[0], lats[0])
elif len(longs) == 2 and len(lats) == 2:
# If we have two longs and lats, we have a box.
central_long = np.sum(longs) / 2
central_lat = np.sum(lats) / 2
else:
raise ValueError("Non-rectangular polygon not supported: %s" %
",".join(map(lambda x: str(x), coordinates)) )
return (central_long, central_lat)
# Calculate the centroids of place
centroids = tweets_sotu['place'].apply(calculateCentroid)
## Creating Basemap map
# Import Basemap
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
# Set up the US bounding box
us_boundingbox = [-125, 22, -64, 50]
# Set up the Basemap object
m = Basemap(llcrnrlon = us_boundingbox[0],
llcrnrlat = us_boundingbox[1],
urcrnrlon = us_boundingbox[2],
urcrnrlat = us_boundingbox[3],
projection='merc')
# Draw continents in white,
# coastlines and countries in gray
m.fillcontinents(color='white')
m.drawcoastlines(color='gray')
m.drawcountries(color='gray')
# Draw the states and show the plot
m.drawstates(color='gray')
plt.show()
## Plotting centroid coordinates
# Calculate the centroids for the dataset
# and isolate longitudue and latitudes
centroids = tweets_sotu['place'].apply(calculateCentroid)
lon = [x[0] for x in centroids]
lat = [x[1] for x in centroids]
# Draw continents, coastlines, countries, and states
m.fillcontinents(color='white', zorder = 0)
m.drawcoastlines(color='gray')
m.drawcountries(color='gray')
m.drawstates(color='gray')
# Draw the points and show the plot
m.scatter(lon, lat, latlon = True, alpha = 0.7)
plt.show()
## Coloring by sentiment
# Generate sentiment scores
sentiment_scores = tweets_sotu['text'].apply(sid.polarity_scores)
# Isolate the compound element
sentiment_scores = [x['compound'] for x in sentiment_scores]
# Draw the points
m.scatter(lon, lat, latlon = True,
c = sentiment_scores,
cmap = 'coolwarm', alpha = 0.7)
# Show the plot
plt.show()
```
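The centroid logic can also be checked standalone with the tweet `place` wrapper stripped away, using the bounding-box coordinates printed earlier in this section:

```python
import numpy as np

# Coordinates copied from the bounding box printed earlier
coords = [[-94.043628, 28.855128], [-94.043628, 33.019544],
          [-88.758389, 33.019544], [-88.758389, 28.855128]]
longs = np.unique([c[0] for c in coords])
lats = np.unique([c[1] for c in coords])
centroid = (float(np.sum(longs) / 2), float(np.sum(lats) / 2))
print(centroid)  # roughly (-91.401, 30.937)
```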
```
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
<a target="_blank" href="https://colab.research.google.com/github/GoogleCloudPlatform/keras-idiomatic-programmer/blob/master/workshops/Data_Engineering/Idiomatic%20Programmer%20-%20handbook%202%20-%20Codelab%201.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# Idiomatic Programmer Code Labs
## Code Labs #1 - Get Familiar with Data Preprocessing
## Prerequisites:
1. Familiar with Python
2. Completed Handbook 2/Part 7: Data Engineering
## Objectives:
1. Reading and Resizing Images
2. Assembling images into datasets
3. Setting the datatype
4. Normalizing/Standardizing images
## Setup:
Install the additional relevant packages to get started with Keras/OpenCV, and then import them.
```
# Install OpenCV computer vision package
!pip install -U opencv-python
# Import OpenCV
import cv2
# We will also be using the numpy package in this code lab
import numpy as np
print(cv2.__version__)
```
## Reading/Resizing Images
Let's read in an image and then resize it to 128x128 for the CNN input.
You fill in the blanks (replace the ??), make sure it passes the Python interpreter, and then verify its correctness with the summary output.
You will need to:
1. Set the parameter for reading an image in as color (RGB)
```
from urllib.request import urlopen
# Let's read in the image as a color image
url = "https://raw.githubusercontent.com/GoogleCloudPlatform/keras-idiomatic-programmer/master/workshops/apple.jpg"
request = urlopen(url)
img_array = np.asarray(bytearray(request.read()), dtype=np.uint8)
# HINT: the parameter value for a color image
image = cv2.imdecode(img_array, cv2.IMREAD_??)
# Let's verify we read it in correctly. We should see (584, 612, 3)
print(image.shape)
```
### Resize it to 128 x 128
Okay, we see that the image is 584 (height) x 612 (width). Hum, that's not square. We could simply resize it to 128 x 128, but if we do that, the image will be skewed. Why? Because the original height and width are not the same, and if we resize them as-is to the same length, we will distort the aspect ratio.
So, let's refit the image into a square frame and then resize it.
You will need to:
1. Set the padding for the top and bottom.
```
# Let's calculate the difference between width and height -- this should output 28
pad = (612 - 584)
print("pad size", pad)
# Split the padding evenly between the top and bottom
# HINT: even means half.
top = pad // ??
bottom = pad // ??
left = 0
right = 0
# Let's now make a copy of the image with the padded border.
# cv2.BORDER_CONSTANT means use a constant value for the padded border.
# [0, 0, 0] is the constant value (all black pixels)
color = [0, 0, 0]
image = cv2.copyMakeBorder(image, top, bottom, left, right, cv2.BORDER_CONSTANT,
value=color)
# This should output (612, 612, 3)
print("padded image", image.shape)
# Let's resize the image now to 128 x 128
# HINT: The tuple parameter is height x width
image = cv2.resize(image, (128, 128))
# This should output (128, 128, 3)
print("resized image", image.shape)
```
## Assembling into a dataset
Let's read in a group of images, resize them to the same size, and assemble them into a dataset (i.e., a single numpy multidimensional array).
You will need to:
1. Specify the numpy method to convert a list to a numpy array.
```
# Let's build a dataset of four images. We will start by using a list to append each image
# as it is read in.
images = []
for _ in range(4):
# Let's pretend we are reading in different images and resizing them,
# but instead we will just reuse our image from above.
images.append(image)
# convert the list of images to a numpy multidimensional array
# HINT: use the method that converts list to numpy array
images = np.??(images)
# This should output (4, 128, 128, 3), where the 4 indicates this is a batch of 4 images.
print("dataset", images.shape)
```
## Setting the Datatype
Next, we will set the data type of the pixel data to a single precision floating point value. That's a FLOAT32, which means 32 bits (which is 4 bytes).
You will need to:
1. Specify the data type for a 32-bit float.
```
# Set the datatype to single precision float (FLOAT32)
# HINT: It is lowercased.
images = images.astype(np.??)
# This should output: 4
print("bytes per pixel", images.itemsize)
```
## Normalizing/Standardizing the Pixel Data
Finally, we will standardize the pixel data:
1. Calculate the mean and standard deviation using numpy methods
2. Subtract the mean from the images and then divide by the standard deviation
You will need to:
1. Divide by the standard deviation
```
# Calculate the mean value across all the pixels
mean = np.mean(images)
# Calculate the corresponding standard deviation
std = np.std(images)
# Subtract the mean and divide by the standard deviation
# HINT: you calculate the standard deviation above.
images = (images - mean) / ??
# Let's print the before and after values:
# You should see: 3.1789145e-07 -7.0159636e-08
print(mean, np.mean(images))
```
## End of Code Lab
# Association Analysis
## Import all necessary libraries
```
#!fc-list :lang=zh family
import os
import glob
from functools import reduce
#import pickle
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
#from scipy import stats
from sklearn.cluster import KMeans
from mlxtend.frequent_patterns import apriori
from mlxtend.frequent_patterns import association_rules
#pd.set_option('display.max_rows', 10)
plt.rcParams['font.sans-serif'] = ['Noto Sans Mono CJK TC', 'sans-serif']
plt.rcParams['axes.unicode_minus'] = False
%matplotlib inline
```
## Load Data
```
try:
from google.colab import drive
# Mount the folder "drive" on google drive to Colab Notebook
drive.mount('/content/drive')
path = '/content/drive/My Drive/wids-taipei/2020-WiDS-Taipei-MLCC-Workshop/dataset/*.csv'
except ModuleNotFoundError:
path = '../data/*.csv'
```
### Read data
```
# Read data
filenames = glob.glob(path)
pd_dict = {}
for filename in filenames:
name = filename.split("/")[-1].split(".")[0]
pd_dict[name] = pd.read_csv(os.path.join(filename))
purchase_data = pd_dict['customer_purchase_dataset']
payments_data = pd_dict['order_payments_dataset']
reviews_data = pd_dict['order_reviews_dataset']
orders_data = pd_dict['orders_dataset']
customers_data = pd_dict['customers_dataset']
```
## Association Analysis
```
def encode_units(x):
if x <=0 :
return 0
else:
return 1
# crosstab: compute a simple cross tabulation of two (or more) factors.
# Default aggregation -> frequency
basket = pd.crosstab(purchase_data['customer_unique_id'], purchase_data['product_main_category'])
basket_sets = basket.applymap(encode_units)
basket_sets
frequent_itemsets = apriori(basket_sets, min_support=0.05, use_colnames=True)
rules = association_rules(frequent_itemsets, metric="lift")
rules = rules[['antecedents', 'consequents', 'support', 'confidence', 'lift']]
# support(x, y) = number(x, y) / number(all samples)
rules['frequency'] = rules['support'] * len(basket_sets)
rules['length_1'] = rules['antecedents'].apply(lambda x: len(x))
rules['length_2'] = rules['consequents'].apply(lambda x: len(x))
rules_new = rules.loc[(rules['length_1'] == 1) & (rules['length_2'] == 1)].copy()
rules_new["antecedents"] = rules_new["antecedents"].apply(lambda x: list(x)[0]).astype("unicode")
rules_new["consequents"] = rules_new["consequents"].apply(lambda x: list(x)[0]).astype("unicode")
rules_new
cm = sns.light_palette((260, 75, 60), input="husl", as_cmap=True)
support_data = rules_new.groupby(['antecedents', 'consequents']).apply(lambda x: x.sort_values('support', ascending=False))
support_data = support_data[['support']].droplevel(2)
s = support_data.style.background_gradient(cmap=cm)
s
```
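The `support`, `confidence`, and `lift` columns computed by mlxtend follow simple counting definitions. Here is a minimal standard-library sketch on toy transactions (the item names are made up for illustration):

```python
# Hand-rolled support/confidence/lift for a one-item -> one-item rule,
# mirroring the definitions mlxtend uses above. Toy data, illustrative only.
transactions = [
    {"toys", "books"},
    {"toys", "books", "garden"},
    {"toys"},
    {"books"},
    {"garden"},
]
n = len(transactions)

def support(*items):
    # Fraction of transactions containing all the given items.
    return sum(1 for t in transactions if set(items) <= t) / n

sup_xy = support("toys", "books")      # P(X and Y)
confidence = sup_xy / support("toys")  # P(Y | X)
lift = confidence / support("books")   # P(Y | X) / P(Y)

print(sup_xy, confidence, lift)  # 0.4, ~0.667, ~1.111
```

A lift above 1 means the antecedent and consequent co-occur more often than they would if they were independent.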
### Deep Kung-Fu with advantage actor-critic
In this notebook you'll build a deep reinforcement learning agent for Atari [Kung-Fu Master](https://gym.openai.com/envs/KungFuMaster-v0/) that uses a recurrent neural net.

```
import sys, os
if 'google.colab' in sys.modules:
# https://github.com/yandexdataschool/Practical_RL/issues/256
!pip uninstall tensorflow --yes
!pip uninstall keras --yes
!pip install tensorflow-gpu==1.13.1
!pip install keras==2.2.4
if not os.path.exists('.setup_complete'):
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/setup_colab.sh -O- | bash
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/week08_pomdp/atari_util.py
!touch .setup_complete
# This code creates a virtual display to draw game images on.
# It will have no effect if your machine has a monitor.
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
"""
A thin wrapper for openAI gym environments that maintains a set of parallel games and has a method to generate
interaction sessions given agent one-step applier function.
"""
import numpy as np
# A whole lot of space invaders
class EnvPool(object):
def __init__(self, agent, make_env, n_parallel_games=1):
"""
A special class that handles training on multiple parallel sessions
        and is capable of some auxiliary actions like evaluating the agent on one game session (see .evaluate()).
:param agent: Agent which interacts with the environment.
:param make_env: Factory that produces environments OR a name of the gym environment.
        :param n_parallel_games: Number of parallel games. One game by default.
"""
# Create atari games.
self.agent = agent
self.make_env = make_env
self.envs = [self.make_env() for _ in range(n_parallel_games)]
# Initial observations.
self.prev_observations = [env.reset() for env in self.envs]
# Agent memory variables (if you use recurrent networks).
self.prev_memory_states = agent.get_initial_state(n_parallel_games)
# Whether particular session has just been terminated and needs
# restarting.
self.just_ended = [False] * len(self.envs)
def interact(self, n_steps=100, verbose=False):
        """Generate interaction sessions with the pooled Atari environments (OpenAI Gym)
Sessions will have length n_steps. Each time one of games is finished, it is immediately getting reset
and this time is recorded in is_alive_log (See returned values).
:param n_steps: Length of an interaction.
:returns: observation_seq, action_seq, reward_seq, is_alive_seq
:rtype: a bunch of tensors [batch, tick, ...]
"""
def env_step(i, action):
if not self.just_ended[i]:
new_observation, cur_reward, is_done, info = self.envs[i].step(action)
if is_done:
# Game ends now, will finalize on next tick.
self.just_ended[i] = True
# note: is_alive=True in any case because environment is still
# alive (last tick alive) in our notation.
return new_observation, cur_reward, True, info
else:
# Reset environment, get new observation to be used on next
# tick.
new_observation = self.envs[i].reset()
# Reset memory for new episode.
initial_memory_state = self.agent.get_initial_state(
batch_size=1)
for m_i in range(len(new_memory_states)):
new_memory_states[m_i][i] = initial_memory_state[m_i][0]
if verbose:
print("env %i reloaded" % i)
self.just_ended[i] = False
return new_observation, 0, False, {'end': True}
history_log = []
last_prev_mem_state = self.prev_memory_states
for i in range(n_steps):
dropout = False if i <= n_steps - 2 else True
new_memory_states, readout = self.agent.step(self.prev_memory_states,
self.prev_observations,
dropout)
sampled_actions = self.agent.sample_actions(readout)
new_observations, cur_rewards, is_alive, infos = zip(
*map(env_step, range(len(self.envs)), sampled_actions))
# Append data tuple for this tick.
history_log.append(
(self.prev_observations, sampled_actions, cur_rewards, is_alive))
last_prev_mem_state = self.prev_memory_states
self.prev_observations = new_observations
self.prev_memory_states = new_memory_states
# add last observation
#dummy_actions = [0] * len(self.envs)
#dummy_rewards = [0] * len(self.envs)
#dummy_mask = [1] * len(self.envs)
#history_log.append(
# (self.prev_observations,
# dummy_actions,
# dummy_rewards,
# dummy_mask))
# cast to numpy arrays,
# transpose from [time, batch, ...] to [batch, time, ...]
history_log = [
np.array(tensor).swapaxes(0, 1)
for tensor in zip(*history_log)
]
observation_seq, action_seq, reward_seq, is_alive_seq = history_log
return observation_seq, action_seq, reward_seq, is_alive_seq, last_prev_mem_state
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import display
```
For starters, let's take a look at the game itself:
* Image resized to 42x42 and converted to grayscale to run faster
* Agent sees last 4 frames of game to account for object velocity
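The 4-frame observation buffer mentioned above can be sketched with a fixed-size deque. This is an illustrative sketch only, not the actual `PreprocessAtari` implementation (which also handles resizing and grayscaling):

```python
# Minimal sketch of a 4-frame observation buffer, used to give the agent
# a sense of object velocity. Frames are stand-in strings for illustration.
from collections import deque

class FrameBuffer:
    def __init__(self, n_frames=4):
        self.frames = deque(maxlen=n_frames)

    def reset(self, first_frame):
        # Fill the buffer with copies of the episode's first frame.
        for _ in range(self.frames.maxlen):
            self.frames.append(first_frame)
        return list(self.frames)

    def step(self, new_frame):
        # The oldest frame is dropped automatically by the bounded deque.
        self.frames.append(new_frame)
        return list(self.frames)

buf = FrameBuffer(n_frames=4)
print(buf.reset("f0"))  # ['f0', 'f0', 'f0', 'f0']
print(buf.step("f1"))   # ['f0', 'f0', 'f0', 'f1']
```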
```
import gym
from atari_util import PreprocessAtari
def make_env():
env = gym.make("KungFuMasterDeterministic-v0")
env = PreprocessAtari(
env, height=42, width=42,
crop=lambda img: img[60:-30, 5:],
dim_order='tensorflow',
color=False, n_frames=4)
return env
env = make_env()
obs_shape = env.observation_space.shape
n_actions = env.action_space.n
print("Observation shape:", obs_shape)
print("Num actions:", n_actions)
print("Action names:", env.env.env.get_action_meanings())
s = env.reset()
for _ in range(100):
s, _, _, _ = env.step(env.action_space.sample())
plt.title('Game image')
plt.imshow(env.render('rgb_array'))
plt.show()
plt.title('Agent observation (4-frame buffer)')
plt.imshow(s.transpose([0, 2, 1]).reshape([42,-1]))
plt.show()
```
### Simple agent for fully-observable MDP
Here's the code for an actor-critic agent; note that this version already adds a recurrent LSTM cell on top of the feedforward layers. Please read it carefully: you'll have to extend it later!
```
import tensorflow as tf
from keras.layers import Conv2D, Dense, Flatten, Dropout
from tensorflow.nn.rnn_cell import LSTMCell, LSTMStateTuple
tf.reset_default_graph()
sess = tf.InteractiveSession()
class SimpleRecurrentAgent:
def __init__(self, name, obs_shape, n_actions, reuse=False):
"""A simple actor-critic agent"""
with tf.variable_scope(name, reuse=reuse):
            # Note: number of units/filters is arbitrary, you can and should change it at your will
self.conv0 = Conv2D(32, (4, 4), strides=(2, 2), activation='relu')
self.conv1 = Conv2D(64, (3, 3), strides=(2, 2), activation='relu')
self.conv2 = Conv2D(64, (3, 3), strides=(1, 1), activation='relu')
self.flatten = Flatten()
self.hid = Dense(512, activation='relu')
# Actor: pi(a|s)
self.logits = Dense(n_actions)
# Critic: State Values
self.state_value = Dense(1)
# Recurrent Layer
self.hid_size = 512
self.rnn0 = LSTMCell(self.hid_size, state_is_tuple = True)
self.dropout = Dropout(9/10)
# prepare a graph for agent step
initial_state_c = tf.placeholder(dtype=tf.float32,
shape=[None, self.hid_size],
name="init_state_c")
initial_state_h = tf.placeholder(dtype=tf.float32,
shape=[None, self.hid_size],
name="init_state_h")
self.prev_state_placeholder = LSTMStateTuple(initial_state_c, initial_state_h)
self.obs_t = tf.placeholder(tf.float32, [None, ] + list(obs_shape))
self.next_state, self.agent_outputs = self.symbolic_step(self.prev_state_placeholder,
self.obs_t)
c = self.dropout(self.next_state[0])
h = self.dropout(self.next_state[1])
self.next_state_dropout = LSTMStateTuple(c, h)
def symbolic_step(self, prev_state, obs_t):
"""Takes agent's previous step and observation, returns next state and whatever it needs to learn (tf tensors)"""
nn = self.conv0(obs_t)
nn = self.conv1(nn)
nn = self.conv2(nn)
nn = self.flatten(nn)
nn = self.hid(nn)
# Apply recurrent neural net for one step here.
# The recurrent cell should take the last feedforward dense layer as input.
batch_ones = tf.ones(tf.shape(obs_t)[0])
new_out, new_state_ch = tf.nn.dynamic_rnn(self.rnn0, nn[:,None],
initial_state = prev_state,
sequence_length = batch_ones)
logits = self.logits(new_out[:,0])
state_value = self.state_value(new_out[:,0])
return new_state_ch, (logits, state_value)
def get_initial_state(self, batch_size):
        # LSTMStateTuple([batch_size x hid_size], [batch_size x hid_size])
a = np.zeros([batch_size, self.hid_size], dtype=np.float32)
return LSTMStateTuple(a, a)
# Instantiation
def step(self, prev_state, obs_t, dropout=False):
"""Same as symbolic state except it operates on numpy arrays"""
sess = tf.get_default_session()
feed_dict = {self.obs_t: obs_t,
self.prev_state_placeholder: prev_state}
state_ph = self.next_state if dropout==False else self.next_state_dropout
return sess.run([state_ph, self.agent_outputs], feed_dict)
def sample_actions(self, agent_outputs):
"""pick actions given numeric agent outputs (np arrays)"""
logits, state_values = agent_outputs
        z = logits - np.max(logits, axis=-1, keepdims=True)  # subtract max for numerical stability
        policy = np.exp(z) / np.sum(np.exp(z), axis=-1, keepdims=True)
return [np.random.choice(len(p), p=p) for p in policy]
def get_state_values(self, prev_state, obs_t):
sess = tf.get_default_session()
feed_dict = {self.obs_t: obs_t,
self.prev_state_placeholder: prev_state}
return sess.run(self.agent_outputs[1], feed_dict)
n_parallel_games = 10
rollout_length = 30
gamma = 0.99
agent = SimpleRecurrentAgent('agent_with_memory', obs_shape, n_actions)
sess.run(tf.global_variables_initializer())
state = [env.reset()]
_, (logits, value) = agent.step(agent.get_initial_state(1), state)
print("action logits:\n", logits)
print("state values:\n", value)
```
### Let's play!
Let's build a function that measures the agent's average reward.
```
def evaluate(agent, env, n_games=1):
    """Plays a game from start till done, returns per-game rewards."""
game_rewards = []
for _ in range(n_games):
# initial observation and memory
observation = env.reset()
prev_memories = agent.get_initial_state(1)
total_reward = 0
while True:
new_memories, readouts = agent.step(
prev_memories, observation[None, ...])
action = agent.sample_actions(readouts)
observation, reward, done, info = env.step(action[0])
total_reward += reward
prev_memories = new_memories
if done:
break
game_rewards.append(total_reward)
return game_rewards
#import gym.wrappers
#with gym.wrappers.Monitor(make_env(), directory="videos", force=True) as env_monitor:
# rewards = evaluate(agent, env_monitor, n_games=3)
#print(rewards)
# Show video. This may not work in some setups. If it doesn't
# work for you, you can download the videos and view them locally.
#from pathlib import Path
#from IPython.display import HTML
#video_names = sorted([s for s in Path('videos').iterdir() if s.suffix == '.mp4'])
#HTML("""
#<video width="640" height="480" controls>
# <source src="{}" type="video/mp4">
#</video>
#""".format(video_names[-1])) # You can also try other indices
```
### Training on parallel games
We introduce a class called EnvPool - it's a tool that handles multiple environments for you. Here's how it works:

```
pool = EnvPool(agent, make_env, n_parallel_games)
# for each of n_parallel_games, take "rollout_length" steps
rollout_obs, rollout_actions, rollout_rewards, rollout_mask, last_prev_mem_state = pool.interact(rollout_length)
print("Actions shape:", rollout_actions.shape)
print("Rewards shape:", rollout_rewards.shape)
print("Mask shape:", rollout_mask.shape)
print("Observations shape: ", rollout_obs.shape)
print("Last Previous Memory State: ", (last_prev_mem_state[0].shape, last_prev_mem_state[1].shape))
```
# Actor-critic objective
Here we define a loss function that uses the rollout above to train the advantage actor-critic agent.
Our loss consists of three components:
* __The policy "loss"__
$$ \hat J = {1 \over T} \cdot \sum_t { \log \pi(a_t | s_t) } \cdot A_{const}(s,a) $$
* This function has no meaning in and of itself, but it was built such that
* $ \nabla \hat J = {1 \over T} \cdot \sum_t { \nabla \log \pi(a_t | s_t) } \cdot A(s,a) \approx \nabla E_{s, a \sim \pi} R(s,a) $
* Therefore if we __maximize__ $\hat J$ with gradient ascent we will maximize expected reward
* __The value "loss"__
$$ L_{td} = {1 \over T} \cdot \sum_t { [r + \gamma \cdot V_{const}(s_{t+1}) - V(s_t)] ^ 2 }$$
* Ye Olde TD loss from Q-learning and the like
* If we minimize this loss, V(s) will converge to $V_\pi(s) = E_{a \sim \pi(a | s)} R(s,a) $
* __Entropy Regularizer__
$$ H = - {1 \over T} \sum_t \sum_a {\pi(a|s_t) \cdot \log \pi (a|s_t)}$$
* If we __maximize__ entropy we discourage the agent from assigning near-zero probability to actions prematurely (i.e. we encourage exploration)
So we optimize a linear combination of $L_{td} - \hat J -H$
__One more thing:__ since we train on T-step rollouts, we can use N-step formula for advantage for free:
* At the last step, $A(s_t,a_t) = r(s_t, a_t) + \gamma \cdot V(s_{t+1}) - V(s) $
* One step earlier, $A(s_t,a_t) = r(s_t, a_t) + \gamma \cdot r(s_{t+1}, a_{t+1}) + \gamma ^ 2 \cdot V(s_{t+2}) - V(s) $
* Et cetera, et cetera. This way the agent starts training much faster, since its estimate of A(s,a) depends less on its (imperfect) value function and more on actual rewards. There's also a [nice generalization](https://arxiv.org/abs/1506.02438) of this.
__Note:__ it's also a good idea to scale rollout_len up to learn longer sequences. You may wish to set it to >=20, or to start at 10 and scale it up as time passes.
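The return recursion behind the N-step advantage, $G_t = r_t + \gamma \cdot G_{t+1}$ bootstrapped from the critic's value estimate at the cut-off, can be sketched in plain Python (the notebook's `acc_rewards` function later implements a close variant; the numbers here are toy values):

```python
# N-step return: iterate backwards, seeding the recursion with the critic's
# value estimate for the state just past the rollout. Toy numbers only.
def n_step_returns(rewards, last_state_value, gamma=0.99):
    returns = [0.0] * len(rewards)
    g = last_state_value
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * g
        returns[t] = g
    return returns

rewards = [1.0, 0.0, 2.0]
returns = n_step_returns(rewards, last_state_value=10.0, gamma=0.5)
print(returns)
# G_2 = 2 + 0.5*10 = 7;  G_1 = 0 + 0.5*7 = 3.5;  G_0 = 1 + 0.5*3.5 = 2.75
```

Subtracting the critic's value $V(s_t)$ from each $G_t$ gives the advantage used in the policy loss.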
```
# [batch, time, h, w, c]
observations_ph = tf.placeholder('float32', [None, None,] + list(obs_shape))
sampled_actions_ph = tf.placeholder('int32', (None, None,))
mask_ph = tf.placeholder('float32', (None, None,))
rewards_ph = tf.placeholder('float32', (None, None,))
cumulative_rewards_ph = tf.placeholder('float32', (None, None,))
initial_memory_ph = agent.prev_state_placeholder
# get new_state, (actor->logits, critic->state_value)
next_state, dummy_outputs = agent.symbolic_step(initial_memory_ph,
observations_ph[:, 0])
print("dummy_outputs:", dummy_outputs,'\n')
next_memory_seq, outputs_seq = tf.scan(
lambda stack, obs_t: agent.symbolic_step(stack[0], obs_t),
# return new_state_ch, (logits, state_value)
initializer = (initial_memory_ph, dummy_outputs),
elems = tf.transpose(observations_ph, [1, 0, 2, 3, 4])
# elem.shape = [time, batch, h, w, c]
)
print("next_memory_seq", next_memory_seq,'\n')
print("outputs_seq:", outputs_seq,'\n')
# from [time, batch] back to [batch, time]
outputs_seq = [tf.transpose(
tensor, [1, 0] + list(range(2, tensor.shape.ndims))) for tensor in outputs_seq]
print("outputs_seq:", outputs_seq)
# actor-critic losses
# actor -> logits, with shape: [batch, time, n_actions]
# critic -> states, with shape: [batch, time, 1]
logits_seq, state_values_seq = outputs_seq
logprobs_seq = tf.nn.log_softmax(logits_seq)
logp_actions = tf.reduce_sum(logprobs_seq * tf.one_hot(sampled_actions_ph, n_actions),
axis=-1)[:, :-1]
current_rewards = rewards_ph[:, :-1] / 100.
current_state_values = state_values_seq[:, :-1, 0]
next_state_values = state_values_seq[:, 1:, 0] * mask_ph[:, :-1]
# policy gradient
# compute the advantage from the cumulative (n-step) returns and current_state_values
advantage = cumulative_rewards_ph[:, :-1] - current_state_values
assert advantage.shape.ndims == 2
# compute policy entropy given logits_seq. Mind the sign!
policy = tf.nn.softmax(logits_seq, axis=-1)
entropy = - tf.reduce_sum(policy * logprobs_seq, axis=-1)
assert entropy.shape.ndims == 2
actor_loss = - tf.reduce_mean(logp_actions * tf.stop_gradient(advantage))
actor_loss -= 0.001 * tf.reduce_mean(entropy)
# Prepare Temporal Difference error (States)
target_state_values = current_rewards + gamma * next_state_values
critic_loss = tf.reduce_mean(
(current_state_values - tf.stop_gradient(target_state_values))**2)
train_step = tf.train.AdamOptimizer(1e-5).minimize(actor_loss + critic_loss)
sess.run(tf.global_variables_initializer())
```
# Train
Just run the training step and see if the agent learns any better.
```
def acc_rewards(rewards, last_state_values, gamma=0.99):
# rewards at each step [batch, time]
# in a phase, last previous memory state [batch, state_dim]
# discount for reward
"""
Take a list of immediate rewards r(s,a) for the whole session
and compute cumulative returns (a.k.a. G(s,a) in Sutton '16).
G_t = r_t + gamma*r_{t+1} + gamma^2*r_{t+2} + ...
A simple way to compute cumulative rewards is to iterate from the last
to the first timestep and compute G_t = r_t + gamma*G_{t+1} recurrently
You must return an array/list of cumulative rewards with as many elements as in the initial rewards.
"""
curr_rewards = rewards / 100.
b_size, time = rewards.shape
acc_reward = np.zeros((b_size, time), dtype='float32')
acc_reward[:,time-1] = last_state_values
for i in reversed(np.arange(time-1)):
acc_reward[:,i] = curr_rewards[:,i] + gamma * acc_reward[:,i+1]
return acc_reward
def sample_batch(rollout_length=rollout_length):
rollout_obs, rollout_actions, rollout_rewards, rollout_mask, prev_state = pool.interact(rollout_length)
last_state_values = agent.get_state_values(prev_state, rollout_obs[:,-1])
rollout_cumulative_rewards = acc_rewards(rollout_rewards, last_state_values[:,0])
feed_dict = {
initial_memory_ph: pool.prev_memory_states,
observations_ph: rollout_obs,
sampled_actions_ph: rollout_actions,
mask_ph: rollout_mask,
rewards_ph: rollout_rewards,
cumulative_rewards_ph: rollout_cumulative_rewards,
}
return feed_dict
from IPython.display import clear_output
from tqdm import trange
from pandas import DataFrame
moving_average = lambda x, **kw: DataFrame(
{'x': np.asarray(x)}).x.ewm(**kw).mean().values
rewards_history = []
iters = 301
for i in trange(iters):
sess.run(train_step, sample_batch())
if i % 100 == 0:
rewards_history.append(np.mean(evaluate(agent, env, n_games=1)))
clear_output(True)
plt.plot(rewards_history, label='rewards')
plt.plot(moving_average(np.array(rewards_history),
span=rollout_length), label='rewards ewma@'+str(rollout_length))
plt.legend()
plt.show()
if rewards_history[-1] >= 20000:
print("Your agent has just passed the minimum homework threshold")
break
```
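The `moving_average` helper above wraps pandas' `ewm(span=...).mean()`. A minimal recursive sketch of the `adjust=False` flavour with `alpha = 2 / (span + 1)` (pandas' default `adjust=True` weights the earliest samples slightly differently):

```python
# Recursive exponentially weighted moving average:
#   y_t = alpha * x_t + (1 - alpha) * y_{t-1},  alpha = 2 / (span + 1)
# This matches pandas' ewm(..., adjust=False).mean() recursion.
def ewma(xs, span):
    alpha = 2.0 / (span + 1.0)
    out = []
    y = None
    for x in xs:
        y = x if y is None else alpha * x + (1.0 - alpha) * y
        out.append(y)
    return out

print(ewma([1.0, 2.0, 3.0], span=3))  # alpha = 0.5 -> [1.0, 1.5, 2.25]
```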
### "Final" evaluation
```
import gym.wrappers
with gym.wrappers.Monitor(make_env(), directory="videos", force=True) as env_monitor:
final_rewards = evaluate(agent, env_monitor, n_games=n_parallel_games)
print("Final mean reward", np.mean(final_rewards))
# Show video. This may not work in some setups. If it doesn't
# work for you, you can download the videos and view them locally.
from pathlib import Path
from IPython.display import HTML
video_names = sorted([s for s in Path('videos').iterdir() if s.suffix == '.mp4'])
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format(video_names[-1])) # You can also try other indices
```
### POMDP setting
The Atari game we're working with is actually a POMDP: your agent needs to track the timing at which enemies spawn and move, but it cannot do so unless it has some memory.
Let's design another agent that has a recurrent neural net memory to solve this.
### Now let's train it!
```
# A whole lot of your code here: train the new agent with GRU memory.
# - create pool
# - write loss functions and training op
# - train
# You can reuse most of the code with zero to few changes
```
# Final Batching
```
# Set working directory
setwd("./WorkDir")
# Create holding directory for preprocessed datasets
if (!dir.exists("./CSI/Preprocessed")) {dir.create("./CSI/Preprocessed")}
# Load necessary libraries
suppressPackageStartupMessages(library(tidyverse))
```
### Import Kernels
```
# Genetic
Gene <- readRDS("./CSI/Preprocessed/kernel_Genetic.rds"); dim(Gene)
# Phenotype
Phen <- readRDS("./CSI/Preprocessed/kernel_Pheno.rds") %>% rename(SampleID = subjectkey); dim(Phen)
# Imaging
sMRI <- readRDS("./CSI/Preprocessed/kernel_sMRI.rds") %>% rename(SampleID = subjectkey); dim(sMRI)
dMRI <- readRDS("./CSI/Preprocessed/kernel_dMRI.rds") %>% rename(SampleID = subjectkey); dim(dMRI)
tsfMRI <- readRDS("./CSI/Preprocessed/kernel_tsfMRI.rds") %>% rename(SampleID = subjectkey); dim(tsfMRI)
rsfMRI <- readRDS("./CSI/Preprocessed/kernel_rsfMRI.rds") %>% rename(SampleID = subjectkey); dim(rsfMRI)
### Get a census of all sample names, intersect them, and compare to the OCD diagnosis
OCD <- read.table("./OCD.pheno", sep="\t", header=F); dim(OCD)
UNION <- purrr::reduce(list(Gene$SampleID, Phen$SampleID, sMRI$SampleID, dMRI$SampleID, rsfMRI$SampleID, tsfMRI$SampleID), union); length(UNION)
INTERSECTION <- purrr::reduce(list(Gene$SampleID, Phen$SampleID, sMRI$SampleID, dMRI$SampleID, rsfMRI$SampleID, tsfMRI$SampleID), intersect); length(INTERSECTION)
```
### Batch samples
```
# Update OCD table
colnames(OCD) <- c("SampleID", "OCD")
OCD$OCD[OCD$OCD == 1] <- NA
OCD$OCD[OCD$OCD == 2] <- 1
saveRDS(OCD, file="./CSI/OCD.rds")
# Assign ambiguous OCD cases to the experimental batch
BAT <- OCD %>%
mutate(Experimental = ifelse(is.na(OCD), 1, 0))
# Create vectors for cases and controls
BAT_Case <- OCD %>%
filter((OCD==1) & (SampleID %in% INTERSECTION)) %>%
pull(SampleID)
BAT_Cont <- OCD %>%
filter((OCD==0) & (SampleID %in% INTERSECTION)) %>%
pull(SampleID)
# Create Test set
set.seed(1)
SEL <- c(sample(BAT_Case, 50), sample(BAT_Cont, 50))
BAT <- BAT %>%
mutate(Test = ifelse(SampleID %in% SEL, 1, 0))
BAT_Case <- BAT_Case[!BAT_Case %in% SEL]
BAT_Cont <- BAT_Cont[!BAT_Cont %in% SEL]
# Create Validation Set
set.seed(1)
SEL <- c(sample(BAT_Case, 50), sample(BAT_Cont, 50))
BAT <- BAT %>%
mutate(Valid = ifelse(SampleID %in% SEL, 1, 0))
BAT_Case <- BAT_Case[!BAT_Case %in% SEL]
BAT_Cont <- BAT_Cont[!BAT_Cont %in% SEL]
# Create a list of samples to remove from final UNION list, and remove them
REM <- c(
pull(filter(BAT, Experimental == 1), SampleID),
pull(filter(BAT, Test == 1), SampleID),
pull(filter(BAT, Valid == 1), SampleID)
)
# Remove those samples from UNION
UNION <- UNION[!UNION %in% REM]
# Create batches for genetic set
i=1
set.seed(1)
BAT_Case <- OCD %>%
filter((OCD==1) & (SampleID %in% UNION) & (SampleID %in% Gene$SampleID)) %>%
pull(SampleID)
BAT_Cont <- OCD %>%
filter((OCD==0) & (SampleID %in% UNION) & (SampleID %in% Gene$SampleID)) %>%
pull(SampleID)
BAT_GENE <- BAT %>% filter(SampleID %in% Gene$SampleID)
while(length(BAT_Cont)>=94){
name <- paste0("Train_",i,collapse="")
SEL <- c(sample(BAT_Case, 100), sample(BAT_Cont, 94))
BAT_GENE <- BAT_GENE %>%
mutate(!!sym(name) := ifelse(SampleID %in% SEL, 1, 0))
BAT_Cont <- BAT_Cont[!BAT_Cont %in% SEL]
i=i+1
cat(name, "-", length(BAT_Cont), "\n")
}
BAT_GENE <- BAT_GENE %>%
mutate_at(vars(Experimental:Train_77), as.numeric) %>%
rowwise() %>%
mutate(ALL = sum(c_across(all_of(names(BAT_GENE)[3:81]))),
TRAIN = sum(c_across(all_of(names(BAT_GENE)[6:81]))))
saveRDS(BAT_GENE, file="./CSI/Preprocessed/Batch_Gene.rds")
# Create batches for pheno set
i=1
set.seed(1)
BAT_Case <- OCD %>%
filter((OCD==1) & (SampleID %in% UNION) & (SampleID %in% Phen$SampleID)) %>%
pull(SampleID)
BAT_Cont <- OCD %>%
filter((OCD==0) & (SampleID %in% UNION) & (SampleID %in% Phen$SampleID)) %>%
pull(SampleID)
BAT_PHEN <- BAT %>% filter(SampleID %in% Phen$SampleID)
while(length(BAT_Cont)>=124){
name <- paste0("Train_",i,collapse="")
SEL <- c(sample(BAT_Case, 100), sample(BAT_Cont, 124))
BAT_PHEN <- BAT_PHEN %>%
mutate(!!sym(name) := ifelse(SampleID %in% SEL, 1, 0))
BAT_Cont <- BAT_Cont[!BAT_Cont %in% SEL]
i=i+1
cat(name, "-", length(BAT_Cont), "\n")
}
BAT_PHEN <- BAT_PHEN %>%
mutate_at(vars(Experimental:Train_82), as.numeric) %>%
rowwise() %>%
mutate(ALL = sum(c_across(all_of(names(BAT_PHEN)[3:81]))),
TRAIN = sum(c_across(all_of(names(BAT_PHEN)[6:81]))))
saveRDS(BAT_PHEN, file="./CSI/Preprocessed/Batch_Phen.rds")
# Create batches for sMRI set
i=1
set.seed(1)
BAT_Case <- OCD %>%
filter((OCD==1) & (SampleID %in% UNION) & (SampleID %in% sMRI$SampleID)) %>%
pull(SampleID)
BAT_Cont <- OCD %>%
filter((OCD==0) & (SampleID %in% UNION) & (SampleID %in% sMRI$SampleID)) %>%
pull(SampleID)
BAT_sMRI <- BAT %>% filter(SampleID %in% sMRI$SampleID)
while(length(BAT_Cont)>=109){
name <- paste0("Train_",i,collapse="")
SEL <- c(sample(BAT_Case, 100), sample(BAT_Cont, 109))
BAT_sMRI <- BAT_sMRI %>%
mutate(!!sym(name) := ifelse(SampleID %in% SEL, 1, 0))
BAT_Cont <- BAT_Cont[!BAT_Cont %in% SEL]
i=i+1
cat(name, "-", length(BAT_Cont), "\n")
}
BAT_sMRI <- BAT_sMRI %>%
mutate_at(vars(Experimental:Train_93), as.numeric) %>%
rowwise() %>%
mutate(ALL = sum(c_across(all_of(names(BAT_sMRI)[3:81]))),
TRAIN = sum(c_across(all_of(names(BAT_sMRI)[6:81]))))
saveRDS(BAT_sMRI, file="./CSI/Preprocessed/Batch_sMRI.rds")
# Create batches for dMRI set
i=1
set.seed(1)
BAT_Case <- OCD %>%
filter((OCD==1) & (SampleID %in% UNION) & (SampleID %in% dMRI$SampleID)) %>%
pull(SampleID)
BAT_Cont <- OCD %>%
filter((OCD==0) & (SampleID %in% UNION) & (SampleID %in% dMRI$SampleID)) %>%
pull(SampleID)
BAT_dMRI <- BAT %>% filter(SampleID %in% dMRI$SampleID)
while(length(BAT_Cont)>=109){
name <- paste0("Train_",i,collapse="")
SEL <- c(sample(BAT_Case, 100), sample(BAT_Cont, 109))
BAT_dMRI <- BAT_dMRI %>%
mutate(!!sym(name) := ifelse(SampleID %in% SEL, 1, 0))
BAT_Cont <- BAT_Cont[!BAT_Cont %in% SEL]
i=i+1
cat(name, "-", length(BAT_Cont), "\n")
}
BAT_dMRI <- BAT_dMRI %>%
mutate_at(vars(Experimental:Train_93), as.numeric) %>%
rowwise() %>%
mutate(ALL = sum(c_across(all_of(names(BAT_dMRI)[3:81]))),
TRAIN = sum(c_across(all_of(names(BAT_dMRI)[6:81]))))
saveRDS(BAT_dMRI, file="./CSI/Preprocessed/Batch_dMRI.rds")
# Create batches for rsfMRI set
i=1
set.seed(1)
BAT_Case <- OCD %>%
filter((OCD==1) & (SampleID %in% UNION) & (SampleID %in% rsfMRI$SampleID)) %>%
pull(SampleID)
BAT_Cont <- OCD %>%
filter((OCD==0) & (SampleID %in% UNION) & (SampleID %in% rsfMRI$SampleID)) %>%
pull(SampleID)
BAT_rsfMRI <- BAT %>% filter(SampleID %in% rsfMRI$SampleID)
while(length(BAT_Cont)>=109){
name <- paste0("Train_",i,collapse="")
SEL <- c(sample(BAT_Case, 100), sample(BAT_Cont, 109))
BAT_rsfMRI <- BAT_rsfMRI %>%
mutate(!!sym(name) := ifelse(SampleID %in% SEL, 1, 0))
BAT_Cont <- BAT_Cont[!BAT_Cont %in% SEL]
i=i+1
cat(name, "-", length(BAT_Cont), "\n")
}
BAT_rsfMRI <- BAT_rsfMRI %>%
mutate_at(vars(Experimental:Train_23), as.numeric) %>%
rowwise() %>%
mutate(ALL = sum(c_across(all_of(names(BAT_rsfMRI)[3:81]))),
TRAIN = sum(c_across(all_of(names(BAT_rsfMRI)[6:81]))))
saveRDS(BAT_rsfMRI, file="./CSI/Preprocessed/Batch_rsfMRI.rds")
# Create batches for tsfMRI set
i=1
set.seed(1)
BAT_Case <- OCD %>%
filter((OCD==1) & (SampleID %in% UNION) & (SampleID %in% tsfMRI$SampleID)) %>%
pull(SampleID)
BAT_Cont <- OCD %>%
filter((OCD==0) & (SampleID %in% UNION) & (SampleID %in% tsfMRI$SampleID)) %>%
pull(SampleID)
BAT_tsfMRI <- BAT %>% filter(SampleID %in% tsfMRI$SampleID)
while(length(BAT_Cont)>=109){
name <- paste0("Train_",i,collapse="")
SEL <- c(sample(BAT_Case, 100), sample(BAT_Cont, 109))
BAT_tsfMRI <- BAT_tsfMRI %>%
mutate(!!sym(name) := ifelse(SampleID %in% SEL, 1, 0))
BAT_Cont <- BAT_Cont[!BAT_Cont %in% SEL]
i=i+1
cat(name, "-", length(BAT_Cont), "\n")
}
BAT_tsfMRI <- BAT_tsfMRI %>%
mutate_at(vars(Experimental:Train_23), as.numeric) %>%
rowwise() %>%
mutate(ALL = sum(c_across(all_of(names(BAT_tsfMRI)[3:81]))),
TRAIN = sum(c_across(all_of(names(BAT_tsfMRI)[6:81]))))
saveRDS(BAT_tsfMRI, file="./CSI/Preprocessed/Batch_tsfMRI.rds")
# Save Kernels
saveRDS(Gene, file="./CSI/Preprocessed/Kernel_Gene.rds")
saveRDS(Phen, file="./CSI/Preprocessed/Kernel_Phen.rds")
saveRDS(sMRI, file="./CSI/Preprocessed/Kernel_sMRI.rds")
saveRDS(dMRI, file="./CSI/Preprocessed/Kernel_dMRI.rds")
saveRDS(rsfMRI, file="./CSI/Preprocessed/Kernel_rsfMRI.rds")
saveRDS(tsfMRI, file="./CSI/Preprocessed/Kernel_tsfMRI.rds")
```
### Specify Experimental Data
```
# Intersect all ambiguous cases with the experimental data; we will use them for the ML-PRS part
Intersection <- purrr::reduce(list(
filter(BAT_PHEN, Experimental == 1)$SampleID,
filter(BAT_sMRI, Experimental == 1)$SampleID,
filter(BAT_dMRI, Experimental == 1)$SampleID,
filter(BAT_rsfMRI, Experimental == 1)$SampleID,
filter(BAT_tsfMRI, Experimental == 1)$SampleID,
filter(BAT_GENE, Experimental == 1)$SampleID
), intersect)
length(Intersection)
# Pull Intersected data and save new table for analysis
BAT_EXP <- BAT_PHEN %>%
filter(SampleID %in% Intersection) %>%
select(c(SampleID, Experimental))
dim(BAT_EXP)
saveRDS(BAT_EXP, file="./CSI/Preprocessed/Batch_Experimental.rds")
```
```
# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)
# Toggle cell visibility
from IPython.display import HTML
tag = HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide()
} else {
$('div.input').show()
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
Toggle cell visibility <a href="javascript:code_toggle()">here</a>.''')
display(tag)
# Hide the code completely
# from IPython.display import HTML
# tag = HTML('''<style>
# div.input {
# display:none;
# }
# </style>''')
# display(tag)
%matplotlib inline
import control
import numpy
import sympy as sym
from IPython.display import display, Markdown
import ipywidgets as widgets
import matplotlib.pyplot as plt
#print a matrix latex-like
def bmatrix(a):
"""Returns a LaTeX bmatrix - by Damir Arbula (ICCT project)
:a: numpy array
:returns: LaTeX bmatrix as a string
"""
if len(a.shape) > 2:
raise ValueError('bmatrix can at most display two dimensions')
lines = str(a).replace('[', '').replace(']', '').splitlines()
rv = [r'\begin{bmatrix}']
rv += [' ' + ' & '.join(l.split()) + r'\\' for l in lines]
rv += [r'\end{bmatrix}']
return '\n'.join(rv)
# Display formatted matrix:
def vmatrix(a):
if len(a.shape) > 2:
        raise ValueError('vmatrix can at most display two dimensions')
lines = str(a).replace('[', '').replace(']', '').splitlines()
rv = [r'\begin{vmatrix}']
rv += [' ' + ' & '.join(l.split()) + r'\\' for l in lines]
rv += [r'\end{vmatrix}']
return '\n'.join(rv)
#matrixWidget is a matrix-like widget built with a VBox of HBox(es) that returns a numpy array as its value
class matrixWidget(widgets.VBox):
def updateM(self,change):
for irow in range(0,self.n):
for icol in range(0,self.m):
self.M_[irow,icol] = self.children[irow].children[icol].value
#print(self.M_[irow,icol])
self.value = self.M_
def dummychangecallback(self,change):
pass
def __init__(self,n,m):
self.n = n
self.m = m
self.M_ = numpy.matrix(numpy.zeros((self.n,self.m)))
self.value = self.M_
widgets.VBox.__init__(self,
children = [
widgets.HBox(children =
[widgets.FloatText(value=0.0, layout=widgets.Layout(width='90px')) for i in range(m)]
)
for j in range(n)
])
#fill in widgets and tell interact to call updateM each time a children changes value
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].value = self.M_[irow,icol]
self.children[irow].children[icol].observe(self.updateM, names='value')
#value = Unicode('example@example.com', help="The email value.").tag(sync=True)
self.observe(self.updateM, names='value', type= 'All')
def setM(self, newM):
#disable callbacks, change values, and reenable
self.unobserve(self.updateM, names='value', type= 'All')
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].unobserve(self.updateM, names='value')
self.M_ = newM
self.value = self.M_
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].value = self.M_[irow,icol]
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].observe(self.updateM, names='value')
self.observe(self.updateM, names='value', type= 'All')
#self.children[irow].children[icol].observe(self.updateM, names='value')
#overload class for state space systems that DOES NOT remove "useless" states (what "professor" of automatic control would do this?)
class sss(control.StateSpace):
def __init__(self,*args):
#call base class init constructor
control.StateSpace.__init__(self,*args)
#disable function below in base class
def _remove_useless_states(self):
pass
```
## Position control of a crane load
<img src="Images/EX46-Crane.png" alt="drawing" width="150x150">
The crane under consideration consists of a trolley of mass $m_c = 1000$ kg and a load of mass $m_l$, connected by a cable of length $L$ that can be considered massless. Friction acts on the trolley linearly, proportional to the trolley velocity with coefficient $B_f = 100$ Ns/m; the trolley is driven by the force $F$. The angle between the vertical axis and the cable is $\theta$. The dynamic equations of the system, linearised around the stationary condition ($\theta=\dot{\theta}=\dot{x}=0$), are (where $g = 9.81$ m/$\text{s}^2$ is the gravitational acceleration):
\begin{cases}
(m_l+m_c)\ddot{x}+m_lL\ddot{\theta}+B_f\dot{x}=F \\
m_lL^2\ddot{\theta}+m_lL\ddot{x}+m_lLg\theta=0
\end{cases}
Nominal values of $m_l=100$ kg and $L=10$ m are assumed, and a regulator is to be designed for the measured load position $y=x+L\sin{\theta} \cong x+L\theta$ according to the following specifications:
- 5% settling time below 20 seconds,
- minimal or no overshoot,
- zero or practically zero steady-state error in response to a step change of the desired load position,
- the force $F$ must not exceed $\pm1000$ N when the requested position change is 10 metres.
First, the equations of the linearised system must be written in state-space form; dividing the second equation by $m_lL$ gives
$$
L\ddot{\theta}+\ddot{x}+g\theta=0 \, .
$$
Now, defining $x=\begin{bmatrix} x_1 & x_2 & x_3 & x_4 \end{bmatrix}^T = \begin{bmatrix} \dot{x} & \dot{\theta} & \theta & x \end{bmatrix}^T$, we can write:
$$
M\dot{x}=Gx+HF,
$$
where:
$$
M = \begin{bmatrix} (m_l+m_c) & m_lL & 0 & 0 \\ 1 & L & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad
G = \begin{bmatrix} -B_f & 0 & 0 & 0 \\ 0 & 0 & -g & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix} \quad \text{and} \quad
H = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \, .
$$
Then, pre-multiplying everything by $M^{-1}$ yields ($A=M^{-1}G$, $B=M^{-1}H$):
\begin{cases}
\dot{x} = \begin{bmatrix} -B_f/m_c & 0 & gm_l/m_c & 0 \\ B_f/(Lm_c) & 0 & -g(1/L + m_l/(Lm_c)) & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix}x +\begin{bmatrix}1/m_c \\ -1/(Lm_c) \\ 0 \\ 0 \end{bmatrix}F \\
y = \begin{bmatrix} 0 & 0 & L & 1 \end{bmatrix}x
\end{cases}
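These matrices can be cross-checked numerically. The sketch below (assuming NumPy is available; the parameter values are the nominal ones given above) recomputes the open-loop poles directly from $A$:

```python
import numpy as np

# Nominal parameter values from the text (assumed here)
ml, mc, L, g, Bf = 100.0, 1000.0, 10.0, 9.81, 100.0

A = np.array([[-Bf/mc,    0.0, g*ml/mc,              0.0],
              [Bf/(L*mc), 0.0, -g*(1/L + ml/(L*mc)), 0.0],
              [0.0,       1.0, 0.0,                  0.0],
              [1.0,       0.0, 0.0,                  0.0]])

# Open-loop poles: one integrator at 0, one real pole near -0.091,
# and a lightly damped complex pair near -0.0045 +/- 1.038j
print(np.linalg.eigvals(A))
```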
### Regulator design
#### Controller design
The system poles are $0$, $-0.091$ and $-0.0045\pm1.038i$. To satisfy the settling-time requirement, the real poles are moved to $-0.28$ and the real part of the complex poles is decreased by $0.25$. The chosen closed-loop poles are therefore $-0.28$, $-0.28$ and $-0.2545\pm1.038i$. For zero steady-state error, the reference signal is scaled by a gain equal to the inverse of the closed-loop system gain.
#### Observer design
To support the state feedback, a full-state observer with poles at $-8$ is designed.
### How to use this notebook?
Try changing the pole locations so that the requirements are met for a reference step input of 15 m.
```
# Preparatory cell
X0 = numpy.matrix('0.0; 0.0; 0.0; 0.0')
K = numpy.matrix([0,0,0,0])
L = numpy.matrix([[0],[0],[0],[0]])
X0w = matrixWidget(4,1)
X0w.setM(X0)
Kw = matrixWidget(1,4)
Kw.setM(K)
Lw = matrixWidget(4,1)
Lw.setM(L)
eig1c = matrixWidget(1,1)
eig2c = matrixWidget(2,1)
eig3c = matrixWidget(1,1)
eig4c = matrixWidget(2,1)
eig1c.setM(numpy.matrix([-0.28]))
eig2c.setM(numpy.matrix([[-0.2545],[-1.038]]))
eig3c.setM(numpy.matrix([-0.28]))
eig4c.setM(numpy.matrix([[-1.],[-1.]]))
eig1o = matrixWidget(1,1)
eig2o = matrixWidget(2,1)
eig3o = matrixWidget(1,1)
eig4o = matrixWidget(2,1)
eig1o.setM(numpy.matrix([-8.]))
eig2o.setM(numpy.matrix([[-8.1],[0.]]))
eig3o.setM(numpy.matrix([-8.2]))
eig4o.setM(numpy.matrix([[-8.3],[0.]]))
# Misc
#create dummy widget
DW = widgets.FloatText(layout=widgets.Layout(width='0px', height='0px'))
#create button widget
START = widgets.Button(
description='Test',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Test',
icon='check'
)
def on_start_button_clicked(b):
    #This is a workaround to have interactive_output call the callback:
    # force the value of the dummy widget to change
    if DW.value > 0:
DW.value = -1
else:
DW.value = 1
pass
START.on_click(on_start_button_clicked)
# Define type of method
selm = widgets.Dropdown(
    options= [('Set K and L','Set K and L'), ('Set the eigenvalues','Set the eigenvalues')],
value= 'Set the eigenvalues',
description='',
disabled=False
)
# Define the number of complex eigenvalues
selec = widgets.Dropdown(
    options= [('0 complex eigenvalues','0 complex eigenvalues'), ('2 complex eigenvalues','2 complex eigenvalues'), ('4 complex eigenvalues','4 complex eigenvalues')],
    value= '2 complex eigenvalues',
    description='Controller eig.:',
style = {'description_width': 'initial'},
disabled=False
)
seleo = widgets.Dropdown(
    options= [('0 complex eigenvalues','0 complex eigenvalues'), ('2 complex eigenvalues','2 complex eigenvalues')],
    value= '0 complex eigenvalues',
    description='Observer eig.:',
style = {'description_width': 'initial'},
disabled=False
)
#define type of input
selu = widgets.Dropdown(
    options=[('impulse','impulse'), ('step','step'), ('sinusoid','sinusoid'), ('square wave','square wave')],
    value='step',
    description='Reference:',
style = {'description_width': 'initial'},
disabled=False
)
# Define the values of the input
u = widgets.FloatSlider(
value=10,
min=0,
max=20,
step=1,
    description='Reference:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
period = widgets.FloatSlider(
value=0.5,
min=0.001,
max=10,
step=0.001,
    description='Period: ',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.3f',
)
simTime = widgets.FloatText(
value=30,
    description='Simulation time (s):',
style = {'description_width': 'initial'},
disabled=False
)
# Support functions
def eigen_choice(selec,seleo):
if selec == '0 complex eigenvalues':
eig1c.children[0].children[0].disabled = False
eig2c.children[1].children[0].disabled = True
eig3c.children[0].children[0].disabled = False
eig4c.children[0].children[0].disabled = False
eig4c.children[1].children[0].disabled = True
eigc = 0
if seleo == '0 complex eigenvalues':
eig1o.children[0].children[0].disabled = False
eig2o.children[1].children[0].disabled = True
eig3o.children[0].children[0].disabled = False
eig4o.children[0].children[0].disabled = False
eig4o.children[1].children[0].disabled = True
eigo = 0
if selec == '2 complex eigenvalues':
eig1c.children[0].children[0].disabled = False
eig2c.children[1].children[0].disabled = False
eig3c.children[0].children[0].disabled = False
eig4c.children[0].children[0].disabled = True
eig4c.children[1].children[0].disabled = True
eigc = 2
if seleo == '2 complex eigenvalues':
eig1o.children[0].children[0].disabled = False
eig2o.children[1].children[0].disabled = False
eig3o.children[0].children[0].disabled = False
eig4o.children[0].children[0].disabled = True
eig4o.children[1].children[0].disabled = True
eigo = 2
if selec == '4 complex eigenvalues':
eig1c.children[0].children[0].disabled = True
eig2c.children[1].children[0].disabled = False
eig3c.children[0].children[0].disabled = True
eig4c.children[0].children[0].disabled = False
eig4c.children[1].children[0].disabled = False
eigc = 4
if seleo == '4 complex eigenvalues':
eig1o.children[0].children[0].disabled = True
eig2o.children[1].children[0].disabled = False
eig3o.children[0].children[0].disabled = True
eig4o.children[0].children[0].disabled = False
eig4o.children[1].children[0].disabled = False
eigo = 4
return eigc, eigo
def method_choice(selm):
if selm == 'Set K and L':
method = 1
selec.disabled = True
seleo.disabled = True
if selm == 'Set the eigenvalues':
method = 2
selec.disabled = False
seleo.disabled = False
return method
ml = 100
mc = 1000
L = 10
g = 9.81
Bf = 100
A = numpy.matrix([[-Bf/mc,0,g*ml/mc,0],
[Bf/(L*mc), 0, -g*(1/L + ml/(L*mc)), 0],
[0, 1, 0, 0],
[1, 0, 0, 0]])
B = numpy.matrix([[1/mc],[-1/(L*mc)],[0],[0]])
C = numpy.matrix([[0,0,L,1]])
def main_callback2(X0w, K, L, eig1c, eig2c, eig3c, eig4c, eig1o, eig2o, eig3o, eig4o, u, period, selm, selec, seleo, selu, simTime, DW):
eigc, eigo = eigen_choice(selec,seleo)
method = method_choice(selm)
if method == 1:
solc = numpy.linalg.eig(A-B*K)
solo = numpy.linalg.eig(A-L*C)
if method == 2:
try:
if eigc == 0:
K = control.acker(A, B, [eig1c[0,0], eig2c[0,0], eig3c[0,0], eig4c[0,0]])
Kw.setM(K)
if eigc == 2:
K = control.acker(A, B, [eig3c[0,0],
eig1c[0,0],
                                     complex(eig2c[0,0], eig2c[1,0]),
                                     complex(eig2c[0,0],-eig2c[1,0])])
Kw.setM(K)
if eigc == 4:
                K = control.acker(A, B, [complex(eig4c[0,0], eig4c[1,0]),
                                         complex(eig4c[0,0],-eig4c[1,0]),
                                         complex(eig2c[0,0], eig2c[1,0]),
                                         complex(eig2c[0,0],-eig2c[1,0])])
Kw.setM(K)
if eigo == 0:
L = control.place(A.T, C.T, [eig1o[0,0], eig2o[0,0], eig3o[0,0], eig4o[0,0]]).T
Lw.setM(L)
if eigo == 2:
L = control.place(A.T, C.T, [eig3o[0,0],
eig1o[0,0],
                                             complex(eig2o[0,0], eig2o[1,0]),
                                             complex(eig2o[0,0],-eig2o[1,0])]).T
Lw.setM(L)
if eigo == 4:
                L = control.place(A.T, C.T, [complex(eig4o[0,0], eig4o[1,0]),
                                             complex(eig4o[0,0],-eig4o[1,0]),
                                             complex(eig2o[0,0], eig2o[1,0]),
                                             complex(eig2o[0,0],-eig2o[1,0])]).T
Lw.setM(L)
except:
            print("ERROR: one of the requested poles is repeated more than rank(B) times. Try changing the poles.")
return
sys = sss(A,B,numpy.vstack((C,[0,0,0,0])),[[0],[1]])
syse = sss(A-L*C,numpy.hstack((B,L)),numpy.eye(4),numpy.zeros((4,2)))
sysc = sss(0,[0,0,0,0],0,-K)
sys_append = control.append(sys,syse,sysc)
# To avoid strange behaviours
try:
sys_CL = control.connect(sys_append,
[[1,7],[2,2],[3,1],[4,3],[5,4],[6,5],[7,6]],
[1],
[1,2])
except:
sys_CL = control.connect(sys_append,
[[1,7],[2,2],[3,1],[4,3],[5,4],[6,5],[7,6]],
[1],
[1,2])
X0w1 = numpy.zeros((8,1))
X0w1[4,0] = X0w[0,0]
X0w1[5,0] = X0w[1,0]
X0w1[6,0] = X0w[2,0]
X0w1[7,0] = X0w[3,0]
if simTime != 0:
T = numpy.linspace(0, simTime, 10000)
else:
T = numpy.linspace(0, 1, 10000)
#t1, y1 = step(sys_CL[0,0],[0,10000])
u1 = u
try:
DCgain = control.dcgain(sys_CL[0,0])
u = u/DCgain
except:
        print("Error in the calculation of the DC gain of the closed-loop controlled system. The feedforward gain is set to 1.")
DCgain = 1
if selu == 'impulse': #selu
U = [0 for t in range(0,len(T))]
U[0] = u
U1 = [0 for t in range(0,len(T))]
U1[0] = u1
T, yout, xout = control.forced_response(sys_CL,T,U,X0w1)
if selu == 'step':
U = [u for t in range(0,len(T))]
U1 = [u1 for t in range(0,len(T))]
T, yout, xout = control.forced_response(sys_CL,T,U,X0w1)
if selu == 'sinusoid':
U = u*numpy.sin(2*numpy.pi/period*T)
U1 = u1*numpy.sin(2*numpy.pi/period*T)
T, yout, xout = control.forced_response(sys_CL,T,U,X0w1)
if selu == 'square wave':
U = u*numpy.sign(numpy.sin(2*numpy.pi/period*T))
U1 = u1*numpy.sign(numpy.sin(2*numpy.pi/period*T))
T, yout, xout = control.forced_response(sys_CL,T,U,X0w1)
try:
step_info_dict = control.step_info(sys_CL[0,0],SettlingTimeThreshold=0.05,T=T)
        print('Step info: \n\tRise time =',step_info_dict['RiseTime'],'\n\tSettling time (5%) =',step_info_dict['SettlingTime'],'\n\tOvershoot (%) =',step_info_dict['Overshoot'])
        print('Maximum value of u (% of 1000 N) =', max(abs(yout[1]))/(1000)*100)
except:
        print("Error in the calculation of the step info.")
    print("DC gain of the closed-loop system =",DCgain)
fig = plt.figure(num='Simulation1', figsize=(14,12))
fig.add_subplot(221)
    plt.title('Output response')
    plt.ylabel('Output')
plt.plot(T,yout[0],T,U1,'r--')
plt.xlabel('$t$ [s]')
plt.axvline(x=0,color='black',linewidth=0.8)
plt.axhline(y=0,color='black',linewidth=0.8)
    plt.legend(['$y$','Reference'])
plt.grid()
fig.add_subplot(222)
    plt.title('Input')
plt.ylabel('$u$')
plt.plot(T,yout[1])
plt.plot(T,[1000 for i in range(len(T))],'r--')
plt.plot(T,[-1000 for i in range(len(T))],'r--')
plt.xlabel('$t$ [s]')
plt.axvline(x=0,color='black',linewidth=0.8)
plt.axhline(y=0,color='black',linewidth=0.8)
plt.grid()
fig.add_subplot(223)
    plt.title('State response')
    plt.ylabel('States')
plt.plot(T,xout[0],
T,xout[1],
T,xout[2],
T,xout[3])
plt.xlabel('$t$ [s]')
plt.axvline(x=0,color='black',linewidth=0.8)
plt.axhline(y=0,color='black',linewidth=0.8)
plt.legend(['$x_{1}$','$x_{2}$','$x_{3}$','$x_{4}$'])
plt.grid()
fig.add_subplot(224)
    plt.title('Estimation errors')
    plt.ylabel('Errors')
plt.plot(T,xout[4]-xout[0])
plt.plot(T,xout[5]-xout[1])
plt.plot(T,xout[6]-xout[2])
plt.plot(T,xout[7]-xout[3])
plt.axvline(x=0,color='black',linewidth=0.8)
plt.axhline(y=0,color='black',linewidth=0.8)
plt.xlabel('$t$ [s]')
plt.legend(['$e_{1}$','$e_{2}$','$e_{3}$','$e_{4}$'])
plt.grid()
#plt.tight_layout()
alltogether2 = widgets.VBox([widgets.HBox([selm,
selec,
seleo,
selu]),
widgets.Label(' ',border=3),
widgets.HBox([widgets.HBox([widgets.Label('K:',border=3), Kw,
                                           widgets.Label('Eigenvalues:',border=3),
widgets.HBox([eig1c,
eig2c,
eig3c,
eig4c])])]),
widgets.Label(' ',border=3),
widgets.HBox([widgets.VBox([widgets.HBox([widgets.Label('L:',border=3), Lw, widgets.Label(' ',border=3),
widgets.Label(' ',border=3),
                                           widgets.Label('Eigenvalues:',border=3),
eig1o,
eig2o,
eig3o,
eig4o,
widgets.Label(' ',border=3),
widgets.Label(' ',border=3),
                                           widgets.Label('X0 est.:',border=3), X0w]),
widgets.Label(' ',border=3),
widgets.HBox([
#                                          widgets.VBox([widgets.Label('Simulation time (s):',border=3)]),
widgets.VBox([simTime])])]),
widgets.Label(' ',border=3)]),
widgets.Label(' ',border=3),
widgets.HBox([u,
period,
START])])
out2 = widgets.interactive_output(main_callback2, {'X0w':X0w, 'K':Kw, 'L':Lw,
'eig1c':eig1c, 'eig2c':eig2c, 'eig3c':eig3c, 'eig4c':eig4c,
'eig1o':eig1o, 'eig2o':eig2o, 'eig3o':eig3o, 'eig4o':eig4o,
'u':u, 'period':period, 'selm':selm, 'selec':selec, 'seleo':seleo, 'selu':selu, 'simTime':simTime, 'DW':DW})
out2.layout.height = '860px'
display(out2, alltogether2)
```
# Simulating Language 13, Iterated Bayesian Learning (lab) (some answers)
This simulation features a replication of the Reali & Griffiths iterated learning model of the evolution of frequency distributions, and is built around a Bayesian model of inference. This simulation allows you to explore the effects of learning bias on learning and cultural evolution, and also gives you your first chance to see under the hood of a Bayesian model. But before we get onto the model itself, we need to talk about log probabilities.
```
from math import log, exp
```
## Introduction to log probabilities
In the lectures I introduced Bayes’ Rule as a relationship between probabilities: the posterior is proportional to the product of the likelihood and the prior, and all three of these quantities are probabilities. Doing Bayesian models of learning therefore involves manipulating probabilities, numbers between 0 and 1. And some of these probabilities can be very small indeed, because they involve multiplying small numbers lots of times (consider, for instance, how small the probability is of getting 100 heads if you flip a fair coin 100 times: it’s 0.5 x 0.5 x 0.5 ... 100 times, or $0.5^{100}$ if you prefer. That’s a very small number.)
Working with small numbers on a computer can be a problem, because the computer cannot exactly represent real numbers (i.e. numbers we would write in decimal notation, e.g. numbers like 0.1, 3.147). Your computer has a very large memory where it can store and manipulate numbers, but the problem is that this memory is necessarily finite (it has to fit in your computer) and there are infinitely many real numbers. Think of the recurring decimal you get by dividing 1 by 3, 0.3333..., where the threes go on forever - it would take an infinite amount of space to exactly represent this number in your computer, and distinguish it from a very similar number, e.g. 0.33333... where the threes go on for a few thousand repetitions only. So there’s no way your computer can exactly represent every possible real number. What it does instead is store numbers as accurately as it can, which involves introducing small rounding errors. In fact your computer does its best to conceal these errors from you, and often displays numbers in a format that hides exactly what numbers it is actually working with.
Why do you need to care about this? Well, if you are dealing with very very small numbers (as you might do if you were doing a Bayesian model which involves learning from lots of data) then the rounding errors become a real factor - for big numbers the rounding errors are so small we don’t really care, but for very small numbers, the rounding errors might be relatively big. Worse, sometimes the computer will round a very very small number to 0, which can generate unexpected and hard-to-predict errors in your code (e.g. if you try to divide something by a very very small number which gets rounded to 0).
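To make the rounding-to-0 problem concrete, here is a minimal sketch (plain Python, no special libraries) showing a probability underflowing to exactly zero while its log stays perfectly representable:

```python
from math import log

# Multiplying 1100 probabilities of 0.5 underflows 64-bit floats...
p = 0.5 ** 1100
print(p)  # 0.0 -- the true value has been rounded to zero

# ...but the same quantity as a log probability is a perfectly
# ordinary negative number
logp = 1100 * log(0.5)
print(logp)  # about -762.5
```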
The solution to this is to have the computer work not with probabilities, but with log probabilities: we take our probabilities, take the log of those numbers, then carry on as before.
```
print(log(1))
print(log(0.1))
print(log(0.000001))
print(exp(log(0.5)))
print(exp(log(0.1)))
```
As you can see from the code above, taking the log of a very small number turns it into a large negative number - these are still real numbers, so the computer still can’t represent them exactly, but in the log domain the rounding errors will be proportionately smaller for very small numbers and the rounding-to-0 problem won’t crop up. Then, if we want to see an actual probability, rather than a log probability, we can reverse this process, using the exp function, to get back raw probabilities. Jumping back and forth from logs can introduce rounding errors of its own, but it’s necessary to avoid the catastrophic rounding errors you can get if you just work with raw probabilities.
Some basic arithmetic operations work a bit differently with logs. If you want to multiply two probabilities, you add their logarithms; if you want to divide one probability by another, you subtract the logarithm of one from another. And there is no direct equivalent of adding and subtracting in the log domain, which involves a little bit of fancy footwork in the code that you don’t have to worry about too much. The important thing is 1) to understand that the code is going to manipulate log probabilities and 2) this changes nothing conceptually, it’s just a matter of implementation.
```
print(0.5 * 0.5)
print(exp(log(0.5) + log(0.5)))
print(0.5 / 0.5)
print(exp(log(0.5) - log(0.5)))
```
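The "fancy footwork" for addition mentioned above is the log-sum-exp trick. A small sketch using `scipy.special.logsumexp` (the same function the course code imports below):

```python
from math import exp, log
from scipy.special import logsumexp

# 0.5 + 0.25 computed entirely in the log domain:
# logsumexp returns log(exp(a) + exp(b)) without ever
# leaving the log representation by hand
log_total = logsumexp([log(0.5), log(0.25)])
print(exp(log_total))  # 0.75, up to rounding
```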
## On to the code
First, loading in the usual extra functions, plus some more that are specifically for doing stuff with log probabilities and probability distributions.
```
import random
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('svg', 'pdf')
from scipy.stats import beta
from scipy.special import logsumexp
from math import log, log1p, exp
```
The code starts with various bits and pieces which we need for working with logs and probability distributions. In particular, it loads in a function called `logsumexp` which allows us to do addition in the log domain (remember, just using the normal addition operator + with logs is the equivalent of multiplying the non-logs). Then there is a function called `log_subtract` which allows us to do the equivalent of subtraction in the log domain (because if we just use normal subtraction, -, that’s equivalent to division). Then there are a couple of functions which we need for doing probabilistic sampling in the log domain - `normalize_logprobs` will take a list of logs and normalise them for us (the equivalent of taking a list of pseudo-probabilities and rescaling them so they sum to 1, but in the log domain) and `log_roulette_wheel` takes a list of log probabilities and selects a random index from that list, with the probability of any particular index being selected given by its log probability. These functions are used elsewhere in the code, but it is not important that you understand exactly how they work.
```
def log_subtract(x,y):
return x + log1p(-exp(y - x))
def normalize_probs(probs):
    total = sum(probs) #calculates the summed probabilities
    normedprobs = []
    for p in probs:
        normedprobs.append(p / total) #normalise by dividing each probability
                                      #by the total, so they sum to 1
return normedprobs
def normalize_logprobs(logprobs):
logtotal = logsumexp(logprobs) #calculates the summed log probabilities
normedlogs = []
for logp in logprobs:
normedlogs.append(logp - logtotal) #normalise - subtracting in the log domain
#equivalent to dividing in the normal domain
return normedlogs
def log_roulette_wheel(normedlogs):
r=log(random.random()) #generate a random number between 0 and 1, then convert to log
accumulator = normedlogs[0]
for i in range(len(normedlogs)):
if r < accumulator:
return i
accumulator = logsumexp([accumulator, normedlogs[i + 1]])
```
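As a sanity check that these helpers behave as described, here is a self-contained sketch (the two sampling helpers are repeated verbatim so the cell runs on its own): normalised log probabilities should exponentiate to a distribution summing to 1, and repeated roulette-wheel draws should roughly recover those proportions.

```python
import random
from math import log, exp
from scipy.special import logsumexp

def normalize_logprobs(logprobs):
    logtotal = logsumexp(logprobs)  # summed log probabilities
    return [logp - logtotal for logp in logprobs]

def log_roulette_wheel(normedlogs):
    r = log(random.random())  # random number in [0, 1), as a log
    accumulator = normedlogs[0]
    for i in range(len(normedlogs)):
        if r < accumulator:
            return i
        accumulator = logsumexp([accumulator, normedlogs[i + 1]])

# Normalised logs exponentiate to a proper distribution
normed = normalize_logprobs([log(2), log(6), log(2)])
print([exp(l) for l in normed])  # [0.2, 0.6, 0.2] up to rounding

# Sampling many times roughly recovers those proportions
random.seed(42)
counts = [0, 0, 0]
for _ in range(10000):
    counts[log_roulette_wheel(normed)] += 1
print(counts)  # middle count should be near 6000
```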
The main part of the code starts by setting up the grid. As discussed in class, we are going to turn a problem of inferring a potentially continuous value (the probability with which your teacher uses word 1) into a problem of inferring one of a limited set of possible values (either your teacher is using the word with probability 0.005, or 0.015, or 0.025, etc). In the code we will refer to a certain probability of using word 1 as `pW1`. We will call this set of possible values for `pW1` the grid - you can set the granularity of the grid as high as you like, but 100 works OK without being too slow. We are actually going to maintain two grids - one of probabilities, and one of log probabilities (since we are going to work with log probabilities when we do our calculations).
```
grid_granularity = 100
grid_increment = 1 / grid_granularity
# sets up the grid of possible probabilities to consider
possible_pW1 = []
for i in range(grid_granularity):
possible_pW1.append(grid_increment / 2 + (grid_increment * i))
# sets up the grid of log probabilities
possible_logpW1 = []
for pW1 in possible_pW1:
possible_logpW1.append(log(pW1))
```
Have a look at the two grids (`possible_pW1` and `possible_logpW1`). Do they look like you expected?
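For reference, the sketch below reproduces the grid code and prints the first few entries - the grid points are the midpoints of 100 equal-width bins on $[0,1]$:

```python
from math import log

grid_granularity = 100
grid_increment = 1 / grid_granularity

# Midpoints of 100 equal-width bins on [0, 1]
possible_pW1 = [grid_increment / 2 + grid_increment * i
                for i in range(grid_granularity)]
possible_logpW1 = [log(pW1) for pW1 in possible_pW1]

print(possible_pW1[:3])    # approximately [0.005, 0.015, 0.025]
print(possible_logpW1[0])  # log(0.005), about -5.30
```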
Next up come the various functions we need for Bayesian inference. I will step through these gradually.
### The prior
```
def calculate_prior(alpha):
    prior = []
    for pW1 in possible_pW1:
        prior.append(beta.pdf(pW1, alpha, alpha))
    return normalize_probs(prior)
def calculate_logprior(alpha):
logprior = []
for pW1 in possible_pW1:
logprior.append(beta.logpdf(pW1, alpha, alpha))
return normalize_logprobs(logprior)
```
There are two functions for calculating the prior probability distribution - the prior probability of each of our possible values of `pW1`. The first of these returns raw probabilities, so you can look at the prior easily without worrying about logs. The second, which is the one our code actually uses, calculates the log probability distribution - i.e. it deals with log probabilities, not raw probabilities. The beta distribution, which is what we are using for our prior, is a standard probability distribution, so we can just use a function from a library (`beta.pdf` for raw probabilities, `beta.logpdf` for log probabilities) to get the probability density for each value of `pW1`, then normalise those to convert them to probabilities.
- Plot some different prior probability distributions - for example, try typing `plt.plot(possible_pW1, calculate_prior(0.1))` to see the prior probability distribution over various values of `pW1` for the `alpha=0.1` prior.
- What values of alpha lead to a prior bias for regularity?
- What values of alpha lead to a prior bias for variability?
- What values of alpha lead to a completely unbiased learner?
```
plt.plot(possible_pW1, calculate_prior(0.1), label="alpha 0.1")
plt.plot(possible_pW1, calculate_prior(1), label="alpha 1")
plt.plot(possible_pW1, calculate_prior(50), label="alpha 50")
plt.legend()
plt.xlabel("Probability of word 1")
plt.ylabel("Density")
```
### Likelihood and production
In order to do Bayesian inference, we need a likelihood function that tells us how probable a set of data is given a certain hypothesis (a value of `pW1`). And to do iterated learning we need a way of modelling production - taking an individual, with a value of `pW1` in their head, and having them produce data that someone else can learn from. The next two functions do that job.
```
def likelihood(data, logpW1):
logpW0 = log_subtract(log(1), logpW1) #probability of w0 is 1-prob of w1
logprobs = [logpW0, logpW1]
loglikelihoods = []
for d in data:
loglikelihood_this_item = logprobs[d] #d will be either 0 or 1,
#so can use as index
loglikelihoods.append(loglikelihood_this_item)
return sum(loglikelihoods) #summing log probabilities =
#multiply non-log probabilities
def produce(logpW1, n_productions):
logpW0 = log_subtract(log(1), logpW1)
logprobs = [logpW0, logpW1]
data = []
for p in range(n_productions):
data.append(log_roulette_wheel(logprobs))
return data
```
We are going to model data - sets of utterances - as a simple list of 0s and 1s: the 0s correspond to occurrences of word 0, the 1s correspond to occurrences of word 1. Both functions take a (log) probability of word 1 being produced, and use that to calculate the probability of word 0 (which is 1 minus the probability of word 1).
- Test out the produce function - remember, you need to feed it a log probability, so decide on a probability for w1 and then convert it to log using the log function. What kind of data will be produced if the probability of w1 is low? Or if it is high?
- Next, check out the likelihood function - how does the likelihood of a set of data depend on the data and the probability of word 1? Remember that the likelihood function returns a log probability, so you can convert this to a probability using the exp function.
```
print(produce(log(0.1), 20))
print(produce(log(0.9), 20))
print(exp(likelihood([1,1,0], log(0.1))))
print(exp(likelihood([1,1,0], log(0.9))))
```
### Learning
```
def posterior(data, prior):
posterior_logprobs = []
for i in range(len(possible_logpW1)):
logpW1 = possible_logpW1[i]
logp_h = prior[i] #prior probability of this pW1
logp_d = likelihood(data, logpW1) #likelihood of data given this pW1
posterior_logprobs.append(logp_h + logp_d) #adding logs =
#multiplying non-logs
return normalize_logprobs(posterior_logprobs)
def learn(data,prior):
posterior_logprobs = posterior(data, prior)
selected_index = log_roulette_wheel(posterior_logprobs)
return possible_logpW1[selected_index]
```
Now we have all the bits we need to calculate the posterior probability distribution, and therefore to do learning (by picking a hypothesis, a value of pW1, based on its posterior probability).
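To see the same computation with the machinery stripped away, here is a self-contained worked example on a tiny three-hypothesis grid, using raw probabilities for readability (the grid values and data here are made up for illustration):

```python
# Three made-up hypotheses for pW1 and a uniform prior
hypotheses = [0.1, 0.5, 0.9]
prior = [1/3, 1/3, 1/3]

data = [1, 1, 1, 0]  # three occurrences of word 1, one of word 0

# posterior is proportional to prior * likelihood, then renormalised
unnorm = []
for pW1, p in zip(hypotheses, prior):
    likelihood = 1.0
    for d in data:
        likelihood *= pW1 if d == 1 else 1 - pW1
    unnorm.append(p * likelihood)

total = sum(unnorm)
posterior = [u / total for u in unnorm]
print(posterior)  # pW1 = 0.9 comes out as the most probable hypothesis
```

Since the data are mostly word 1, the posterior mass shifts towards the high-`pW1` hypothesis, exactly as in the grid version above.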
- Test out the learn function. To do this you will need to build a prior, and some data. For example:
```python
my_prior = calculate_logprior(1)
my_data = [0] * 2 + [1] * 2
print(my_data)
print(exp(learn(my_data, my_prior)))
print(exp(learn(my_data, my_prior)))
print(exp(learn(my_data, my_prior)))
```
This example shows how to test a uniform prior (alpha = 1) and data consisting of two 1s and two 0s (note that there is a cute little trick there for creating lists of duplicates and sticking two lists together). Note that the result is probabilistic: a different hypothesis about the value of pW1 (the probability of word 1) is picked each time. A better way to understand what's going on is to use a for loop to build up a list of results from running `learn` a bunch of times, and then use `plt.hist` to look at the distribution of the output.
Start with a uniform prior and see how the data affects the learner’s hypothesis.
- What does adding more data do?
- What does making the data highly skewed in favour of one word do?
- Try different priors - what does a strong prior in favour of regularity do?
- What does a strong prior in favour of variability do?
```
my_prior = calculate_logprior(1)
my_data = [0] * 2 + [1] * 2
print(my_data)
print(exp(learn(my_data, my_prior)))
print(exp(learn(my_data, my_prior)))
print(exp(learn(my_data, my_prior)))
results = []
my_data = [0] * 2 + [1] * 2
for i in range(10000):
results.append(exp(learn(my_data, my_prior)))
plt.hist(results, bins=10)
plt.xlim(0,1)
plt.xlabel("probability of word 1")
results = []
my_data = [0] * 20 + [1] * 20
for i in range(10000):
results.append(exp(learn(my_data, my_prior)))
plt.hist(results, bins=10)
plt.xlim(0,1)
plt.xlabel("probability of word 1")
results = []
my_data = [0] * 30 + [1] * 10
for i in range(10000):
results.append(exp(learn(my_data, my_prior)))
plt.hist(results, bins=10)
plt.xlim(0,1)
plt.xlabel("probability of word 1")
results = []
my_data = [0] * 3 + [1] * 1
for i in range(10000):
results.append(exp(learn(my_data, my_prior)))
plt.hist(results, bins=10)
plt.xlim(0,1)
plt.xlabel("probability of word 1")
my_prior = calculate_logprior(0.1)
results = []
my_data = [0] * 3 + [1] * 1
for i in range(10000):
results.append(exp(learn(my_data, my_prior)))
plt.hist(results, bins=10)
plt.xlim(0,1)
plt.xlabel("probability of word 1")
my_prior = calculate_logprior(50)
results = []
my_data = [0] * 3 + [1] * 1
for i in range(10000):
results.append(exp(learn(my_data, my_prior)))
plt.hist(results, bins=10)
plt.xlim(0,1)
plt.xlabel("probability of word 1")
```
### Iteration
At last, we have all the bits we need to do iterated learning: we can have a learner infer a value of pW1 given some observed data, then we can have that individual produce data which another individual can learn from.
You can run a simulation using something like:
```python
pW1_by_generation, data_by_generation = iterate(0.1, 10, 5, 10)
```
This will run the simulation for 10 generations, using a prior defined by alpha=0.1; each learner observes 10 data points before inferring pW1, and the initial language consists of 5 examples of word 1 (and therefore 5 of word 0). It returns two values: a generation-by-generation record of the inferred values of pW1, and the data produced at each generation (specified as a number of occurrences of word 1). It's worth plotting these values as a graph over time, but also looking at the histogram of the values to get a sense of how they are distributed overall.
```
def iterate(alpha, n_productions, starting_count_w1, generations):
prior = calculate_logprior(alpha)
pW1_accumulator = []
data_accumulator = []
data = [1] * starting_count_w1 + [0] * (n_productions - starting_count_w1)
for generation in range(generations):
logpW1 = learn(data, prior)
data = produce(logpW1, n_productions)
pW1_accumulator.append(exp(logpW1))
data_accumulator.append(sum(data))
return pW1_accumulator, data_accumulator
pW1_by_generation, data_by_generation = iterate(0.1, 10, 5, 1000)
fig, axs = plt.subplots(nrows=1, ncols=2)
axs[0].plot(pW1_by_generation)
axs[0].set_xlabel("generations")
axs[0].set_ylabel("probability of word 1")
axs[1].hist(pW1_by_generation)
axs[1].set_xlim(0, 1)
axs[1].set_xlabel("probability of word 1")
pW1_by_generation, data_by_generation = iterate(50, 10, 5, 1000)
fig, axs = plt.subplots(nrows=1, ncols=2)
axs[0].plot(pW1_by_generation)
axs[0].set_xlabel("generations")
axs[0].set_ylabel("probability of word 1")
axs[1].hist(pW1_by_generation)
axs[1].set_xlim(0, 1)
axs[1].set_xlabel("probability of word 1")
pW1_by_generation, data_by_generation = iterate(1, 10, 5, 1000)
fig, axs = plt.subplots(nrows=1, ncols=2)
axs[0].plot(pW1_by_generation)
axs[0].set_xlabel("generations")
axs[0].set_ylabel("probability of word 1")
axs[1].hist(pW1_by_generation)
axs[1].set_xlim(0, 1)
axs[1].set_xlabel("probability of word 1")
```
## Questions
The priority for this worksheet is to work through the in-text questions above: experimenting with the prior, checking that the likelihood and production make sense, checking you understand how learning depends on the prior and the data. Once you are happy with that, try these questions:
1. One of Reali & Griffiths’s main points was that studying learning in a single individual can be a bad way to discover their prior bias, particularly if you give them lots of data which swamps this prior bias - given enough data, learners with quite different priors look the same. Can you reproduce this effect using this code?
```
fig, axs = plt.subplots(nrows=2, ncols=1, sharex=True)
my_prior = calculate_logprior(50)
results = []
my_data = [0] * 3000 + [1] * 1000
for i in range(1000):
results.append(exp(learn(my_data, my_prior)))
axs[0].hist(results, label="alpha 50")
axs[0].set_xlim(0,1)
axs[0].legend()
my_prior = calculate_logprior(0.1)
results = []
my_data = [0] * 3000 + [1] * 1000
for i in range(1000):
results.append(exp(learn(my_data, my_prior)))
axs[1].hist(results, label="alpha 0.1")
axs[1].set_xlim(0,1)
axs[1].legend()
axs[1].set_xlabel("Probability of word 1")
```
2. Iterated learning can potentially give a clearer picture of prior bias. Try running some simulations for 10 generations, with 10 data points passed from generation to generation, starting each simulation with 5 instances of w1 and 5 of w0. How does changing the prior change the results? Try alpha=0.1, alpha=1, and alpha=5. Are the differences between different priors obvious after generation 1, or do they become more apparent over generations?
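Before running the full simulations, you can sanity-check what to expect from each prior. Assuming (as in Reali & Griffiths) a symmetric Beta(alpha, alpha) prior, the posterior mean after observing k instances of word 1 out of n productions has the standard conjugate closed form (k + alpha) / (n + 2 * alpha) — this formula is textbook Beta-Binomial math, not code from the worksheet:

```python
# Posterior mean of pW1 under a symmetric Beta(alpha, alpha) prior,
# after observing k instances of word 1 out of n productions.
def posterior_mean(k, n, alpha):
    return (k + alpha) / (n + 2 * alpha)

for alpha in [0.1, 1, 5]:
    print(alpha, round(posterior_mean(3, 10, alpha), 3))
# 0.1 -> 0.304, 1 -> 0.333, 5 -> 0.4
```

A small alpha barely moves the estimate away from the data proportion (0.3), while alpha=5 pulls it noticeably toward 0.5. A single generation therefore shows only a weak signature of the prior; iterated learning compounds these small per-generation pulls, which is why differences between priors become more apparent over generations.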
3. Now try messing with the amount of data that is passed from generation to generation (remember to change the starting count of the w1 so you can compare between the different data set sizes fairly). What happens if you pass more data between generations? What happens if you pass less? What happens if you pass no data from generation to generation? What would this latter setting correspond to in the real world?
```
fig, axs = plt.subplots(nrows=3, ncols=2, sharex="col")
pW1_by_generation, data_by_generation = iterate(0.1, 10, 5, 1000)
axs[0][0].plot(pW1_by_generation)
axs[0][0].set_ylabel("p(w1)")
axs[0][1].hist(pW1_by_generation, label="data 10")
axs[0][1].set_xlim(0,1)
axs[0][1].legend()
pW1_by_generation, data_by_generation = iterate(0.1, 1000, 500, 1000)
axs[1][0].plot(pW1_by_generation)
axs[1][0].set_ylabel("p(w1)")
axs[1][1].hist(pW1_by_generation, label="data 1000")
axs[1][1].set_xlim(0,1)
axs[1][1].legend()
pW1_by_generation, data_by_generation = iterate(0.1, 0, 0, 1000)
axs[2][0].plot(pW1_by_generation)
axs[2][0].set_ylabel("p(w1)")
axs[2][1].hist(pW1_by_generation, label="data 0")
axs[2][1].set_xlim(0,1)
axs[2][1].legend()
axs[2][0].set_xlabel("generations")
axs[2][1].set_xlabel("p(w1)")
```
```
# This is the setup for the service; after starting the API, please define these tags via the API
all_tags = ['O','B-art','I-org','B-geo','Race','I-gpe','I-tim','Object','Integer','I-per','TemporalUnit','B-org','CountryCode','B-gpe','B-eve','Party','I-geo','I-art','CryptoCurrencyCode','I-nat','Event','B-tim','Time','SpecialTerm','CurrencyCode','I-eve','Float','Month','B-per','Location','Timezone','US_States','B-nat']
default_tags = []
for tag in all_tags:
if tag == "O" or tag[1] == "-":
default_tags.append(tag)
else:
print(f"not_default: {tag}")
len(default_tags)
all_tags = ['O','B-art','I-org','B-geo','Race','I-gpe','I-tim','Object','Integer','I-per','TemporalUnit','B-org','CountryCode','B-gpe','B-eve','Party','I-geo','I-art','CryptoCurrencyCode','I-nat','Event','B-tim','Time','SpecialTerm','CurrencyCode','I-eve','Float','Month','B-per','Location','Timezone','US_States','B-nat']
import requests
default_tags = ['O','B-art','I-org','B-geo','I-gpe','I-tim','I-per','B-org','B-gpe','B-eve','I-geo','I-art','I-nat','B-tim','I-eve','B-per','B-nat']
for tag in default_tags:
data = {
"user": "eason.tw.chen@gmail.com",
"label_name": tag,
"inherit": [
],
"alias_as": [
],
"comment": "This is the default label in CONLL dataset.",
"tags": [
"default"
]
}
import json
headers = {
'accept': 'application/json',
'Content-Type': 'application/json'
}
response = requests.post("https://gsoc.api.eason.tw/labels", headers = headers, data = json.dumps(data))
response.json()
transform_tag_mapping = {
'B-art': ['Object'],
'B-eve': ['Event'],
'B-geo': ['Location', 'Party'],
'B-gpe': ['Race', "Party"],
'B-nat': ['SpecialTerm'],
'B-org': ['Party', "Organization"],
'B-per': ['Party', "Person"],
'B-tim': ['TemporalUnit', "Time"],
'I-art': ['Object'],
'I-eve': ['Event'],
'I-geo': ['Location', 'Party'],
'I-gpe': ['Race', "Party"],
'I-nat': ['SpecialTerm'],
'I-org': ['Party', "Organization"],
'I-per': ['Party', "Person"],
'I-tim': ['TemporalUnit', "Time"],
'O': [],
}
import numpy as np
tags = list(transform_tag_mapping.values())
all_extended_tags = []
for t in tags:
all_extended_tags += t
len(all_extended_tags)
all_extended_tags
all_extended_tags_dict = {}
for key in all_extended_tags:
all_extended_tags_dict[key] = []
for key, items in transform_tag_mapping.items():
for item in items:
all_extended_tags_dict[item].append(key)
for tag in all_extended_tags:
data = {
"user": "eason.tw.chen@gmail.com",
"label_name": tag,
"inherit": all_extended_tags_dict[tag],
"alias_as": [
],
"comment": "This is the extend label by tags in CONLL dataset.",
"tags": [
"default", "default-extend"
]
}
import json
headers = {
'accept': 'application/json',
'Content-Type': 'application/json'
}
response = requests.post("https://gsoc.api.eason.tw/labels", headers = headers, data = json.dumps(data))
other_basic_tags = ['Float', 'Integer', 'CountryCode', 'CryptoCurrencyCode', 'CurrencyCode', 'TemporalUnit', 'Timezone', 'US_States', 'TemporalUnit', 'Month']
for tag in other_basic_tags:
data = {
"user": "eason.tw.chen@gmail.com",
"label_name": tag,
"inherit": [
],
"alias_as": [
],
"comment": "This is the label create by label function",
"tags": [
"default", "basic_tags", "have_label_function"
]
}
import json
headers = {
'accept': 'application/json',
'Content-Type': 'application/json'
}
response = requests.post("https://gsoc.api.eason.tw/labels", headers = headers, data = json.dumps(data))
data = {
"user": "eason.tw.chen@gmail.com",
"label_name": "String",
"inherit": all_extended_tags + default_tags,
"alias_as": [
],
"comment": "This is the default label in Shows all different in CONLL dataset.",
"tags": [
"default", "String"
]
}
headers = {
'accept': 'application/json',
'Content-Type': 'application/json'
}
response = requests.post("https://gsoc.api.eason.tw/labels", headers = headers, data = json.dumps(data))
origin_all_tags = ['O','B-art','I-org','B-geo','Race','I-gpe','I-tim','Object','Integer','I-per','TemporalUnit','B-org','CountryCode','B-gpe','B-eve','Party','I-geo','I-art','CryptoCurrencyCode','I-nat','Event','B-tim','Time','SpecialTerm','CurrencyCode','I-eve','Float','Month','B-per','Location','Timezone','US_States','B-nat']
all_tags_added_to_db = set(all_extended_tags + default_tags + other_basic_tags + ["String"])
all_tags_added_to_db - set(origin_all_tags)
# I create Org, Person, and String just now, to test the effect of inherit do work or not.
result = requests.get("https://gsoc.api.eason.tw/labels")
result = result.json()
all_label_name_in_db = []
for label in result:
all_label_name_in_db.append(label["label_name"])
# All tags in DB now :D
set(all_label_name_in_db) - set(all_tags_added_to_db)
# Next thing is put_all_labels_into_training_queue
```
# Likelihood - Demo
Welcome to the Likelihood Demo! This will present a **truncated** version of Likelihood, one which utilizes working features to give the user a good idea of what Likelihood actually does, and how it can be implemented on a dataset!
### A Quick Rundown:
Likelihood is a data quality monitoring engine that measures the surprise, or entropy, of members of a given dataset. To learn more about the basic theory behind it, one may click on the 2 links below:
https://en.wikipedia.org/wiki/Entropy_(information_theory)
http://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf
The basic essence is: uncertainty is maximized (the distribution is most uniform) in cases where the probability structure of a dataset is chaotic, meaning we don't have much information about it. However, when we can identify some given patterns about the probability structure of a dataset, we know that data members following these rules are not particularly chaotic. They are structured, and thus unsurprising. It is when these patterns are defied that the entropy shoots up to irregular heights. It is this precise rule-defying approach that Likelihood uses to find outliers within a structured dataset.
Likelihood began as a numerical-estimation focused tool, but currently it works quite well with numerical, categorical, and timestamp data. Its functional approaches are mapped out below:
1. **Bootstrapping** - Building a distribution using the properties of a bootstrap, this approach uses the bootstrap to capture standard counts for values by TimeStamp and flags an anomaly if test-set counts deviate substantially from the expected training-set ratios.
2. **Time Series** - Using Facebook Prophet, Time Series evaluation puts surprising events in the context of the time in which they happened. Building a pattern from these approximations, the Time Series tool predicts future counts for the test set and raises an issue if the observed counts fall outside the expected range.
3. **Kernel Density** - Smoothing a distribution so that certain properties can be utilized, Kernel Density fits the data under a curve depending on the data's variation and approximates which values in a distribution are unlikely by virtue of magnitude, thus finding the most surprising data.
4. **PCA** - Using Dimensionality Reduction, PCA attributes the variation in the data to several significant columns which are used to compute bias row-wise. This approach is combined with the column-based kernel density approach to triangulate the precise location of numeric data error, and PCA's surprise metric is thus grouped with Kernel Density's.
5. **Relative Entropy Model for Categorical Data** - Much in the spirit of grammar, this relative entropy model defines its own rules (expected formatting and behavior) for data, and obtains surprise based on the strictness of the rule that the data defies (defying a stricter rule is inherently more chaotic).
6. **TimeStamp Intervals** - This Kernel Density approach computes similarly to the numerical Kernel Density, but orders the time intervals in the dataset and proceeds to test whether there is an anomalous interval in which too little or too much data was recorded.
7. **In Progress**: Mutual Entropy for Mixed Numeric and Categorical Data
Ultimately, Likelihood should become a functional tool that can build functional distributions without the need for any context. Currently it functions more as a copilot.
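All of these approaches share one core metric: the surprise of an observation is its Shannon information content, -log2(p) (the demo defines a `surprise` helper with exactly this body further below). A minimal, self-contained illustration:

```python
import numpy as np

# Surprise (in bits) of observing an event with probability p:
# rarer events carry more information and are "more surprising".
def surprise(p):
    return -np.log2(p)

print(surprise(0.5))   # a coin flip: 1 bit
print(surprise(0.01))  # a 1-in-100 event: ~6.64 bits
```

Because the log is taken of a probability, very well-fitting data points (p near 1) contribute near-zero surprise, while rule-defying points with tiny p-values dominate the rankings in the tables below.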
```
# Imports for project purposes
# Full Project imports
import pandas as pd
import math as mt
import dateutil
from datetime import datetime, timedelta
import requests as rd
import numpy as np
from sklearn import neighbors, decomposition
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import smtplib
import scipy.stats as st
import os
from pandas.api.types import is_numeric_dtype
import copy
# parameters (will put into JSON config file later)
params = {"fName": "pd_calls_for_service_2020_datasd.csv", # local CSV file only right now
"ts": "date_time", # Timestamp for when the event happened in the world
"bootstrapResamples":1000, # should probably be 10k for better accuracy, but much worse speed
"maxTrainingSizeMultiple":10, # if there is more than X times more training data, trim to most recent
"maxCategories":100, # maximum number of categories in a column - if higher we skip
"minCategoryCount":10, # don't report boostrap surprise if a category has lower count
}
# Loading data into project
def load_data(dataset_link, category):
if(category == "html"):
return pd.read_html(dataset_link)
else:
if(category == "excel"):
return pd.read_excel(dataset_link)
else:
return pd.read_csv(dataset_link)
df = load_data("pd_calls_for_service_2020_datasd.csv", "csv")
```
The data used throughout this part of the demo comes from the San Diego County Police Calls for Service dataset. It will be used to show the effect of Likelihood's Time-Series methods.
```
df
# Converts Timetamp column of DataFrame to a legitimate timestamp
def convertToDateTime(df, timestamp):
df[timestamp] = pd.to_datetime(df[timestamp], format='%Y%m%d %H:%M:%S')
return df
# Assignments for computational purposes
df['ts'] = df['date_time']
batchHours = 24*7
df = convertToDateTime(df, 'ts')
df
# Splits data into train and test set based on date/time
def split_train_test(df, batchHours):
maxTs = max(df['ts'])
batchTs = maxTs - timedelta(hours = batchHours)
testDf = df[df['ts'] > batchTs]
trainDf = df[df['ts'] < batchTs]
return trainDf, testDf
trainDf, testDf = split_train_test(df, batchHours)
# Helpers and Math
def pValue(data, threshold, result):
p_larger = sum(np.array(data) >= threshold) / len(data)
p_smaller = sum(np.array(data) <= threshold) / len(data)
p = min(p_larger, p_smaller)
# only use gaussian p-value when there is variation, but bootstrap p = 0
stdev = np.std(data)
if stdev == 0 or p != 0:
p_gauss = p
else:
p_gauss = st.norm(np.mean(result['bootstrap_counts']), stdev).cdf(result['count'])
p_gauss = min(p_gauss,1-p_gauss)
return p_gauss
def trimTraining(trainDf, params):
# trim to most recent
trainDf = trainDf.sort_values(params['ts'], ascending =False)
trainDfTrimmed = trainDf[:params['maxTrainingSizeMultiple']*len(testDf)]
return trainDfTrimmed
# Returns names of categorical columns
def getCategoricalColumnNames(df):
columnNames = []
for columnName in df.keys():
if (type (df[columnName].iloc[0])) == str:
columnNames.append(columnName)
return columnNames
def bootstrap(trainDf, testDf, params):
# get all of the string columns
columnNames = getCategoricalColumnNames(testDf)
bootstrapDf = trimTraining(trainDf, params)
# set up dict, add counts
results = {}
for columnName in columnNames:
# if it isn't a string column, reject it
if type(testDf[columnName].iloc[0]) != str:
continue
categories = pd.concat([bootstrapDf[columnName], testDf[columnName]]).unique()
if len(categories) > params['maxCategories']:
continue
results[columnName] = {}
testCounts = testDf[columnName].value_counts(dropna = False)
# drop NaN categories (value_counts with dropna=False counts them separately)
categories = np.array([c for c in categories if not pd.isna(c)])
for category in categories:
results[columnName][category] = {'bootstrap_counts':[],
'count':testCounts.get(category,0)}
# resample, add boostrap counts
for ii in range(params['bootstrapResamples']):
# Draw random sample from training
sampleDf = bootstrapDf.sample(len(testDf), replace=True)
for columnName in results.keys():
# count by category
trainCounts = sampleDf[columnName].value_counts(dropna = False)
# put results in dict
for category in results[columnName].keys():
bootstrapCount = trainCounts.get(category,0)
results[columnName][category]['bootstrap_counts'].append(bootstrapCount)
# convert to records, add p-values
bootstrap_results = []
for columnName in results.keys():
for category in results[columnName].keys():
result = results[columnName][category]
estimatedCount = int(np.round(np.mean(result['bootstrap_counts'])))
# don't report entries with very low predicted and actual counts
if estimatedCount < params['minCategoryCount'] and result['count'] < params['minCategoryCount']:
continue
p = pValue(result['bootstrap_counts'],result['count'], result)
categoryName = category
# Backup
if not category:
categoryName = "NULL"
bootstrap_results.append({"column":columnName,
"category":categoryName,
"count":result['count'],
"p": p,
"estimated_count":estimatedCount,
})
# Sorting by P-values and obtaining Surprise of each
if(np.count_nonzero(p)>0):
resultsDf = pd.DataFrame.from_records(bootstrap_results).sort_values('p')
resultsDf['surprise'] = -np.log2(resultsDf['p'])
return resultsDf
bootstrap(trainDf, testDf, params)
```
## TimeSeries Approximation
```
from fbprophet import Prophet
def truncateTs(ts):
return ts.replace(minute=0, second=0, microsecond=0)
# Groups data by value counts and returns a prophet-ready table whose y values are log10 counts
def group_and_build_time_table(truncatedData):
groupedCounts = truncatedData.value_counts()
prophetDf = pd.DataFrame({'ds':groupedCounts.index,'y':np.log10(groupedCounts.values)})
return prophetDf
truncatedData = trainDf['ts'].apply(truncateTs)
prophetDf = group_and_build_time_table(truncatedData)
prophetDf
# Takes in the test dataset and the prophet dataframe built above
def train_model_on_country(testDf, prophetDf, country = "US"):
# Train model
m = Prophet(#daily_seasonality = True,
#yearly_seasonality = False,
#weekly_seasonality = True,
#growth='linear',
interval_width=0.68 # one sigma
)
m.add_country_holidays(country_name=country)
m.fit(prophetDf)
return m
# Applies Prophet analytics to create a forecast based on hours
def predict_future(testDf,m, timestamp = "date_time"):
# Takes in trained model and predicts the future
# find number of hours to predict: ceil of hours in testDf
testDf = testDf.assign(ts = testDf.get(timestamp))
#If a column is string, convert to date/time
if(testDf.applymap(type).eq(str).any()['ts']):
testDf['ts'] = pd.to_datetime(testDf['ts'])
timeDelta = max(testDf['ts']) -min(testDf['ts'])
hours = int(timeDelta.days*24 + timeDelta.seconds/(60*60))+1
future = m.make_future_dataframe(periods = hours, freq = 'H')
forecast = m.predict(future)
return forecast, testDf
m = train_model_on_country(testDf, prophetDf)
forecast, testDf = predict_future(testDf, m)
forecast
# Takes in truncated test data (column), spits out out the prophet results
def find_surprise(truncatedData, forecast):
groupedCounts = truncatedData.value_counts()
prophetTestDf = pd.DataFrame({'ds':groupedCounts.index,
'y':np.log10(groupedCounts.values),
'y_linear':groupedCounts.values})
# find p-value
prophet_results = []
# Comparing test and training set data for identical intervals
for ii in range(len(prophetTestDf)):
ts = prophetTestDf['ds'][ii]
fcstExample = forecast[forecast['ds'] == ts]
mean = fcstExample['yhat'].iloc[0]
stdev = (fcstExample['yhat_upper'].iloc[0] - fcstExample['yhat_lower'].iloc[0])/2
# Calculating the P-value
p = st.norm(mean, stdev).cdf(prophetTestDf['y'][ii])
p = min(p,1-p)
prophet_results.append({"column":"Forecast",
"category":str(ts),
"count":prophetTestDf['y_linear'][ii],
"p": p,
"estimated_count":int(np.round(np.power(10,mean))),
})
# Obtaining Entropy of Time-Series values
prophetResultsDf = pd.DataFrame.from_records(prophet_results).sort_values('p')
prophetResultsDf['surprise'] = -np.log2(prophetResultsDf['p'])
return prophetResultsDf
#Group the test data
truncatedData = testDf['ts'].apply(truncateTs)
find_surprise(truncatedData, forecast)
# Takes in a model that has been trained on country, plots graphs for visualization
def visualize(m, forecast):
# Model visualization
fig = m.plot(forecast)
fig = m.plot_components(forecast)
visualize(m, forecast)
```
# Kernel Density
```
#https://www.nbastuffer.com/2019-2020-nba-team-stats/
def inp(default = 1, default2 = "https://www.nbastuffer.com/2019-2020-nba-team-stats/"):
# If our default dataset is changed, obtain some input
if(default2 != "https://www.nbastuffer.com/2019-2020-nba-team-stats/"):
nam = input()
else:
nam = default2
frame = pd.read_html(nam)
first_table = frame[default]
return first_table
first_table = inp(1)
first_table
```
## Different Kernels Attached Below
```
# Using cosine kernel function to get estimate for log density
def cosKernel(stat):
stat = stat.to_numpy().reshape(-1,1)
l = neighbors.KernelDensity(kernel = 'cosine').fit(stat)
cos_density = l.score_samples(stat)
return cos_density
# Using gaussian kernel function to get estimate for log density
def gaussKernel(stat):
stat = stat.to_numpy().reshape(-1,1)
l = neighbors.KernelDensity(kernel = 'gaussian').fit(stat)
density = l.score_samples(stat)
return density
# Using exponential kernel function to get estimate for log density
def expKernel(stat):
stat = stat.to_numpy().reshape(-1,1)
l = neighbors.KernelDensity(kernel = 'exponential').fit(stat)
triDensity = l.score_samples(stat)
return triDensity
# Using epanechnikov kernel function to get estimate for log density
def parabolicKernel(stat):
stat = stat.to_numpy().reshape(-1,1)
l = neighbors.KernelDensity(kernel = 'epanechnikov').fit(stat)
epDensity = l.score_samples(stat)
return epDensity
# Drops non-numerical and nan values from a table
def pcaPrep(first_table):
# Finding all numerical components of the table so that pca can function
tabl = first_table.select_dtypes(include = [np.number])
tabl = tabl.dropna(axis=1)
return tabl
# Specialized column based P-value function: double ended
def retPVal(col):
#Since we have a normal distribution, starting by obtaining the z-score
mean = col.mean()
std = np.std(col)
array = np.array([])
for i in np.arange(len(col)):
array = np.append(array, col.iloc[i] - mean)
#Now obtaining legitimate p-values
z_scores = array/std
for l in np.arange(len(z_scores)):
cdf = st.norm.cdf(z_scores[l])
z_scores[l] = min(cdf, 1-cdf)
return pd.Series(z_scores, index = col.index)
# Assigning initial kernel estimations
def kernelEstimator(indx, stat):
kernelEstimate = pd.DataFrame()
kernelEstimate = kernelEstimate.assign(Data_Index = indx, Data_Point = stat,Gaussian = gaussKernel(stat),
Epanechnikov = parabolicKernel(stat), Exponential = expKernel(stat),
Cosine = cosKernel(stat))
# temporary sort for some visualization of surprise
kernelEstimate = kernelEstimate.sort_values(by = "Gaussian", ascending = False)
return kernelEstimate
# Calculating their average
def surprise_estimator(kernelEstimation):
# Calculating maximum number of deviations from the mean
numDevMax = (kernelEstimation.get("Data_Point").max() - kernelEstimation.get("Data_Point").mean())/kernelEstimation.get("Data_Point").std()
numDevMin = (kernelEstimation.get("Data_Point").min() - kernelEstimation.get("Data_Point").mean())/kernelEstimation.get("Data_Point").std()
numDev = max(numDevMax, numDevMin)
# Assigning appropriate Kernel Estimator
if(numDev > 3.2):
metric = retPVal(kernelEstimation.get("Exponential"))
elif((numDev <=3.2) & (numDev >= 2)):
metric = retPVal(kernelEstimation.get("Gaussian"))
else:
metric = retPVal(kernelEstimation.get("Exponential")+kernelEstimation.get("Epanechnikov"))
# Surprise Metric
kernelEstimation = kernelEstimation.assign(Surprise = -np.log2(metric))
kernelEstimation = kernelEstimation.sort_values(by = "Surprise", ascending = False)
return kernelEstimation
# A grouping of the entire kernel estimation process
def surprise_Table(Table, index = "TEAM"):
temp = pcaPrep(Table)
# Checking if index given
if(isinstance(index, str)):
index = Table.get(index)
#Obtaining surprise of every individual column
sum_surprise = pd.Series(np.zeros(Table.shape[0]))
for col in temp.columns:
stat = temp.get(col)
KernelTable = kernelEstimator(index, stat)
KernelTable = surprise_estimator(KernelTable)
Table[col] = KernelTable.get("Surprise")
sum_surprise+=Table[col]
# Averaging our surprise so we can sort by it
sum_surprise = sum_surprise.array
Table = Table.set_index(index)
Table = Table.assign(mean_surprise = np.round(sum_surprise/Table.shape[1],2))
# Sorting table for easier visualization
Table = Table.sort_values(by = "mean_surprise", ascending = False)
return Table
modTable = first_table
surpriseTable = surprise_Table(first_table)
surpriseTable
def obtain_variance_table(first_table):
# Scaling and preparing values for PCA
tabl = pcaPrep(first_table)
scaled_data = StandardScaler().fit_transform(tabl)
# Creating a PCA object
pca = PCA(n_components = (tabl.shape[1]))
pcaData = pca.fit_transform(scaled_data)
infoFrame = pd.DataFrame().assign(Column = ["PC" + str(i) for i in range(tabl.shape[1])], Variance_ratio = pca.explained_variance_ratio_ )
return infoFrame
obtain_variance_table(first_table)
# Fit PCA model onto the data
def obtainPCAVals(componentNum, scaled_data):
pca = PCA(n_components = componentNum)
pcaData = pca.fit_transform(scaled_data)
return pcaData
# Deciding how many columns need to be used: utilizing a threshold of 95% of the explained variance
def elementDecider(infoFrame):
numSum = 0
counter = 0
# Continuing until we have accounted for 95% of the variance
for i in infoFrame.get("Variance_ratio"):
if(numSum < .95):
numSum += i
counter+=1
return counter
# Reducing dimensionality of data into pc's, only storing what is necessary
def reducedData(infoFrame, scaled_data, indx):
numCols = elementDecider(infoFrame)
pcaData = obtainPCAVals(numCols, scaled_data)
pcaFrame = pd.DataFrame(pcaData)
# Dealing with potential index issues
pcaFrame = pcaFrame.set_index(indx)
return pcaFrame
# Visualization tool for seeing grouping of elements by pc
def displayReducedData(pcaVals, xNum = 0, yNum = 1):
# Ensuring that the elements given do not overacess table
if(xNum < pcaVals.shape[1]) & (yNum < pcaVals.shape[1]):
pcaVals.plot(kind = "scatter", x = xNum, y = yNum)
else:
print("You have overaccessed the number of elements, keep in mind there are only " + str(pcaVals.shape[1]) + " elements")
#Summing p-values because PCA serves to check for systematic bias, whereas kernel density checks for accuracy
def sumRows(pcaVals):
sumArray = np.zeros(pcaVals.shape[0])
for i in np.arange(pcaVals.shape[1]):
values = pcaVals.get(str(i)).array
sumArray += values
sumArray /= pcaVals.shape[1]
#After obtaining sum, the average deviation from the expected value is averaged out, not taking in absolute value
# to check for systematic error
return sumArray
# Tests for systematic bias by row
def pcaRowOutliers(pcaVals):
P_val_table = pd.DataFrame()
#Creating a table of all the PCA p-values
for col in np.arange(0,pcaVals.shape[1]):
P_vals = retPVal(pcaVals.get(col))
i = str(col)
P_val_table[i] = P_vals
totalVar = sumRows(P_val_table)
#Calculating surprise by taking negative log
newVals = pcaVals.assign(Surprise = -np.log2(totalVar))
newVals = newVals.sort_values(by = "Surprise", ascending = False)
return newVals
# Master method to run PCA as a whole
def runPCA(table, index):
processing_table = pcaPrep(table)
variance_table = obtain_variance_table(table)
pcaVals = reducedData(variance_table, StandardScaler().fit_transform(processing_table), table.get(index))
new_pca = pcaRowOutliers(pcaVals)
return new_pca
new_pca = runPCA(first_table, 'TEAM')
new_pca
# Combining PCA and Kernel density into one approach to obtain join probabilities
def pca_kernel_combo(pcaTable,kernelTable):
pcaSurpriseCol = pcaTable.get("Surprise")
temp = pcaPrep(kernelTable)
for column in temp.columns:
# Finding geometric mean of two factors individually (updating our beliefs in a Bayesian manner)
kernelTable[column] = (kernelTable[column].multiply(pcaSurpriseCol)).apply(np.sqrt)
kernelTable = kernelTable.sort_values(by = "mean_surprise", ascending = False)
return kernelTable
surpriseTable = pca_kernel_combo(new_pca, surpriseTable)
surpriseTable
```
# Categorical Data
```
# Will examine whether or not a column is categorical, giving the user the opportunity to add additional numeric columns
def identifyCategorical(surpriseFrame):
categorical_list = []
for col in surpriseFrame.columns:
if(not(is_numeric_dtype(surpriseFrame[col]))):
categorical_list.append(col)
# Allows fixing of default assumption that numeric columns aren't categorical
print("Are there any numeric Columns you would consider categorical?(yes/no)")
while(input().upper() == "YES"):
print("Enter one such column:")
categorical_list.append(input())
print("Any more?")
return categorical_list
```
### **Running tests for: value type, its length, and whether or not it is missing (NaN)**
```
# Returns surprise of type classification
def types(column):
value_types = column.apply(classifier)
counts = value_types.value_counts(normalize = True)
index = counts.index
values = counts.values
probs = value_types.apply(giveProb, args = (np.array(index), np.array(values)))
surpriseVal = probs.apply(surprise)
return surpriseVal
# Obtains the type of value, even if it is currently contained within a string
def classifier(value):
value = str(value)
# Boolean check done manually: this is an easy check
if(('True' in value) or ('False' in value )):
return 'boolean'
else:
if(value.isnumeric()):
return 'number'
else:
return 'string'
# Takes in a column and returns the surprise of each nan value being present (True) or not being present (False)
def nans(column):
nan_values = column.apply(isNan)
counts = nan_values.value_counts(normalize = True)
index = counts.index
values = counts.values
probs = nan_values.apply(giveProb, args = (np.array(index), np.array(values)))
surpriseVal = probs.apply(surprise)
return surpriseVal
# Takes in a column and returns the surprise of the length of each value in the column: the first and simplest of probabilistic tests
def lenCount(column):
column = column.apply(str)
counts = column.apply(len).value_counts(normalize = True)
index = counts.index
values = counts.values
column = column.apply(len)
probs = column.apply(giveProb, args = (np.array(index), np.array(values)))
surpriseVal = probs.apply(surprise)
return surpriseVal
# Calculates the surprise of a given value
def surprise(value):
return -np.log2(value)
# Given a numerical value, finds its equivalent within the set of indices and assigns it the proper probability
def giveProb(value, index, values):
for num in np.arange(len(index)):
if(value == index[num]):
return values[num]
return values[0]
# NaN's aren't equal to themselves
def isNan(x):
return x!=x
```
### **Running tests for: Special Character sequence and Number of Unique Values**
```
# Checks for special characters within a string, calculating surprise so as to identify which character combinations are chaotic
def special_char(column):
characters = column.apply(str).apply(char_identifier)
counts = characters.value_counts(normalize = True)
index = counts.index
values = counts.values
probs = characters.apply(giveProb, args = (np.array(index), np.array(values)))
surpriseVal = probs.apply(surprise)
return surpriseVal
# Checks if a single entry of any data type contains special symbols and returns all that it contains
def char_identifier(entry):
charList = np.array(['<', '>', '!', '#','_','@','$','&','*','^', ' ', '/', '-','"','(', ',', ')', '?', '.'])
ret_string = ""
for i in charList:
if(i in entry):
ret_string += i
return ret_string
# Simpler approach here: if the value counts of certain elements are greater when they should be unique, they are more surprising
# If they are non-unique when they are supposed to be unique, also more surprising. Done with binary classification
def uniques(column):
# Counting number of each value and returning whether or not it is a singular unique value,
#then counting truly unique values
column = column.replace({np.nan: "np.nan"})
vals = column.value_counts().apply(isunique)
vals = column.apply(unique_assignment, args = [vals])
counts = vals.value_counts(normalize = True)
index = counts.index
values = counts.values
probs = vals.apply(giveProb, args = (np.array(index), np.array(values)))
surpriseVal = probs.apply(surprise)
# Note: if all values unique/non unique this will provide definite outcome because no room for uncertainty
return surpriseVal
# Returns whether the count of a value is 1
def isunique(val):
return (val == 1)
# Maintains individual values without grouping while assigning unique / nonunique probabilities. To be used on original column
def unique_assignment(val, column):
value = column[column.index == val]
return value.iloc[0]
# Obtains the categorical columns of a table and combines the individual surprise tests into a final score per entry
def obtainCatSurprise(table):
cols = identifyCategorical(table)
for col in cols:
# Obtaining individual relative entropies, averaging them out, finding their p-values and calculating final surprise
values = table.get(col)
temp = (uniques(values)+special_char(values)+ nans(values) + types(values)+lenCount(values))/5
table[col] = -np.log2(retPVal(temp))
table = table.replace({np.nan:0})
return table
dataset = pd.read_excel("sampleDataSet.xlsx")
dataset
categoricalSurprise = obtainCatSurprise(dataset)
# Assigning colors to problematic values (still grouped with indices so easy to tell)
# Yellow: mild concern, orange: moderate concern, red: serious concern
def designer(frame):
threshold1 = 5
threshold2 = 10
threshold3 = 15
print("Would you like to reset default issue alert thresholds?")
if(input().upper() == 'YES'):
print("Mild concern threshold (in probability (percentage) of issue being present):")
threshold1 = float(input())
print("Moderate concern threshold (in probability (percentage) of issue being present)")
threshold2 = float(input())
print("Serious concern threshold (in probability (percentage) of issue being present)")
threshold3 = float(input())
temp = pcaPrep(frame)
styler = frame.style
for col in temp.columns:
        # Return '' for unstyled cells so later calls don't overwrite earlier highlights ('light-gray' is not valid CSS)
        frame = styler.applymap(lambda x: 'background-color: yellow' if x > threshold1 else '', subset = [col])
        frame = styler.applymap(lambda x: 'background-color: orange' if x > threshold2 else '', subset = [col])
        frame = styler.applymap(lambda x: 'background-color: red' if x > threshold3 else '', subset = [col])
return frame
designer(categoricalSurprise)
```
## Date/Time Interval Approximation
```
# Calculation of date time entropies
def dateTimeClassifier(column):
# Conversion to proper format
if (type(column.iloc[0]) == str):
column = convert_to_dateTime(column)
# Unix timestamps for ease of calculation
unixCol = column.apply(convertToUnix).to_numpy()
# Finding time intervals
difference_array = np.append(np.array([]), np.diff(unixCol))
timeFrame = (pd.DataFrame().assign(index = np.arange(1,len(unixCol)), Times_diff = difference_array))
dateSurprise = surprise_Table(timeFrame, 'index')
return (dateSurprise.sort_values(by = ['Times_diff']))
# If date-value is given as a string, convert to date- time format first
def convert_to_dateTime(column):
return pd.to_datetime(column, format='%Y%m%d %H:%M:%S')
# Converting the date to unix format for ease of calculations
def convertToUnix(value):
return (value - datetime(1970, 1, 1)).total_seconds()
dateTimeClassifier(trainDf[:3000].get("date_time"))
```
# The Next Step
The next step in the process as of now is releasing the Time Series as a Python package and layering the rest of the functionality on top of it. In terms of the actual functionality of the project, the next step is mutual entropy, or the correlation of columns as a means of obtaining more information (context) for the column itself!
## Thank you!
| github_jupyter |
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
from scipy.stats import norm
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
from xgboost import XGBRegressor, plot_importance, DMatrix
from tqdm import tqdm
from pickle import dump, load
from utils import *
```
# Data loading
We need to load the samples. `columns` is a list in which we keep all needed predictors.
```
columns = [
'MKE_sfc',
'Rd_dx_sfc',
'relative_vorticity_sfc',
'grad_SSH_sfc',
]
```
# Pre-processing
We perform a two-step pre-processing of the predictors.
## First pre-processing step: logarithm where needed
- The feature `Rd_dx_z` does not need any pre-processing. We keep it as last feature, and our `regularize` function relies on this convention (i.e. the last feature is not processed).
- Some features are log-normally distributed. They are always positive, so we can simply take their logarithm, to obtain normally-distributed features. We apply this technique to the predictand feature (`EKE`) too.
- Other features follow what looks like a Laplace distribution with some outliers. In this case, we apply $f(x) = (\log(\left|x\right|)+36)\cdot\text{sign}(x)$. This function splits the negative and positive domains, compresses the range, and is monotonic and bijective if and only if $|x|>10^{-15}$. Here is a plot of the function over $\mathbb{R}$.
```
f = lambda x: (np.log(np.abs(x)) + 36.0) * np.sign(x)  # matches the formula above
x_plot = np.arange(-100, 100, 1e-2)
plt.plot(x_plot, f(x_plot), '.')
plt.draw()
```
## Second pre-processing step: Scaling
The second pre-processing step is the same for all predictors: we scale the features so that they have approximately zero mean and a standard deviation of 1, which is usually good for Neural Network training. We do not apply this step to the predictand. We will store the scaler object, as it will be needed to pre-process data at inference time.
Once the data is pre-processed, we split it into train and validation (called test here) sets.
```
def prep_data(dataset, scaler=None, skip_vars=[], abs_val=None):
targets = dataset['EKE'].values.copy()
data, targets, scaler = prep_samples(dataset, columns, scaler, skip_vars, abs_val)
X_train, X_test, y_train, y_test = train_test_split(data, targets, test_size=0.2, random_state=42)
print('Dimensions of the training feature matrix: {}'.format(X_train.shape))
print('Dimensions of the training target vector: {}'.format(y_train.shape))
print('Dimensions of the test feature matrix: {}'.format(X_test.shape))
print('Dimensions of the test target vector: {}'.format(y_test.shape))
return X_train, X_test, y_train, y_test, scaler
```
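The scaling convention can be sketched as follows; the data here is synthetic, and the two-row scaler layout (row 0 holds per-feature means, row 1 per-feature standard deviations) is an assumption based on how the saved scaler is indexed later in this notebook (`scaler[0,i]` as avg, `scaler[1,i]` as std):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(5.0, 2.0, size=(1000, 4))  # synthetic feature matrix

# Row 0: per-feature mean, row 1: per-feature std. Stored for reuse at inference time.
scaler = np.stack([X.mean(axis=0), X.std(axis=0)])
X_scaled = (X - scaler[0]) / scaler[1]

print(X_scaled.mean(axis=0).round(6))  # ~0 for every feature
print(X_scaled.std(axis=0).round(6))   # ~1 for every feature
```

At inference time the same stored `scaler` rows must be applied; refitting on new data would make the inputs inconsistent with the training distribution.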
The original dataset is computed with a resolution of 1/10th of a degree. To train the model, we use spatially averaged versions of the samples, with coarsening factors from 2 to 10, shortened as `cf2`, `cf3`, ...
```
datapath1 = './data/MOM6_1-10_data'
datapath2 = './data/MOM6_1-10_data_with_ssh/'
model_data = pop_data(datapath1, datapath1, skip_vars = ['x','y','depth','depth_stdev'], extra_pref='cf2')
# grad_ssh is in another dataset
model_data.extend_inventory(datapath2)
samples = get_samples(0, 5, columns, model_data)
for cf_idx in range(3, 11):
model_data = pop_data(datapath1, datapath1, skip_vars = ['x','y','depth','depth_stdev'], extra_pref=f'cf{cf_idx}')
model_data.extend_inventory(datapath2)
scale_factor = cf_idx/2
samples_loc = get_samples(0, min(int(np.ceil(5*scale_factor*scale_factor)), 110), columns, model_data)
samples = pd.concat([samples,samples_loc])
X_train, X_test, y_train, y_test, scaler = prep_data(samples, abs_val=[False, True, True, False], skip_vars=['Rd_dx_sfc'])
```
## Visual inspection of pre-processing results
Let's have a look at the resulting feature distributions.
```
plt.figure(figsize=(10,5*(len(columns)//2+len(columns)%2)))
for i in range(X_train.shape[-1]):
sample = X_train.values[:, i]
print('---------------------------------')
print(f'min {columns[i]} = {np.min(sample)}, max {columns[i]} = {np.max(sample)}')
print('---------------------------------')
plt.subplot(len(columns)//2+len(columns)%2,2,i+1)
plt.hist(sample, bins=1000, density=True)
plt.title(columns[i])
plt.draw()
print('---------------------------------')
print(f'min EKE = {np.min(y_train)}, max EKE = {np.max(y_train)}')
print('---------------------------------')
plt.figure(figsize=(10,4))
plt.hist(y_train, bins=1000, density=True, alpha=0.5)
plt.hist(y_test, bins=1000, density=True, alpha=0.5)
plt.legend(['train', 'validation'])
plt.title('EKE')
plt.draw()
```
We notice that the predictand is approximately log-normally distributed. This means that we have far fewer samples for the values belonging to the two tails, so when we train the neural network against mean squared error, the extreme values will carry less weight. We explored a possible way to mitigate this phenomenon: inverse sampling weighting. We approximated the distribution with a Gaussian and assigned to each sample the inverse of its probability. The results are in the next section.
```
plt.figure(figsize=(10,4))
plt.hist(y_train, bins=1000, density=True, alpha=0.5)
xmin, xmax = plt.xlim()
mu, std = norm.fit(y_train)
x_pdf = np.linspace(xmin, xmax, 100)
p = norm.pdf(x_pdf, mu, std)
plt.plot(x_pdf, p, 'k', linewidth=2)
plt.legend([f'Norm dist $\mu={mu:.4f}$, $\sigma={std:.4f}$'])
plt.title('EKE')
plt.draw()
save = True
if save:
np.save('./data/scaler_cf_all_4', scaler)
np.save('./data/X_train_cf_all_4', X_train.values)
np.save('./data/X_test_cf_all_4', X_test.values)
np.save('./data/y_train_cf_all_4', y_train)
np.save('./data/y_test_cf_all_4', y_test)
```
We will now train an XGBoost regressor, to be able to compare the results of our Neural Network(s) to it. The dataset is too large for the algorithm to run in a reasonable time, thus we will use 1/100th of the data points.
```
# If you have the data, just skip the loading and pre-processing
load_data = True
if load_data:
scaler = np.load('./data/scaler_cf_all_4.npy')
X_train = pd.DataFrame(np.load('./data/X_train_cf_all_4.npy'), columns=columns)
X_test = pd.DataFrame(np.load('./data/X_test_cf_all_4.npy'), columns=columns)
y_train = np.load('./data/y_train_cf_all_4.npy')
y_test = np.load('./data/y_test_cf_all_4.npy')
rf = XGBRegressor()
num_samples = X_train.shape[0]
rf.fit(X_train[:num_samples//100], y_train[:num_samples//100])
rf.save_model('./data/xgbreg_new')
y_train_rf = rf.predict(X_train[:num_samples//100])
y_test_rf = rf.predict(X_test)
rf_results = pd.DataFrame({'algorithm':['XGBRegressor'],
'training error': [mean_squared_error(y_train[:num_samples//100], y_train_rf)],
'test error': [mean_squared_error(y_test, y_test_rf)],
'training_r2_score': [r2_score(y_train[:num_samples//100], y_train_rf)],
'test_r2_score': [r2_score(y_test, y_test_rf)]})
rf_results
```
The regressor can give us an important insight into the data: we can see which features are most relevant.
```
plot_importance(rf)
plt.draw()
```
We see that the four features are almost equally important.
# Validation
In this section, we load two trained neural networks and compare the results. The Neural Network `model_cus` uses a weighted sampling scheme (see above) to draw training samples, whereas `model_mse` uses a standard mean squared error loss. We also load a scaler we previously saved: **notice that this scaler must be the same one used to scale the data we trained our NNs on, otherwise results will be inconsistent**.
```
import torch
from torchsummary import summary
#if you don't have a GPU, set to device='cpu'
device = 'cuda'
model_mse = torch.load('../ml_eke/nn/trained_models/ResNet_4_mse.pkl', map_location=torch.device(device))
model_cus = torch.load('../ml_eke/nn/trained_models/ResNet_4_custom.pkl', map_location=torch.device(device))
model_mse.eval()
model_cus.eval()
summary(model_mse, (4,))
scaler = np.load('./data/scaler_cf_all_4.npy')
print('\nScaler info:')
for i in range(len(columns)):
print(f'{columns[i]}: avg={scaler[0,i]}, std={scaler[1,i]}')
```
We load the datasets we did not use to train our NNs and look at how the predicted value distributions compare to the target one.
```
datapaths = ('./data/MOM6_1-10/',
'./data/MOM6_1-10_data_with_ssh/')
suffixes = ('_013_01.nc', '_013_01.nc')
columns[1] = 'Rd_dx_sfc'
model_data = pop_data(datapaths[0], datapaths[0], skip_vars = ['x','y','depth','depth_stdev'], extra_pref='cf9', first_suffix=suffixes[0])
model_data.extend_inventory(datapaths[1])
dataset = get_samples(110, 111, columns, model_data, predictands=['EKE_sfc'])
data, targets, _ = prep_samples(dataset, columns, scaler=None, abs_val=[False, True, True, False], clean_after_reg=True, skip_vars=['Rd_dx_sfc'], scale=True)
data
# Depending on what the device is
predict_cpu = lambda model, X: model(torch.tensor(X.values)).detach().numpy()
predict_gpu = lambda model, X: model(torch.tensor(X.values).cuda()).cpu().detach().numpy()
predict = predict_gpu if device == 'cuda' else predict_cpu
# make some predictions
preds_mse = predict(model_mse, data)
preds_cus = predict(model_cus, data)
# show true distribution vs predicted
plt.figure(figsize=(10,6))
plt.hist(preds_mse,bins=500,alpha=0.25,label='Predicted MSE', density=True)
plt.hist(preds_cus,bins=500,alpha=0.25,label='Predicted Custom', density=True)
plt.hist(targets,bins=500,alpha=0.25,label='EKE distribution', density=True, color='black')
plt.legend()
plt.draw()
```
We clearly see that the standard MSE function tends to overfit the samples close to the mean, but does poorly on the distribution tails. The weighted sampling scheme leads to more spread in the predictions, but goes too far in that direction. A mixed approach will have to be investigated in the future.
To display the results in a meaningful way, we keep the original samples as maps, instead of collecting single point values.
```
sample, target, mask = get_samples_2D(119, 120, columns, model_data, predictands=['EKE_sfc'])
disp_sample, disp_target, disp_pred_mse = prep_maps(sample, target, mask, columns, scaler=scaler, model=model_mse, predict_fn=predict,
abs_val=[False, True, True, False], clean_after_reg=True, skip_vars=['Rd_dx_sfc'])
_, _, disp_pred_cus = prep_maps(sample, target, mask, columns, scaler=scaler, model=model_cus, predict_fn=predict,
abs_val=[False, True, True, False], clean_after_reg=True, skip_vars=['Rd_dx_sfc'])
```
We start by plotting the predicted values for the two different NNs. We can see that they perform better on the high-resolution dataset and worse on the coarser one. Two possible reasons are: the ratio between the number of fine- and coarse-grained samples is ~25:1, which means the network is trained mostly on high-resolution samples; and the information (and thus the prediction) might be more accurate when less averaging takes place.
```
mask_s = mask.squeeze()
plt.figure(figsize=(16,4))
vmin = -10
vmax = 0
plt.subplot(1,3,1)
plt.pcolormesh(disp_target, vmin=vmin, vmax=vmax)
plt.colorbar()
plt.title('EKE_sfc')
plt.subplot(1,3,2)
plt.pcolormesh(disp_pred_mse, vmin=vmin, vmax=vmax)
plt.title('MSE Loss Prediction')
plt.colorbar()
plt.subplot(1,3,3)
plt.pcolormesh(disp_pred_cus, vmin=vmin, vmax=vmax)
plt.title('Custom Loss Prediction')
plt.colorbar()
plt.draw()
```
We plot the error for the two models.
```
plt.figure(figsize=(10,4))
disp_err_mse = disp_target - disp_pred_mse
disp_err_cus = disp_target - disp_pred_cus
vmin = min(np.min(disp_err_mse[mask_s]),
np.min(disp_err_cus[mask_s]))
vmax = max(np.max(disp_err_mse[mask_s]),
np.max(disp_err_cus[mask_s]))
plt.subplot(1,2,1)
plt.pcolormesh(disp_err_mse, vmin=vmin, vmax=vmax)
plt.title('MSE Error')
plt.colorbar()
plt.subplot(1,2,2)
plt.pcolormesh(disp_err_cus, vmin=vmin, vmax=vmax)
plt.colorbar()
plt.title('Custom loss Error')
plt.draw()
plt.figure(figsize=(12,6))
plt.subplot(1,2,1)
plt.hist(disp_err_mse[mask_s].reshape((-1,1)), bins=100, density=True, label='MSE', alpha=0.3)
plt.hist(disp_err_cus[mask_s].reshape((-1,1)), bins=100, density=True, label='custom', alpha=0.3)
plt.legend()
plt.draw()
```
We clearly see that the standard MSE loss performs better than the weighted sampling scheme. At the same time, we know that only the latter can reach the tails of the EKE distribution.
| github_jupyter |
# Evaluating gambles
### _An exploration of expected values and time averages_
## General concepts and mandatory coin flipping
We define a _gamble_ $G$ as a set of payouts $D$ associated with a probability distribution $\mathbf{P}$.
As an example, consider a game where a fair coin is tossed. If heads comes up, you win $\$3$, you lose $\$1$ otherwise.
Here the gamble has payouts $D = \{3, -1\}$ where $P(d) = 0.5 \, \, \forall \, d \in D$
Would it be profitable to play such a game?
Our intuition suggests we should take the average of the payouts weighted by their probability.
```
weighted_average = 3 * 0.5 + -1 * 0.5
print(weighted_average)
```
This number is known as the **expected value** of the game (we also refer to $G$ as a [random variable](https://en.wikipedia.org/wiki/Random_variable)).
What does this number mean? Should we expect to win $\$1$ _every time_ we play? Perhaps it says something about our long-term winnings if we play many times over?
Conceptually, it helps to think about expected values as taking the average of the payouts over many (tending to infinity) plays (_realizations_) of the game.
Think of $n$ individuals playing simultaneously. We then take the average of their winnings by summing over all the payouts and dividing over $n$.
```
%config InlineBackend.figure_format="retina"
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.style.use("seaborn-talk")
np.random.seed(42)
plt.rcParams["figure.figsize"] = (12, 6)
n = 100
def average_payout(n):
coin_flips = np.random.rand(n)
# Heads when > 0.5, Tails when <= 0.5
payouts = np.where(coin_flips > 0.5, 3, -1)
return payouts.mean()
print(average_payout(n))
```
In fact, by the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem#Classical_CLT), as $n$ increases this average will tend towards the expected value.
```
n = 1_000_000
print(average_payout(n))
many_payouts = [average_payout(x) for x in np.full(1000, n)]
plt.hist(many_payouts, bins=40);
```
## Games people play (or avoid)
### Some questions to ponder
- Why do we choose to enter or avoid such gambles?
- Is the expected value telling us all we need to know?
- Is there a criterion that works for every conceivable game?
Our intuition suggests we should try to maximise the expected change in our wealth $\Delta x = x(t + \Delta t) - x(t)$.
However, consider the following [game](https://en.wikipedia.org/wiki/St._Petersburg_paradox):
1. A casino offers a game of chance in which a fair coin is tossed at each stage.
2. The initial stake starts at $\$2$ and is doubled every time heads appears.
3. The first time tails appears, the game ends and the player wins whatever is in the pot.
4. The player wins $\$2^k$ where $k$ is the number of coin tosses.
What would be a fair price (i.e. a price at which you would feel indifferent taking the role of the gambler or the casino) to play such a game?
The expected value of such a game is
$$\mathbb{E} = \frac{1}{2} \cdot 2 + \frac{1}{4} \cdot 4 + \frac{1}{8} \cdot 8 + \dotsb = 1 + 1 + 1 + \dotsb = \infty$$
Should we be willing to pay any amount for the opportunity to play?
Most people would not, even though the game has an infinite expected value.
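A quick simulation sketch (hypothetical code, not from the original text) shows why the sample mean never settles: rare long runs of heads keep pulling the average upward, hinting at the infinite expected value.

```python
import numpy as np

rng = np.random.default_rng(0)

def st_petersburg_payout(rng):
    """One round: the pot starts at $2 and doubles on every head, ending on the first tails."""
    k = 1
    while rng.random() < 0.5:  # another head with probability 1/2
        k += 1
    return 2 ** k  # k tosses in total -> payout 2^k

payouts = [st_petersburg_payout(rng) for _ in range(100_000)]
print(np.mean(payouts))  # the sample mean keeps drifting upward as the sample size grows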
### Expected utility hypothesis
The aforementioned paradox was studied by [Daniel Bernoulli](https://en.wikipedia.org/wiki/Daniel_Bernoulli). He introduced the [expected utility hypothesis](https://en.wikipedia.org/wiki/Expected_utility_hypothesis), which states that individual preferences concerning such gambles aim to maximise not the expected change in wealth, but the **expected change in utility**, which is a mathematical concept that captures the subjective value of wealth to the individual.
**Key points**
- Utility functions could differ between individuals.
- Any arbitrary function could be used to model utility. This makes the theory very flexible, at the cost of explanatory power.
- Typical choices for utilities include $\sqrt{x}$ and $\ln{x}$.
```
points = np.linspace(0.0001, 50, num=300)
fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
ax1.set_title(r"$\sqrt{x}$"), ax2.set_title(r"$\log{x}$")
ax1.plot(points, np.sqrt(points))
ax2.plot(points, np.log(points));
```
### A different approach
Let's consider one more coin game.
> - Heads -> increase your wealth by 60%
> - Tails -> decrease your wealth by 50%
If your initial wealth is $x(0)$, then the expected value of one play is:
$$\mathbb{E}(x(1)) = 0.5 \cdot 1.6 x(0) + 0.5 \cdot 0.5 x(0) = 1.05 x(0)$$
- Would you play this game?
- Once? Many times?
```
def play_game(initial_wealth, steps):
draws = np.random.rand(steps)
factors = np.where(draws > 0.5, 1.6, 0.5)
factors[0] = 1 # Factor at initial time step
return factors.cumprod() * initial_wealth
x_0 = 1_000
wealth = play_game(x_0, 100)
plt.plot(wealth)
plt.xlabel("t"), plt.ylabel(r"$x(t)$");
plays = [play_game(x_0, 100) for _ in range(20)]
for play in plays:
plt.plot(play)
plt.title("Wealth over time")
plt.xlabel("t"), plt.ylabel(r"$x(t)$");
```
There is high variance in the short term, but over the long run we see that realizations tend to 0 as $t$ grows.
<br>
What is happening here is that as $t$ increases:
$$\lim_{t \to\infty} x(t) = \lim_{t \to\infty} x(0) \cdot 1.6^{t/2} \cdot 0.5^{t/2} = \lim_{t \to\infty} x(0) \cdot (1.6 \cdot 0.5)^{t/2} = \lim_{t \to\infty} x(0) \cdot 0.8^{t/2} = 0$$
We see that this game, which has a positive expected value, has a _time average_ value of 0.
We say that such systems are **non-ergodic**, that is, they behave differently when averaged over time than when averaged over the space of all possible states.
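The two averages can be computed directly (a minimal sketch):

```python
import numpy as np

# Ensemble average: expected wealth factor per play, averaging across many players.
ensemble_avg = 0.5 * 1.6 + 0.5 * 0.5  # 1.05 -> looks profitable

# Time average: per-play growth factor along one long trajectory,
# i.e. the geometric mean of the two factors.
time_avg = np.sqrt(1.6 * 0.5)  # ~0.894 -> wealth decays over time

print(ensemble_avg, time_avg)
```

The ensemble average exceeds 1 while the time average is below 1, which is exactly the non-ergodicity described above.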
## Optimal growth and gambler's ruin
Now suppose we could play a game infinitely many times where a coin is flipped, coming up heads with probability $p > 0.5$ and tails with probability $q = 1 - p$.
You win an amount equal to your stake: if you bet $\$1$ you would receive $\$2$ back if heads shows up, and lose your $\$1$ otherwise.
Let's set $p = 0.6$
```
def average_payout_biased_coin(n, p, stake):
coin_flips = np.random.rand(n)
payouts = np.where(coin_flips > p, -stake, 2*stake)
return payouts.mean()
n = 1_000_000
p = 0.6
stake = 100
print(average_payout_biased_coin(n, p, stake))
```
We're making money! Let's plot some more runs:
```
avg_payouts = [average_payout_biased_coin(x, p, stake) for x in np.full(200, n)]
plt.hist(avg_payouts, bins=40);
```
An interesting question:
- How should we choose our stakes in order to maximise the rate at which our wealth grows?
Since the expected value of this game is positive, should we bet all our wealth each time? A fraction?
Given our initial wealth $x(0) = \$1000$, we have after $t$ plays:
```
def wealth_over_time(p, initial_wealth, t, stake):
"""Returns an np.array of wealth levels after `t` time steps.
Arguments:
p: float - Probability of heads (win chance)
initial_wealth: float - Initial wealth
t: int - Number of time steps
stake: float - Fraction of wealth to bet
"""
coin_flips = np.random.rand(t)
factors = np.where(coin_flips > p, 1 - stake, 1 + stake)
factors[0] = 1
return factors.cumprod() * initial_wealth
x_0 = 1_000
plt.plot(wealth_over_time(p, x_0, 500, 1.0));
```
If we try to maximise the rate of growth by betting all our wealth each time, we will inevitably go broke.
Perhaps we should bet the minimum amount, and avoid bankruptcy.
```
plt.plot(wealth_over_time(p, x_0, 500, 0.001));
```
We're not broke, but our earnings are growing timidly.
Is there a better approach? 🤔
Let's look at how our wealth changes each time we play. We'll bet a fraction $f$ of our current wealth each time.
$$x(t) = x(0)(1 + f)^\text{H}(1 - f)^\text{T}$$
Where $\text{H}$ is the number of times heads comes up, $\text{T}$ the number of tails.
The quantity $\frac{x(t)}{x(0)}$ can be rewritten as
$$\frac{x(t)}{x(0)} = e^{t \log{\left[ \frac{x(t)}{x(0)} \right]}^{\frac{1}{t}}}$$
Let's focus on this part:
$$G_{t}(f) = \log{\left[ \frac{x(t)}{x(0)} \right]}^{\frac{1}{t}} = \frac{H}{t}\log{(1 + f)} + \frac{T}{t}\log{(1 - f)}$$
We want to find $f^\ast$ that maximises
$$g(f) = \mathbb{E} \Bigg \{ \frac{1}{t} \log{\left[ \frac{x(t)}{x(0)} \right]} \Bigg \}$$
🧙
$$g(f) = \mathbb{E} \Big \{ \frac{1}{t} \log{x(t)} - \frac{1}{t} \log{x(0)} \Big \}$$
Since $\frac{1}{t} \log{x(0)}$ is a constant, maximising $g(f)$ is the same as maximising $\mathbb{E} \Big \{ \frac{1}{t} \log{x(t)} \Big \}$
\begin{align}
g(f) & = \mathbb{E} \Big \{ \frac{H}{t} \log{(1 + f)} + \frac{T}{t} \log{(1 - f)} \Big \} \\
g(f) & = p \log{(1 + f)} + q \log{(1 - f)} \\
\end{align}
Let's plot $g(f)$
```
fs = np.linspace(0, 1, endpoint=False, num=200)
p, q = 0.6, 0.4
g_star = lambda f: p * np.log(1+f) + q * np.log(1-f)
plt.plot(fs, g_star(fs));
plt.plot(fs, g_star(fs))
plt.axvline(x=p-q, linestyle="--", color="r", linewidth=1)
plt.yscale("log");
```
We can calculate the maximum by taking the derivative.
\begin{align}
g^{\ast\prime} = \frac{p}{1+f} - \frac{q}{1-f} = \frac{p-q-f}{(1+f)(1-f)} = 0 \\
\end{align}
It follows then that $f^\ast = p - q$
This was first discovered by mathematician John Kelly in 1956. Known as _Kelly's criterion_ or _Kelly's formula_, it is used widely in gambling, finance and risk management.
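We can sanity-check the result numerically by maximising $g(f)$ on a grid (a quick sketch):

```python
import numpy as np

p, q = 0.6, 0.4
fs = np.linspace(0, 0.99, 10_000)          # candidate betting fractions
g = p * np.log(1 + fs) + q * np.log(1 - fs)  # growth rate g(f)
f_star = fs[np.argmax(g)]
print(f_star)  # close to p - q = 0.2
```

The grid maximum sits at the Kelly fraction $f^\ast = p - q = 0.2$, up to the grid resolution.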
Let's see how our wealth grows using Kelly's optimal betting strategy.
```
plt.title("Wealth over time with optimal betting")
plt.xlabel("t"), plt.ylabel("x(t)")
plt.ticklabel_format(style="plain")
wealth = wealth_over_time(p, x_0, 500, p-q)
plt.plot(wealth);
plt.title("Wealth over time with optimal betting")
plt.xlabel("t"), plt.ylabel(r"$\log{x(t)}$")
plt.yscale("log");
plt.plot(wealth);
```
| github_jupyter |
# Non-Negative ICA Applications
The notebook goes through a 2D and a 3D case study and demonstrates how to use the provided nn_ica algorithm. It also comes with a simple plotting function for the results.
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import sys
sys.path.insert(0, '../nn_ica/')
from main import *
%load_ext autoreload
%autoreload 2
```
# Simple 3D Data
```
np.random.seed(2)
n = 3
# it works if the second source is 3, 3
s_1 = np.random.beta(1, 1, 1000)
s_2 = np.random.beta(3, 3, 1000)
s_3 = np.random.beta(2, 4, 1000)
S = np.array(np.concatenate((s_1[None, :], s_2[None, :], s_3[None, :]), axis=0))
# make the data unit variance
S = np.diag(1/np.std(S, axis=1)) @ S
# create A and X
A = np.random.normal(0, 10, (n, n))
X = A @ S
S.shape, A.shape, X.shape
Y, W, Z, t_max_arr = run_nn_ica(X, t_tol=1)
```
Plot the reconstruction:
```
plot_ica_reconstruction([S, X, Y], labels=['S', 'X', 'Y'])
```
This is fairly similar to what we had before. Let's vary the sources slightly and try again.
```
np.random.seed(2)
n = 3
# it works if the second source is 3, 3
s_1 = np.random.beta(1, 1, 1000)
s_2 = np.random.beta(2, 3, 1000)
s_3 = np.random.beta(2, 4, 1000)
S = np.array(np.concatenate((s_1[None, :], s_2[None, :], s_3[None, :]), axis=0))
# make the data unit variance
S = np.diag(1/np.std(S, axis=1)) @ S
# create A and X
A = np.random.normal(0, 10, (n, n))
X = A @ S
S.shape, A.shape, X.shape
# run the algorithm and plot
Y, W, Z, t_max_arr = run_nn_ica(X, t_tol=1)
plot_ica_reconstruction([S, X, Y], labels=['S', 'X', 'Y'])
```
Let's check how many negative elements are still present.
```
np.sum(Y < 0)
Y.shape
```
So this was successful for most, but not absolutely all, elements. Let's see what happens when we decrease the tolerance on the torque:
```
Y, W, Z, t_max_arr = run_nn_ica(X, t_tol=1e-2, i_max=1e5, print_all=5000)
```
This does converge when t_tol is 1e-2, but it does not when it's 1e-3. I tried using up to half a million iterations.
```
plot_ica_reconstruction([S, X, Y], labels=['S', 'X', 'Y'])
```
Let's check how many elements in Y are still negative:
```
np.sum(Y<0)
```
# More Challenging 3D Sources
Let's generate some higher dimensional data to check the performance, with some more interesting densities.
```
np.random.seed(2)
sources = []
sources.append(np.concatenate( (np.random.beta(1, 1, 1000), np.random.beta(3, 3, 1000)) ))
sources.append(np.concatenate((np.random.beta(1, 3, 1000), np.random.normal(10, 1, 1000))))
sources.append(np.concatenate((np.random.beta(1, 4, 1000), 2+np.random.exponential(1, 1000))))
n = len(sources)
S = np.concatenate([s[None, :] for s in sources], axis=0)
S = np.diag(1/np.std(S, axis=1)) @ S
S.shape
A = np.random.normal(0, 10, (n, n))
X = A @ S
S.shape, A.shape, X.shape
Z = whiten(X)
plot_ica_reconstruction([S, X, Z], labels=['S', 'X', 'Z'])
Y, W, Z, _ = run_nn_ica(X, t_tol=1e-2, i_max=5e5, print_all=5000)
```
Let's check to see how well we are doing.
```
plot_ica_reconstruction([S, X, Y], labels=['S', 'X', 'Y'])
```
# More Sources - High Dimensional Data
Let's use some more sources to see how the algorithm can deal with higher dimensional data
```
def normal_pos(n_samples=1000, *args):
a = np.random.normal(*args, n_samples*3)
a = a[a>0]
assert(a.shape[0] >= n_samples)
return np.random.choice(a, size=n_samples, replace=False)
# set the seed
np.random.seed(2)
# create a list of source distributions
sources = []
sources.append(normal_pos(1000, 0, 1))
sources.append(normal_pos(1000, 10, 5))
sources.append(np.random.beta(1, 1, 1000))
sources.append(np.random.beta(1, 10, 1000))
sources.append(np.random.chisquare(2, 1000))
n = len(sources)
# concatenate and make unit variance
S = np.concatenate([s[None, :] for s in sources], axis=0)
S = np.diag(1/np.std(S, axis=1)) @ S
# create random matrix A and use it to mix the sources
A = np.random.normal(0, 10, (n, n))
X = A @ S
# whiten the data, and plot sources, mixed data and whitenend mixed data
Z = whiten(X)
plot_ica_reconstruction([S, X, Z], labels=['S', 'X', 'Z'])
```
Run NN-ICA
```
Y, _, _ , _ = run_nn_ica(X, t_tol=1e-1, i_max=5e5, print_all=5000)
```
This does indeed take quite a while. I think that for such low-dimensional data, there must be more efficient ways to handle this. The algorithm does not converge in half a million iterations for tolerance 1e-2.
Plot the final reconstruction.
```
plot_ica_reconstruction([S, X, Y], labels=['S', 'X', 'Y'])
```
How many negative elements are still present in the reconstruction?
```
np.sum(Y<0)
```
How many is that in relation to the total elements in Y?
```
100*(21 / (Y.shape[0]*Y.shape[1]))
```
So that's less than 0.5 percent. In comparison, for X:
```
np.sum(X<0)
```
So that works fairly well indeed.
| github_jupyter |
```
import numpy as np
```
Here is the definition of broadcasting that can be found on the NumPy website:
> The term `broadcasting` describes how NumPy treats arrays of different shapes during arithmetic operations. Subject to certain constraints, the smaller array is "broadcast" across the larger array so that they have compatible shapes. Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python. It does this without making needless copies of data, and usually leads to efficient algorithm implementations. There are, however, cases where broadcasting is a bad idea because it leads to inefficient use of memory that slows computation.
Consider the following operation:
```
a = np.array([0, 1, 2])
b = np.array([5, 5, 5])
a + b # we can easily guess the answer
```
But what about this:
```
matrice = np.random.randint(20, size=(3, 3))
matrice
a
matrice + a # here the two arrays do not have the same shape
```
Here the broadcasting phenomenon occurred: the shape of the smaller array was adapted to match that of the larger one. It is equivalent to this:
```
np.vstack([a, a, a])
matrice + np.vstack([a, a, a])
```
There are 3 rules to know for broadcasting
> Rule 1: if 2 arrays have different ndim, the shape of the one with the lower ndim is padded with 1s on the left until it has the same ndim as the other
```
m = np.random.randint(10, size=(2, 3))
m
a
m # (2, 3) --> (2, 3)
a # (3) ---> (1, 3)
```
> Rule 2: if the shapes of the two arrays do not match in some dimension, the array with size 1 in that dimension is stretched to match the number of elements the other array has in that dimension.
```
m # (2, 3) --> (2, 3) --> (2, 3)
a # (3,)   --> (1, 3) --> (2, 3), stretched as the vstack below shows
np.vstack([a, a])
m + a
```
Another example:
```
m = np.arange(3).reshape((3, 1))
m
h = np.random.randint(3, size=(3,))
h
# starting shapes:
#   m.shape == (3, 1)
#   h.shape == (3,)
# Rule 1: h's shape is padded with a 1 on the left
#   m.shape -> (3, 1)
#   h.shape -> (1, 3)
np.vstack([h.reshape((1, 3))] * 3)
# Rule 2: m is stretched along its size-1 axis
#   m.shape -> (3, 3)
np.hstack([m, m, m])
# Rule 2: h is stretched along its size-1 axis
#   h.shape -> (3, 3)
np.vstack([h.reshape((1, 3))] * 3)
m + h
```
One more example:
```
m = np.ones((3, 2))
m
j = np.arange(3)
j
# starting shapes:
#   m.shape == (3, 2)
#   j.shape == (3,)
# Rule 1: j's shape is padded with a 1 on the left
#   m.shape -> (3, 2)
#   j.shape -> (1, 3)
np.vstack([j.reshape((1, 3))] * 3)
# Rule 2 would stretch j's first axis to 3, giving (3, 3) -- but m stays (3, 2)
m + j  # raises ValueError: the shapes cannot be made compatible
```
> Rule 3: if, after rules 1 and 2, the shapes are still not the same, the computation is impossible (NumPy raises a `ValueError`).
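A runnable sketch of rule 3 in action, together with the usual fix: turning the 1-D array into a column vector so that rule 2 can apply.

```python
import numpy as np

m = np.ones((3, 2))
j = np.arange(3)

# Rule 1 pads j's shape to (1, 3); the trailing sizes 2 and 3 differ and
# neither is 1, so rule 3 applies and the operation fails.
try:
    m + j
except ValueError as exc:
    print("incompatible:", exc)

# Reshaping j into a column vector (3, 1) broadcasts cleanly against (3, 2).
result = m + j.reshape(3, 1)
print(result)
```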
# Why broadcasting is useful
```
x = np.random.randint(255, size=(6, 3))
x
xmean = x.mean(axis=0)
xmean
x.shape
xmean.shape
xstd = x.std(axis=0)
xstd
x_centre_et_reduit = (x - xmean) / xstd
x_centre_et_reduit
```
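Broadcasting is exactly what makes the centring step work: `xmean`, with shape `(3,)`, is stretched across all six rows. A quick sanity check on fixed toy data (mirroring the random integers above, but deterministic):

```python
import numpy as np

# Six rows of three "pixel-like" values, as in the randint example above.
x = np.array([[120., 30., 5.],
              [200., 60., 9.],
              [ 80., 90., 2.],
              [150., 10., 7.],
              [ 90., 45., 3.],
              [ 60., 75., 4.]])

z = (x - x.mean(axis=0)) / x.std(axis=0)

# After centring and scaling, every column has mean ~0 and std ~1.
print(np.allclose(z.mean(axis=0), 0.0))  # True
print(np.allclose(z.std(axis=0), 1.0))   # True
```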
# More: Fancy Indexing
```
x = np.random.randint(100, size=(5))
x
x[:2], x[2:4]
x[1:4:2]
x[[0,1,-1] ] # this is fancy indexing: we index with a list of the indices of the elements we want
data = np.random.randint(100, size=(6, 3))
data
data[:, [0, 2], ]
data[:, [2,0]]
data[[2,1, 4] , :]
```
| github_jupyter |
### 1. Write a function that counts how many concentric layers a rug has.
Examples
* count_layers([ "AAAA", "ABBA", "AAAA" ]) ➞ 2
* count_layers([ "AAAAAAAAA", "ABBBBBBBA", "ABBAAABBA", "ABBBBBBBA", "AAAAAAAAA" ]) ➞ 3
* count_layers([ "AAAAAAAAAAA", "AABBBBBBBAA", "AABCCCCCBAA", "AABCAAACBAA", "AABCADACBAA", "AABCAAACBAA", "AABCCCCCBAA", "AABBBBBBBAA", "AAAAAAAAAAA" ]) ➞ 5
```
def count_layers(arr):
return len(set(arr))
print(count_layers([ "AAAA", "ABBA", "AAAA" ]))
print(count_layers([ "AAAAAAAAA", "ABBBBBBBA", "ABBAAABBA", "ABBBBBBBA", "AAAAAAAAA" ]) )
print(count_layers([ "AAAAAAAAAAA", "AABBBBBBBAA", "AABCCCCCBAA", "AABCAAACBAA", "AABCADACBAA", "AABCAAACBAA", "AABCCCCCBAA", "AABBBBBBBAA", "AAAAAAAAAAA" ]) )
```
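The `set` one-liner works because, in a well-formed rug, every concentric ring contributes exactly one distinct row string. An alternative geometric reading (a sketch, assuming the rug is rectangular and well-formed): the ring count depends only on the shorter side.

```python
def count_layers_geometric(rug):
    # A well-formed rug of r rows and c columns has (min(r, c) + 1) // 2 rings.
    return (min(len(rug), len(rug[0])) + 1) // 2

print(count_layers_geometric(["AAAA", "ABBA", "AAAA"]))  # 2
```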
### 2. There are many different styles of music and many albums exhibit multiple styles. Create a function that takes a list of musical styles from albums and returns how many styles are unique.
Examples
* unique_styles([ "Dub,Dancehall", "Industrial,Heavy Metal", "Techno,Dubstep", "Synth-pop,Euro-Disco", "Industrial,Techno,Minimal" ]) ➞ 9
* unique_styles([ "Soul", "House,Folk", "Trance,Downtempo,Big Beat,House", "Deep House", "Soul" ]) ➞ 7
```
def unique_styles(styles: list) -> int:
    x = set()
    for s in styles:
        for k in s.split(','):
            x.add(k)
    return len(x)  # x is already a set, so its length is the number of unique styles
print(unique_styles([ "Dub,Dancehall", "Industrial,Heavy Metal", "Techno,Dubstep", "Synth-pop,Euro-Disco", "Industrial,Techno,Minimal" ]))
print(unique_styles([ "Soul", "House,Folk", "Trance,Downtempo,Big Beat,House", "Deep House", "Soul" ]))
```
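The same count can also be written as a single set comprehension:

```python
def unique_styles_short(albums):
    # Split every album string on commas and collect all styles into one set.
    return len({style for album in albums for style in album.split(',')})

print(unique_styles_short(["Soul", "House,Folk", "Trance,Downtempo,Big Beat,House", "Deep House", "Soul"]))  # 7
```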
### 3. Create a function that finds a target number in a list of prime numbers. Implement a binary search algorithm in your function. The target number will be from 2 through 97. If the target is prime then return "yes" else return "no".
Examples
* primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]
is_prime(primes, 3) ➞ "yes"
is_prime(primes, 4) ➞ "no"
is_prime(primes, 67) ➞ "yes"
is_prime(primes, 36) ➞ "no"
```
def bin_search(l, low, high, key):
    if high >= low:
        mid = (low + high) // 2
        if key == l[mid]:
            return mid
        if l[mid] > key:
            return bin_search(l, low, mid - 1, key)
        else:
            return bin_search(l, mid + 1, high, key)
    else:
        return -1

def is_prime(primes, n):
    if 2 <= n <= 97:  # the target runs from 2 through 97, inclusive
        result = bin_search(sorted(primes), 0, len(primes) - 1, n)
        if result >= 0:  # index 0 is a valid hit (the prime 2)
            return 'yes'
        else:
            return 'no'
    return 'Element is not in the range 2 through 97'
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]
print(is_prime(primes, 3))
print(is_prime(primes, 4))
print(is_prime(primes, 67))
print(is_prime(primes, 36))
```
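For comparison, the standard library's `bisect` module gives the same binary search without the recursion (a sketch):

```python
import bisect

def is_prime_bisect(primes, n):
    # bisect_left returns the insertion point; n is present iff the element
    # already sitting at that position equals n.
    i = bisect.bisect_left(primes, n)
    return "yes" if i < len(primes) and primes[i] == n else "no"

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61,
          67, 71, 73, 79, 83, 89, 97]
print(is_prime_bisect(primes, 3))   # yes
print(is_prime_bisect(primes, 36))  # no
```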
### 4. Create a function that takes in n, a, b and returns the number of positive values raised to the nth power that lie in the range [a, b], inclusive.
Examples
* power_ranger(2, 49, 65) ➞ 2
\# 2 squares (n^2) lie between 49 and 65, 49 (7^2) and 64 (8^2)
* power_ranger(3, 1, 27) ➞ 3
\# 3 cubes (n^3) lie between 1 and 27, 1 (1^3), 8 (2^3) and 27 (3^3)
* power_ranger(10, 1, 5) ➞ 1
\# 1 value raised to the 10th power lies between 1 and 5, 1 (1^10)
* power_ranger(5, 31, 33) ➞ 1
* power_ranger(4, 250, 1300) ➞ 3
```
def power_ranger(n, a, b):
    # int(b ** (1 / n)) bounds the largest candidate base; the +1 guards
    # against floating-point rounding of the root.
    upper = int(b ** (1 / n)) + 1
    return len([x for x in range(1, upper + 1) if a <= x ** n <= b])
print(power_ranger(2, 49, 65))
print(power_ranger(3, 1, 27))
print(power_ranger(10, 1, 5))
print(power_ranger(5, 31, 33))
print(power_ranger(4, 250, 1300))
```
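The loop can also be replaced by arithmetic on integer nth roots: the answer is `hi - lo + 1`, where `lo` and `hi` are the smallest and largest bases whose nth power lands inside `[a, b]`. A sketch; the nudge loops guard against floating-point error in the root computation:

```python
def power_ranger_roots(n, a, b):
    # Smallest base whose nth power reaches a.
    lo = max(1, round(a ** (1 / n)))
    while lo ** n < a:                  # nudge up if the float root undershot
        lo += 1
    while lo > 1 and (lo - 1) ** n >= a:
        lo -= 1
    # Largest base whose nth power stays within b.
    hi = round(b ** (1 / n))
    while hi ** n > b:                  # nudge down if the float root overshot
        hi -= 1
    while (hi + 1) ** n <= b:
        hi += 1
    return max(0, hi - lo + 1)

print(power_ranger_roots(2, 49, 65))     # 2
print(power_ranger_roots(4, 250, 1300))  # 3
```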
### 5. Given a number, return the difference between the maximum and minimum numbers that can be formed when the digits are rearranged.
Examples:
* rearranged_difference(972882) ➞ 760833 # 988722 - 227889 = 760833
* rearranged_difference(3320707) ➞ 7709823 # 7733200 - 23377 = 7709823
* rearranged_difference(90010) ➞ 90981 # 91000 - 19 = 90981
```
def rearranged_difference(number):
    digits = sorted(str(number))
    smallest = int("".join(digits))
    largest = int("".join(reversed(digits)))
    # The task asks us to return the difference, not just print it.
    return largest - smallest

print(rearranged_difference(972882))   # 760833
print(rearranged_difference(3320707))  # 7709823
print(rearranged_difference(90010))    # 90981
```
| github_jupyter |
##### Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Get Started with TensorFlow
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/alpha/tutorials/quickstart/beginner"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/quickstart/beginner.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/quickstart/beginner.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
This is a [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) notebook file. Python programs are run directly in the browser—a great way to learn and use TensorFlow. To run the Colab notebook:
1. Connect to a Python runtime: At the top-right of the menu bar, select *CONNECT*.
2. Run all the notebook code cells: Select *Runtime* > *Run all*.
For more examples and guides, see the [TensorFlow tutorials](https://www.tensorflow.org/alpha/tutorials/).
To get started, import the TensorFlow library into your program:
```
from __future__ import absolute_import, division, print_function
!pip install tf-nightly-2.0-preview
import tensorflow as tf
```
Load and prepare the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). Convert the samples from integers to floating-point numbers:
```
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
```
Build the `tf.keras.Sequential` model by stacking layers. Choose an optimizer and loss function used for training:
```
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
```
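Before training, it may help to see what `sparse_categorical_crossentropy` actually computes. Below is a simplified NumPy sketch (an illustration, not TensorFlow's implementation), assuming the model's outputs are already softmax probabilities and the labels are integer class indices:

```python
import numpy as np

def sparse_xent(probs, labels):
    # Negative log-probability assigned to the true class, averaged over the batch.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

probs = np.array([[0.7, 0.2, 0.1],   # sample 0: true class 0
                  [0.1, 0.8, 0.1]])  # sample 1: true class 1
labels = np.array([0, 1])
print(sparse_xent(probs, labels))  # ~0.29
```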
Train and evaluate model:
```
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```
The image classifier is now trained to ~98% accuracy on this dataset. To learn more, read the [TensorFlow tutorials](https://www.tensorflow.org/alpha/tutorials/).
| github_jupyter |
```
# ignore this
%load_ext music21.ipython21
```
# User's Guide, Chapter 1: Installing and Getting Started with `music21`
If you're going to use `music21`, you'll need to have a copy of it and Python on your computer. The instructions are slightly different for each type of computer, so follow the appropriate link below and then come back here. For many people, installation is the most difficult step.
Mac users:
:ref:`installMac`
Windows user (also installs Python):
:ref:`installWindows`
Unix/Linux users or Mac users with unusual needs:
:ref:`installLinux`
When a new version of `music21` is released, you can upgrade by using the same method that you originally used to install it.
If you want to see if it's worth installing, keep :ref:`reading the guide <usersGuide_02_notes>` without installing, but if you have installed already, you'll get more out of the guide since you'll be able to follow along.
## Starting `music21`
We'll just see if music21 worked for you. Open up the Terminal (Mac) or IDLE (Windows). On the Mac type "python"
(without the quotes) and hit enter.
To load music21 type this:
```
from music21 import *
```
You'll probably get a few warnings that you're missing some optional modules. That's okay.
If you get a warning that "no module named music21" then something probably went wrong above.
Try going step-by-step through the instructions above one more time, making sure not to skip
anything. 99% of installation errors come from skipping a step above. If you still have a
problem, Google for "installation problem music21" or "installation problem mac python module"
and see if anything looks familiar. If all else fails, contact the music21list Google Group
which might be able to help.
If you didn't have a problem, which is nearly always the case, then `music21` has worked for you.
Test that you can get a score from the corpus by typing this command:
```
s = corpus.parse('bach/bwv65.2.xml')
```
Now `s` represents an entire score of a chorale by J.S. Bach. Type "`s.analyze('key')`" to see
what music21's best guess as to its key is:
```
s.analyze('key')
```
Now let's see if you can see scores with `music21`. If this doesn't work, you can skip ahead to :ref:`Chapter 8: Installing a MusicXML reader <usersGuide_08_installingMusicXML>` or just work through the tutorial until you get there without seeing scores. Type "`s.show()`". Assuming your installation and configuration went as expected, your MusicXML reader should launch and display the chorale, looking something like what we see here:
```
s.show()
```
On your computer you might see something like this.
Again, if you don't have `MusicXML` working for you yet, don't panic, we'll give more explicit instructions
in a few chapters. For now, let's proceed to :ref:`Chapter 2: Notes <usersGuide_02_notes>`.
| github_jupyter |
# Import the machine learning libraries and packages
```
import findspark
findspark.init()
import pyspark
# import modules
from pyspark.sql.types import *
from pyspark.sql.functions import *
from pyspark.ml.classification import NaiveBayes, NaiveBayesModel
from pyspark.ml.feature import VectorAssembler
from pyspark.sql import SparkSession
from pyspark.sql import SQLContext
# create Spark session
appName = "Classification in Apache Spark"
spark = SparkSession \
.builder \
.appName(appName) \
.config("spark.some.config.option", "some-value") \
.getOrCreate()
```
# Read the dataset file into Spark
```
# define our schema
flightSchema = StructType([
StructField("DayofMonth", IntegerType(), False),
StructField("DayOfWeek", IntegerType(), False),
StructField("Carrier", StringType(), False),
StructField("OriginAirportID", IntegerType(), False),
StructField("DestAirportID", IntegerType(), False),
StructField("DepDelay", IntegerType(), False),
StructField("ArrDelay", IntegerType(), False),
])
# read csv data with our defined schema
csv = spark.read.csv('flights.csv', schema=flightSchema, header=True)
csv.show(3)
```
# Handle missing data
```
# Drop a row if at least one of the listed columns is empty
csv2 = csv.dropna(how="any", subset=["DayofMonth","DayOfWeek","Carrier","OriginAirportID",
"DestAirportID","ArrDelay", "DepDelay"])
```
# Select the data features and convert the ArrDelay column to binary
```
data = csv2.select(
"DayofMonth", "DayOfWeek", "OriginAirportID", "DestAirportID", ((col("ArrDelay") > 15).cast("Int").alias("Late")))
data.show(3)
```
# Split into training and testing data
```
# divide data, 70% for training, 30% for testing
dividedData = data.randomSplit([0.7, 0.3])
trainingData = dividedData[0] #index 0 = data training
testingData = dividedData[1] #index 1 = data testing
train_rows = trainingData.count()
test_rows = testingData.count()
print("Training data rows:", train_rows, "; Testing data rows:", test_rows)
```
# Prepare the training data
```
# define an assembler
assembler = VectorAssembler(inputCols = [
"DayofMonth", "DayOfWeek", "OriginAirportID", "DestAirportID"], outputCol="features")
trainingDataFinal = assembler.transform(
trainingData).select(col("features"), col("Late").alias("label"))
trainingDataFinal.show(truncate=False, n=2)
```
# Train the model with the training data
```
# define our classifier
classifier = NaiveBayes(labelCol="label", featuresCol="features", smoothing=1.0, modelType="multinomial")
# train our classifier
model = classifier.fit(trainingDataFinal)
print("Model trained successfully!")
```
# Prepare the testing data
```
testingDataFinal = assembler.transform(
testingData).select(col("features"), col("Late").alias("trueLabel"))
testingDataFinal.show(3)
```
# Predict the testing data using the trained model
```
prediction = model.transform(testingDataFinal)
predictionFinal = prediction.select(
"features", "prediction", "probability", "trueLabel")
predictionFinal.show(truncate=False, n=3)
prediction.show(truncate=False, n=3)
```
# Compute the model's performance (accuracy)
```
correctPrediction = predictionFinal.filter(
predictionFinal['prediction'] == predictionFinal['trueLabel']).count()
totalData = predictionFinal.count()
print("correct prediction:", correctPrediction, ", total data:", totalData,
", accuracy:", correctPrediction/totalData)
```
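The accuracy computed above simply counts matching prediction/label pairs on the driver. The same bookkeeping in plain Python, independent of Spark (a sketch):

```python
def accuracy(predictions, true_labels):
    # Fraction of predictions that match the ground truth.
    correct = sum(p == t for p, t in zip(predictions, true_labels))
    return correct / len(true_labels)

print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```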
| github_jupyter |
Let's look at how we can combine the old competition data with the current one. To that end, we can do two things:
- Run binary classification of old vs. new data. In the ideal case, samples coming from the new and old data shouldn't be separable.
- Since we have a label (0/1) for each sample of the old data, we can take models trained only on the new data and see how coherent their predictions are on the old data.
```
import sys
sys.path.append('../')
import warnings
from torch.utils.data import DataLoader, Dataset
from src.pl_module import MelanomaModel
import pandas as pd
import torch
import torch.nn as nn
from typing import Tuple
import albumentations as A
from tqdm.auto import tqdm
import skimage.io
import numpy as np
import matplotlib.pyplot as plt
warnings.filterwarnings("ignore")
```
Load trained models and make dataset
```
def load_model(model_name: str, model_type: str, weights: str):
print('Loading {}'.format(model_name))
model = MelanomaModel.net_mapping(model_name, model_type)
model.load_state_dict(
torch.load(weights)
)
model.eval()
model.cuda()
print("Loaded model {} from checkpoint {}".format(model_name, weights))
return model
class MelanomaDataset(Dataset):
def __init__(self, image_folder, df, transform=None):
super().__init__()
self.image_folder = image_folder
self.df = df
self.transform = transform
def __len__(self) -> int:
return self.df.shape[0]
def __getitem__(self, index) -> Tuple[torch.Tensor, torch.Tensor]:
row = self.df.iloc[index]
img_id = row.image_name
img_path = f"{self.image_folder}/{img_id}.jpg"
image = skimage.io.imread(img_path)
if self.transform is not None:
image = self.transform(image=image)['image']
image = image.transpose(2, 0, 1)
image = torch.from_numpy(image)
target = row.target
return{'features': image, 'img_id': img_id, 'target': target}
def get_valid_transforms():
return A.Compose(
[
A.Normalize()
],
p=1.0)
old_data = pd.read_csv('../data/external_train.csv')
old_data.head()
model_name_list = [
'resnest50d',
'resnest269e',
'resnest101e',
#'seresnext101_32x4d',
'tf_efficientnet_b3_ns',
'tf_efficientnet_b7_ns',
'tf_efficientnet_b5_ns']
model_type_list = ['SingleHeadMax'] * len(model_name_list)
weights_list = [
'../weights/train_384_balancedW_resnest50d_fold0_heavyaugs_averaged_best_weights.pth',
'../weights/07.09_train_384_balancedW_resnest269e_heavyaugs_averaged_best_weights.pth',
'../weights/03.09_train_384_balancedW_resnest101e_fold0_heavyaugs_averaged_best_weights.pth',
#'../weights/06.18_train_384_balancedW_seresnext101_32x4d_fold0_heavyaugs_averaged_best_weights.pth',
'../weights/06.10_train_384_balancedW_b3_fold0_heavyaugs_averaged_best_weights.pth',
'../weights/05.23_train_384_balancedW_b7_fold0_heavyaugs_averaged_best_weights.pth',
'../weights/03.18_train_384_balancedW_b5_fold0_heavyaugs_averaged_best_weights.pth'
]
models = [load_model(model_name, model_type, weights) for model_name, model_type, weights in
zip(model_name_list, model_type_list, weights_list)]
dataset = MelanomaDataset('../data/jpeg-isic2019-384x384/train/', old_data, get_valid_transforms())
dataloader = DataLoader(dataset, batch_size=16, shuffle=False, num_workers=4)
targets_list = []
cv_cls_1_list = []
mean_cls_1_list = []
std_cls_1_list = []
for batch in tqdm(dataloader, total=len(dataloader)):
with torch.no_grad():
preds = [nn.Sigmoid()(model(batch['features'].cuda())) for model in models]
preds = torch.stack(preds)
mean_cls_1 = preds[..., 0].cpu().numpy().mean(axis=0)
std_cls_1 = preds[..., 0].cpu().numpy().std(axis=0)
cv_cls_1 = std_cls_1 / mean_cls_1
targets_list.extend(batch['target'].cpu().numpy())
mean_cls_1_list.extend(mean_cls_1)
cv_cls_1_list.extend(cv_cls_1)
std_cls_1_list.extend(std_cls_1)
targets_list = np.array(targets_list)
cv_cls_1_list = np.array(cv_cls_1_list)
mean_cls_1_list = np.array(mean_cls_1_list)
std_cls_1_list = np.array(std_cls_1_list)
thr_pred = 0.5
predicted_cls = (mean_cls_1_list >= thr_pred).astype(int)
f, ax = plt.subplots(2, 3, figsize=(15, 10))
ax[0, 0].hist(cv_cls_1_list[targets_list==1])
ax[0, 0].set_title('CV predictions of samples with gt == 1')
ax[0, 1].hist(mean_cls_1_list[targets_list==1])
ax[0, 1].set_title('mean predictions of samples with gt == 1')
ax[0, 2].hist(std_cls_1_list[targets_list==1])
ax[0, 2].set_title('SD predictions of samples with gt == 1')
ax[1, 0].hist(cv_cls_1_list[targets_list==0])
ax[1, 0].set_title('CV predictions of samples with gt == 0')
ax[1, 1].hist(mean_cls_1_list[targets_list==0])
ax[1, 1].set_title('mean predictions of samples with gt == 0');
ax[1, 2].hist(std_cls_1_list[targets_list==0])
ax[1, 2].set_title('SD predictions of samples with gt == 0');
#plt.hist(cv_cls_1_list);
```
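The per-sample statistics plotted above (ensemble mean, standard deviation, and their ratio, the coefficient of variation) can be sketched on toy numbers; a low CV means the models agree:

```python
import numpy as np

# Toy sigmoid outputs from 3 models for 4 samples (models x batch).
preds = np.array([[0.90, 0.10, 0.55, 0.02],
                  [0.85, 0.15, 0.30, 0.03],
                  [0.95, 0.05, 0.80, 0.02]])

mean_cls_1 = preds.mean(axis=0)
std_cls_1 = preds.std(axis=0)
cv_cls_1 = std_cls_1 / mean_cls_1   # low CV = the ensemble agrees

print(np.round(cv_cls_1, 2))
```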
We can keep only the confident predictions, i.e. low ensemble spread (std <= 0.15 below) combined with predicted class == ground-truth class.
```
old_data.loc[:, 'predicted_target'] = predicted_cls
old_data.loc[:, 'prediction_mean'] = mean_cls_1_list
old_data.loc[:, 'CV'] = cv_cls_1_list
old_data.loc[:, 'std'] = std_cls_1_list
old_data.head()
selection_mask = (std_cls_1_list <= 0.15) & (predicted_cls == targets_list)
old_data_cleaned = old_data.loc[selection_mask, :]
old_data_cleaned.head()
old_data_cleaned.target.value_counts()
old_data.target.value_counts()
#now replace target with soft labels
old_data_cleaned['old_target'] = old_data_cleaned['target']
old_data_cleaned['target'] = old_data_cleaned['prediction_mean']
old_data_cleaned.head()
old_data_cleaned['target'].hist()
old_data_cleaned.to_csv('../data/external_train_cleaned.csv', index=False)
```
| github_jupyter |
# <center>Data collection, pipelines, and visualization </center>
### <center>An applied approach using python data science ecosystem</center>
<br>
<br>
<br>
<br>
<br>
<br>
<center> <strong> Dr. Abdalrahman Eweiwi</strong></center>
# About me!
<br>
<br>
<b>
<ul>
<li> Abdalrahman Eweiwi (Abdal), exported to Germany from Palestine in 2008 for an MSc at RWTH Aachen
<li> I finished my PhD in computer vision in 2014 at the University of Bonn
<li> Have been working in industry ever since, in the Cologne region
<li> Currently a senior data scientist at Siegwerk AG.
# Overview of this talk
<br>
<br>
<b>
<ul>
<li>Motivation</li>
<ul>
<li> From research to industry, why data pipelines are important for data scientists?</li>
</ul>
<li> Data science process: general overview</li>
<li>Python ecosystem packages to the rescue</li>
<ul>
<li>Scrapy</li>
<li>Luigi</li>
<li>Dash</li>
</ul>
<li> Business case: The car buyer’s dilemma (Demo) </li>
# Overview of this talk
<br>
<br>
<b>
<ul>
<font color="blue">
<li>Motivation</li>
<ul>
<li> From research to industry, why data pipelines are important for data scientists? </li>
</ul>
</font>
<li> Data science process: general overview</li>
<li>Python ecosystem packages to the rescue</li>
<ul>
<li>Scrapy</li>
<li>Luigi</li>
<li>Dash</li>
</ul>
<li> Business case: The car buyer’s dilemma (Demo) </li>
# From research to industry, why data pipelines are important?
<br>
<br>
<b>
Research environments make critical assumptions that are usually not valid in the real world:
<ul>
<li> Exploration approach with lots of prototypes </li>
<li> Data sources exist and/or are easily accessible </li>
<li> Continuous monitoring of model performance is not necessarily important </li>
<li> Optimizing ML metrics implies optimizing business metrics </li>
# From research to industry, why data pipelines are important?

# From research to industry, why data pipelines are important?
<br>
<br>
<b>
In industry
<ul>
<li> Less exploration and prototypes, more focus on production ready systems </li>
<li> Data sources usually do not exist or are not easily accessible </li>
<li> Continuous monitoring of model and business KPIs is a must</li>
<li> Code quality and standards should be maintained</li>
# From research to industry: a cross functional challenge

# Overview of this talk
<br>
<br>
<b>
<ul>
<li>Motivation</li>
<ul>
<li> From research to industry, why data pipelines are important for data scientists? </li>
</ul>
<li><font color="blue"> Data science process: general overview</font></li>
<li>Python ecosystem packages to the rescue</li>
<ul>
<li>Scrapy</li>
<li>Luigi</li>
<li>Dash</li>
</ul>
<li> Business case: The car buyer’s dilemma (Demo) </li>
# Data science process: general overview

# Data science process: Tools to the rescue

# Overview of this talk
<br>
<br>
<b>
<ul>
<li>Motivation</li>
<ul>
<li> From research to industry, why data pipelines are important for data scientists? </li>
</ul>
<li> Data science process: general overview</li>
<li><font color="blue">Python ecosystem packages to the rescue</li>
<ul>
<li>Scrapy</li>
<li>Luigi</li>
<li>Dash</li>
</ul>
</font>
<li> Business case: The car buyer’s dilemma (Demo) </li>

## Scrapy in a nutshell
1. <strong> Open source with a healthy community</strong>


2. <strong> Fast, scalable, and easily extensible framework for complex scraping projects</strong>
3. <strong> Scrapy project skeleton is automatically generated upon calling:</strong>
<ul>
<li>
<code> scrapy startproject [project name] </code>
</li>
# Scrapy project structure

### Each file created serves a specific functionality:
1. scrapy.cfg: deploy configuration file (e.g. the IP of the server where the spider will be deployed)
2. scrapy_shobiddak/: main project module
3. scrapy_shobiddak/items.py: defines the data items (fields) you want to collect
4. scrapy_shobiddak/pipelines.py: defines pipelines that post-process and store the scraped items
5. scrapy_shobiddak/settings.py: contains the project settings
6. scrapy_shobiddak/spiders/: directory that contains your crawler modules
## items.py:
Define the data entries you want to collect for a particular object. For example, represent the viewed car below using the following features:
1. <strong> Car model </strong>
2. <strong>Car price</strong>
3. <strong>Car type</strong>

<b>Items represent the basic structure of elements that we are collecting </b>
``` python
from scrapy import Item, Field

class ShobiddakCar(Item):
car_model = Field()
car_year = Field()
car_color = Field()
price = Field()
```

## Luigi in a nutshell
1. <strong> Open source with a healthy community</strong>


2. <strong>Simple breakdown of tasks based on their requirements (dependencies), execution, and output</strong>
3. <strong>Supports interaction with many different systems, including cloud providers, docker_runner, Hadoop, Kubernetes, pyspark_runner, Salesforce, etc.</strong>
4. <strong>Minimal code overhead</strong>
5. <strong>Provides pipeline monitoring tools via the task visualizer</strong>
6. <strong>Supports error-notification tasks through e.g. email or Slack</strong>
## Luigi code structure

## Luigi provides tools to visually monitor the task status

## General overview of the running tasks status


## Dash in a nutshell
1. <strong> It's free </strong>
2. <strong> Pure python, no javascript or html required </strong>
3. <strong> Uses flask server as backend, react.js for frontend </strong>
4. <strong> Clear structure, a very good candidate for internal data products</strong>
``` python
import dash
import dash_core_components as dcc
import dash_html_components as html
import plotly.graph_objs as go
```
# Hello world example!
``` python
app = dash.Dash(__name__)
app.layout = html.Div([html.H1("hello world!")])

if __name__ == '__main__':
    app.run_server(debug=True)
```
## Add interactivity by invoking dash_core_component widget
``` python
from dash.dependencies import Input, Output
app.layout = html.Div([dcc.Input(id='txt_input', type='text', value='Bonn'),
html.Div(id='output')
])
@app.callback(Output('output', 'children'),
[Input('txt_input', 'value')])
def update_output(my_txtinput):
return u'Text input is "{}"'.format(my_txtinput)
if __name__ == '__main__':
app.run_server(debug=True)
```
# Overview of this talk
<br>
<br>
<b>
<ul>
<li>Motivation</li>
<ul>
<li> From research to industry, why data pipelines are important for data scientists? </li>
</ul>
<li> Data science process: general overview</li>
<li>Python ecosystem packages to the rescue</li>
<ul>
<li>Scrapy</li>
<li>Luigi</li>
<li>Dash</li>
</ul>
<li> <font color="blue">Business case: The car buyer’s dilemma (Demo) </font> </li>
</ul>
</b>
# Business case: The car buyer’s dilemma (Demo)

<br>
<br>
<br>
<br>
# <center> Demo </center>
<br>
<br>
<br>
<br>
# <center> Thanks for listening!</center>
```
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.loader import ItemLoader
from scrapy.loader.processors import MapCompose

class shobiddak(CrawlSpider):
name = "scrapy_shobiddak"
    # defines the domains the crawler is allowed to crawl (domain names, not URLs)
    allowed_domains = ["shobiddak.com"]
# a list of url to start scraping from
start_urls = ['https://shobiddak.com/cars']
# rules if exists to generate subsequent requests
rules = [
# rule to click on the next page, hence, no callback required because I am not interested
# in the general description but rather in the details
Rule(LinkExtractor(allow=(), restrict_xpaths='//a[@class="next_page"]'),
follow=True),
# rule to generate the requests of the items in the next page to get their further details
Rule(LinkExtractor(allow=(), restrict_xpaths='//p[@class="section_title"]'),
callback='parse_sub', follow=True)
]
    def __init__(self):
        super().__init__()
        # store on self so parse_sub can access the field-to-XPath mapping
        self.item_fields = {'car_model': "//h1[@class='section_title']/text()",
                            'car_year': "//table/tbody/tr/td/h3[@class='section_title']/text()",
                            'price': "//div[@class='post-price']/text()",
                            }
def parse_sub(self, response):
# assemble contents based on the provided item_fields variable
loader = ItemLoader(ShobiddakCar(), response=response)
loader.default_input_processor = MapCompose(str.strip)
loader.add_value('url', response.url)
for field, xpath in self.item_fields.items():
loader.add_xpath(field, xpath)
yield loader.load_item()
```
| github_jupyter |
## AI for Medicine Course 1 Week 1 lecture exercises
# Patient Overlap and Data Leakage
Patient overlap in medical data is a part of a more general problem in machine learning called **data leakage**. To identify patient overlap in this week's graded assignment, you'll check to see if a patient's ID appears in both the training set and the test set. You should also verify that you don't have patient overlap in the training and validation sets, which is what you'll do here.
Below is a simple example showing how you can check for and remove patient overlap in your training and validations sets.
```
# Import necessary packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import os
import seaborn as sns
sns.set()
```
### Read in the data from a csv file
First, you'll read in your training and validation datasets from csv files. Run the next two cells to read these csvs into `pandas` dataframes.
```
# Read csv file containing training data
train_df = pd.read_csv("nih/train-small.csv")
# Print first 5 rows
print(f'There are {train_df.shape[0]} rows and {train_df.shape[1]} columns in the training dataframe')
train_df.head()
# Read csv file containing validation data
valid_df = pd.read_csv("nih/valid-small.csv")
# Print first 5 rows
print(f'There are {valid_df.shape[0]} rows and {valid_df.shape[1]} columns in the validation dataframe')
valid_df.head()
```
### Extract and compare the PatientId columns from the train and validation sets
By running the next four cells you will do the following:
1. Extract patient IDs from the train and validation sets
2. Convert these arrays of numbers into `set()` datatypes for easy comparison
3. Identify patient overlap in the intersection of the two sets
```
# Extract patient id's for the training set
ids_train = train_df.PatientId.values
# Extract patient id's for the validation set
ids_valid = valid_df.PatientId.values
# Create a "set" datastructure of the training set id's to identify unique id's
ids_train_set = set(ids_train)
print(f'There are {len(ids_train_set)} unique Patient IDs in the training set')
# Create a "set" datastructure of the validation set id's to identify unique id's
ids_valid_set = set(ids_valid)
print(f'There are {len(ids_valid_set)} unique Patient IDs in the validation set')
# Identify patient overlap by looking at the intersection between the sets
patient_overlap = list(ids_train_set.intersection(ids_valid_set))
n_overlap = len(patient_overlap)
print(f'There are {n_overlap} Patient IDs in both the training and validation sets')
print('')
print(f'These patients are in both the training and validation datasets:')
print(f'{patient_overlap}')
```
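The comparison above can be wrapped as a small reusable helper (a plain-Python sketch; it works on any two collections of IDs):

```python
def find_overlap(train_ids, valid_ids):
    # Unique IDs that appear in both splits.
    return sorted(set(train_ids) & set(valid_ids))

print(find_overlap([1, 2, 3, 4], [3, 4, 5]))  # [3, 4]
```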
### Identify rows (indices) of overlapping patients and remove from either the train or validation set
Run the next two cells to do the following:
1. Create lists of the overlapping row numbers in both the training and validation sets.
2. Drop the overlapping patient records from the validation set (could also choose to drop from train set)
```
train_overlap_idxs = []
valid_overlap_idxs = []
for idx in range(n_overlap):
train_overlap_idxs.extend(train_df.index[train_df['PatientId'] == patient_overlap[idx]].tolist())
valid_overlap_idxs.extend(valid_df.index[valid_df['PatientId'] == patient_overlap[idx]].tolist())
print(f'These are the indices of overlapping patients in the training set: ')
print(f'{train_overlap_idxs}')
print(f'These are the indices of overlapping patients in the validation set: ')
print(f'{valid_overlap_idxs}')
# Drop the overlapping rows from the validation set
valid_df.drop(valid_overlap_idxs, inplace=True)
```
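A vectorized alternative to the index loop above, assuming the same column layout, is to filter with `isin`; a minimal sketch on a toy DataFrame rather than the NIH data:

```python
import pandas as pd

# Toy stand-ins for train_df / valid_df (PatientId values are hypothetical)
train = pd.DataFrame({'PatientId': [1, 2, 3]})
valid = pd.DataFrame({'PatientId': [3, 4, 5]})

# Keep only validation rows whose patient does not appear in the training set
valid_clean = valid[~valid['PatientId'].isin(set(train['PatientId']))]
print(valid_clean['PatientId'].tolist())  # [4, 5]
```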
### Check that everything worked as planned by rerunning the patient ID comparison between train and validation sets.
When you run the next two cells you should see that there are now fewer records in the validation set and that the overlap problem has been removed!
```
# Extract patient id's for the validation set
ids_valid = valid_df.PatientId.values
# Create a "set" datastructure of the validation set id's to identify unique id's
ids_valid_set = set(ids_valid)
print(f'There are {len(ids_valid_set)} unique Patient IDs in the validation set')
# Identify patient overlap by looking at the intersection between the sets
patient_overlap = list(ids_train_set.intersection(ids_valid_set))
n_overlap = len(patient_overlap)
print(f'There are {n_overlap} Patient IDs in both the training and validation sets')
```
### Congratulations! You removed overlapping patients from the validation set!
You could have just as well removed them from the training set.
Always be sure to check for patient overlap in your train, validation and test sets.
# Custom DataLoader for Imbalanced dataset
* In this notebook we will use the highly imbalanced Protein Homology Dataset from [KDD cup 2004](https://www.kdd.org/kdd-cup/view/kdd-cup-2004/Data)
```
* The first element of each line is a BLOCK ID that denotes to which native sequence this example belongs. There is a unique BLOCK ID for each native sequence. BLOCK IDs are integers running from 1 to 303 (one for each native sequence, i.e. for each query). BLOCK IDs were assigned before the blocks were split into the train and test sets, so they do not run consecutively in either file.
* The second element of each line is an EXAMPLE ID that uniquely describes the example. You will need this EXAMPLE ID and the BLOCK ID when you submit results.
* The third element is the class of the example. Proteins that are homologous to the native sequence are denoted by 1, non-homologous proteins (i.e. decoys) by 0. Test examples have a "?" in this position.
* All following elements are feature values. There are 74 feature values in each line. The features describe the match (e.g. the score of a sequence alignment) between the native protein sequence and the sequence that is tested for homology.
```
## Initial imports
```
import numpy as np
import pandas as pd
import torch
from torch.optim import SGD, lr_scheduler
from pytorch_widedeep import Trainer
from pytorch_widedeep.preprocessing import TabPreprocessor
from pytorch_widedeep.models import TabMlp, WideDeep
from pytorch_widedeep.dataloaders import DataLoaderImbalanced, DataLoaderDefault
from torchmetrics import F1 as F1_torchmetrics
from torchmetrics import Accuracy as Accuracy_torchmetrics
from torchmetrics import Precision as Precision_torchmetrics
from torchmetrics import Recall as Recall_torchmetrics
from pytorch_widedeep.metrics import Accuracy, Recall, Precision, F1Score, R2Score
from pytorch_widedeep.initializers import XavierNormal
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
import time
import datetime
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
# increase displayed columns in jupyter notebook
pd.set_option('display.max_columns', 200)
pd.set_option('display.max_rows', 300)
header_list = ['EXAMPLE_ID', 'BLOCK_ID', 'target'] + [str(i) for i in range(4,78)]
df = pd.read_csv('data/kddcup04/bio_train.dat', sep='\t', names=header_list)
df.head()
# imbalance of the classes
df['target'].value_counts()
# drop columns we won't need in this example
df.drop(columns=['EXAMPLE_ID', 'BLOCK_ID'], inplace=True)
df_train, df_valid = train_test_split(df, test_size=0.2, stratify=df['target'], random_state=1)
df_valid, df_test = train_test_split(df_valid, test_size=0.5, stratify=df_valid['target'], random_state=1)
```
## Preparing the data
```
continuous_cols = df.drop(columns=['target']).columns.values.tolist()
# deeptabular
tab_preprocessor = TabPreprocessor(continuous_cols=continuous_cols,
scale=True)
X_tab_train = tab_preprocessor.fit_transform(df_train)
X_tab_valid = tab_preprocessor.transform(df_valid)
X_tab_test = tab_preprocessor.transform(df_test)
# target
y_train = df_train['target'].values
y_valid = df_valid['target'].values
y_test = df_test['target'].values
```
## Define the model
```
input_layer = len(tab_preprocessor.continuous_cols)
output_layer = 1
hidden_layers = np.linspace(input_layer*2, output_layer, 5, endpoint=False, dtype=int).tolist()
deeptabular = TabMlp(mlp_hidden_dims=hidden_layers,
column_idx=tab_preprocessor.column_idx,
continuous_cols=tab_preprocessor.continuous_cols)
model = WideDeep(deeptabular=deeptabular)
model
# Metrics from torchmetrics
accuracy = Accuracy_torchmetrics(average=None, num_classes=2)
precision = Precision_torchmetrics(average='micro', num_classes=2)
f1 = F1_torchmetrics(average=None, num_classes=2)
recall = Recall_torchmetrics(average=None, num_classes=2)
# # Metrics from pytorch-widedeep
# accuracy = Accuracy(top_k=2)
# precision = Precision(average=False)
# recall = Recall(average=True)
# f1 = F1Score(average=False)
# Optimizers
deep_opt = SGD(model.deeptabular.parameters(), lr=0.1)
# LR Scheduler
deep_sch = lr_scheduler.StepLR(deep_opt, step_size=3)
trainer = Trainer(model,
objective="binary",
lr_schedulers={'deeptabular':deep_sch},
initializers={'deeptabular':XavierNormal},
optimizers={'deeptabular':deep_opt},
metrics=[accuracy, precision, recall, f1],
verbose=1)
start = time.time()
trainer.fit(X_train={"X_tab": X_tab_train, "target": y_train},
X_val={"X_tab": X_tab_valid, "target": y_valid},
n_epochs=3,
batch_size=50,
custom_dataloader=DataLoaderImbalanced,
oversample_mul=5
)
print('Training time[s]: {}'.format(datetime.timedelta(seconds=round(time.time()-start))))
pd.DataFrame(trainer.history)
df_pred = trainer.predict(X_tab=X_tab_test)
print(classification_report(df_test['target'].to_list(), df_pred))
print("Actual predicted values:\n{}".format(np.unique(df_pred, return_counts=True)))
```
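An alternative to oversampling the minority class is to weight the loss instead; the inverse-frequency weights can be computed directly from the labels. A sketch on hypothetical labels, not the KDD data:

```python
import numpy as np

# Hypothetical imbalanced binary labels: 98 negatives, 2 positives
y = np.array([0] * 98 + [1] * 2)

# Inverse-frequency weights: w_c = n_samples / (n_classes * n_c),
# so the rarer class receives the larger weight
classes, counts = np.unique(y, return_counts=True)
weights = len(y) / (len(classes) * counts)
print(dict(zip(classes.tolist(), weights.tolist())))
```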
# LakeFormation Example Notebook
***Creating LakeFormation and Secured Database with Granular Security Access***
___
## Contents
1. [Introduction](#Introduction)
2. [Setup](#Setup)
1. [Imports](#Imports)
2. [Create Low-Level Clients](#Create-Low-Level-Clients)
3. [Athena Connection](#Athena-Connection)
3. [Create Secured Database](#Create-Secured-Database)
1. [Create Database In Glue](#Create-Database-In-Glue)
2. [Create Tables](#Create-Tables)
4. [Adding Lakeformation policy tags to the resources - Database, Tables and Columns](#)
1. [Database Level Tagging](#Database-Level-Tagging)
2. [Table Level Tagging](#Table-Level-Tagging)
3. [Column Level Tagging](#Column-Level-Tagging)
5. [Securing the Database Using LakeFormation](#Securing-the-Database-Using-LakeFormation)
1. [Registering Database](Registering-Database)
___
## Introduction
This notebook dives deep into the Tag-based Security Access in AWS LakeFormation. It illustrates the following:
* Ability to create a new database that is secured by AWS Lake Formation and is managed by the AWS Glue Catalog.
* Ability to tag databases, tables and columns with user-defined security tags.
This is the second step in setting up our Data Lake before we can securely start analyzing our data, typically through reporting, visualization, advanced analytics and machine learning methodologies.
---
#### Author: AWS Professional Services Emerging Technology and Intelligent Platforms Group
#### Date: June 10 2021
## Setup
#### Imports and Parameters
First, let's import all of the modules we will need for our lake formation, including Pandas DataFrames, Athena, etc. Let's store our session state so that we can create service clients and resources later on.
Next, let's define the location of our unsecured database and our secured db location, and assert that we are indeed the lake-creator
(**Note:** We cannot run this notebook if we are not the lake-creator):
```
import json
import boto3
import time
from pandas import DataFrame
# Import orbit helpers
from aws_orbit_sdk.database import get_athena
from aws_orbit_sdk.common import get_workspace
my_session = boto3.session.Session()
my_region = my_session.region_name
print(my_region)
# Clients
lfc = boto3.client('lakeformation')
iamc = boto3.client('iam')
ssmc = boto3.client('ssm')
gluec = boto3.client('glue')
workspace = get_workspace()
catalog_id = workspace['EksPodRoleArn'].split(':')[-2]
orbit_lake_creator_role_arn = workspace['EksPodRoleArn']
orbit_env_admin_role_arn = orbit_lake_creator_role_arn.replace("-lake-creator-role", "-admin")
env_name = workspace['env_name']
team_space = workspace['team_space']
assert team_space == 'lake-creator'
workspace
# Define parameters
unsecured_glue_db = f"cms_raw_db_{env_name}".replace('-', '_')
secured_glue_db = f"cms_secured_db_{env_name}".replace('-', '_')
```
#### Create Low-Level Clients
Next we must create clients for our different AWS services: Lake Formation, IAM, Glue, and AWS Systems Manager (SSM). We will also use SSM to get the location of our secured bucket:
```
def get_ssm_parameters(ssm_string, ignore_not_found=False):
try:
return json.loads(ssmc.get_parameter(Name=ssm_string)['Parameter']['Value'])
except Exception as e:
if ignore_not_found:
return {}
else:
raise e
def get_demo_configuration():
return get_ssm_parameters(f"/orbit/{env_name}/demo", True)
demo_config = get_demo_configuration()
lake_bucket = demo_config.get("LakeBucket").split(':::')[1]
secured_lake_bucket = demo_config.get("SecuredLakeBucket").split(':::')[1]
secured_location = f"s3://{secured_lake_bucket}/{secured_glue_db}/"
(lake_bucket,secured_lake_bucket, secured_location)
```
#### Athena Connection
Our last setup step is to connect to Athena with a default database and check our connection by running a simple SQL query in our notebook:
```
%reload_ext sql
%config SqlMagic.autocommit=False # for engines that do not support autocommit
athena = get_athena()
%connect_to_athena -database default
%%sql
SELECT 1 as "Test"
```
# Create Secured Database
Let's begin by deregistering our secured bucket ARN if registered so that Lake Formation removes the path from the inline policy attached to your service-linked role.
**Note:** We will then re-register the bucket location to use Lake Formation permissions for fine-grained access control to AWS Glue Data Catalog objects.
Afterwards let's clean out our secured glue db if it exists and clean our s3 secured bucket to prepare for our new database creation (the **CASCADE** clause drops all tables along with the database):
```
# Deregister lakeformation location if its already exists
try:
deregister_resource_response = lfc.deregister_resource(ResourceArn=f"arn:aws:s3:::{secured_lake_bucket}")
print(deregister_resource_response['ResponseMetadata']['HTTPStatusCode'])
except Exception as e:
print("location was not yet registered")
print(e)
# Drop and clean previous created database
%sql drop database if exists $secured_glue_db CASCADE
!aws s3 rm --recursive $secured_location --quiet
```
#### Create Database In Glue
We are all set to start creating our secured database in our secured s3 location by running an Athena SQL query. We will quickly check our database list to ensure it was created successfully:
```
try:
gluec.get_database(Name=secured_glue_db)
except gluec.exceptions.EntityNotFoundException as err:
print(f"Database {secured_glue_db} doesn't exist. Creating {secured_glue_db}")
create_db = f"create database {secured_glue_db} LOCATION '{secured_location}'"
create_db
athena.current_engine.execute(create_db)
%sql show databases
```
## Create Tables
It's time to create new tables in our secured database from our unsecured database data. We will run a `load_tables()` function which iterates over all of the tables.
The `load_tables()` function performs the following steps:
- Retrieves the definitions of all the tables in our unsecured db as a list of the requested Table objects
- For each table object creates a new Parquet formatted table in our secured database located in our secured s3 location
- Runs a count query on each secured table to check whether creation was successful
```
import sys
import time
def load_tables():
response = gluec.get_tables(
DatabaseName=unsecured_glue_db
)
response
for table in response['TableList']:
createTable = """
CREATE TABLE {}.{}
WITH (
format = 'Parquet',
parquet_compression = 'SNAPPY',
external_location = '{}/{}'
)
AS
(select * from {}.{})
""".format(secured_glue_db,table['Name'], secured_location,table['Name'],unsecured_glue_db,table['Name'])
print(f'creating table {table["Name"]}...')
athena.current_engine.execute(createTable)
print(f'created table {table["Name"]}')
query = f"select count(*) as {table['Name']}_count from {secured_glue_db}.{table['Name']}"
try:
res = athena.current_engine.execute(query)
except:
print("Unexpected error:", sys.exc_info()[0])
print("Try again to run query...")
%sql drop database if exists $secured_glue_db CASCADE
!aws s3 rm --recursive $secured_location --quiet
!sleep 10s
# try one more time
res = athena.current_engine.execute(query)
df = DataFrame(res.fetchall())
print(df)
for i in range(0,3):
try:
load_tables()
except:
# try one more time
time.sleep(60)
%%sql
SHOW TABLES IN {secured_glue_db};
```
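The CTAS statement assembled inside `load_tables()` can be factored into a small helper that only builds the query string, which makes the template easier to read and test; a sketch (database and bucket names hypothetical):

```python
def ctas_statement(src_db, dst_db, table, location,
                   fmt="Parquet", compression="SNAPPY"):
    """Build the Athena CTAS query that copies src_db.table into dst_db at location."""
    return (
        f"CREATE TABLE {dst_db}.{table} "
        f"WITH (format = '{fmt}', parquet_compression = '{compression}', "
        f"external_location = '{location}/{table}') "
        f"AS (SELECT * FROM {src_db}.{table})"
    )

print(ctas_statement("cms_raw_db", "cms_secured_db", "claims", "s3://bucket/secured"))
```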
# Adding Lakeformation policy tags to the resources - Database, Tables and Columns.
Our secured database is filled with all of our data, but we must now configure security and access permissions for our different tables. By default, columns in a table have the lowest security tagging. To fix this, we must tag the columns and tables with higher security access.
**Note:** Policy Tag usage in the example - sec-1(more secure) > sec-5(less secured)
```
orbit_env_lf_tag_key = workspace['env_name']+'-security-level'
```
# Database Level Tagging
Adding a policy tag to the database allows all of its tables and their columns to inherit the tag
```
db_add_lf_tags_to_resource_response = lfc.add_lf_tags_to_resource(
CatalogId=catalog_id,
Resource={
'Database': {
'CatalogId': catalog_id,
'Name': secured_glue_db
},
},
LFTags=[
{
'CatalogId': catalog_id,
'TagKey': orbit_env_lf_tag_key,
'TagValues': [
'sec-5',
]
},
]
)
assert 0 == len(db_add_lf_tags_to_resource_response['Failures'])
```
# Table Level Tagging
One way to increase security is to tag an entire table with a higher security level. Here we give the `inpatient_claims` table a sec-4 security level, which overrides the tag inherited from the database.
```
table_add_lf_tags_to_resource_response = lfc.add_lf_tags_to_resource(
CatalogId=catalog_id,
Resource={
'Table': {
'CatalogId': catalog_id,
'DatabaseName': secured_glue_db,
'Name': 'inpatient_claims',
},
},
LFTags=[
{
'CatalogId': catalog_id,
'TagKey': orbit_env_lf_tag_key,
'TagValues': [
'sec-4',
]
},
]
)
assert 0 == len(table_add_lf_tags_to_resource_response['Failures'])
```
## Column Level Tagging
Tagging two columns, 'sp_depressn' and 'sp_diabetes', with a higher security access (sec-2), while the table keeps a security access level of sec-5 (inherited from the database):
```
table_columns_add_lf_tags_to_resource_response = lfc.add_lf_tags_to_resource(
CatalogId=catalog_id,
Resource={
'TableWithColumns': {
'CatalogId': catalog_id,
'DatabaseName': secured_glue_db,
'Name': 'beneficiary_summary',
'ColumnNames': [
'sp_depressn',
'sp_diabetes'
]
},
},
LFTags=[
{
'CatalogId': catalog_id,
'TagKey': orbit_env_lf_tag_key,
'TagValues': [
'sec-2',
]
},
]
)
assert 0 == len(table_columns_add_lf_tags_to_resource_response['Failures'])
```
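The three tagging calls in this notebook share the same request shape; a small helper that only builds the request body (pure data, no AWS call, with hypothetical catalog and tag names) makes the pattern explicit:

```python
def lf_tag_request(catalog_id, resource, tag_key, tag_values):
    """Build the keyword arguments for add_lf_tags_to_resource for one resource."""
    return {
        "CatalogId": catalog_id,
        "Resource": resource,
        "LFTags": [
            {"CatalogId": catalog_id, "TagKey": tag_key, "TagValues": list(tag_values)}
        ],
    }

# Hypothetical example: tag a database at sec-5
req = lf_tag_request(
    "123456789012",
    {"Database": {"CatalogId": "123456789012", "Name": "cms_secured_db"}},
    "env-security-level",
    ["sec-5"],
)
print(req["LFTags"][0])
```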
---
## Securing the Database Using LakeFormation
Lastly, after securing our tables in our database, we have a few more steps to finalize our Lake Formation setup.
#### Registering Database
Registering our s3 bucket ARN registers the resource as managed by the Data Catalog. By setting **UseServiceLinkedRole=True** we designate an AWS IAM service-linked role, which is registered with the Data Catalog.
Our lake formation can now access our secured bucket and work with our data:
```
reg_s3_location_response = lfc.register_resource(ResourceArn=f"arn:aws:s3:::{secured_lake_bucket}",UseServiceLinkedRole=True)
```
#### Revoking IAM Default Permissions
In our default account settings, we are using "Use only IAM Access control for new databases". Therefore our new database grants Super access to all IAM users. In the next cell, we will revoke these privileges to leave only the specific Orbit Lake User IAM role.
```
def revoke_database_tables_super_permissions(database_name):
response = gluec.get_tables(
DatabaseName=database_name
)
for table in response['TableList']:
try:
response = lfc.revoke_permissions(
Principal={
'DataLakePrincipalIdentifier': 'IAM_ALLOWED_PRINCIPALS'
},
Resource={
'Table': {
'DatabaseName': database_name,
'Name': table['Name']
}
},
Permissions=[
'ALL'
]
)
except lfc.exceptions.InvalidInputException as err:
print(err)
revoke_database_tables_super_permissions(secured_glue_db)
def revoke_database_super_permissions(database_name):
try:
response = lfc.revoke_permissions(
Principal={
'DataLakePrincipalIdentifier': 'IAM_ALLOWED_PRINCIPALS'
},
Resource={
'Database': {
'CatalogId': catalog_id,
'Name': database_name
},
},
Permissions=[
'ALL'
]
)
except lfc.exceptions.InvalidInputException as err:
print(err)
revoke_database_super_permissions(secured_glue_db)
#Used for cleanup operations.
def grant_creator_drop_permission(database_name):
response = lfc.grant_permissions(
CatalogId=catalog_id,
Principal={
'DataLakePrincipalIdentifier': orbit_lake_creator_role_arn
},
Resource={
'Database': {
'CatalogId': catalog_id,
'Name': database_name
}
},
Permissions=[
'DROP'
]
)
print(response)
grant_creator_drop_permission(secured_glue_db)
```
# Quick check on the created tables.
```
%reload_ext sql
%config SqlMagic.autocommit=False # for engines that do not support autocommit
athena = get_athena()
%connect_to_athena -database secured_glue_db
time.sleep(30)
%sql select * from {secured_glue_db}.inpatient_claims limit 1
%sql select sp_depressn, sp_diabetes from {secured_glue_db}.beneficiary_summary limit 1
%sql select clm_pmt_amt, nch_prmry_pyr_clm_pd_amt from {secured_glue_db}.outpatient_claims limit 1
```
# End of orbit lake creator demo notebook.
```
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import os
# from utils import read_wav, extract_feats, read_dataset, batch, decode
from IPython.display import Audio
from sklearn.model_selection import train_test_split
from IPython.core.display import HTML
from scipy.signal import spectrogram
from keras.layers import LSTM, Dense, Convolution1D
from keras.models import Sequential
from keras.layers.wrappers import TimeDistributed, Bidirectional
%matplotlib inline
import os
import numpy as np
import pickle
import scipy.io.wavfile as wav
from itertools import zip_longest
def decode(d, mapping):
"""Decode."""
shape = d.dense_shape
batch_size = shape[0]
ans = np.zeros(shape=shape, dtype=int)
seq_lengths = np.zeros(shape=(batch_size, ), dtype=int)
for ind, val in zip(d.indices, d.values):
ans[ind[0], ind[1]] = val
seq_lengths[ind[0]] = max(seq_lengths[ind[0]], ind[1] + 1)
ret = []
for i in range(batch_size):
ret.append("".join(map(lambda s: mapping[s], ans[i, :seq_lengths[i]])))
return ret
def list_2d_to_sparse(list_of_lists):
"""Convert python list of lists to a [tf.SparseTensorValue](https://www.tensorflow.org/api_docs/python/tf/SparseTensorValue).
Args:
list_of_lists: list of lists to convert.
Returns:
tf.SparseTensorValue which is a namedtuple (indices, values, shape) where:
* indices is a 2-d numpy array with shape (sum_all, 2) where sum_all is a
sum over i of len(l[i])
* values is a 1-d numpy array with shape (sum_all, )
* shape = np.array([len(l), max_all]) where max_all is a max over i of
len(l[i])
Also, the following is true: for all i values[i] ==
list_of_lists[indices[i][0]][indices[i][1]]
"""
indices, values = [], []
for i, sublist in enumerate(list_of_lists):
for j, value in enumerate(sublist):
indices.append([i, j])
values.append(value)
dense_shape = [len(list_of_lists), max(map(len, list_of_lists))]
return tf.SparseTensorValue(indices=np.array(indices),
values=np.array(values),
dense_shape=np.array(dense_shape))
import time
vocabulary = { 'а': 1,
'б': 2,
'в': 3,
'г': 4,
'д': 5,
'е': 6,
'ё': 7,
'ж': 8,
'з': 9,
'и': 10,
'й': 11,
'к': 12,
'л': 13,
'м': 14,
'н': 15,
'о': 16,
'п': 17,
'р': 18,
'с': 19,
'т': 20,
'у': 21,
'ф': 22,
'х': 23,
'ц': 24,
'ч': 25,
'ш': 26,
'щ': 27,
'ъ': 28,
'ы': 29,
'ь': 30,
'э': 31,
'ю': 32,
'я': 33}
inv_mapping = dict(zip(vocabulary.values(), vocabulary.keys()))
inv_mapping[34]='<пробел>'
X = pickle.load( open( "X_pncc.pkl", "rb" ) )
y = pickle.load( open( "y.pkl", "rb" ) )
def decode2(stroka,inv_mapping,session):
dense_decoded = tf.sparse_tensor_to_dense(stroka, default_value=-1).eval(session=session)
seq = [s for s in dense_decoded[0] if s != -1]
ret = []
for i in range(len(seq)):
ret.append("".join( inv_mapping[seq[i]]))
for i in range(len(ret)):
print(str(ret[i])),
def decode1(stroka,inv_mapping):
ret = []
for i in range(len(stroka)):
ret.append("".join( inv_mapping.get(int(stroka[i]),34)))
for i in range(len(ret)):
print(str(ret[i])),
print('')
from utils import list_2d_to_sparse
from itertools import zip_longest
def batch(X_train, y_train, batch_size):
num_features = X_train[0].shape[1]
n = len(X_train)
perm = np.random.permutation(n)
for batch_ind in np.resize(perm, (n // batch_size, batch_size)):
X_batch, y_batch = [X_train[i] for i in batch_ind], [y_train[i] for i in batch_ind]
sequence_lengths = list(map(len, X_batch))
X_batch_padded = np.array(list(zip_longest(*X_batch, fillvalue=np.zeros(num_features)))).transpose([1, 0, 2])
yield X_batch_padded, sequence_lengths, list_2d_to_sparse(y_batch), y_batch
graph = tf.Graph()
with graph.as_default():
input_X = tf.placeholder(tf.float32, shape=[None, None, 13],name="input_X")
labels = tf.sparse_placeholder(tf.int32)
seq_lens = tf.placeholder(tf.int32, shape=[None],name="seq_lens")
model = Sequential()
model.add(Bidirectional(LSTM(128, return_sequences=True, implementation=2), input_shape=(None, 13)))
model.add(Bidirectional(LSTM(128, return_sequences=True, implementation=2)))
model.add(TimeDistributed(Dense(len(inv_mapping) + 2)))
final_seq_lens = seq_lens
logits = model(input_X)
logits = tf.transpose(logits, [1, 0, 2])
ctc_loss = tf.reduce_mean(tf.nn.ctc_loss(labels, logits, final_seq_lens,ignore_longer_outputs_than_inputs=True))
# ctc_greedy_decoder? merge_repeated=True
decoded, log_prob = tf.nn.ctc_greedy_decoder(logits, final_seq_lens)
ler = tf.reduce_mean(tf.edit_distance(tf.cast(decoded[0], tf.int32), labels))
train_op = tf.train.AdamOptimizer(learning_rate=1e-5).minimize(ctc_loss)
def decode_single(session, test_input):
z=np.zeros((30,13))
zz=np.vstack((test_input,z))
val_feed = {
input_X: np.asarray([zz]),
seq_lens: np.asarray([len(test_input)])
}
# Decoding
d = session.run(decoded[0], feed_dict=val_feed)
dense_decoded = tf.sparse_tensor_to_dense(d, default_value=-1).eval(session=session)
seq = [s for s in dense_decoded[0] if s != -1]
print('Decoded:\t%s' % (decode2(d, inv_mapping,session)))
print(seq)
y_train1=[]
X_train1=[]
for i in range(len(y)):
if len(y[i]) <75:
y_train1.append(y[i])
X_train1.append(X[i])
len(y_train1)
num_epochs=200
epoch_save_step=20
with tf.Session(graph=graph) as session:
saver = tf.train.Saver(tf.global_variables())
snapshot = "ctc"
checkpoint = tf.train.latest_checkpoint(checkpoint_dir="checkpoint3")
last_epoch = 0
if checkpoint:
print("[i] LOADING checkpoint " + checkpoint)
try:
saver.restore(session, checkpoint)
last_epoch = int(checkpoint.split('-')[-1]) + 1
print("[i] start from epoch %d" % last_epoch)
except:
print("[!] incompatible checkpoint, restarting from 0")
else:
# Initialize the weights and biases
tf.global_variables_initializer().run()
for epoch in range(last_epoch, num_epochs):
for X_batch, seq_lens_batch, y_batch, y_batch_orig in batch(X_train1, y_train1, 100):
feed_dict = {
input_X: X_batch,
labels: y_batch,
seq_lens: seq_lens_batch
}
train_loss, train_ler, train_decoded, true, _ = session.run([ctc_loss, ler, decoded[0], labels, train_op], feed_dict=feed_dict)
if epoch % epoch_save_step == 0 and epoch > 0:
print("[i] SAVING snapshot %s" % snapshot)
# del tf.get_collection_ref ( ' LAYER_NAME_UIDS ' )[ 0 ]
saver.save(session, "checkpoint3/" + snapshot + ".ckpt", epoch)
# for X_batch, seq_lens_batch, y_batch, y_batch_orig in batch(X_test, y_test, 4):
# feed_dict = {
# input_X: X_batch,
# labels: y_batch,
# seq_lens: seq_lens_batch
# }
# test_loss, test_ler, test_decoded, true = session.run([ctc_loss, ler, decoded[0], labels], feed_dict=feed_dict)
print(epoch, train_loss, train_ler)#, test_loss, test_ler)
ret=decode(train_decoded, inv_mapping)[:10]
for i in range(len(ret)):
print(str(ret[i])),
print(time.ctime())
decode1(y_batch_orig[0],inv_mapping)
k=[]
y_train1=[]
X_train1=[]
for i in range(len(y)):
if (len(y[i]) <181) and (len(X[i])<1745):
y_train1.append(y[i])
X_train1.append(X[i])
len(y_train1)
num_epochs=300
epoch_save_step=5
with tf.Session(graph=graph) as session:
saver = tf.train.Saver(tf.global_variables())
snapshot = "ctc"
checkpoint = tf.train.latest_checkpoint(checkpoint_dir="checkpoint3")
last_epoch = 0
if checkpoint:
print("[i] LOADING checkpoint " + checkpoint)
try:
saver.restore(session, checkpoint)
last_epoch = int(checkpoint.split('-')[-1]) + 1
print("[i] start from epoch %d" % last_epoch)
except:
print("[!] incompatible checkpoint, restarting from 0")
else:
# Initialize the weights and biases
tf.global_variables_initializer().run()
for epoch in range(last_epoch, num_epochs):
for X_batch, seq_lens_batch, y_batch, y_batch_orig in batch(X_train1, y_train1, 50):
feed_dict = {
input_X: X_batch,
labels: y_batch,
seq_lens: seq_lens_batch
}
train_loss, train_ler, train_decoded, true, _ = session.run([ctc_loss, ler, decoded[0], labels, train_op], feed_dict=feed_dict)
if epoch % epoch_save_step == 0 and epoch > 0:
print("[i] SAVING snapshot %s" % snapshot)
# del tf.get_collection_ref ( ' LAYER_NAME_UIDS ' )[ 0 ]
saver.save(session, "checkpoint3/" + snapshot + ".ckpt", epoch)
# for X_batch, seq_lens_batch, y_batch, y_batch_orig in batch(X_test, y_test, 4):
# feed_dict = {
# input_X: X_batch,
# labels: y_batch,
# seq_lens: seq_lens_batch
# }
# test_loss, test_ler, test_decoded, true = session.run([ctc_loss, ler, decoded[0], labels], feed_dict=feed_dict)
print(epoch, train_loss, train_ler)#, test_loss, test_ler)
ret=decode(train_decoded, inv_mapping)[:10]
for i in range(len(ret)):
print(str(ret[i])),
print(time.ctime())
decode1(y_batch_orig[0],inv_mapping)
num_epochs=311
epoch_save_step=1
with tf.Session(graph=graph) as session:
saver = tf.train.Saver(tf.global_variables())
snapshot = "ctc"
checkpoint = tf.train.latest_checkpoint(checkpoint_dir="checkpoint3")
last_epoch = 0
if checkpoint:
print("[i] LOADING checkpoint " + checkpoint)
try:
saver.restore(session, checkpoint)
last_epoch = int(checkpoint.split('-')[-1]) + 1
print("[i] start from epoch %d" % last_epoch)
except:
print("[!] incompatible checkpoint, restarting from 0")
else:
# Initialize the weights and biases
tf.global_variables_initializer().run()
for epoch in range(last_epoch, num_epochs):
for X_batch, seq_lens_batch, y_batch, y_batch_orig in batch(X, y, 10):
feed_dict = {
input_X: X_batch,
labels: y_batch,
seq_lens: seq_lens_batch
}
train_loss, train_ler, train_decoded, true, _ = session.run([ctc_loss, ler, decoded[0], labels, train_op], feed_dict=feed_dict)
if epoch % epoch_save_step == 0 and epoch > 0:
print("[i] SAVING snapshot %s" % snapshot)
# del tf.get_collection_ref ( ' LAYER_NAME_UIDS ' )[ 0 ]
saver.save(session, "checkpoint3/" + snapshot + ".ckpt", epoch)
# for X_batch, seq_lens_batch, y_batch, y_batch_orig in batch(X_test, y_test, 4):
# feed_dict = {
# input_X: X_batch,
# labels: y_batch,
# seq_lens: seq_lens_batch
# }
# test_loss, test_ler, test_decoded, true = session.run([ctc_loss, ler, decoded[0], labels], feed_dict=feed_dict)
print(epoch, train_loss, train_ler)#, test_loss, test_ler)
ret=decode(train_decoded, inv_mapping)[:10]
for i in range(len(ret)):
print(str(ret[i])),
print(time.ctime())
decode1(y_batch_orig[0],inv_mapping)
```
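The indices/values/dense_shape convention used by `list_2d_to_sparse` can be checked with a small pure-Python round-trip that scatters the sparse triple back into a padded dense array (no TensorFlow required):

```python
import numpy as np

def list_2d_to_dense_roundtrip(list_of_lists, fill=0):
    # Mirror the indices/values/dense_shape convention of list_2d_to_sparse,
    # then scatter back into a dense array to verify the mapping.
    indices, values = [], []
    for i, sub in enumerate(list_of_lists):
        for j, v in enumerate(sub):
            indices.append((i, j))
            values.append(v)
    shape = (len(list_of_lists), max(map(len, list_of_lists)))
    dense = np.full(shape, fill, dtype=int)
    for (i, j), v in zip(indices, values):
        dense[i, j] = v
    return dense

print(list_2d_to_dense_roundtrip([[1, 2, 3], [4]]))
```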
# Modelling external isolation
In this notebook we use an age- and household-structured model to assess the impact of external quarantining as a mitigation measure against covid-19. We compare against the more typical internal quarantine approach. Under external quarantining, a single infected individual is removed from the household and has no interaction with the members of that household or any other. Under internal quarantining, the infected individual is still able to interact with the other members of their household but the entire household is completely prevented from contributing to the population-level outbreak.
The following cell imports all of the dependencies required for this notebook.
```
from os import getcwd, chdir
from os.path import isfile
from pickle import load, dump
from numpy import arange, array
from numpy.random import rand
from pandas import read_csv
from time import time as get_time
from scipy.integrate import solve_ivp
from matplotlib.pyplot import subplots
from matplotlib.cm import get_cmap
from model.preprocessing import TwoAgeWithVulnerableInput, HouseholdPopulation
from model.preprocessing import add_vulnerable_hh_members, make_initial_SEPIRQ_condition
from model.common import SEPIRQRateEquations, within_household_SEPIRQ
from model.imports import FixedImportModel
```
The working directory needs to be the main repo directory, `...\GitHub\covid-19-in-households-public`. If the kernel is clear, the following cell will set this to be the working directory. Once you have run this cell, do not run it again without clearing the kernel.
```
print(getcwd())
chdir('..\..')
print(getcwd())
```
## Model description
We use a compartmental model with six compartments, **S**usceptible, **E**xposed, **P**rodromal/presymptomatic infection, symptomatic/fully transmissible **I**nfection, **R**ecovered, and **Q**uarantined/isolated, which we will call the SEPIRQ model. The same structure is used under both external and internal quarantining, but the impact of the **Q** compartment is different depending on the quarantining method being used. The possible transitions are infection of a susceptible (**S** to **E**), progression from exposure into the prodromal phase (**E** to **P**), progression from prodrome into full infection (**P** to **I**), recovery of an infectious cases (**I** to **R**), quarantining of infected cases (**E**, **P**, and **I** to **Q**), and recovery of a quarantined case (**Q** to **R**). The population is further subdivided into three age- and vulnerability-stratified classes: children, non-vulnerable adults, and vulnerable adults. Our model works at the level of a single household, which is specified by the number of individuals of each class in the household. These numbers are static, but the members of each class may move between epidemic compartments. The instantaneous state of a household can be summarised as
$$
(S_C,E_C,P_C,I_C,R_C,Q_C,S_A,E_A,P_A,I_A,R_A,Q_A,S_V,E_V,P_V,I_V,R_V,Q_V).
$$
The dynamics within each household are captured by a Markov chain whose evolution is captured by a set of Kolmogorov equations. The equations corresponding to households in different compositions can be combined into a block-diagonal system, with the proportion of households in a given composition equal to the total probability in the corresponding block. Because infection can transmit between households, we add a nonlinear term to the Kolmogorov equations capturing household-to-household transmission which couples the otherwise independent blocks of the system.
In the following cell we introduce the disease parameters for our model. The three "progression" events (**E** to **P**, **P** to **I**, and **I** to **R**) take place at fixed per-capita rates $\alpha_1$, $\alpha_2$, and $\gamma$. Here we choose rates of 1/1, 1/5, and 1/4, i.e. each case experiences on average a short incubation period of one day, five days of prodromal infection, and four days of full/symptomatic infection before recovering. Infection occurs along three different routes: internal transmission between members of the same household, external transmission between members of different households, and imports of infection from outside of the population. Both prodromal and symptomatic individuals transmit, with the infectiousness of prodromal individuals scaled by an age-specific factor $\tau$. The intensity of between-household transmission is scaled down by a factor $\epsilon$, capturing the idea that interactions outside the household are likely to involve less intensive contact than those within the household; since this is the only interaction between households, $\epsilon$ defines the level of coupling between the different household sub-systems. The transmission is age-structured through the use of age-structured contact matrices $\mathbf{K}_{\mathrm{home}}$ and $\mathbf{K}_{\mathrm{ext}}$ and an age-specific susceptibility vector $\mathbf{\sigma}$ (`sus` in the cell below). To obtain age-structured transmission matrices for within- and between-household transmission, we scale the rows of the two contact matrices by the elements of $\mathbf{\sigma}$ and then multiply both (with the external mixing term scaled by $\epsilon$) by a common scaling factor chosen so that the leading eigenvalue of the sum of the two transmission matrices equals a specified basic reproductive ratio.
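The eigenvalue-based scaling described above can be sketched as follows. The matrices here are small made-up examples, not the Prem et al. estimates used by the notebook:

```python
import numpy as np

# Hypothetical home and external contact matrices for three classes.
k_home = np.array([[2.0, 1.0, 0.2],
                   [1.0, 3.0, 0.5],
                   [0.2, 0.5, 1.0]])
k_ext = np.array([[1.0, 0.8, 0.3],
                  [0.8, 2.0, 0.6],
                  [0.3, 0.6, 0.8]])
sus = np.array([1.0, 1.0, 1.0])   # relative susceptibility by class
epsilon = 0.5                     # relative intensity of external contacts
R0 = 2.4

# Scale rows by susceptibility, scale the external term by epsilon, then
# rescale both matrices together so the leading eigenvalue of their sum
# equals the target basic reproductive ratio.
home = sus[:, None] * k_home
ext = epsilon * sus[:, None] * k_ext
growth = np.max(np.abs(np.linalg.eigvals(home + ext)))
beta_int = home * (R0 / growth)
beta_ext = ext * (R0 / growth)
```

Because the rescaling is linear, the leading eigenvalue of `beta_int + beta_ext` is exactly `R0` by construction.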
These parameters are specified in a dictionary in the next block. The contact matrices are taken from estimates generated by Prem et al. (2017). Because we use different age boundaries from theirs, we need to aggregate their estimated matrices, which requires an estimate of the size of each age class in their division; we take this from the file `United Kingdom-2019.csv`. We then also need to split the adults into vulnerable and non-vulnerable individuals, for which we need the population-level proportion of individuals who are vulnerable.
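The aggregation of a finer-grained contact matrix to coarser age classes can be sketched as a population-weighted average of row sums. This illustrates the general approach with hypothetical numbers, not the notebook's actual inputs:

```python
import numpy as np

# Hypothetical fine-grained contact matrix over four age bands, and the
# population of each band; aggregate to two coarse classes
# (bands {0,1} -> class 0, bands {2,3} -> class 1).
K_fine = np.array([[3.0, 1.0, 0.5, 0.2],
                   [1.0, 2.5, 0.8, 0.3],
                   [0.5, 0.8, 2.0, 0.6],
                   [0.2, 0.3, 0.6, 1.5]])
pop = np.array([4.0, 3.0, 2.0, 1.0])
groups = [[0, 1], [2, 3]]

K_coarse = np.zeros((2, 2))
for a, rows in enumerate(groups):
    w = pop[rows] / pop[rows].sum()   # population weights within class a
    for b, cols in enumerate(groups):
        # contacts made by an average member of class a with all of class b
        K_coarse[a, b] = w.dot(K_fine[np.ix_(rows, cols)].sum(axis=1))
```

Summing columns within each coarse class counts all contacts with that class, while the row weights average over who is making the contacts.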
```
SEPIRQ_SPEC = {
# Interpretable parameters:
'R0': 2.4, # Reproduction number
'gamma': 1/4, # Recovery rate
'alpha_1': 1/1, # E->P incubation rate
'alpha_2': 1/5, # P->I prodromal to symptomatic rate
'tau': array([0.5,0.5,0.5]), # Prodromal transmission intensity relative to full inf transmission
'sus': array([1,1,1]), # Relative susceptibility by age/vulnerability class
'epsilon': 0.5, # Relative intensity of external compared to internal contacts
'vuln_prop': 2.2/60, # Total proportion of adults who are shielding
'k_home': {
'file_name': 'inputs/MUestimates_home_2.xlsx',
'sheet_name':'United Kingdom of Great Britain'
},
'k_all': {
'file_name': 'inputs/MUestimates_all_locations_2.xlsx',
'sheet_name': 'United Kingdom of Great Britain'
},
'pop_pyramid_file_name': 'inputs/United Kingdom-2019.csv'
}
```
## Introducing quarantine
In the cell below we convert the specifications from the dictionary into input for our model and add some extra parameters which define the quarantining strategy we wish to model. The rate at which an infected individual is quarantined depends on how far the infection has progressed; isolation should happen faster as the infection progresses, both because more advanced cases should be easier to identify and because these cases will have had more time to be found through tracing. We set the discharge rate to 1/14 per day, so that individuals isolate for 14 days on average, which should be long enough for them to have recovered by the time they return home or cease isolating. The parameter `model_input.adult_bd` defines the boundary between children and adults in the list of age classes (using zero-indexing). In our simple two-age-class-plus-vulnerable-adults framework this is just 1, but for flexibility we keep it a user-defined parameter. Specifying this boundary is necessary because, when implementing external isolation, we do not allow adults to isolate if it means leaving children alone without any adults. The Boolean array `model_input.class_is_isolating` captures the answer to the question "if a person of class $j$ is present in the household, should a case of class $i$ isolate?". For external isolation we are interested in isolating adults who live with vulnerable adults, so the $(2,3)$rd and $(3,3)$rd entries of the matrix are `True` and every other entry is `False`. The last parameter, `model_input.iso_method`, is set to 0 if we are modelling external isolation and 1 if we are modelling internal isolation.
We will start by modelling external isolation. Under this control strategy, an individual in the quarantine compartment makes no contribution to within- or between-household infectious pressures. Because we model within-household mixing as frequency dependent, the rate of infection within households includes a factor of $1/(S+E+P+I+R)$ which does *not* include the quarantined individuals. When externally quarantined individuals are discharged they join the recovered compartment and behave identically to the other members of this compartment.
Internal isolation is slightly more complex. When an individual is quarantined, they remain in the household (in particular, the household size remains $(S+E+P+I+R+Q)$) and continue to transmit infection to other members of their household. We scale their infectiousness relative to cases in the **I** compartment by a factor $(\frac{1}{\alpha_2}\tau + \frac{1}{\gamma})/(\frac{1}{\alpha_1} + \frac{1}{\alpha_2} + \frac{1}{\gamma})$. This slightly unwieldy expression is the duration-weighted average of the relative infectiousness of a quarantined case over the stages it may be in: zero if exposed, $\tau$ if prodromal, one if fully infectious. The purpose of internal isolation is to reduce household-to-household transmission, which we account for by scaling the contribution of all households containing one or more quarantined cases to population-level infectiousness by the factor `model_input.iso_prob`.
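With the parameter values used in this notebook ($\alpha_1 = 1$, $\alpha_2 = 1/5$, $\gamma = 1/4$, $\tau = 0.5$), the scaling factor can be computed directly; the snippet below is simply an arithmetic check of the formula:

```python
alpha_1, alpha_2, gamma = 1 / 1, 1 / 5, 1 / 4
tau = 0.5

# Duration-weighted average relative infectiousness of a quarantined case:
# weight 0 over the E stage, tau over the P stage, 1 over the I stage.
rel_inf = (tau / alpha_2 + 1 / gamma) / (1 / alpha_1 + 1 / alpha_2 + 1 / gamma)
print(rel_inf)  # 0.65
```

So an internally isolated case transmits within the household at 65% of the rate of a fully infectious case under these parameters.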
```
spec = SEPIRQ_SPEC
model_input = TwoAgeWithVulnerableInput(SEPIRQ_SPEC)
model_input.E_iso_rate = 1/1
model_input.P_iso_rate = 1/1
model_input.I_iso_rate = 1/0.5
model_input.discharge_rate = 1/14
model_input.adult_bd = 1
model_input.class_is_isolating = array([[False, False, False],[False, False, True],[False, False, True]])
model_input.iso_method = 0
model_input.iso_prob = 0.5
```
## Building the household population
Next we construct a household population object which captures all the within-household events, i.e. everything except external imports of infection. We do this by reading in a pre-prepared list of possible compositions, in terms of the number of children, non-vulnerable adults, and vulnerable adults in a household, along with an accompanying list estimating the proportion of households in each composition. We combine this information with our model input to create a household population object. The last two arguments of `HouseholdPopulation` specify the compartmental structure and the number of compartments.
```
# List of observed household compositions
composition_list = read_csv(
'inputs/eng_and_wales_adult_child_vuln_composition_list.csv',
header=0).to_numpy()
# Proportion of households which are in each composition
comp_dist = read_csv(
'inputs/eng_and_wales_adult_child_vuln_composition_dist.csv',
header=0).to_numpy().squeeze()
# With the parameters chosen, we calculate Q_int:
household_population = HouseholdPopulation(
composition_list, comp_dist, model_input, within_household_SEPIRQ,6)
```
## Solving the ODE system
In the next cell we solve the system of ODEs defining the evolution of our household population. We begin by specifying the model we are using for external imports of infection. In this example we choose a fixed per-capita rate of importation, but our code also allows for time-varying imports. We specify separate importation rates for prodromal and fully symptomatic infections, to account for the possibility that fully infectious cases may be less likely to travel than prodromal cases; note that the requisite scaling of prodromal infectiousness by $\tau$ takes place within the code, so it does not need to be applied here.
The right-hand-side object `rhs` defines the system of ODEs. The initial condition we generate places a single infectious case into 0.1% of all households; this proportion is the final argument of `make_initial_SEPIRQ_condition`. We then solve the equations using `solve_ivp`.
```
import_model = FixedImportModel(
1e-5, # Import rate of prodromals
1e-5) # Import rate of symptomatic cases
rhs = SEPIRQRateEquations(
model_input,
household_population,
import_model)
H0 = make_initial_SEPIRQ_condition(household_population, rhs, 1e-3)
no_days = 50
tspan = (0.0, no_days)
solver_start = get_time()
solution = solve_ivp(rhs, tspan, H0, first_step=0.001, atol=1e-16)
solver_end = get_time()
print('Integration completed in', solver_end-solver_start,'seconds.')
time = solution.t
H = solution.y
```
In the cell below we calculate the expected size of each compartment (stratified by age/vulnerability class) over time. We also calculate the average number of children, non-vulnerable adults, and vulnerable adults per household, which we use to estimate things like prevalence stratified by class.
```
S = H.T.dot(household_population.states[:, ::6])
E = H.T.dot(household_population.states[:, 1::6])
P = H.T.dot(household_population.states[:, 2::6])
I = H.T.dot(household_population.states[:, 3::6])
R = H.T.dot(household_population.states[:, 4::6])
Q = H.T.dot(household_population.states[:, 5::6])
children_per_hh = comp_dist.T.dot(composition_list[:,0])
nonv_adults_per_hh = comp_dist.T.dot(composition_list[:,1])
vuln_adults_per_hh = comp_dist.T.dot(composition_list[:,2])
```
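The `::6` strided slices above pick out one compartment for each class. A toy illustration of the indexing, with arbitrary numbers rather than model states:

```python
import numpy as np

# Toy state matrix: 2 household states x (3 classes x 6 compartments), with
# columns ordered (S, E, P, I, R, Q) for class 1, then class 2, then class 3.
states = np.arange(2 * 18).reshape(2, 18)
S_cols = states[:, ::6]    # columns 0, 6, 12: the S count for each class
E_cols = states[:, 1::6]   # columns 1, 7, 13: the E count for each class
print(S_cols)  # [[ 0  6 12]
               #  [18 24 30]]
```

Taking `H.T.dot(...)` of such a slice then averages these per-class counts over the household-state distribution.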
We now plot the expected prevalence over time in each age-vulnerability class.
```
class_per_hh = [children_per_hh, nonv_adults_per_hh, vuln_adults_per_hh]
lgd=['Children','Non-vulnerable adults','Vulnerable adults']
fig1, axis1 = subplots()
fig2, axis2 = subplots()
cmap = get_cmap('tab20')
alpha = 0.5
for i in range(3):
axis1.plot(
time, I[:,i]/class_per_hh[i], label=lgd[i],
color=cmap(i*2), alpha=alpha)
axis2.plot(
time, Q[:,i]/class_per_hh[i], label=lgd[i],
color=cmap(i*2), alpha=alpha)
axis1.set_ylabel('Infectious prevalence')
axis2.set_ylabel('Proportion quarantining')
axis1.legend(ncol=1, bbox_to_anchor=(1,0.50))
fig1.show()
fig2.show()
```
## Modelling internal quarantine
Under internal quarantine, a household is designated as quarantined as soon as one of the members of that household enters the **Q** compartment. The internal dynamics of the household are essentially unaffected, but when we calculate the population-level force of infection we scale down the contributions of all quarantined households by a chosen factor (the variable `model_input.iso_prob`, which we earlier set to 0.5). Since the aim of internal quarantine is to prevent infection from leaving the household, rather than preventing within-household spread to vulnerable people, we set all of the Boolean isolation indicators to `True`. We reconstruct the household population and ODE system with our new model input, and solve once again.
```
model_input.class_is_isolating = array([[True, True, True],[True, True, True],[True, True, True]])
model_input.iso_method = 1
household_population = HouseholdPopulation(
composition_list, comp_dist, model_input, within_household_SEPIRQ,6)
rhs = SEPIRQRateEquations(
model_input,
household_population,
import_model)
H0 = make_initial_SEPIRQ_condition(household_population, rhs, 1e-3)
tspan = (0.0, no_days)
solver_start = get_time()
solution = solve_ivp(rhs, tspan, H0, first_step=0.001,atol=1e-16)
solver_end = get_time()
print('Integration completed in', solver_end-solver_start,'seconds.')
time = solution.t
H = solution.y
```
In the cell below we calculate the expected number of people of each age-vulnerability class in each compartment in a single household over time. Because we are more interested in the number of people who are self-isolating than in the number who are actually in the **Q** compartment, we define `Q` to be the expected number of people per household who live in a household with at least one member in the **Q** compartment.
```
S = H.T.dot(household_population.states[:, ::6])
E = H.T.dot(household_population.states[:, 1::6])
P = H.T.dot(household_population.states[:, 2::6])
I = H.T.dot(household_population.states[:, 3::6])
R = H.T.dot(household_population.states[:, 4::6])
states_iso_only = household_population.states[:,5::6]
total_iso_by_state =states_iso_only.sum(axis=1)
iso_present = total_iso_by_state>0
Q = H[iso_present,:].T.dot(household_population.composition_by_state[iso_present,:])
```
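The masking logic above, which restricts to household states containing at least one quarantined member, can be checked on a toy example with made-up numbers:

```python
import numpy as np

# Toy example: three household states, with columns giving the number of
# non-quarantined and quarantined people in each state.
states = np.array([[2, 0], [1, 1], [0, 2]])
H = np.array([0.5, 0.3, 0.2])            # probability of each state
iso_present = states[:, 1] > 0           # at least one member in Q
# Expected number of people living in a quarantining household:
people_quarantining = H[iso_present].dot(states[iso_present].sum(axis=1))
print(people_quarantining)  # 1.0
```

Only the second and third states count, and everyone in those households (not just the **Q** members) contributes to the total.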
We again plot the expected prevalence over time.
```
class_per_hh = [children_per_hh, nonv_adults_per_hh, vuln_adults_per_hh]
lgd=['Children','Non-vulnerable adults','Vulnerable adults']
fig1, axis1 = subplots()
fig2, axis2 = subplots()
cmap = get_cmap('tab20')
alpha = 0.5
for i in range(3):
axis1.plot(
time, I[:,i]/class_per_hh[i], label=lgd[i],
color=cmap(i*2), alpha=alpha)
axis2.plot(
time, Q[:,i]/class_per_hh[i], label=lgd[i],
color=cmap(i*2), alpha=alpha)
axis1.set_ylabel('Infectious prevalence')
axis2.set_ylabel('Proportion quarantining')
axis1.legend(ncol=1, bbox_to_anchor=(1,0.50))
fig1.show()
fig2.show()
```
```
import multiprocessing as mp
import sys
# import mss
sys.path.append('../')
from mss import visreader as mvis
from mss import mssmain as msm
from mss import align
from mss import frag
from mss import dm
import pandas as pd
import numpy as np
from tqdm import tqdm
import matplotlib.pyplot as plt
import math
import peakutils
import scipy
import itertools
bin_count = int(np.log(16/1)/np.log(2)+1)
np.logspace(0,bin_count-1,bin_count,base=2)
bin_count
np.logspace(0,3,4,base=2)
dummydata = [[1,5,0],[0,2,4],[9,9,9]]
df = pd.DataFrame(dummydata, columns=['test1','test2','test3'])
insert_col = ['ex_1','ex_2','ex_3']
df[insert_col] = pd.DataFrame([np.zeros(len(insert_col))], index=df.index)
df
import multiprocessing
def worker(procnum, return_dict):
"""worker function"""
print(str(procnum) + " represent!")
return_dict[procnum] = procnum
if __name__ == "__main__":
manager = multiprocessing.Manager()
return_dict = manager.dict()
jobs = []
for i in range(5):
p = multiprocessing.Process(target=worker, args=(i, return_dict))
jobs.append(p)
p.start()
print(return_dict.values())
for proc in jobs:
proc.join()
print(return_dict.values())
return_dict.values()[0]
global test
test = []
tlist = [1,2,3,4,5,6,7,8,9]
def tfunc(n):
return test+n
for t in tlist:
test = t
print(list(map(tfunc, [1,2,3])))
path = '../example_data/ex_1.mzML'
scans = msm.get_scans(path)
msm.peak_pick(scans,187.0512,400)
err=50
def peak_pick(input_mz, mzml_scans=scans, error=err, enable_score=True, peak_thres=0.01,
peakutils_thres=0.02, min_d=1, rt_window=1.5,
peak_area_thres=1e5, min_scan=5, max_scan=200, max_peak=5,
overlap_tol=15, sn_detect=15, rt=None):
'''
Detect peaks in the chromatogram of a given m/z.
error: mass tolerance in ppm
enable_score: option to enable the RF model
peak_thres: base peak tolerance
peakutils_thres: threshold passed to peakutils, may overlap with peak_thres
min_d: peakutils min_dist parameter
rt_window: window for integration only, does not affect detection
peak_area_thres: minimum peak area
min_scan: minimum number of scans required to count as a peak
max_scan: maximum number of scans, to exclude noise
max_peak: maximum number of peaks kept for the selected precursor
overlap_tol: allowed overlap in scans between two peaks of the same precursor
sn_detect: number of scans before/after the peak used for S/N calculation
'''
if not rt:
rt = [i.scan_time[0] for i in mzml_scans]
intensity = msm.ms_chromatogram_list(mzml_scans, input_mz, error)
# Get rt_window corresponding to scan number
scan_window = int(
(rt_window / (rt[int(len(intensity) / 2)] -
rt[int(len(intensity) / 2) - 1])))
rt_conversion_coef = np.diff(rt).mean()
# Get peak index
indexes = peakutils.indexes(intensity, thres=peakutils_thres,
min_dist=min_d)
result_dict = {}
# dev note: boundary detection refinement
for index in indexes:
h_range = index
l_range = index
base_intensity = peak_thres * intensity[index]
half_intensity = 0.5 * intensity[index]
# Get the higher and lower boundary
while intensity[h_range] >= base_intensity:
h_range += 1
if h_range >= len(intensity) - 1:
break
if intensity[h_range] < half_intensity:
if h_range - index > 4:
# https://stackoverflow.com/questions/55649356/
# how-can-i-detect-if-trend-is-increasing-or-
# decreasing-in-time-series as alternative
x = np.linspace(h_range - 2, h_range, 3)
y = intensity[h_range - 2: h_range + 1]
(_slope, _intercept, r_value,
_p_value, _std_err) = scipy.stats.linregress(x, y)
if abs(r_value) < 0.6:
break
while intensity[l_range] >= base_intensity:
l_range -= 1
if l_range <= 1:
break
# Place holder for half_intensity index
# if intensity[l_range] < half_intensity:
# pass
# Output a range for the peak list
# If len(intensity) - h_range < 4:
# h_range = h_range + 3
peak_range = []
if h_range - l_range >= min_scan:
if rt[h_range] - rt[l_range] <= rt_window:
peak_range = intensity[l_range:h_range]
else:
if index - scan_window / 2 >= 1:
l_range = int(index - scan_window / 2)
if index + scan_window / 2 <= len(intensity) - 1:
h_range = int(index + scan_window / 2)
peak_range = intensity[l_range:h_range]
# print(index + scan_window)
# Follow Agilent S/N document
width = rt[h_range] - rt[l_range]
if len(peak_range) != 0:
height = max(peak_range)
hw_ratio = round(height / width, 0)
neighbour_blank = (intensity[
l_range - sn_detect: l_range] +
intensity[h_range: h_range +
sn_detect + 1])
noise = np.std(neighbour_blank)
if noise != 0:
sn = round(height / noise, 3)
elif noise == 0:
sn = 0
# Additional global parameters
# 1/2 peak range
h_loc = index
l_loc = index
while intensity[h_loc] > half_intensity:
h_loc += 1
if h_loc >= len(intensity) - 1:
break
while intensity[l_loc] > half_intensity and l_loc > 0:
l_loc -= 1
# Integration based on the simps function
if len(peak_range) >= min_scan:
integration_result = scipy.integrate.simps(peak_range)
if integration_result >= peak_area_thres:
# https://doi.org/10.1016/j.chroma.2010.02.010
background_area = (h_range - l_range) * height
ab_ratio = round(integration_result / background_area, 3)
if enable_score is True:
h_half = h_loc + \
(half_intensity - intensity[h_loc]) / \
(intensity[h_loc - 1] - intensity[h_loc])
l_half = l_loc + \
(half_intensity - intensity[l_loc]) / \
(intensity[l_loc + 1] - intensity[l_loc])
# when transfer back use rt[index] instead
mb = (height - half_intensity) / \
((h_half - index) * rt_conversion_coef)
ma = (height - half_intensity) / \
((index - l_half) * rt_conversion_coef)
w = rt[h_range] - rt[l_range]
t_r = (h_half - l_half) * rt_conversion_coef
l_width = rt[index] - rt[l_range]
r_width = rt[h_range] - rt[index]
assym = r_width / l_width
# define constant -- upper case
var = (w ** 2 / (1.764 * ((r_width / l_width)
** 2) - 11.15 * (r_width / l_width) + 28))
x_peak = [w, t_r, l_width, r_width, assym,
integration_result, sn, hw_ratio, ab_ratio,
height, ma, mb, ma + mb, mb / ma, var]
x_input = np.asarray(x_peak)
# score = np.argmax(Pmodel.predict(x_input.reshape(1,-1)))
# for tensorflow
score = 1
elif enable_score is False:
score = 1
# appending to result
if len(result_dict) == 0:
(result_dict.update(
{index: [l_range, h_range,
integration_result, sn, score]}))
# Compare with previous item
# * get rid of list()
elif integration_result != list(result_dict.values())[-1][2]:
# test python 3.6 and 3.7
s_window = abs(index - list(result_dict.keys())[-1])
if s_window > overlap_tol:
(result_dict.update(
{index: [l_range, h_range, integration_result,
sn, score]}))
# If still > max_peak then select top max_peak results
if len(result_dict) > max_peak:
result_dict = dict(sorted(result_dict.items(),
key=lambda x: x[1][2], reverse=True))
result_dict = dict(itertools.islice(result_dict.items(), max_peak))
return result_dict
print(peak_pick(299.1765))
testlist = [1,2,3,4,5,6,7,8,9]
def test(n):
    return n+n**2+n**3
with mp.Pool() as pool:
res = pool.map(test,testlist)
print(res)
# peak_dict output, truncated in the original. It is a list of dictionaries,
# one per precursor m/z, each mapping a peak apex scan index to
# [l_range, h_range, integration_result, sn, score, mz], e.g.
# {380: [379, 491, 394209.58401489246, 29.902, 1, 100.1120357442187], ...}
922004.7213541666, 10.673, 1, 217.04879239192036], 255: [119, 390, 920889.3903808594, 4.43, 1, 217.04879239192036], 305: [169, 440, 914332.7718098959, 9.249, 1, 217.04879239192036], 323: [187, 458, 904710.2534993489, 10.919, 1, 217.04879239192036]}, {197: [195, 420, 375702.1154785156, 3.778, 1, 217.10088983256432]}, {115: [113, 151, 145085.2859395345, 9.429, 1, 217.10523185036095], 132: [113, 145, 134742.28148396808, 10.346, 1, 217.10523185036095], 150: [113, 151, 145085.2859395345, 9.429, 1, 217.10523185036095], 416: [415, 480, 142086.76733398438, 2.792, 1, 217.10523185036095]}, {96: [95, 151, 127509.91554768878, 15.268, 1, 218.0364242610605], 120: [95, 137, 103921.18968709308, 9.713, 1, 218.0364242610605], 136: [95, 151, 127509.91554768878, 15.268, 1, 218.0364242610605]}, {107: [98, 132, 145080.88308461505, 7.179, 1, 222.00545467994445], 123: [98, 136, 156062.66360219318, 11.788, 1, 222.00545467994445], 143: [98, 148, 167925.3042856852, 48.136, 1, 222.00545467994445]}, {355: [219, 490, 1112995.1503092446, 19.585, 1, 233.00070255228943], 335: [199, 470, 1103281.5889485679, 5.432, 1, 233.00070255228943], 373: [237, 499, 1088018.761352539, 20.925, 1, 233.00070255228943], 319: [183, 454, 1071048.7645670571, 3.113, 1, 233.00070255228943], 393: [257, 499, 1011595.6274820963, 22.125, 1, 233.00070255228943]}, {74: [72, 163, 330209.4827067057, 40.771, 1, 240.01494019352677], 103: [72, 143, 304102.1748860677, 27.535, 1, 240.01494019352677], 120: [72, 129, 241176.3037109375, 6.958, 1, 240.01494019352677], 137: [72, 144, 306201.54986572266, 34.617, 1, 240.01494019352677], 154: [72, 163, 330209.4827067057, 40.771, 1, 240.01494019352677]}, {120: [116, 129, 123287.0546875, 7.134, 1, 240.01974049233058], 137: [116, 144, 187657.5730794271, 5.18, 1, 240.01974049233058]}, {106: [101, 120, 141946.8116861979, 20.217, 1, 244.95168049231606], 124: [101, 126, 155972.9288330078, 50.244, 1, 244.95168049231606]}, {106: [101, 120, 141946.8116861979, 20.149, 1, 244.95657952592592], 124: 
[101, 133, 163724.80778503418, 166.226, 1, 244.95657952592592]}, {104: [95, 119, 156961.89490127563, 5.381, 1, 247.00309952033015], 123: [95, 144, 245365.8421529134, 92.065, 1, 247.00309952033015], 141: [95, 148, 246993.1936848958, 175.714, 1, 247.00309952033015]}, {113: [110, 130, 117334.93841552733, 1.94, 1, 247.00803958232055]}, {24: [22, 187, 540947.742980957, 68.178, 1, 256.9651544280733], 170: [22, 187, 540947.742980957, 68.178, 1, 256.9651544280733], 154: [22, 175, 529218.3109944662, 59.388, 1, 256.9651544280733], 137: [22, 149, 497061.6809488932, 34.591, 1, 256.9651544280733], 120: [22, 138, 452243.28103129065, 9.503, 1, 256.9651544280733]}, {108: [99, 150, 3773662.5711975098, 397.289, 1, 262.9805982180768]}, {108: [99, 149, 3770545.8243789673, 382.648, 1, 262.9858578300412]}, {105: [101, 132, 221471.58056640622, 10.189, 1, 263.98181558737645]}, {105: [99, 143, 246670.00380706787, 142.552, 1, 263.98709522368813]}, {119: [115, 133, 129248.7173055013, 5.646, 1, 266.99771673905684]}, {119: [116, 147, 162305.8036804199, 8.862, 1, 267.0030566933916]}, {109: [101, 138, 5965622.813069662, 565.075, 1, 268.99694757309214]}, {109: [101, 138, 5965622.813069662, 560.303, 1, 269.00232751204356]}, {109: [101, 129, 358314.2976710002, 18.168, 1, 269.99946972798267], 129: [99, 139, 386382.8850326538, 0, 1, 269.99946972798267]}, {109: [101, 129, 358195.7533976237, 18.218, 1, 270.00486971737723], 129: [101, 140, 386908.8703104655, 697.641, 1, 270.00486971737723]}, {111: [110, 149, 128689.13891601561, 7.491, 1, 278.9519846076232], 129: [110, 140, 114325.23404947916, 7.156, 1, 278.9519846076232]}, {102: [100, 144, 141904.426961263, 20.256, 1, 278.95756364731534], 129: [100, 140, 132548.95472208658, 8.909, 1, 278.95756364731534]}, {273: [137, 408, 315350.8868408203, 7.414, 1, 294.2056864319093], 289: [153, 424, 315223.19331868493, 9.898, 1, 294.2056864319093], 324: [188, 459, 315151.4476114909, 12.671, 1, 294.2056864319093], 341: [205, 476, 314922.1157836914, 10.699, 1, 
294.2056864319093], 357: [221, 492, 314896.032063802, 10.245, 1, 294.2056864319093]}, {104: [101, 146, 236191.09684244788, 54.699, 1, 295.95253218895255]}, {104: [97, 154, 243299.1443634033, 70.233, 1, 295.95845123959623]}, {108: [99, 147, 108783.58565266927, 19.347, 1, 297.98972577284417], 143: [99, 151, 111510.58253479002, 21.245, 1, 297.98972577284417], 159: [99, 165, 117362.94368362425, 26.621, 1, 297.98972577284417]}, {105: [100, 146, 158301.0996831258, 0, 1, 322.0150337917259], 124: [100, 133, 141734.28971354166, 9.667, 1, 322.0150337917259], 140: [100, 146, 158301.0996831258, 0, 1, 322.0150337917259]}, {105: [100, 144, 157671.20259857178, 91.549, 1, 322.02147409240166], 124: [100, 133, 141734.28971354166, 9.677, 1, 322.02147409240166], 140: [100, 144, 157671.20259857178, 91.549, 1, 322.02147409240166]}, {105: [100, 122, 173338.07376098633, 7.058, 1, 329.00666813629874], 123: [100, 133, 218560.70043945312, 25.713, 1, 329.00666813629874]}, {136: [133, 280, 114455.41754150389, 6.141, 1, 333.2983013540244], 195: [133, 280, 114455.41754150389, 6.141, 1, 333.2983013540244], 265: [133, 280, 114455.41754150389, 6.141, 1, 333.2983013540244], 178: [133, 275, 109955.34939575195, 5.9, 1, 333.2983013540244], 247: [133, 275, 109955.34939575195, 5.9, 1, 333.2983013540244]}, {124: [115, 150, 155035.22016906735, 13.154, 1, 337.9634487541688]}, {104: [101, 146, 172635.71502685547, 70.632, 1, 337.97020802314387]}, {105: [100, 141, 691509.2732747395, 32.415, 1, 338.96529254140506], 140: [100, 149, 708581.6343180337, 113.279, 1, 338.96529254140506], 156: [100, 159, 714959.432647705, 165.21, 1, 338.96529254140506]}, {105: [98, 141, 692417.706471761, 31.206, 1, 338.9720718472559], 122: [97, 141, 692655.7781702677, 31.395, 1, 338.9720718472559], 140: [97, 146, 706218.7353922526, 98.54, 1, 338.9720718472559]}, {111: [100, 141, 4141735.4709879556, 397.789, 1, 344.98382927203005]}, {111: [101, 139, 4128670.5064697266, 266.31, 1, 344.99072894861547]}, {107: [100, 141, 
355601.83231608075, 250.658, 1, 345.98572440378314]}, {107: [101, 139, 353026.33639017737, 309.15, 1, 345.99264411827113]}, {107: [101, 139, 111278.11294809976, 0, 1, 346.98358954753286]}, {107: [102, 135, 107255.58052571614, 33.903, 1, 346.9905292193238]}, {297: [269, 484, 138368.16614786783, 6.167, 1, 347.31685040051485], 331: [269, 484, 138368.16614786783, 6.167, 1, 347.31685040051485], 366: [269, 484, 138368.16614786783, 6.167, 1, 347.31685040051485], 405: [269, 484, 138368.16614786783, 6.167, 1, 347.31685040051485], 439: [269, 484, 138368.16614786783, 6.167, 1, 347.31685040051485]}, {292: [156, 427, 868842.9414876301, 11.135, 1, 350.32363219488576], 275: [139, 410, 864930.181640625, 9.693, 1, 350.32363219488576], 310: [174, 445, 863193.2922363281, 11.203, 1, 350.32363219488576], 242: [106, 377, 859512.3439941406, 9.826, 1, 350.32363219488576], 259: [123, 394, 859227.578531901, 10.317, 1, 350.32363219488576]}, {275: [139, 410, 860711.68359375, 9.693, 1, 350.33063866752957], 242: [106, 377, 857403.0949707031, 9.826, 1, 350.33063866752957], 259: [123, 394, 855009.080485026, 10.317, 1, 350.33063866752957], 172: [36, 307, 846341.7331542969, 9.678, 1, 350.33063866752957], 138: [2, 273, 844748.8040364583, 4.504, 1, 350.33063866752957]}, {108: [101, 138, 1713767.2412312825, 651.383, 1, 350.9968929596222]}, {108: [101, 138, 1713767.2412312825, 724.86, 1, 351.0039128974813]}, {109: [101, 118, 100854.77099609375, 8.011, 1, 352.00921092636037], 125: [101, 135, 146239.31383768716, 99.243, 1, 352.00921092636037]}, {262: [126, 397, 395881.6761881511, 8.161, 1, 355.2839219851071], 279: [143, 414, 395565.68310546875, 11.137, 1, 355.2839219851071], 313: [177, 448, 395380.545328776, 11.272, 1, 355.2839219851071], 244: [108, 379, 394132.8658854166, 8.29, 1, 355.2839219851071], 178: [42, 313, 394083.5286051432, 7.825, 1, 355.2839219851071]}, {104: [99, 144, 123931.70823160806, 60.423, 1, 360.9563697690915]}, {292: [156, 427, 839859.8080240885, 10.299, 1, 364.3434614876078], 272: 
[136, 407, 838732.5026041667, 9.47, 1, 364.3434614876078], 311: [175, 446, 834556.5193684895, 13.057, 1, 364.3434614876078], 327: [191, 462, 832768.9821777343, 10.602, 1, 364.3434614876078], 345: [209, 480, 829599.9341634114, 12.628, 1, 364.3434614876078]}, {56: [1, 186, 131808.9101155599, 4.193, 1, 365.3431214842938], 139: [1, 184, 130574.12009684245, 4.196, 1, 365.3431214842938], 157: [1, 186, 131808.9101155599, 4.193, 1, 365.3431214842938]}, {326: [190, 461, 403257.58447265625, 9.031, 1, 369.2953990705706], 360: [224, 495, 400715.925374349, 12.529, 1, 369.2953990705706], 309: [173, 444, 400414.4434814453, 10.609, 1, 369.2953990705706], 342: [206, 477, 400004.375406901, 12.534, 1, 369.2953990705706], 277: [141, 412, 399330.9100341797, 9.163, 1, 369.2953990705706]}, {272: [136, 407, 802233.7147623698, 9.85, 1, 394.35684866919524], 309: [173, 444, 800525.4167480469, 10.778, 1, 394.35684866919524], 292: [156, 427, 800147.9842122397, 12.305, 1, 394.35684866919524], 325: [189, 460, 798265.2244466146, 12.85, 1, 394.35684866919524], 255: [119, 390, 792063.04296875, 10.594, 1, 394.35684866919524]}, {325: [189, 460, 193888.0332438151, 6.663, 1, 399.3092110816664], 275: [139, 410, 193113.33884684244, 11.983, 1, 399.3092110816664], 341: [205, 476, 192895.99131266272, 8.354, 1, 399.3092110816664], 292: [156, 427, 192756.19274902344, 9.098, 1, 399.3092110816664], 357: [221, 492, 192572.66668701172, 8.561, 1, 399.3092110816664]}, {272: [136, 407, 949786.6223144531, 11.281, 1, 408.370911189493], 306: [170, 441, 947062.525390625, 13.524, 1, 408.370911189493], 322: [186, 457, 946287.141845703, 12.579, 1, 408.370911189493], 289: [153, 424, 943132.2777506509, 15.332, 1, 408.370911189493], 339: [203, 474, 938458.8899739583, 13.47, 1, 408.370911189493]}, {189: [187, 379, 171043.82790629068, 6.052, 1, 409.3685428521826], 274: [187, 312, 109477.4935506185, 9.588, 1, 409.3685428521826], 291: [187, 379, 171043.82790629068, 6.052, 1, 409.3685428521826]}, {105: [100, 117, 
101302.61950683594, 6.497, 1, 411.0092799027422], 127: [100, 138, 151792.93344370523, 0, 1, 411.0092799027422]}, {106: [100, 144, 105565.77656046549, 0, 1, 419.97516445662563]}, {105: [100, 143, 796218.3546142576, 121.153, 1, 420.96746637746344], 126: [98, 143, 796632.8306274413, 121.235, 1, 420.96746637746344], 143: [98, 146, 800275.7250226338, 324.268, 1, 420.96746637746344]}, {105: [100, 143, 796144.531697591, 118.951, 1, 420.9758857267909], 143: [100, 148, 800944.4052352904, 476.869, 1, 420.9758857267909]}, {109: [101, 124, 1621020.7845458982, 6.414, 1, 426.9877873155397], 126: [100, 141, 2023786.4584350581, 1023.744, 1, 426.9877873155397]}, {108: [104, 109, 340149.7265625, 2.959, 1, 426.996327071286]}, {108: [101, 122, 161349.14518229166, 5.507, 1, 427.9880986425888], 124: [101, 140, 217509.68043518063, 174.021, 1, 427.9880986425888], 140: [101, 142, 218083.38409932452, 0, 1, 427.9880986425888]}, {108: [102, 122, 160090.7334798177, 5.475, 1, 427.99665840456163], 124: [102, 136, 214859.34419250485, 132.071, 1, 427.99665840456163]}, {109: [101, 121, 1501699.5751571655, 15.958, 1, 433.0075964819585]}, {108: [101, 118, 150882.20760091144, 12.207, 1, 434.0046501555535], 132: [101, 136, 197331.4396870931, 0, 1, 434.0046501555535]}, {108: [104, 118, 146957.91040039062, 11.922, 1, 434.01333024855666]}, {321: [185, 456, 397786.7659098307, 6.999, 1, 438.3751928973538], 303: [167, 438, 397520.874226888, 10.597, 1, 438.3751928973538], 287: [151, 422, 397113.03389485675, 15.133, 1, 438.3751928973538], 271: [135, 406, 396108.0392659505, 8.694, 1, 438.3751928973538], 340: [204, 475, 394621.34375, 10.474, 1, 438.3751928973538]}, {138: [136, 209, 101804.90966796874, 5.242, 1, 438.38396040121177]}, {299: [163, 434, 460649.9789632161, 14.578, 1, 452.39461169950187], 331: [195, 466, 460287.3210449219, 15.845, 1, 452.39461169950187], 281: [145, 416, 459109.2541503906, 11.45, 1, 452.39461169950187], 265: [129, 400, 458174.83146158856, 12.07, 1, 452.39461169950187], 347: [211, 482, 
457854.11039225257, 11.014, 1, 452.39461169950187]}, {110: [100, 139, 114170.19509379068, 160.275, 1, 486.98537746956816]}, {106: [101, 119, 153924.6337890625, 9.503, 1, 493.0122264830123], 124: [101, 136, 194765.6066080729, 0, 1, 493.0122264830123]}, {268: [266, 444, 142332.17574564618, 5.452, 1, 496.41581147241743], 309: [266, 444, 142332.17574564618, 5.452, 1, 496.41581147241743], 372: [266, 444, 142332.17574564618, 5.452, 1, 496.41581147241743], 410: [266, 444, 142332.17574564618, 5.452, 1, 496.41581147241743], 292: [266, 434, 134422.87320454916, 5.781, 1, 496.41581147241743]}, {105: [100, 142, 815453.493367513, 474.476, 1, 502.9716335145023], 124: [100, 144, 816195.9096806843, 0, 1, 502.9716335145023]}, {105: [101, 135, 797330.9034016926, 47.702, 1, 502.98169294717246]}, {104: [101, 140, 116693.98389689127, 0, 1, 503.9785733163209]}, {109: [101, 137, 2315544.208305359, 570.196, 1, 508.9926899762755]}, {108: [101, 119, 213164.91279093424, 6.551, 1, 509.991283965147], 126: [101, 138, 299019.0690307617, 0, 1, 509.991283965147]}, {108: [103, 119, 209870.59228515622, 6.551, 1, 510.00148379082617], 126: [103, 135, 294791.90240478516, 443.263, 1, 510.00148379082617]}, {109: [102, 134, 1520937.7724405923, 722.532, 1, 515.0034179867423]}, {109: [102, 134, 1520937.7724405923, 736.733, 1, 515.013718055102]}, {109: [101, 123, 177475.069056193, 22.913, 1, 516.0034843677402], 125: [101, 136, 198350.80715433756, 0, 1, 516.0034843677402]}, {109: [103, 123, 176502.09899902344, 22.664, 1, 516.0138044374273], 125: [103, 131, 194124.93841552734, 167.63, 1, 516.0138044374273]}, {115: [102, 128, 103586.69251505533, 26.286, 1, 575.0128769966461]}, {105: [101, 136, 121058.13063557942, 24.184, 1, 578.9594073573802], 136: [101, 141, 124955.39286295572, 0, 1, 578.9594073573802]}, {106: [102, 108, 131862.763671875, 2.439, 1, 584.9651446872377]}, {106: [101, 140, 876055.4240519205, 1006.887, 1, 584.9768439901314]}, {107: [101, 139, 144823.86702601114, 0, 1, 585.9721404342581]}, {107: 
[101, 138, 145054.82462565103, 0, 1, 585.9838598770667]}, {110: [101, 136, 1582501.6142578123, 677.774, 1, 590.9859012253627]}, {110: [101, 136, 1582501.6142578123, 734.373, 1, 590.9977209433872]}, {109: [101, 118, 171453.1585286458, 7.561, 1, 591.9914216524732], 127: [101, 137, 241852.33288319904, 0, 1, 591.9914216524732]}, {109: [101, 118, 171453.1585286458, 7.343, 1, 592.0032614809062], 127: [101, 130, 232188.7686360677, 63.267, 1, 592.0032614809062]}, {109: [102, 120, 575109.442199707, 24.62, 1, 597.0089231484378]}, {109: [103, 120, 575667.2522786458, 24.98, 1, 597.0208633269007]}, {118: [103, 128, 103488.2247314453, 82.931, 1, 598.0127310638326]}, {118: [101, 132, 104184.73185221352, 69.717, 1, 628.9875971445721]}, {110: [102, 123, 107440.32808430988, 21.76, 1, 631.9886850189337], 126: [102, 132, 117967.87557220459, 0, 1, 631.9886850189337]}, {110: [102, 123, 107440.32808430988, 23.219, 1, 632.0013247926339], 126: [102, 127, 113268.37125651039, 0, 1, 632.0013247926339]}, {106: [101, 139, 140293.0040181478, 0, 1, 660.9584921489472]}, {106: [101, 136, 137921.45538330078, 0, 1, 660.9717113187902]}, {107: [101, 138, 657452.7327473958, 499.601, 1, 666.973924696208], 123: [101, 140, 658098.4698384602, 0, 1, 666.973924696208]}, {107: [101, 136, 655494.5935465494, 0, 1, 666.9872641747019]}, {109: [101, 124, 104568.5791422526, 6.18, 1, 667.9751262847332], 125: [101, 135, 128579.62088775635, 170.049, 1, 667.9751262847332]}, {109: [102, 124, 104147.12430826822, 6.21, 1, 667.9884857872589], 125: [102, 137, 129123.53678385417, 0, 1, 667.9884857872589]}, {121: [102, 132, 102786.28628540039, 118.665, 1, 669.9820404605969]}, {109: [101, 135, 1009440.6213429768, 674.783, 1, 672.9902633075947]}, {109: [102, 135, 1006320.6448160807, 0, 1, 673.0037231128608]}, {122: [110, 135, 103507.96405029295, 2.775, 1, 673.9870163450972]}, {109: [102, 128, 171565.841023763, 24.472, 1, 674.0004960854241], 127: [102, 132, 179474.99666341144, 0, 1, 674.0004960854241]}, {109: [103, 132, 
385551.69181315094, 441.109, 1, 679.0065492487329]}, {109: [103, 131, 385247.72859700513, 0, 1, 679.0201293797179]}, {120: [103, 131, 101433.482228597, 178.314, 1, 713.9909381206319]}, {107: [101, 128, 122222.45874023436, 8.689, 1, 742.9656950246567], 124: [101, 131, 129260.18890889484, 14.831, 1, 742.9656950246567]}, {109: [101, 124, 399441.8792317708, 7.33, 1, 748.9781359906217]}, {109: [102, 124, 398624.2981363932, 7.298, 1, 748.9931155533413]}, {120: [101, 135, 109699.5451405843, 0, 1, 749.9824293762382]}, {109: [102, 134, 642962.7577438354, 484.722, 1, 754.9939320209904]}, {109: [102, 132, 639189.2651367188, 691.238, 1, 755.0090318996308]}, {109: [102, 122, 113629.32850646971, 14.28, 1, 755.9911720725225], 126: [102, 135, 131996.48869832355, 0, 1, 755.9911720725225]}, {109: [102, 122, 113629.32850646971, 14.044, 1, 756.006291895964], 126: [102, 132, 130637.39675394693, 0, 1, 756.006291895964]}, {109: [103, 122, 199334.9052734375, 38.531, 1, 761.0123852979899], 127: [103, 132, 215491.67474365234, 0, 1, 761.0123852979899]}, {109: [104, 114, 163763.9512939453, 21.232, 1, 761.0276055456958]}, {107: [101, 124, 256126.71142578122, 8.765, 1, 830.9825145440085], 124: [101, 136, 300674.5395762126, 0, 1, 830.9825145440085]}, {107: [102, 124, 255227.28869628906, 8.713, 1, 830.9991341942994], 124: [102, 129, 285283.75585937494, 29.34, 1, 830.9991341942994]}, {109: [107, 122, 264359.96468098956, 8.27, 1, 836.987119240147]}, {109: [102, 122, 333449.95570882154, 21.967, 1, 837.0038589825317]}, {109: [104, 119, 128986.3946533203, 32.133, 1, 843.018252303163]}, {109: [101, 119, 135415.78373463947, 7.419, 1, 912.9743738994296]}, {109: [101, 119, 135415.78373463947, 7.404, 1, 912.9926333869075]}, {109: [102, 121, 168621.9246419271, 21.245, 1, 919.0014924830037], 129: [102, 133, 190969.32088216147, 0, 1, 919.0014924830037]}, {109: [106, 116, 130512.08646647134, 13.946, 1, 919.0198725128535]}, {281: [145, 416, 13968891.720052082, 15.537, 1, 922.0207360386056], 299: [163, 434, 
13945079.595052082, 19.303, 1, 922.0207360386056], 264: [128, 399, 13912113.52734375, 10.571, 1, 922.0207360386056], 315: [179, 450, 13872516.781249998, 14.628, 1, 922.0207360386056], 331: [195, 466, 13859130.372395836, 19.369, 1, 922.0207360386056]}, {277: [141, 412, 2637134.5836588535, 12.902, 1, 923.0170463811971], 296: [160, 431, 2633287.0865885415, 14.505, 1, 923.0170463811971], 313: [177, 448, 2619264.6272786455, 13.33, 1, 923.0170463811971], 242: [106, 377, 2618346.5135091143, 13.176, 1, 923.0170463811971], 329: [193, 464, 2617588.3225911455, 15.289, 1, 923.0170463811971]}, {236: [100, 371, 434436.51314290357, 10.317, 1, 924.0144333094446], 203: [67, 338, 431975.18729654944, 9.946, 1, 924.0144333094446], 171: [35, 306, 431152.44034830725, 12.066, 1, 924.0144333094446], 187: [51, 322, 430666.23278808594, 10.587, 1, 924.0144333094446], 219: [83, 354, 430643.2830810547, 11.791, 1, 924.0144333094446]}, {287: [151, 422, 358219.212524414, 12.817, 1, 959.9558659049055], 322: [186, 457, 356670.54162597656, 10.925, 1, 959.9558659049055], 339: [203, 474, 356572.33345540357, 9.478, 1, 959.9558659049055], 306: [170, 441, 356330.1040039062, 9.221, 1, 959.9558659049055], 251: [115, 386, 354644.869140625, 3.395, 1, 959.9558659049055]}, {1: [0, 85, 108400.12984212239, 4.345, 1, 959.9750650222235], 17: [1, 85, 106339.88052368164, 4.345, 1, 959.9750650222235]}]
super_list = []
for d in peak_dict:
    for k, v in d.items():  # d.items() in Python 3+
        super_list.append([k] + v)
[i[0] for i in super_list]

def add(a, b):
    return a + b

from itertools import repeat
list(map(add, [1, 2, 3], repeat(4)))
```
# Model interpretation
This code translates a model (that is, a set of results produced by Mallet) into a human-interpretable form so that we can label and categorize the topics.
The Mallet output that we will use comes in three files. There's a document-topic matrix (```doctopics```), a list of keywords (```keys```), and some ```diagnostics``` on the topics.
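The ```doctopics``` file is plain text, one chunk per line. A minimal sketch of parsing one line, assuming the default Mallet output layout (doc index, chunk name, then one proportion per topic — the invented line below is illustrative, and column order can differ across Mallet versions):

```
# Hypothetical doctopics line; chunk names are assumed to be docid_chunknum.
line = "0 vol123_chunk0 0.61 0.02 0.37"
fields = line.strip().split()
docid = fields[1].split('_')[0]               # strip the chunk suffix
proportions = [float(x) for x in fields[2:]]  # one fraction per topic
print(docid, proportions)  # vol123 [0.61, 0.02, 0.37]
```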
```
import pandas as pd
import numpy as np
from collections import Counter
import xml.etree.ElementTree as ET
```
While we remove only a minimal list of stopwords before topic inference, many common function words make the keyword lists hard for human readers to interpret; those get filtered out before we present the lists. We need to load that list of function words.
```
with open('functionwords.txt', encoding = 'utf-8') as f:
    lines = f.readlines()
functionwords = [x.strip() for x in lines]
```
We also need metadata for the corpus used in modeling. For all the models we'll be using this is "corpus4," which underwent some pruning of e.g. nonfiction and collected works.
```
corpus = pd.read_csv('../metadata/corpus4.tsv', sep = '\t', low_memory = False)
```
For any given model, we start by constructing a dictionary of diagnostic statistics associated with topics.
To interpret these, consult [the Mallet documentation.](http://mallet.cs.umass.edu/diagnostics.php)
```
def extract_diag_stats(xmlfile):
    tree = ET.parse(xmlfile)
    root = tree.getroot()
    topics = dict()
    alltokencount = 0
    for item in root.findall('topic'):
        t = item.attrib
        tnum = int(t['id'])
        topics[tnum] = dict()
        topics[tnum]['tokens'] = float(t['tokens'])
        alltokencount += topics[tnum]['tokens']
        topics[tnum]['document_entropy'] = t['document_entropy']
        topics[tnum]['coherence'] = t['coherence']
        topics[tnum]['word-length'] = t['word-length']
        topics[tnum]['rank_1_docs'] = t['rank_1_docs']
    for tnum, valuedict in topics.items():
        valuedict['tokenpct'] = round(100 * topics[tnum]['tokens'] / alltokencount, 3)
    print(alltokencount)
    return topics
```
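To see the extraction logic in isolation, here is a sketch run against a tiny in-memory XML string shaped like Mallet's diagnostics output (the topic elements and attribute values are invented for illustration):

```
import xml.etree.ElementTree as ET

# Hypothetical two-topic diagnostics file, mimicking Mallet's attribute names.
xml = ('<model>'
       '<topic id="0" tokens="600" document_entropy="4.1" coherence="-120.5" '
       'word-length="5.2" rank_1_docs="0.3"/>'
       '<topic id="1" tokens="400" document_entropy="3.8" coherence="-98.2" '
       'word-length="4.9" rank_1_docs="0.1"/>'
       '</model>')
root = ET.fromstring(xml)
tokens = {int(t.attrib['id']): float(t.attrib['tokens']) for t in root.findall('topic')}
total = sum(tokens.values())
tokenpct = {k: round(100 * v / total, 3) for k, v in tokens.items()}
print(tokenpct)  # {0: 60.0, 1: 40.0}
```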
Next we translate the keywords into a more human-interpretable form by deleting function words that are too general.
```
def translate_keys(keyfile, topicdict):
    outlines = []
    global functionwords
    with open(keyfile, encoding = 'utf-8') as f:
        for line in f.readlines():
            tokens = line.strip().split()
            topicnum = int(tokens[0])
            alpha = tokens[1]
            words = tokens[2 : ]
            wordsneeded = []
            for w in words:
                if w in functionwords:
                    continue
                else:
                    wordsneeded.append(w)
                if len(wordsneeded) > 75:
                    break
            thistopic = topicdict[topicnum]
            outline = '"' + 'TOPIC ' + str(topicnum) + '\n'
            outline = outline + 'pct corpus = ' + str(thistopic['tokenpct']) + '%\n'
            outline = outline + 'doc entropy = ' + thistopic['document_entropy'] + '\n'
            outline = outline + 'word length = ' + thistopic['word-length'] + '\n'
            outline = outline + 'coherence = ' + thistopic['coherence'] + '\n'
            outline = outline + 'rank 1 docs = ' + thistopic['rank_1_docs'] + '\n"\t"'
            for wordfloor in range(0, 70, 10):
                outline = outline + ' '.join(wordsneeded[wordfloor : (wordfloor + 10)]) + '\n'
            outline = outline + '"\t'
            outlines.append(outline)
    return outlines
```
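The filtering step can be exercised on a toy keys line; the invented function-word set and line below just mirror the "topicnum alpha word word ..." shape of a real Mallet keys file:

```
# Hypothetical function-word list and keys line, for illustration only.
functionwords = {'the', 'and', 'of'}
line = "7 0.05 the sea and ship sailor of storm"
tokens = line.strip().split()
topicnum, alpha, words = int(tokens[0]), tokens[1], tokens[2:]
wordsneeded = [w for w in words if w not in functionwords][:75]
print(topicnum, wordsneeded)  # 7 ['sea', 'ship', 'sailor', 'storm']
```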
Finally, we define a pair of functions that read the doctopic matrix and return, for each topic, the n documents with the highest proportion of that topic.
```
def get_doctopics(filename):
    chunks = dict()
    with open(filename, encoding = 'utf-8') as f:
        for line in f:
            fields = line.strip().split()
            chunkid = fields[1]
            docid = fields[1].split('_')[0]
            if docid not in chunks:
                chunks[docid] = []
            vector = np.array([float(x) for x in fields[2 : ]])
            chunks[docid].append(vector)
    docs = dict()
    docsizes = dict()  # not actually used in current version;
                       # early on I used number of chunks as a proxy,
                       # but now using actual number of tokens, from metadata
    for docid, value in chunks.items():
        avgvector = np.mean(value, axis = 0)
        docs[docid] = avgvector
        docsizes[docid] = len(value)  # number of chunks (was len(vector), a bug)
    return docs, docsizes

def docs2maxdocs(docs, howmany):
    topictuples = dict()
    for docid, vector in docs.items():
        for idx, fraction in enumerate(vector):
            if idx not in topictuples:
                topictuples[idx] = []
            topictuples[idx].append((fraction, docid))
    maxdocs = dict()
    for topic, tuples in topictuples.items():
        tuples.sort(reverse = True)
        maxdocs[topic] = [(round(x[0], 3), x[1]) for x in tuples[0 : howmany]]
    return maxdocs

def get_maxdocs(filename, howmany):
    docs, docsizes = get_doctopics(filename)
    maxdocs = docs2maxdocs(docs, howmany)
    return maxdocs, docsizes
```
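The core inversion in docs2maxdocs — turning a doc-to-topic-vector mapping into topic-to-top-n document lists — can be checked on toy data:

```
# Three invented documents with two topic proportions each.
docs = {'a': [0.7, 0.3], 'b': [0.2, 0.8], 'c': [0.5, 0.5]}
topictuples = {}
for docid, vector in docs.items():
    for idx, fraction in enumerate(vector):
        topictuples.setdefault(idx, []).append((fraction, docid))
# Keep the top two (proportion, docid) pairs per topic.
maxdocs = {t: sorted(tups, reverse=True)[:2] for t, tups in topictuples.items()}
print(maxdocs[0])  # [(0.7, 'a'), (0.5, 'c')]
```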
The function below combines all the functions above into a single recipe.
Note, in particular, that we instruct get_maxdocs to return the top 200 documents for each topic, and use all 200 to infer, for instance, date quartiles.
But we only individually list the top seven.
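The date quartiles are just the 25th and 75th percentiles of the first-publication years of a topic's top documents; a small sketch with invented years:

```
import numpy as np

# Hypothetical first-publication years for one topic's top documents.
firstpubs = [1890, 1902, 1911, 1925, 1933, 1948, 1960, 1977]
percent25, percent75 = np.percentile(firstpubs, [25, 75])
label = str(int(percent25)) + '-' + str(int(percent75))
print(label)  # 1908-1951
```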
```
def interpret_model(modelnumber):
    global corpus, functionwords
    doctopicfile = "final/k" + str(modelnumber) + 'doctopics.txt'
    keyfile = "final/k" + str(modelnumber) + 'keys.txt'
    diagnosticfile = "final/k" + str(modelnumber) + 'diagnostics.xml'
    topicdict = extract_diag_stats(diagnosticfile)
    outlines = translate_keys(keyfile, topicdict)
    maxdocs, docsizes = get_maxdocs(doctopicfile, 200) # get the top 200 books
                                                       # and use all 200 to find the "biggest author"
                                                       # and "date quartiles"
    finalout = []
    for idx, outline in enumerate(outlines):
        outline = outline + '"'
        docctr = 0
        authorproportions = Counter()
        firstpubs = []
        for fraction, doc in maxdocs[idx]:
            author = corpus.loc[corpus.docid == doc, 'hathi_author'].values[0]
            if pd.isnull(author):
                author = "Unknown Author"
            else:
                if len(author) > 30:
                    author = author[0 : 30]
            title = corpus.loc[corpus.docid == doc, 'hathi_title'].values[0]
            if pd.isnull(title):
                title = "Unknown Title"
            else:
                title = title.replace('"', '')
                if len(title) > 33:
                    title = title[0 : 33]
            firstpub = corpus.loc[corpus.docid == doc, 'firstpub'].values[0]
            firstpubs.append(firstpub)
            docctr += 1
            if docctr <= 7: # but only list the top seven individually
                outline = outline + str(fraction) + ' | ' + author + ' | ' + title + ' | ' + str(firstpub) + '\n'
            authorproportions[author] += fraction * float(corpus.loc[corpus.docid == doc, 'tokens'].values[0])
        outline = outline + '"\t'
        allsum = sum(authorproportions.values())
        biggestauth, biggestsum = authorproportions.most_common(1)[0]
        bigfrac = round(100 * biggestsum / topicdict[idx]['tokens'], 2)
        outline = outline + biggestauth + " = " + str(bigfrac) + '%\t'
        percent25, percent75 = np.percentile(firstpubs, [25, 75])
        outline = outline + str(int(percent25)) + '-' + str(int(percent75)) + '\n'
        finalout.append(outline)
    outfile = keyfile.replace('keys.txt', 'interpret.tsv')
    with open(outfile, mode = 'w', encoding = 'utf-8') as f:
        f.write('topicstats\tkeywords\ttopbooks\tbiggestauth\tdatequartiles\n')
        for o in finalout:
            f.write(o)

interpret_model(200)
```
## Unrelated stuff below
After starting to examine some early topic models, I found that a lot of nonfiction floated to the top, and used this notebook to identify and remove some of it.
```
def docstoremove(topiclist, modelnumber, alreadyremoved, alreadycleared):
    global corpus, functionwords
    doctopicfile = "k" + str(modelnumber) + 'doctopics.txt'
    maxdocs, docsizes = get_maxdocs(doctopicfile, 25)  # get_maxdocs returns a pair
    for t in topiclist:
        print()
        print('TOPIC ', t)
        suspects = maxdocs[t]
        for fraction, doc in suspects:
            # print(doc)
            if doc in alreadyremoved or doc in alreadycleared:
                continue
            else:
                author = corpus.loc[corpus.docid == doc, 'hathi_author'].values[0]
                title = corpus.loc[corpus.docid == doc, 'hathi_title'].values[0]
                print(fraction, ' | ', author, ' | ', title)
                user = input('remove? ')
                if user == 'y':
                    alreadyremoved.add(doc)
                else:
                    alreadycleared.add(doc)
    return alreadyremoved, alreadycleared
alreadyremoved = set()
alreadycleared = set()
suspicious = [5, 18]
alreadyremoved, alreadycleared = docstoremove(suspicious, 100, alreadyremoved, alreadycleared)
alreadyremoved
suspicious = [20, 22, 95]
alreadyremoved, alreadycleared = docstoremove(suspicious, 100, alreadyremoved, alreadycleared)
alreadyremoved
suspicious = [41]
alreadyremoved, alreadycleared = docstoremove(suspicious, 300, alreadyremoved, alreadycleared)
len(alreadyremoved)
corpus.shape
corpus = corpus.loc[~corpus.docid.isin(alreadyremoved), : ]
corpus.shape
pwd
corpus.to_csv('../metadata/corpus3.tsv', sep = '\t', index = False)
oldcorpus = pd.read_csv('../metadata/modelcorpus.tsv', sep = '\t')
len(corpus.loc[corpus.firstpub == 1998, : ])
len(oldcorpus.loc[(oldcorpus.firstpub == 1999) & (oldcorpus.nonficprob < .95), :])
supplement = oldcorpus.loc[(oldcorpus.firstpub == 1999) & (oldcorpus.nonficprob < .95), :].sample(225)
supplement.head()
set(corpus.columns) - set(supplement.columns)
parsed = pd.read_csv('../getEF/parsing_metadata.tsv', sep = '\t')
parsed.head()
def get_docid(astring):
return astring.split('_')[0]
parsed = parsed.assign(docid = parsed['id'].apply(get_docid))
docsums = parsed.groupby('docid').sum()
docsums.reset_index(inplace = True)
docsums.head()
supplement = supplement.merge(docsums.loc[:, ['docid', 'pagesinchunk', 'tokens']], on= 'docid')
supplement.head()
corpus = pd.read_csv('../metadata/corpus3.tsv', sep = '\t', low_memory = False)
corpus.drop(columns = ['skipped_pages', 'trimmed_pages'], inplace = True)
corpus.shape
corpus = pd.concat([corpus, supplement])
corpus.shape
corpus.to_csv('../metadata/corpus4.tsv',sep = '\t', index = False)
corpus = corpus.assign(tokensperpage = np.round(corpus.tokens / corpus.pagesinchunk, 3))
allowed = corpus.docid.tolist()
len(allowed)
paths = pd.read_csv('../getEF/pathlistwithauthors.tsv', sep = '\t')
subpaths = paths.loc[paths.docid.isin(allowed), : ]
subpaths.shape
subpaths.head()
subpaths = subpaths.merge(corpus.loc[: , ['docid', 'tokensperpage']], on = 'docid')
subpaths.head()
sum(pd.isnull(subpaths.tokensperpage))
subpaths.to_csv('../getEF/cohort3_pathlist.tsv', sep = '\t', index = False)
np.median(subpaths.tokensperpage)
c4 = pd.read_csv('../metadata/corpus4.tsv', sep = '\t', low_memory = False)
len(alreadyremoved)
alreadyremoved.intersection(set(c4.docid.tolist()))
```
Deep Learning
=============
Assignment 1
------------
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset for the Python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.
```
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
# Config the matplotlib backend as plotting inline in IPython
%matplotlib inline
```
First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k labeled examples and the test set about 19,000. Given these sizes, it should be possible to train models quickly on any machine.
```
url = 'https://commondatastorage.googleapis.com/books1000/'
last_percent_reported = None
data_root = '.' # Change me to store data elsewhere
def download_progress_hook(count, blockSize, totalSize):
"""A hook to report the progress of a download. This is mostly intended for users with
slow internet connections. Reports every 5% change in download progress.
"""
global last_percent_reported
percent = int(count * blockSize * 100 / totalSize)
if last_percent_reported != percent:
if percent % 5 == 0:
sys.stdout.write("%s%%" % percent)
sys.stdout.flush()
else:
sys.stdout.write(".")
sys.stdout.flush()
last_percent_reported = percent
def maybe_download(filename, expected_bytes, force=False):
"""Download a file if not present, and make sure it's the right size."""
dest_filename = os.path.join(data_root, filename)
if force or not os.path.exists(dest_filename):
print('Attempting to download:', filename)
filename, _ = urlretrieve(url + filename, dest_filename, reporthook=download_progress_hook)
print('\nDownload Complete!')
statinfo = os.stat(dest_filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', dest_filename)
else:
raise Exception(
'Failed to verify ' + dest_filename + '. Can you get to it with a browser?')
return dest_filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
```
Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labeled A through J.
```
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall(data_root)
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
```
---
Problem 1
---------
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
---
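One way to approach Problem 1 (a sketch, not an official solution — the helper name and the random demo arrays are mine) is to plot a small grid of 28x28 images with matplotlib. In practice you would read a few of the PNG files from the extracted `notMNIST_large` folders (e.g. with `IPython.display.Image` or `matplotlib.pyplot.imread`) instead of the synthetic arrays used here.

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend so this sketch also runs outside a notebook
import matplotlib.pyplot as plt

def plot_image_grid(images, n_cols=10):
    """Plot a grid of 28x28 grayscale images and return the figure."""
    n_rows = (len(images) + n_cols - 1) // n_cols
    fig, axes = plt.subplots(n_rows, n_cols, figsize=(n_cols, n_rows))
    for ax in np.ravel(axes):
        ax.axis('off')  # hide ticks; cells without an image stay blank
    for ax, img in zip(np.ravel(axes), images):
        ax.imshow(img, cmap='gray')
    return fig

# Random 28x28 arrays stand in for a sample of the downloaded letter images.
fig = plot_image_grid([np.random.rand(28, 28) for _ in range(20)])
```

With real data, you'd replace the random arrays by images loaded from a few files in each class folder.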
Now let's load the data in a more manageable format. Since, depending on your computer setup, you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk, and curate them independently. Later we'll merge them into a single dataset of manageable size.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.
A few images might not be readable; we'll just skip them.
```
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
"""Load the data for a single letter label."""
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
print(folder)
num_images = 0
for image in image_files:
image_file = os.path.join(folder, image)
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[num_images, :, :] = image_data
num_images = num_images + 1
except IOError as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
```
---
Problem 2
---------
Let's verify that the data still looks good. Display a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
---
---
Problem 3
---------
Another check: we expect the data to be balanced across classes. Verify that.
---
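A sketch of how the balance check might look (the function names, tolerance, and the synthetic stand-in arrays are mine; in practice you would first `pickle.load` each file listed in `train_datasets`):

```python
import numpy as np

def class_counts(datasets):
    """Number of examples in each per-class image stack."""
    return [len(d) for d in datasets]

def is_balanced(counts, tolerance=0.05):
    """True if every class size is within `tolerance` of the mean class size."""
    mean = np.mean(counts)
    return all(abs(c - mean) / mean <= tolerance for c in counts)

# Synthetic stand-ins for the arrays stored in the per-letter pickle files.
fake_sets = [np.zeros((1000 + i, 28, 28), dtype=np.float32) for i in range(10)]
counts = class_counts(fake_sets)
print(counts, is_balanced(counts))
```

Class sizes within a few percent of each other (as here) count as balanced under this tolerance; a strongly skewed list would not.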
Merge and prune the training data. Depending on your computer setup, you might not be able to fit it all in memory, so tune `train_size` as needed. The labels will be stored in a separate array of integers 0 through 9.
Also create a validation dataset for hyperparameter tuning.
```
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
# let's shuffle the letters to have random validation and training set
np.random.shuffle(letter_set)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
```
Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
```
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
```
---
Problem 4
---------
Convince yourself that the data is still good after shuffling!
---
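One way to convince yourself (a sketch with my own helper names and tiny synthetic data): check that shuffling is a pure permutation, i.e. exactly the same (image, label) pairs exist before and after, just in a different order. Comparing the raw image bytes keeps the check exact.

```python
import numpy as np

def randomize(dataset, labels, seed=None):
    """Same shuffle as in the notebook, with an optional seed for reproducibility."""
    permutation = np.random.RandomState(seed).permutation(labels.shape[0])
    return dataset[permutation, :, :], labels[permutation]

def pairs_preserved(data_a, labels_a, data_b, labels_b):
    """True if both sets contain exactly the same (image, label) pairs,
    compared by raw image bytes, regardless of order."""
    def sig(data, labels):
        return sorted((int(lbl), img.tobytes()) for img, lbl in zip(data, labels))
    return sig(data_a, labels_a) == sig(data_b, labels_b)

# Five distinct synthetic "images" with distinct labels.
fake_data = np.arange(5 * 28 * 28, dtype=np.float32).reshape(5, 28, 28)
fake_labels = np.array([0, 1, 2, 3, 4], dtype=np.int32)
shuf_data, shuf_labels = randomize(fake_data, fake_labels, seed=42)
print(pairs_preserved(fake_data, fake_labels, shuf_data, shuf_labels))  # True
```

Run the same check on a slice of the real `train_dataset`/`train_labels`, or simply re-plot a few labeled samples as in Problem 2.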
Finally, let's save the data for later reuse:
```
pickle_file = os.path.join(data_root, 'notMNIST.pickle')
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
```
---
Problem 5
---------
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never any overlap, but it is actually ok if you expect to see training samples recur when you use it.
Measure how much overlap there is between training, validation and test samples.
Optional questions:
- What about near duplicates between datasets? (images that are almost identical)
- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.
---
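A sketch of the exact-duplicate part of the measurement (the function name and the planted-duplicate demo data are mine): hash or compare the raw image bytes across the two sets. Note this only catches byte-identical copies; the optional near-duplicate question would need a fuzzier comparison, e.g. `np.allclose` or a perceptual hash.

```python
import numpy as np

def overlap_count(set_a, set_b):
    """Count images in set_b that also appear byte-for-byte in set_a."""
    seen = {img.tobytes() for img in set_a}
    return sum(img.tobytes() in seen for img in set_b)

# Demo: plant 7 exact duplicates of training images in a synthetic test set.
rng = np.random.RandomState(0)
train = rng.rand(100, 28, 28).astype(np.float32)
test = np.concatenate([train[:7], rng.rand(50, 28, 28).astype(np.float32)])
print(overlap_count(train, test))  # 7
```

On the real arrays you would call `overlap_count(train_dataset, test_dataset)` and `overlap_count(train_dataset, valid_dataset)`, and could sanitize by dropping the matching indices.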
---
Problem 6
---------
Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.
Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.
Optional question: train an off-the-shelf model on all the data!
---
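A minimal sketch of the baseline, assuming scikit-learn is available (it is imported at the top of this notebook); the helper name and the synthetic demo data are mine. For the actual exercise, call the helper with `train_dataset`/`train_labels` and `test_dataset`/`test_labels`, varying `n_samples` over 50, 100, 1000, and 5000.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_and_score(train_X, train_y, test_X, test_y, n_samples):
    """Fit logistic regression on the first n_samples training images
    and return accuracy on the test set."""
    X = train_X[:n_samples].reshape(n_samples, -1)  # flatten 28x28 images to 784-vectors
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, train_y[:n_samples])
    return clf.score(test_X.reshape(len(test_X), -1), test_y)

# Demo on synthetic data with a learnable signal (each class shifts the pixel mean).
rng = np.random.RandomState(0)
y = rng.randint(0, 3, size=300)
X = (rng.randn(300, 28, 28) + y[:, None, None]).astype(np.float32)
acc = train_and_score(X[:200], y[:200], X[200:], y[200:], 100)
print('accuracy: %.2f' % acc)
```

Flattening each image into a 784-dimensional vector is all the feature engineering the linear baseline needs; a nontrivial accuracy here confirms there is something to learn.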
# An overview of Gate Set Tomography
The `pygsti` package provides multiple levels of abstraction over the core Gate Set Tomography (GST) algorithms. This tutorial will show you how to run Gate Set Tomography on some simulated (generated) data, hopefully giving you an overall sense of what it takes (and how easy it is!) to run GST. For more details and options for running GST, see the [GST circuits tutorial](../objects/advanced/GSTCircuitConstruction.ipynb) and the [tutorial covering the different protocols for running GST](GST-Protocols.ipynb).
There are three basic steps to running protocols in pyGSTi:
## Step 1: create an experiment design
The first step is creating an object that specifies what data (from the quantum processor) will be needed to perform GST, and how it should be taken. This is called an "experiment design" in pyGSTi.
To run GST, we need the following three inputs:
1. a "**target model**" which describes the desired, or ideal, operations we want our experimental hardware to perform. In the example below, we use the target model from one of pyGSTi's built-in "model packs" (see the [tutorial on model packs](objects/advanced/ModelPacks.ipynb)) - which acts on a single qubit with the following operations:
- three gates: the identity, and $\pi/2$ rotations around the $x$- and $y$-axes.
- a single state preparation in the $|0\rangle$ state.
- a 2-outcome measurement with the label "0" associated with measuring $|0\rangle$ and "1" with measuring $|1\rangle$.
2. a list of circuits tailored to the target model; essentially a list of what experiments we need to run. Using a standard model makes things especially straightforward here, since the building blocks, called *germ* and *fiducial* circuits, needed to make good GST circuits have already been computed (see the [tutorial on GST circuits](../objects/advanced/GSTCircuitConstruction.ipynb)). In the example below, the model pack also provides the necessary germ and fiducial lists, so that all that is needed is a list of "maximum lengths" describing how long (deep) the circuits should be.
3. data, in the form of experimental outcome counts, for each of the required sequences. In this example we'll generate "fake" or "simulated" data from a depolarized version of our ideal model. For more information about `DataSet` objects, see the [tutorial on DataSets](../objects/DataSet.ipynb).
The first two inputs form an "experiment design", as they describe the experiment that must be performed on a quantum processor (usually running some prescribed set of circuits) in order to run the GST protocol. The third input - the data counts - is packaged with the experiment design to create a `ProtocolData`, or "data" object. As we will see later, a data object serves as the input to the GST protocol.
**The cell below creates an experiment design for running standard GST on the 1-qubit quantum process described by the gates above using circuits whose depth is at most 32.**
```
import pygsti
from pygsti.modelpacks import smq1Q_XYI
#Step 1: create an "experiment design" for doing GST on the std1Q_XYI gate set
target_model = smq1Q_XYI.target_model() # a Model object
prep_fiducials = smq1Q_XYI.prep_fiducials() # a list of Circuit objects
meas_fiducials = smq1Q_XYI.meas_fiducials() # a list of Circuit objects
germs = smq1Q_XYI.germs() # a list of Circuit objects
maxLengths = [1,2,4,8,16,32]
exp_design = pygsti.protocols.StandardGSTDesign(target_model, prep_fiducials, meas_fiducials,
germs, maxLengths)
```
**Pro tip:** the contents of the cell above (except the imports) could be replaced by the single line:
```exp_design = smq1Q_XYI.create_gst_experiment_design(max_max_length=32)```
## Step 2: collect data as specified by the experiment design
Next, we just follow the instructions in the experiment design to collect data from the quantum processor (or the portion of the processor we're characterizing). In this example, we'll generate the data using a depolarizing noise model since we don't have a real quantum processor lying around. The call to `simulate_taking_data` should be replaced with the user filling out the empty "template" data set file with real data. Note also that we set `clobber_ok=True`; this is so the tutorial can be run multiple times without having to manually remove the dataset.txt file - we recommend you leave this set to False (the default) when using it in your own scripts.
```
def simulate_taking_data(data_template_filename):
"""Simulate taking 1-qubit data and filling the results into a template dataset.txt file"""
datagen_model = smq1Q_XYI.target_model().depolarize(op_noise=0.01, spam_noise=0.001)
pygsti.io.fill_in_empty_dataset_with_fake_data(datagen_model, data_template_filename, num_samples=1000, seed=1234)
pygsti.io.write_empty_protocol_data(exp_design, '../tutorial_files/test_gst_dir', clobber_ok=True)
# -- fill in the dataset file in tutorial_files/test_gst_dir/data/dataset.txt --
simulate_taking_data("../tutorial_files/test_gst_dir/data/dataset.txt") # REPLACE with actual data-taking
data = pygsti.io.load_data_from_dir('../tutorial_files/test_gst_dir')
```
## Step 3: Run the GST protocol and create a report
Now we just instantiate a `StandardGST` protocol and `.run` it on our data object. This returns a results object that can be used to create a report.
```
#run the GST protocol and create a report
gst_protocol = pygsti.protocols.StandardGST('full TP,CPTP,Target')
results = gst_protocol.run(data)
report = pygsti.report.construct_standard_report(
results, title="GST Overview Tutorial Example Report", verbosity=2)
report.write_html("../tutorial_files/gettingStartedReport", verbosity=2)
```
You can now open the file [../tutorial_files/gettingStartedReport/main.html](../tutorial_files/gettingStartedReport/main.html) in your browser (Firefox works best) to view the report. **That's it! You've just run GST!**
In the cell above, `results` is a `ModelEstimateResults` object, which is used to generate a HTML report. For more information see the [Results object tutorial](../objects/advanced/Results.ipynb) and [report generation tutorial](../reporting/ReportGeneration.ipynb).
# Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
```
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
```
## Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the `cnt` column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
```
rides[:24*10].plot(x='dteday', y='cnt')
```
### Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to `get_dummies()`.
```
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
```
### Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
```
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
```
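Since the per-column mean and standard deviation are kept in `scaled_features`, the standardization can be undone later when converting predictions back to rider counts. A tiny self-contained illustration of the round trip:

```python
import numpy as np

values = np.array([10.0, 20.0, 30.0])
mean, std = values.mean(), values.std()
scaled = (values - mean) / std         # forward: zero mean, unit std
recovered = scaled * std + mean        # inverse: back to original units
print(np.allclose(recovered, values))  # True
```

This inverse transform is exactly what's applied to `cnt` when plotting predictions at the end of the notebook.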
### Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
```
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
```
We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
```
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
```
## Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called *forward propagation*.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called *backpropagation*.
> **Hint:** You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set `self.activation_function` in `__init__` to your sigmoid function.
2. Implement the forward pass in the `train` method.
3. Implement the backpropagation algorithm in the `train` method, including calculating the output error.
4. Implement the forward pass in the `run` method.
```
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
self.activation_function = lambda x : 1 / (1 + np.exp(-x))
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
error = y - final_outputs
# TODO: Backpropagated output error term
output_error_term = error
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(self.weights_hidden_to_output, output_error_term)
# TODO: Backpropagated hidden error term
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
# TODO: Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:, None]
# TODO: Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:, None]
# TODO: Update the weights
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
```
## Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
```
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
```
## Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
### Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, the model will not generalize well to other data; this is called overfitting. You want to find a number where the network has a low training loss and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
### Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
### Choose the number of hidden nodes
The more hidden nodes you have, the more capacity the model has to make accurate predictions. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, the model won't have enough capacity to learn; if it is too high, there are too many options for the direction the learning can take. The trick here is to find the right balance in the number of hidden units.
```
import sys
### Set the hyperparameters here ###
iterations = 3000
learning_rate = 0.6
hidden_nodes = 20
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.iloc[batch].values, train_targets.iloc[batch]['cnt']
    network.train(X, y)

    # Printing out the training progress
    train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
    val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
    sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
                     + "% ... Training loss: " + str(train_loss)[:5] \
                     + " ... Validation loss: " + str(val_loss)[:5])
    sys.stdout.flush()

    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
```
## Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
```
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions[0]))  # predictions has shape (1, n) after the transpose
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])  # .ix was removed from pandas; use .loc
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
```
## OPTIONAL: Thinking about your results (this question will not be evaluated in the rubric)
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
> **Note:** You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
#### Your answer below
The model does not fully account for the holidays in the second half of December, which cause a radical drop in ridership against what one would expect in the second half of most months, or even on other holidays throughout the year. This is especially true for the day following Christmas, which sees almost no ridership.
## Project Overall Objectives
This notebook approaches the objectives highlighted in **black**. Those in *<font color=green>green</font>* have already been accomplished in earlier analysis.
- <font color=green> Clean dirty log data and transform it for analytics. </font>
- <font color=green> Exploratory Data Analysis (EDA). </font>
- <font color=green> Find the conversion rate of users, identify key factors that bottleneck the conversion rate. </font>
- <font color=green> Propose any hypothesis and test through analyzing features.</font>
- Build machine learning models to predict user behaviors, including but not limited to signup, churn, etc.
- Discover interesting insights in the dataset and suggest how to improve the user signup rate.
This notebook develops machine learning models to predict user behaviors, based on features selected in the previous analyses. We build and compare Logistic Regression, Decision Trees (single and bagged), K-Nearest Neighbors (single and bagged), Random Forest, Gradient Boosting Trees, an MLP Neural Network, and linear & non-linear SVMs. Lastly, we apply hyperparameter tuning to a selected model.
## Load data and browse data
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Always make it pretty.
plt.style.use('ggplot')
# Load data from file
df = pd.read_csv('data/selected_data.csv')
# Browse dataset
df.head(10)
# Show summary stats
df.describe()
```
## Build Logistic Regression Model
#### Encode categorical columns to numeric values
```
df.columns
col_category = ['model_bin','os_bin','CT_bin','PRO_bin','source','content','medium']
df_dummies = pd.get_dummies(df[col_category], columns=col_category)
df_dummies.head()
df = df.join(df_dummies)
df.head()
df.columns
df['avg_st'] = df['avg_st'].fillna(0)
# Use scatter_matrix from Pandas
from pandas.plotting import scatter_matrix
scatter_matrix(df[[u'signup', u'Ttoend', u'count',u'avg_st']],
alpha=0.2, figsize=(16, 16), diagonal='kde')
plt.savefig('timescatter_feature_selection.png',bbox_inches="tight",dpi=500)
plt.show()
```
### Define Features and Target
```
df['log_st'] = np.log((df.avg_st+1)/1000)
df['log_Ttoend'] = np.log(df.Ttoend+1)
df['log_ileave'] = np.log(df.ileave+1)
df['log_npage'] = np.log(df.npage+1)
df['log_nclick'] = np.log(df.nclick+1)
selected_features = [u'count',u'worktime',u'weekend',
u'log_st',
u'model_bin_mac',u'model_bin_others', u'model_bin_pc', u'os_bin_macosx', u'os_bin_others',
u'os_bin_windows', u'CT_bin_Beijing', u'CT_bin_Guangzhou',
u'CT_bin_Shanghai', u'CT_bin_others', u'PRO_bin_BJ', u'PRO_bin_GD',
u'PRO_bin_SH', u'PRO_bin_ZJ', u'PRO_bin_others', u'source_baidu',
u'source_others', u'content_data', u'content_others', u'content_ukcontent',
u'medium_cpc', u'medium_others', u'medium_ukmedium',
u'log_npage', u'log_nclick', u'log_ileave',
u'log_Ttoend',
u'is_fd', u'is_ft', u'CN']
target = u'signup'
X = df[selected_features].values
y = df['signup'].values
X.shape
```
#### Let's Train-test split the data!
```
# import train test split function from sklearn
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
```
### Logistic Regression model using sklearn
```
# Import logistic regression from sklearn
from sklearn.linear_model import LogisticRegression
# Initialize model by providing parameters
# http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
clf = LogisticRegression(C=1.0, penalty='l1', solver='liblinear')  # l1 penalty requires the liblinear (or saga) solver
# Fit a model by providing X and y from training set
clf.fit(X_train, y_train)
# Make prediction on the training data
y_train_pred = clf.predict(X_train)
p_train_pred = clf.predict_proba(X_train)[:,1]
# Make predictions on test data
y_test_pred = clf.predict(X_test)
p_test_pred = clf.predict_proba(X_test)[:,1]
```
### Calculate the metric scores for the model
```
# Import metrics functions from sklearn
from sklearn.metrics import precision_score, accuracy_score, recall_score, f1_score, roc_auc_score

# Helper method to print metric scores
def get_performance_metrics(y_train, y_train_pred, y_test, y_test_pred, threshold=0.5):
    metric_names = ['AUC', 'Accuracy', 'Precision', 'Recall', 'f1-score']
    metric_values_train = [roc_auc_score(y_train, y_train_pred),
                           accuracy_score(y_train, y_train_pred > threshold),
                           precision_score(y_train, y_train_pred > threshold),
                           recall_score(y_train, y_train_pred > threshold),
                           f1_score(y_train, y_train_pred > threshold)
                           ]
    metric_values_test = [roc_auc_score(y_test, y_test_pred),
                          accuracy_score(y_test, y_test_pred > threshold),
                          precision_score(y_test, y_test_pred > threshold),
                          recall_score(y_test, y_test_pred > threshold),
                          f1_score(y_test, y_test_pred > threshold)
                          ]
    all_metrics = pd.DataFrame({'metrics': metric_names,
                                'train': metric_values_train,
                                'test': metric_values_test},
                               columns=['metrics', 'train', 'test']).set_index('metrics')
    print(all_metrics)
from sklearn.metrics import roc_curve, auc

def plot_roc_curve(y_train, y_train_pred, y_test, y_test_pred, title):
    roc_auc_train = roc_auc_score(y_train, y_train_pred)
    fpr_train, tpr_train, _ = roc_curve(y_train, y_train_pred)
    roc_auc_test = roc_auc_score(y_test, y_test_pred)
    fpr_test, tpr_test, _ = roc_curve(y_test, y_test_pred)

    plt.figure()
    lw = 2
    plt.plot(fpr_train, tpr_train, color='green',
             lw=lw, label='ROC Train (AUC = %0.4f)' % roc_auc_train)
    plt.plot(fpr_test, tpr_test, color='darkorange',
             lw=lw, label='ROC Test (AUC = %0.4f)' % roc_auc_test)
    plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
    plt.xlim([0.0, 1.0])
    plt.ylim([0.0, 1.05])
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.title('Receiver operating characteristic')
    plt.legend(loc="lower right")
    plt.savefig('ROC_' + title + '.png', bbox_inches="tight", dpi=500)
    plt.show()
# print model results
get_performance_metrics(y_train, p_train_pred, y_test, p_test_pred)
plot_roc_curve(y_train, p_train_pred, y_test, p_test_pred, 'LgReg')
```
##### recall = tp/(tp+fn): for this problem the sign-up rate is low, so a low tp relative to fn is expected
##### precision = tp/(tp+fp): this metric is more important here; low precision might be due to the small amount of data
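With hypothetical confusion-matrix counts for an imbalanced sign-up problem, the two formulas work out as follows:

```python
# hypothetical counts, chosen to mimic a low sign-up rate (not from this dataset)
tp, fp, fn, tn = 30, 10, 60, 900

precision = tp / (tp + fp)                          # 0.75: most flagged users did sign up
recall = tp / (tp + fn)                             # 0.33...: most actual sign-ups were missed
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean balances the two
```

A model can look precise while still missing most positives, which is why both metrics (and their f1 combination) are reported above.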
#### Understanding the Estimated Coefficients
```
df_coeffs = pd.DataFrame(list(zip(selected_features, clf.coef_.flatten()))).sort_values(by=[1], ascending=False)
df_coeffs.columns = ['feature', 'coeff']
df_coeffs
from pylab import rcParams
#plt.barh(y_pos, performance, align='center', alpha=0.5)
ax = df_coeffs.plot.barh(align='center',width=0.35)
t = np.arange(X.shape[1])
ax.set_yticks(t)
ax.set_yticklabels(df_coeffs['feature'])
rcParams['figure.figsize'] = 12, 12
plt.savefig('features.png',bbox_inches="tight",dpi=500)
plt.tight_layout()
plt.show()
```
### model evaluation
#### confusion matrix
```
from sklearn.metrics import confusion_matrix, classification_report, roc_curve
confusion_matrix(y_test, y_test_pred)
# Helper method to plot confusion matrix
def plot_confusion_matrix(y_true, y_pred):
    '''
    Code from sklearn example.
    '''
    cm = confusion_matrix(y_true, y_pred)
    print(cm)

    # Show confusion matrix in a separate window
    plt.matshow(cm)
    plt.title('Confusion matrix')
    plt.colorbar()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
    plt.show()
plot_confusion_matrix(y_train, y_train_pred)
plot_confusion_matrix(y_test, y_test_pred)
print("Area Under Curve (AUC) of the Logistic Regression is: {}".format(roc_auc_score(y_test, y_test_pred)))
```
### Bagging
#### Single Tree
```
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(max_depth=20,min_samples_leaf=10)
# Fit a model by providing X and y from training set
clf.fit(X_train, y_train)
# Make prediction on the training data
y_train_pred = clf.predict(X_train)
p_train_pred = clf.predict_proba(X_train)[:,1]
# Make predictions on test data
y_test_pred = clf.predict(X_test)
p_test_pred = clf.predict_proba(X_test)[:,1]
# print model results
get_performance_metrics(y_train, p_train_pred, y_test, p_test_pred)
plot_roc_curve(y_train, p_train_pred, y_test, p_test_pred, 'Single_Tree')
# define function to perform train, test, and get model performance
def train_test_model(clf, X_train, y_train, X_test, y_test, title):
    # Fit a model by providing X and y from training set
    clf.fit(X_train, y_train)

    # Make prediction on the training data
    y_train_pred = clf.predict(X_train)
    p_train_pred = clf.predict_proba(X_train)[:, 1]

    # Make predictions on test data
    y_test_pred = clf.predict(X_test)
    p_test_pred = clf.predict_proba(X_test)[:, 1]

    # print model results
    get_performance_metrics(y_train, p_train_pred, y_test, p_test_pred)
    plot_roc_curve(y_train, p_train_pred, y_test, p_test_pred, title)
```
#### Bagged Trees
```
# http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.BaggingClassifier.html
from sklearn.ensemble import BaggingClassifier
base_classifier = DecisionTreeClassifier(max_depth=20,min_samples_leaf=10)
# Choose some parameter combinations to try
parameters = {
'base_estimator':base_classifier,
'n_estimators': 50,
'n_jobs': -1
}
clf = BaggingClassifier(**parameters)
# Train test model
train_test_model(clf, X_train, y_train, X_test, y_test, 'Bagged_Tree')
```
##### Bagging is a very useful and effective method when the amount of data is low.
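The variance-reduction effect behind that claim can be sketched with independent noisy estimators (bootstrap resamples are only approximately independent, so this is an idealized illustration, not the project code):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_estimate():
    # each "estimator" is a high-variance guess of the true value 5.0
    return 5.0 + rng.normal(scale=2.0)

single = [noisy_estimate() for _ in range(1000)]
# "bagged": average 50 estimators per prediction, as BaggingClassifier averages trees
bagged = [np.mean([noisy_estimate() for _ in range(50)]) for _ in range(1000)]
```

Averaging 50 independent estimators shrinks the spread of the predictions by roughly the square root of 50, which is why bagging stabilizes high-variance base learners like deep decision trees.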
#### Single KNN
```
from sklearn.neighbors import KNeighborsClassifier
parameters = {
#'weights':'distance',
'n_neighbors':3,
'leaf_size':10
}
base_classifier = KNeighborsClassifier(**parameters)
clf = base_classifier
# Train test model
train_test_model(clf, X_train, y_train, X_test, y_test, 'Single_KNN')
```
#### Bagged KNN
```
from sklearn.ensemble import BaggingClassifier
# Choose some parameter combinations to try
parameters = {
'base_estimator':base_classifier,
'n_estimators': 30,
'n_jobs': -1
}
clf = BaggingClassifier(**parameters)
# Train test model
train_test_model(clf, X_train, y_train, X_test, y_test, 'Bagged_KNN')
```
### Random Forest
```
# http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html
from sklearn.ensemble import RandomForestClassifier
# Choose some parameter combinations to try
parameters = {'n_estimators': 50,
'max_features': 'auto',
'criterion': 'gini',
'max_depth': 20,
'min_samples_split': 2,
'min_samples_leaf': 20,
'random_state': 0,
'n_jobs': -1
}
clf = RandomForestClassifier(**parameters)
# Fit a model by providing X and y from training set
clf.fit(X_train, y_train)
# Train test model
train_test_model(clf, X_train, y_train, X_test, y_test, 'Random_Forest')
```
### Gradient Boosting Trees
```
# http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html
from sklearn.ensemble import GradientBoostingClassifier
# Choose some parameter combinations to try
parameters = {
'n_estimators': 50,
'max_depth': 5,
'learning_rate': 0.2,
'random_state': 42
}
# parameters = {
# 'n_estimators': 50,
# 'max_depth': 5,
# 'learning_rate': 0.2,
# 'subsample': 0.7,
# 'max_features':0.8,
# 'random_state': 42
# }
clf = GradientBoostingClassifier(**parameters)
# Train test model
train_test_model(clf, X_train, y_train, X_test, y_test, 'Gradient_Boosting')
df_importance = pd.DataFrame(list(zip(selected_features, clf.feature_importances_.flatten()))).sort_values(by=[1], ascending=False)
df_importance.columns = ['feature', 'importance']
df_importance
```
### Neural Network
```
# http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html#sklearn.neural_network.MLPClassifier
from sklearn.neural_network import MLPClassifier
# Choose some parameter combinations to try
parameters = {
'solver':'adam',
'activation':'relu',
'alpha':1e-5, #increase alpha->increase penalty :: http://scikit-learn.org/stable/auto_examples/neural_networks/plot_mlp_alpha.html#sphx-glr-auto-examples-neural-networks-plot-mlp-alpha-py
'hidden_layer_sizes':(5,5),
'learning_rate':'adaptive',
'random_state':1
}
clf = MLPClassifier(**parameters)
# Train test model
train_test_model(clf, X_train, y_train, X_test, y_test, 'MLP_NN')
```
### SVM
#### Linear SVM
```
from sklearn.svm import LinearSVC
# Choose some parameter combinations to try
clf = LinearSVC()
# Fit a model by providing X and y from training set
clf.fit(X_train, y_train)
# No predict_proba for LinearSVC
# Make prediction on the training data
p_train_pred = clf.predict(X_train)
# Make predictions on test data
p_test_pred = clf.predict(X_test)
# print model results
get_performance_metrics(y_train, p_train_pred, y_test, p_test_pred)
plot_roc_curve(y_train, p_train_pred, y_test, p_test_pred, 'Linear_SVM')
```
#### NonLinear SVM
```
# http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html
from sklearn.svm import SVC
# Choose some parameter combinations to try
parameters = {
'probability':True, # get simulated probability
'max_iter':2000
}
clf = SVC(**parameters)
# Train test model
train_test_model(clf, X_train, y_train, X_test, y_test, 'NonLinear_SVM')
```
### HyperParameter Tuning: Grid Search
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import make_scorer, roc_auc_score, accuracy_score
from sklearn.model_selection import GridSearchCV
# Choose the type of classifier.
clf = RandomForestClassifier()
# Choose some parameter combinations to try
param_grid = {'n_estimators': [100,200],
'max_features': ['auto'],
'criterion': ['gini'],
'max_depth': [15,20,25],
'min_samples_split': [2],
'min_samples_leaf': [2,10,20],
'n_jobs':[-1]
}
# Type of scoring used to compare parameter combinations
acc_scorer = make_scorer(roc_auc_score)
# Run the grid search
# read theory
grid_obj = GridSearchCV(clf, param_grid, cv=5, scoring=acc_scorer)
grid_obj = grid_obj.fit(X_train, y_train)
# Set the clf to the best combination of parameters
clf = grid_obj.best_estimator_
# Fit the best algorithm to the data.
clf.fit(X_train, y_train)
# Train test model
train_test_model(clf, X_train, y_train, X_test, y_test, 'selected_RF_tuned')
```
# Imports
```
import matplotlib.pyplot as plt
import numpy as np
import random
from collections import namedtuple
from sklearn.datasets import load_digits
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
EPSILON = 1e-8  # small constant to avoid underflow or division by zero
%matplotlib inline
```
# I - Data loading and preprocessing
## Load the data
#### This time, the data will correspond to greyscale images. <br> Two different datasets can be used here:
- The MNIST dataset, small 8*8 images, corresponding to handwritten digits → 10 classes
- The Fashion MNIST dataset, medium 28*28 images, corresponding to clothes pictures → 10 classes
#### Starting with the simple MNIST is recommended
```
dataset = "MNIST"
# dataset = "FASHION_MNIST"
def load_data(dataset='MNIST'):
    if dataset == 'MNIST':
        digits = load_digits()
        X, Y = np.asarray(digits['data'], dtype='float32'), np.asarray(digits['target'], dtype='int32')
        return X, Y
    elif dataset == 'FASHION_MNIST':
        import tensorflow as tf
        fashion_mnist = tf.keras.datasets.fashion_mnist
        (X, Y), (_, _) = fashion_mnist.load_data()
        X = X.reshape((X.shape[0], X.shape[1] * X.shape[2]))
        X, Y = np.asarray(X, dtype='float32'), np.asarray(Y, dtype='int32')
        return X, Y
X, Y = load_data(dataset=dataset)
n_classes = len(np.unique(Y))
print('Number of samples: {:d}'.format(X.shape[0]))
print('Input dimension: {:d}'.format(X.shape[1]))  # flattened 8*8 or 28*28 images
print('Number of classes: {:d}'.format(n_classes))
print("Range max-min of greyscale pixel values: ({0:.1f}, {1:.1f})".format(np.max(X), np.min(X)))
print("First image sample:\n{0}".format(X[0]))
print("First image label: {0}".format(Y[0]))
print("Input design matrix shape: {0}".format(X.shape))
```
### What does the data look like?
Each image in the dataset consists of a 8 x 8 (or 28 x 28) matrix, of greyscale pixels. For the MNIST dataset, the values are between 0 and 16 where 0 represents white, 16 represents black and there are many shades of grey in-between. For the Fashion MNIST dataset, the values are between 0 and 255.<br>Each image is assigned a corresponding numerical label, so the image in ```X[i]``` has its corresponding label stored in ```Y[i]```.
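Converting between the flat row stored in `X[i]` and the 2-D picture is a single `reshape`; a small sketch of the row-major layout assumed here:

```python
import numpy as np

flat = np.arange(64, dtype='float32')  # stand-in for one row X[i] (8*8 = 64 pixels)
image = flat.reshape(8, 8)             # back to the 2-D picture, as done before plotting
```

The reshape is row-major, so pixel (r, c) of the image sits at index 8*r + c of the flat vector.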
The next cells below demonstrate how to visualise the input data. Make sure you understand what's happening, particularly how the indices correspond to individual items in the dataset.
## Visualize the data
```
def visualize_data_sample(X, Y, nrows=2, ncols=2):
    fig, ax = plt.subplots(nrows, ncols)
    for row in ax:
        for col in row:
            # random.randint is inclusive on both ends, so subtract 1 to stay in range
            index = random.randint(0, X.shape[0] - 1)
            dim = np.sqrt(X.shape[1]).astype(int)
            col.imshow(X[index].reshape((dim, dim)), cmap=plt.cm.gray_r)
            col.set_title("image label: %d" % Y[index])
    plt.tight_layout()
    plt.show()

visualize_data_sample(X, Y)
```
# II - Multiclass classification MLP with Numpy
## II a) - Problem definition
<img src="../images/mlp_mnist.svg">
The task here will be to implement "from scratch" a Multilayer Perceptron for classification.
We will define the formal categorical cross entropy loss as follows:
$$
l(\mathbf{\Theta}, \mathbf{X}, \mathbf{Y}) = - \frac{1}{n} \sum_{i=1}^n \log \mathbf{f}(\mathbf{x}^i ; \mathbf{\Theta})^\top y^i
$$
<center>with $y^i$ being the one-hot encoded true label for the sample $i$, and $\Theta = (\mathbf{W}^h; \mathbf{b}^h; \mathbf{W}^o; \mathbf{b}^o)$</center>
<center>In addition, $\mathbf{f}(\mathbf{x}) = softmax(\mathbf{z^o}(\mathbf{x})) = softmax(\mathbf{W}^o\mathbf{h}(\mathbf{x}) + \mathbf{b}^o)$</center>
<center>and $\mathbf{h}(\mathbf{x}) = g(\mathbf{z^h}(\mathbf{x})) = g(\mathbf{W}^h\mathbf{x} + \mathbf{b}^h)$, $g$ being the activation function and could be implemented with $sigmoid$ or $relu$</center>
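As one possible reference for the formulas above, the softmax and the categorical cross entropy might be written as follows. This is a sketch of one common numerically stable form, not the only acceptable solution to the TODOs below:

```python
import numpy as np

EPSILON = 1e-8  # same constant as in the notebook's imports

def softmax(Z):
    # subtracting the row-wise max leaves the result unchanged (softmax is
    # shift-invariant) but prevents overflow in np.exp
    Z = Z - np.max(Z, axis=-1, keepdims=True)
    E = np.exp(Z)
    return E / np.sum(E, axis=-1, keepdims=True)

def categorical_cross_entropy(Y_true, Y_pred):
    # Y_true is one-hot, Y_pred holds probabilities: average of -log p(true class)
    return -np.mean(np.sum(Y_true * np.log(Y_pred + EPSILON), axis=1))

probs = softmax(np.array([[2.0, 1.0, 0.1]]))
```

Each row of `probs` sums to one, and the loss only reads the predicted probability of the true class, exactly as in the $\log \mathbf{f}(\mathbf{x}^i)^\top y^i$ term.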
## Objectives:
- Write the categorical cross entropy loss function
- Write the activation functions with their associated gradient
- Write the softmax function that is going to be used to output the predicted probabilities
- Implement the forward pass through the neural network
- Implement the backpropagation according to the used loss: propagate the gradients using the chain rule and return $(\mathbf{\nabla_{W^h}}l ; \mathbf{\nabla_{b^h}}l ; \mathbf{\nabla_{W^o}}l ; \mathbf{\nabla_{b^o}}l)$
- Implement dropout regularization in the forward pass: be careful to consider both training and prediction cases
- Implement the SGD optimization algorithm, and improve it with simple momentum
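For the momentum objective, one common formulation is the "heavy-ball" update, where a velocity term accumulates a decaying memory of past gradients. The sketch below mirrors the `velocities` dictionary the class initializes, but the exact update you implement may differ:

```python
velocities = {'W_h': 0.0}  # mirrors the self.velocities dict initialized in the class

def momentum_step(param, grad, velocities, key, step, alpha=0.9):
    # the velocity blends the previous velocity with the new gradient step
    velocities[key] = alpha * velocities[key] - step * grad
    return param + velocities[key]

w = momentum_step(1.0, grad=0.5, velocities=velocities, key='W_h', step=0.1)
# with zero initial velocity, the first step equals plain SGD: 1.0 - 0.1 * 0.5
```

On later steps the accumulated velocity keeps pushing in the recent gradient direction, which smooths the noisy updates of SGD.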
#### Simple graph function to let you have a global overview:
<img src="../images/function_graph.png" style="width: 750px;"/>
## II b) - Implementation
```
class MultiLayerPerceptron():
"""MLP with one hidden layer having a hidden activation,
and one output layer having a softmax activation"""
def __init__(self, X, Y, hidden_size, activation='relu',
initialization='uniform', dropout=False, dropout_rate=1):
# input, hidden, and output dimensions on the MLP based on X, Y
self.input_size, self.output_size = X.shape[1], len(np.unique(Y))
self.hidden_size = hidden_size
# initialization strategies: avoid a full-0 initialization of the weight matrices
if initialization == 'uniform':
self.W_h = np.random.uniform(size=(self.input_size, self.hidden_size), high=0.01, low=-0.01)
self.W_o = np.random.uniform(size=(self.hidden_size, self.output_size), high=0.01, low=-0.01)
elif initialization == 'normal':
self.W_h = np.random.normal(size=(self.input_size, self.hidden_size), loc=0, scale=0.01)
self.W_o = np.random.normal(size=(self.hidden_size, self.output_size), loc=0, scale=0.01)
# the bias could be initialized to 0 or a small random constant
self.b_h = np.zeros(self.hidden_size)
self.b_o = np.zeros(self.output_size)
# our namedtuple structure of gradients
self.Grads = namedtuple('Grads', ['W_h', 'b_h', 'W_o', 'b_o'])
# and the velocities associated which are going to be useful for the momentum
self.velocities = {'W_h': 0., 'b_h': 0., 'W_o': 0., 'b_o': 0.}
# the hidden activation function used
self.activation = activation
# arrays to track back the losses and accuracies evolution
self.training_losses_history = []
self.validation_losses_history = []
self.training_acc_history = []
self.validation_acc_history = []
# train val split and normalization of the features
self.X_tr, self.X_val, self.Y_tr, self.Y_val = self.split_train_validation(X, Y)
self.scaler = MinMaxScaler(feature_range=(0, 1), copy=False)
self.X_tr = self.scaler.fit_transform(self.X_tr)
self.X_val = self.scaler.transform(self.X_val)
# dropout parameters
self.dropout = dropout
self.dropout_rate = dropout_rate
# step used for the optimization algorithm and setted later
self.step = None
# One-hot encoding of the target
# Transform the integer representation to a sparse one
@staticmethod
def one_hot(n_classes, Y):
return np.eye(n_classes)[Y]
# Reverse one-hot encoding of the target
# Recover the former integer representation
# ex: from (0,0,1,0) to 2
@staticmethod
def reverse_one_hot(Y_one_hot):
return np.asarray(np.where(Y_one_hot==1)[1], dtype='int32')
"""
Activation functions and their gradient
"""
# In implementations below X is a matrix of shape (n_samples, p)
# A max_value value is indicated for the relu and grad_relu functions
# Make sure to clip the output to it to prevent numerical overflow (exploding gradient)
# Make it so the max value reachable is max_value
@staticmethod
def relu(X, max_value=20):
assert max_value > 0
# TODO:
return np.zeros(X.shape)
# Make it so the gradient becomes 0 when X becomes greater than max_value
@staticmethod
def grad_relu(X, max_value=20):
assert max_value > 0
# TODO:
return np.zeros(X.shape)
@staticmethod
def sigmoid(X):
# TODO:
return np.zeros(X.shape)
@staticmethod
def grad_sigmoid(X):
# TODO:
return np.zeros(X.shape)
# Softmax function to output probabilities
@staticmethod
def softmax(X):
# TODO:
return np.zeros(X.shape)
# Loss function
# Consider using EPSILON to prevent numerical issues (log(0) is undefined)
# Y_true and Y_pred are of shape (n_samples,n_classes)
@staticmethod
def categorical_cross_entropy(Y_true, Y_pred):
# TODO:
return 0.
@staticmethod
def split_train_validation(X, Y, test_size=0.25, seed=False):
random_state = 42 if seed else np.random.randint(1e3)
X_tr, X_val, Y_tr, Y_val = train_test_split(X, Y, test_size=test_size, random_state=random_state)
return X_tr, X_val, Y_tr, Y_val
# Sample random batch in (X, Y) with a given batch_size for SGD
@staticmethod
def get_random_batch(X, Y, batch_size):
indexes = np.random.choice(X.shape[0], size=batch_size, replace=False)
return X[indexes], Y[indexes]
# Forward pass: compute f(x) as y, and return optionally the hidden states h(x) and z_h(x) for compute_grads
def forward(self, X, return_activation=False, training=False):
if self.activation == 'relu':
g_activation = self.relu
elif self.activation == 'sigmoid':
g_activation = self.sigmoid
else:
raise NotImplementedError
# TODO:
if self.dropout:
if training:
# TODO:
pass
else:
# TODO:
pass
# TODO:
y = np.zeros((X.shape[0], self.output_size)) if len(X.shape) > 1 else np.zeros(self.output_size)
h = np.zeros((X.shape[0], self.hidden_size)) if len(X.shape) > 1 else np.zeros(self.hidden_size)
z_h = np.zeros((X.shape[0], self.hidden_size)) if len(X.shape) > 1 else np.zeros(self.hidden_size)
if return_activation:
return y, h, z_h
else:
return y
# Backpropagation: return an instantiation of self.Grads that contains the average gradients for the given batch
def compute_grads(self, X, Y_true, vectorized=False):
if self.activation == 'relu':
g_grad = self.grad_relu
elif self.activation == 'sigmoid':
g_grad = self.grad_sigmoid
else:
raise NotImplementedError
if len(X.shape) == 1:
X = X.reshape((1,) + X.shape)
if not vectorized:
n = X.shape[0]
grad_W_h = np.zeros((self.input_size, self.hidden_size))
grad_b_h = np.zeros((self.hidden_size, ))
grad_W_o = np.zeros((self.hidden_size, self.output_size))
grad_b_o = np.zeros((self.output_size, ))
for x, y_true in zip(X, Y_true):
y_pred, h, z_h = self.forward(x, return_activation=True, training=True)
# TODO:
grads = self.Grads(grad_W_h/n, grad_b_h/n, grad_W_o/n, grad_b_o/n)
else:
Y_pred, h, z_h = self.forward(X, return_activation=True, training=True)
# TODO (optional), try to do the backprop without Python loops in a vectorized way:
grad_W_h = np.zeros((X.shape[0], self.input_size, self.hidden_size))
grad_b_h = np.zeros((X.shape[0], self.hidden_size, ))
grad_W_o = np.zeros((X.shape[0], self.hidden_size, self.output_size))
grad_b_o = np.zeros((X.shape[0], self.output_size, ))
grads = self.Grads(
np.mean(grad_W_h, axis=0),
np.mean(grad_b_h, axis=0),
np.mean(grad_W_o, axis=0),
np.mean(grad_b_o, axis=0)
)
return grads
# Perform the update of the parameters (W_h, b_h, W_o, b_o) based of their gradient
def optimizer_step(self, optimizer='gd', momentum=False, momentum_alpha=0.9,
batch_size=None, vectorized=True):
if optimizer == 'gd':
grads = self.compute_grads(self.X_tr, self.Y_tr, vectorized=vectorized)
elif optimizer == 'sgd':
batch_X_tr, batch_Y_tr = self.get_random_batch(self.X_tr, self.Y_tr, batch_size)
grads = self.compute_grads(batch_X_tr, batch_Y_tr, vectorized=vectorized)
else:
raise NotImplementedError
if not momentum:
# TODO:
pass
else:
# remember: use the stored velocities
# TODO:
pass
# Loss wrapper
def loss(self, Y_true, Y_pred):
return self.categorical_cross_entropy(self.one_hot(self.output_size, Y_true), Y_pred)
def loss_history_flush(self):
self.training_losses_history = []
self.validation_losses_history = []
# Main function that trains the MLP with a design matrix X and a target vector Y
def train(self, optimizer='sgd', momentum=False, min_iterations=500, max_iterations=5000, initial_step=1e-1,
batch_size=64, early_stopping=True, early_stopping_lookbehind=100, early_stopping_delta=1e-4,
vectorized=False, flush_history=True, verbose=True):
if flush_history:
self.loss_history_flush()
cpt_patience, best_validation_loss = 0, np.inf
iteration_number = 0
self.step = initial_step
while len(self.training_losses_history) < max_iterations:
iteration_number += 1
self.optimizer_step(
optimizer=optimizer, momentum=momentum, batch_size=batch_size, vectorized=vectorized
)
training_loss = self.loss(self.Y_tr, self.forward(self.X_tr))
self.training_losses_history.append(training_loss)
training_accuracy = self.accuracy_on_train()
self.training_acc_history.append(training_accuracy)
validation_loss = self.loss(self.Y_val, self.forward(self.X_val))
self.validation_losses_history.append(validation_loss)
validation_accuracy = self.accuracy_on_validation()
self.validation_acc_history.append(validation_accuracy)
if iteration_number > min_iterations and early_stopping:
if validation_loss + early_stopping_delta < best_validation_loss:
best_validation_loss = validation_loss
cpt_patience = 0
else:
cpt_patience += 1
if verbose:
msg = "iteration number: {0}\t training loss: {1:.4f}\t" + \
"validation loss: {2:.4f}\t validation accuracy: {3:.4f}"
print(msg.format(iteration_number,
training_loss,
validation_loss,
validation_accuracy))
if cpt_patience >= early_stopping_lookbehind:
break
# Return the predicted class once the MLP has been trained
def predict(self, X, normalize=True):
if normalize:
X = self.scaler.transform(X)
if len(X.shape) == 1:
return np.argmax(self.forward(X))
else:
return np.argmax(self.forward(X), axis=1)
"""
Metrics and plots
"""
def accuracy_on_train(self):
return (self.predict(self.X_tr, normalize=False) == self.Y_tr).mean()
def accuracy_on_validation(self):
return (self.predict(self.X_val, normalize=False) == self.Y_val).mean()
def plot_loss_history(self, add_to_title=None):
import warnings
warnings.filterwarnings("ignore")
plt.figure(figsize=(12, 8))
plt.plot(range(len(self.training_losses_history)),
self.training_losses_history, label='Training loss evolution')
plt.plot(range(len(self.validation_losses_history)),
self.validation_losses_history, label='Validation loss evolution')
plt.legend(fontsize=15)
plt.yscale('log')
plt.xlabel("iteration number", fontsize=15)
plt.ylabel("Cross entropy loss", fontsize=15)
base_title = "Cross entropy loss evolution during training"
if not self.dropout:
base_title += ", no dropout penalization"
else:
base_title += ", {:.1f} dropout penalization"
base_title = base_title.format(self.dropout_rate)
title = base_title + ", " + add_to_title if add_to_title else base_title
plt.title(title, fontsize=20)
plt.show()
def plot_validation_prediction(self, sample_id):
fig, (ax0, ax1) = plt.subplots(nrows=1, ncols=2, figsize=(10, 4))
classes = np.unique(self.Y_tr)
dim = np.sqrt(self.X_val.shape[1]).astype(int)
ax0.imshow(self.scaler.inverse_transform([self.X_val[sample_id]]).reshape(dim, dim), cmap=plt.cm.gray_r,
interpolation='nearest')
ax0.set_title("True image label: %d" % self.Y_val[sample_id]);
ax1.bar(classes, self.one_hot(len(classes), self.Y_val[sample_id]), label='true')
ax1.bar(classes, self.forward(self.X_val[sample_id]), label='prediction', color="red")
ax1.set_xticks(classes)
prediction = self.predict(self.X_val[sample_id], normalize=False)
ax1.set_title('Output probabilities (prediction: %d)' % prediction)
ax1.set_xlabel('Digit class')
ax1.legend()
mlp = MultiLayerPerceptron(X, Y, hidden_size=50, activation='relu')
mlp.train()
```
## Questions:
#### - Did you succeed in training the MLP and getting a high validation accuracy? <br> Display available metrics (training and validation accuracies, training and validation losses)
#### - Plot the prediction for a given validation sample. Is it accurate?
#### - Compare the full gradient descent with the SGD.
#### - Play with the hyperparameters you have: the hidden size, the activation function, the initial step and the batch size. <br> Comment. Don't hesitate to visualize results.
#### - Once properly implemented, compare the training using early stopping, dropout, or both of them. <br> Why are these methods useful here?
#### - Once properly implemented, compare the training using momentum.
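As a reminder of what the momentum comparison involves, here is a minimal sketch of a gradient-descent step with momentum on the toy objective $f(w) = w^2$. The `sgd_momentum_step` helper and its hyperparameters are illustrative only, not part of the notebook's `MultiLayerPerceptron`:

```python
# Minimal sketch of gradient descent with momentum on the toy objective f(w) = w**2.
# `sgd_momentum_step`, `lr` and `beta` are illustrative names (assumptions, not notebook API).
def sgd_momentum_step(w, v, grad, lr=0.1, beta=0.9):
    v = beta * v - lr * grad  # velocity accumulates a decaying sum of past gradients
    return w + v, v

w, v = 5.0, 0.0
for _ in range(50):
    w, v = sgd_momentum_step(w, v, grad=2 * w)  # the gradient of w**2 is 2w
print(w)  # oscillates toward the minimum at 0
```

The velocity term lets the iterate keep moving through flat or noisy regions, which is why momentum often speeds up SGD on the MLP above.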
```
mlp.plot_validation_prediction(sample_id=42)
mlp.plot_loss_history()
```
# III - Multiclass classification MLP with Keras
#### - Implement the same network architecture with Keras;
- First using the Sequential API
- Secondly using the functional API
#### - Check that the Keras model can approximately reproduce the behavior of the Numpy model.
#### - Compute the negative log-likelihood of sample 42 in the test set (you can use `model.predict_proba`).
#### - Compute the average negative log-likelihood on the full test set.
#### - Compute the average negative log-likelihood on the full training set and check that you can get the value of the loss reported by Keras.
#### - Is the model overfitting or underfitting? (ensure that the model has fully converged by increasing the number of epochs to 500 or more if necessary).
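As a sanity check for the negative log-likelihood questions above, here is a minimal NumPy sketch of the quantity you should recover. The `average_nll` helper and the toy probabilities are illustrative assumptions; in the notebook you would feed it the output of `model.predict_proba` together with the true labels:

```python
import numpy as np

def average_nll(proba, y_true):
    """Mean negative log-likelihood of the true class under predicted probabilities."""
    eps = 1e-12  # guard against log(0)
    return -np.mean(np.log(proba[np.arange(len(y_true)), y_true] + eps))

# toy probabilities for 3 samples over 2 classes
proba = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
y_true = np.array([0, 1, 0])
print(average_nll(proba, y_true))
```

The value computed on the full training set should match the cross-entropy loss Keras reports once the model has converged.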
```
X_tr, X_val, Y_tr, Y_val = train_test_split(X, Y)
scaler = MinMaxScaler(feature_range=(0, 1), copy=False)
X_tr = scaler.fit_transform(X_tr)
X_val = scaler.transform(X_val)
n_features = X[0].shape[0]
n_classes = len(np.unique(Y_tr))
n_hidden = 10
```
### Sequential
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
activation = "relu"
# activation = "sigmoid"
print('Model with {} activation'.format(activation))
keras_model = Sequential()
# TODO:
```
### Functional
```
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
activation = "relu"
# activation = "sigmoid"
print('Model with {} activation'.format(activation))
inputs = Input(shape=(n_features,))
# TODO:
```
#### Now that you know if the model is underfitting or overfitting:
#### - In case of underfitting, can you explain why? Also change the structure of the two previous networks to cancel the underfitting
#### - In case of overfitting, explain why and change the structure of the two previous networks to cancel the overfitting
# "SpreadSheet Munging Strategies in Python - Pivot Tables - Complex Unpivoting"
> "Extract data from Complex Pivot tables in a spreadsheet"
- toc: true
- branch: master
- badges: true
- hide_binder_badge: True
- hide_colab_badge: True
- comments: true
- author: Samuel Oranyeli
- categories: [Spreadsheet, python, Pandas]
- hide: false
- search_exclude: true
## __Pivot Tables - Complex Unpivoting__
This is part of a series of blog posts about extracting data from spreadsheets using Python. It is based on the [book](https://nacnudus.github.io/spreadsheet-munging-strategies/index.html) written by [Duncan Garmonsway](https://twitter.com/nacnudus?lang=en), which was written primarily for R users. Links to the other posts are on the [homepage](https://samukweku.github.io/data-wrangling-blog/).
We've dealt with pivot tables in one of the [previous posts](https://samukweku.github.io/data-wrangling-blog/fastpages/jupyter/excel/spreadsheet/python/pandas/2020/05/10/Pivot-Tables-Simple-Unpivoting.html). Here, we take the complexity up a notch. Let's dive into the various scenarios.
### **Case 1 : Centre-aligned headers**

In this case, the headers are not aligned completely with the subjects or names. If the data is read into Pandas, columns B and C are set as the index of the dataframe, and a forward/backward fill is applied, "Humanities" could be wrongly assigned to "Music", or "Performance" to "Literature" ("Music" should be paired with "Performance", while "Humanities" should be paired with "Literature"). The same goes for the header rows: if rows 2 and 3 are read in as header rows and a forward/backward fill is applied, "Female" may be wrongly assigned to "Lenny", while "Male" could be wrongly assigned to "Olivia".

The solution is to get the coordinates for the horizontal and vertical borders, and use that to correctly pair the header rows and header columns. [Openpyxl](https://openpyxl.readthedocs.io/en/stable/tutorial.html) can get us the coordinates. Once we get the boundaries, and pair the header rows and columns, we'll create [MultiIndexes](https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html) for the rows and columns of the new dataframe. It'll be clearer with code.
```
import pandas as pd
import janitor
import numpy as np
from itertools import product
from openpyxl import load_workbook
filename = "Data_files/worked-examples.xlsx"
wb = load_workbook(filename)
ws = wb["pivot-centre-aligned"]
from collections import defaultdict
hor = set() #horizontal borders
ver = set() #vertical borders
text = []
numbers = defaultdict(list)
for row in ws.iter_rows(min_row=1,
max_row=ws.max_row,
min_col=1,
max_col=ws.max_column):
#get the row/col numbers and add to hor/ver
for cell in row:
#for this scenario, null values are not needed
if cell.value is None:
continue
if cell.border.bottom.style: #this gets us the boundaries for the header columns
hor.add(cell.row + 1) #+1 cos we need values within the boundaries
if cell.border.right.style: #boundaries for the header rows
ver.add(cell.column + 1)
#separate lists for texts and numbers
if cell.data_type == "s":
text.append((cell.row, cell.column, cell.value))
if cell.data_type == "n" :
numbers[cell.row].append(cell.value)
print(hor, ver)
from operator import itemgetter
#easy identifiers :
row = 0
col = 1
#extract gender
min_row = min(map(itemgetter(row),text))
gender = [entry for entry in text
if entry[row] == min_row]
#extract field
min_col = min(map(itemgetter(col), text))
fields = [entry for entry in text
if entry[col] == min_col]
#remove gender and field from text
rest = [entry for entry in text
if entry[row] != min_row
and entry[col] != min_col
]
#Get the sorted pairs for the boundaries
#and pair the texts within the boundaries
def pair(seq):
seq = sorted(seq)
return zip(seq, seq[1:])
#extra identifier
word = -1
#get header columns -
#this will be data within pair(hor)
from itertools import product
header_col = [(field[word], subject[word])
for field, subject, bound
in product(fields,rest,pair(hor))
if field[row] in range(*bound)
and subject[row] in range(*bound)
]
#get the header rows -
#this will be data within pair(ver)
header_row = [(sex[word], name[word])
for sex, name, bound
in product(gender,rest,pair(ver))
if sex[col] in range(*bound)
and name[col] in range(*bound)
]
header_row
header_col
#create multiIndexes
index = pd.MultiIndex.from_tuples(header_col, names = ['field', 'subject'])
columns = pd.MultiIndex.from_tuples(header_row, names = ['sex','name'])
#create dataframe, with the numbers list as the data
df = pd.DataFrame(numbers.values(), index = index, columns = columns)
df
#the hard work is done - reshaping is much easier, and just a few lines :
df = (df
.stack(['sex', 'name'])
.reset_index(name='scores')
.astype({'scores':'uint8'})
)
df.head()
```
### **Case 2: Repeated rows/columns of headers within the table**

**Observations** : <br>
The row header (Term1, Term2, Term3) is repeated in four locations; we only need one.<br>
The index columns are clearly delineated; the pairing of subjects and names is assured.
```
sheet = "pivot-repeated-headers"
df = (pd.read_excel(filename, sheet_name = sheet,header=None)
#remove completely empty rows and columns
.remove_empty()
.ffill()
#replace remaining nulls with specific values
.fillna({1:'subject', 2:'student'})
.set_index([1,2])
)
df
#cleanup some more into tidy form
df = (df.row_to_names(0, True)
.rename_axis(index = ['subject','student'],
columns = ['Term'])
.query('`Term 1` != "Term 1"')
.stack('Term')
.reset_index(name='scores')
)
df.head(10)
```
### **Case 3 : Headers amongst the data**

In this scenario, we have the subjects as row headers, mixed in with the data. Note that the Term 1, Term 2, Term 3 row is repeated.
```
df = (pd.read_excel(filename,
sheet_name = "pivot-header-within-data",
header = None)
.remove_empty()
#extract subjects
#where the last column is null , get the entry in column 2
.assign(temp = lambda x: np.where(x.iloc[:,-1].isna(), x[2], np.nan),
subject = lambda x: x.temp.ffill()
)
.drop('temp',axis=1)
#get rid of nulls in column 4
#gives the added advantage of
#removing the subjects rows from column 2,
#as they are redundant
.dropna(subset=[4])
.ffill()
#fill the remaining null in column 1 with a relevant word
.fillna({1:'student'})
.set_index([1,'subject'])
)
df
#final steps
df = (df
.row_to_names(0, True)
.rename_axis(index= ['student','subject'],
columns= ['Term'])
.query("`Term 1` != 'Term 1'")
.stack()
.reset_index(name= 'Scores')
)
df.head(10)
```
<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
<i>Licensed under the MIT License.</i>
# NRMS: Neural News Recommendation with Multi-Head Self-Attention
NRMS \[1\] is a neural news recommendation approach with multi-head self-attention. The core of NRMS is a news encoder and a user encoder. In the news encoder, multi-head self-attention is used to learn news representations from news titles by modeling the interactions between words. In the user encoder, we learn representations of users from their browsed news and use multi-head self-attention to capture the relatedness between the news. Besides, we apply additive attention to learn more informative news and user representations by selecting important words and news.
## Properties of NRMS:
- NRMS is a content-based neural news recommendation approach.
- It uses multi-head self-attention to learn news representations by modeling the interactions between words, and to learn user representations by capturing the relatedness between the news a user has browsed.
- NRMS uses additive attention to learn informative news and user representations by selecting important words and news.
## Data format:
For quicker training and evaluation, we sample a MINDdemo dataset of 5k users from the [MIND small dataset](https://msnews.github.io/). The MINDdemo dataset has the same file format as MINDsmall and MINDlarge. If you want to try experiments on MINDsmall and MINDlarge, please change the download source. Select the `MIND_type` parameter from ['large', 'small', 'demo'] to choose the dataset.
**MINDdemo_train** is used for training, and **MINDdemo_dev** is used for evaluation. Training data and evaluation data are composed of a news file and a behaviors file. You can find more detailed data description in [MIND repo](https://github.com/msnews/msnews.github.io/blob/master/assets/doc/introduction.md)
### news data
This file contains news information including news ID, category, subcategory, news title, news abstract, news URL, entities in the news title, and entities in the news abstract.
One simple example: <br>
`N46466 lifestyle lifestyleroyals The Brands Queen Elizabeth, Prince Charles, and Prince Philip Swear By Shop the notebooks, jackets, and more that the royals can't live without. https://www.msn.com/en-us/lifestyle/lifestyleroyals/the-brands-queen-elizabeth,-prince-charles,-and-prince-philip-swear-by/ss-AAGH0ET?ocid=chopendata [{"Label": "Prince Philip, Duke of Edinburgh", "Type": "P", "WikidataId": "Q80976", "Confidence": 1.0, "OccurrenceOffsets": [48], "SurfaceForms": ["Prince Philip"]}, {"Label": "Charles, Prince of Wales", "Type": "P", "WikidataId": "Q43274", "Confidence": 1.0, "OccurrenceOffsets": [28], "SurfaceForms": ["Prince Charles"]}, {"Label": "Elizabeth II", "Type": "P", "WikidataId": "Q9682", "Confidence": 0.97, "OccurrenceOffsets": [11], "SurfaceForms": ["Queen Elizabeth"]}] []`
<br>
In general, each line in data file represents information of one piece of news: <br>
`[News ID] [Category] [Subcategory] [News Title] [News Abstract] [News Url] [Entities in News Title] [Entities in News Abstract] ...`
<br>
We generate a word_dict file to transform words in news titles into word indexes, and an embedding matrix is initialized from pretrained GloVe embeddings.
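A hypothetical sketch of that word_dict step is shown below. The `build_word_dict` helper and its whitespace tokenization are illustrative assumptions; the repository's actual preprocessing may differ:

```python
def build_word_dict(titles):
    # map each distinct title word to an integer index; 0 is reserved for padding
    word_dict = {}
    for title in titles:
        for word in title.lower().split():
            word_dict.setdefault(word, len(word_dict) + 1)
    return word_dict

titles = ["The Brands Queen Elizabeth Swears By", "Queen Elizabeth At 90"]
print(build_word_dict(titles))
```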
### behaviors data
One simple example: <br>
`1 U82271 11/11/2019 3:28:58 PM N3130 N11621 N12917 N4574 N12140 N9748 N13390-0 N7180-0 N20785-0 N6937-0 N15776-0 N25810-0 N20820-0 N6885-0 N27294-0 N18835-0 N16945-0 N7410-0 N23967-0 N22679-0 N20532-0 N26651-0 N22078-0 N4098-0 N16473-0 N13841-0 N15660-0 N25787-0 N2315-0 N1615-0 N9087-0 N23880-0 N3600-0 N24479-0 N22882-0 N26308-0 N13594-0 N2220-0 N28356-0 N17083-0 N21415-0 N18671-0 N9440-0 N17759-0 N10861-0 N21830-0 N8064-0 N5675-0 N15037-0 N26154-0 N15368-1 N481-0 N3256-0 N20663-0 N23940-0 N7654-0 N10729-0 N7090-0 N23596-0 N15901-0 N16348-0 N13645-0 N8124-0 N20094-0 N27774-0 N23011-0 N14832-0 N15971-0 N27729-0 N2167-0 N11186-0 N18390-0 N21328-0 N10992-0 N20122-0 N1958-0 N2004-0 N26156-0 N17632-0 N26146-0 N17322-0 N18403-0 N17397-0 N18215-0 N14475-0 N9781-0 N17958-0 N3370-0 N1127-0 N15525-0 N12657-0 N10537-0 N18224-0`
<br>
In general, each line in data file represents one instance of an impression. The format is like: <br>
`[Impression ID] [User ID] [Impression Time] [User Click History] [Impression News]`
<br>
User Click History is the news the user clicked before the Impression Time. Impression News is the news displayed in an impression, whose format is:<br>
`[News ID 1]-[label1] ... [News ID n]-[labeln]`
<br>
Label represents whether the news is clicked by the user. All information of news in User Click History and Impression News can be found in news data file.
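The line format above can be parsed with a few lines of plain Python. This is an illustrative sketch (the `parse_behaviors_line` helper is not part of the repository), assuming the impression time always occupies three whitespace-separated tokens as in the example:

```python
def parse_behaviors_line(line):
    tokens = line.split()
    impression_id, user_id = tokens[0], tokens[1]
    impression_time = " ".join(tokens[2:5])  # e.g. "11/11/2019 3:28:58 PM"
    rest = tokens[5:]
    history = [t for t in rest if "-" not in t]  # news clicked before the impression
    impressions = [(nid, int(label))             # displayed news with click labels
                   for nid, label in (t.rsplit("-", 1) for t in rest if "-" in t)]
    return impression_id, user_id, impression_time, history, impressions

line = "1 U82271 11/11/2019 3:28:58 PM N3130 N11621 N13390-0 N15368-1"
print(parse_behaviors_line(line))
```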
## Global settings and imports
```
import sys
sys.path.append("../../")
import os
import numpy as np
import zipfile
from tqdm import tqdm
import papermill as pm
from tempfile import TemporaryDirectory
import tensorflow as tf
from reco_utils.recommender.deeprec.deeprec_utils import download_deeprec_resources
from reco_utils.recommender.newsrec.newsrec_utils import prepare_hparams
from reco_utils.recommender.newsrec.models.nrms import NRMSModel
from reco_utils.recommender.newsrec.io.mind_iterator import MINDIterator
from reco_utils.recommender.newsrec.newsrec_utils import get_mind_data_set
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
```
## Prepare parameters
```
epochs = 5
seed = 42
batch_size = 32
# Options: demo, small, large
MIND_type = 'demo'
```
## Download and load data
```
tmpdir = TemporaryDirectory()
data_path = tmpdir.name
train_news_file = os.path.join(data_path, 'train', r'news.tsv')
train_behaviors_file = os.path.join(data_path, 'train', r'behaviors.tsv')
valid_news_file = os.path.join(data_path, 'valid', r'news.tsv')
valid_behaviors_file = os.path.join(data_path, 'valid', r'behaviors.tsv')
wordEmb_file = os.path.join(data_path, "utils", "embedding.npy")
userDict_file = os.path.join(data_path, "utils", "uid2index.pkl")
wordDict_file = os.path.join(data_path, "utils", "word_dict.pkl")
yaml_file = os.path.join(data_path, "utils", r'nrms.yaml')
mind_url, mind_train_dataset, mind_dev_dataset, mind_utils = get_mind_data_set(MIND_type)
if not os.path.exists(train_news_file):
download_deeprec_resources(mind_url, os.path.join(data_path, 'train'), mind_train_dataset)
if not os.path.exists(valid_news_file):
download_deeprec_resources(mind_url, \
os.path.join(data_path, 'valid'), mind_dev_dataset)
if not os.path.exists(yaml_file):
download_deeprec_resources(r'https://recodatasets.blob.core.windows.net/newsrec/', \
os.path.join(data_path, 'utils'), mind_utils)
```
## Create hyper-parameters
```
hparams = prepare_hparams(yaml_file,
wordEmb_file=wordEmb_file,
wordDict_file=wordDict_file,
userDict_file=userDict_file,
batch_size=batch_size,
epochs=epochs,
show_step=10)
print(hparams)
```
## Train the NRMS model
```
iterator = MINDIterator
model = NRMSModel(hparams, iterator, seed=seed)
print(model.run_eval(valid_news_file, valid_behaviors_file))
%%time
model.fit(train_news_file, train_behaviors_file, valid_news_file, valid_behaviors_file)
%%time
res_syn = model.run_eval(valid_news_file, valid_behaviors_file)
print(res_syn)
pm.record("res_syn", res_syn)
```
## Save the model
```
model_path = os.path.join(data_path, "model")
os.makedirs(model_path, exist_ok=True)
model.model.save_weights(os.path.join(model_path, "nrms_ckpt"))
```
## Output Prediction File
This code segment is used to generate the prediction.zip file, which is in the same format as in the [MIND Competition Submission Tutorial](https://competitions.codalab.org/competitions/24122#learn_the_details-submission-guidelines).
Please change the `MIND_type` parameter to `large` if you want to submit your prediction to [MIND Competition](https://msnews.github.io/competition.html).
```
group_impr_indexes, group_labels, group_preds = model.run_fast_eval(valid_news_file, valid_behaviors_file)
with open(os.path.join(data_path, 'prediction.txt'), 'w') as f:
for impr_index, preds in tqdm(zip(group_impr_indexes, group_preds)):
impr_index += 1
pred_rank = (np.argsort(np.argsort(preds)[::-1]) + 1).tolist()
pred_rank = '[' + ','.join([str(i) for i in pred_rank]) + ']'
f.write(' '.join([str(impr_index), pred_rank])+ '\n')
f = zipfile.ZipFile(os.path.join(data_path, 'prediction.zip'), 'w', zipfile.ZIP_DEFLATED)
f.write(os.path.join(data_path, 'prediction.txt'), arcname='prediction.txt')
f.close()
```
## Reference
\[1\] Wu et al. "Neural News Recommendation with Multi-Head Self-Attention." in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)<br>
\[2\] Wu, Fangzhao, et al. "MIND: A Large-scale Dataset for News Recommendation" Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. https://msnews.github.io/competition.html <br>
\[3\] GloVe: Global Vectors for Word Representation. https://nlp.stanford.edu/projects/glove/
## Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Training modern ConvNets on huge datasets like ImageNet takes weeks on multiple GPUs.
> Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune.
We do transfer learning when we use a pre-trained network on images not in the training set.
We'll use transfer learning to train a network that can classify cats and dogs photos.
In this notebook, you'll be using a pre-trained model from the [ImageNet dataset](http://www.image-net.org/) as a feature extractor. Below is a diagram showing the architecture of the model we'll be using. It has a series of convolutional and maxpooling layers, and some fully-connected layers at the end that classify the images (for us, cats and dogs).
<img src="data/feature_extractor.jpeg" width=700px>
The idea is to keep all the convolutional layers, but **replace the final fully-connected layer** with our own classifier. This way we can use VGGNet as a _fixed feature extractor_ for our images then easily train a simple classifier on top of that.
* Use all but the last fully-connected layer as a fixed feature extractor.
* Define a new, final classification layer and apply it to a task of our choice!
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
from torch.autograd import Variable
import torch.nn.functional as F
from torchvision import datasets, transforms, models
```
Most of these pretrained models require a 224x224 image as input, so we need to resize the images.
```
data_dir = 'dogs_vs_cats'
# The model takes 224x224 images as input, so we resize all of them
data_transform = transforms.Compose([transforms.RandomResizedCrop(224),
transforms.ToTensor()])
train_data = datasets.ImageFolder(data_dir + '/train', transform=data_transform)
test_data = datasets.ImageFolder(data_dir + '/test', transform=data_transform)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=32)
# Load a pretrained network (ResNet50 assumed here: its final `fc` layer takes 2048 features)
model = models.resnet50(pretrained=True)
# Freeze the feature-extractor parameters so only the new classifier is trained
for param in model.parameters():
    param.requires_grad = False
# The model has two parts: the features, and the classifier.
classifier = nn.Sequential(
    nn.Linear(2048, 512),
    nn.ReLU(),
    nn.Dropout(p=0.2),
    nn.Linear(512, 2),
    nn.LogSoftmax(dim=1))
# replace the classifier with our own
model.fc = classifier
# use negative log likelihood loss
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.fc.parameters(), lr=0.003)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
epochs = 1
steps = 0
running_loss = 0
print_every = 5
for epoch in range(epochs):
for images, labels in train_loader:
steps += 1
images, labels = images.to(device), labels.to(device)
# clear gradients
optimizer.zero_grad()
logps = model(images)
loss = criterion(logps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
# after training loop, test our network's accuracy and loss
if steps % print_every == 0:
model.eval() # turn model into evaluation mode to make predictions
test_loss = 0
accuracy = 0
for images, labels in test_loader:
images, labels = images.to(device), labels.to(device)
logps = model(images)
loss = criterion(logps, labels)
test_loss += loss.item()
# accuracy
ps = torch.exp(logps)
top_ps, top_class = ps.topk(1, dim=1)
equality = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equality.type(torch.FloatTensor)).item()
print(f'Epoch {epoch+1}/{epochs}.. ')
print(f'Train loss: {running_loss/print_every}')
print(f'Test loss: {test_loss/len(test_loader)}')
print(f'Test accuracy: {accuracy}/{len(test_loader)}')
running_loss=0
model.train()
```
```
GITHUB_USERNAME = "$GITHUB_USERNAME$"
GITHUB_REF = "$GITHUB_REF$"
NOTEBOOK_TYPE = "$NOTEBOOK_TYPE$"
PYTHON_VERSION = "$PYTHON_VERSION$"
IPYTHON_VERSION = "$IPYTHON_VERSION$"
import warnings
from pathlib import Path
import requests
warnings.filterwarnings('error', module='davos')
if NOTEBOOK_TYPE == 'colab':
# utils module doesn't exist on colab VM, so get current version from GitHub
utils_module = Path('utils.py').resolve()
response = requests.get(f'https://raw.githubusercontent.com/{GITHUB_USERNAME}/davos/{GITHUB_REF}/tests/utils.py')
utils_module.write_text(response.text)
# also need to install davos locally
from utils import install_davos
install_davos(source='github', ref=GITHUB_REF, fork=GITHUB_USERNAME)
import inspect
import json
import subprocess
import sys
import davos
import IPython
import pkg_resources
from utils import (
is_imported,
is_installed,
mark,
raises,
run_tests,
TestingEnvironmentError
)
IPYTHON_SHELL = get_ipython()
```
# tests for general testing environment & package initialization
tests the GitHub runner itself, as well as the contents of `__init__.py` & `implementations.__init__.py`
```
def test_import_davos():
global davos
import davos
assert is_imported('davos')
def test_expected_python_version():
installed_version = '.'.join(map(str, sys.version_info[:2]))
expected_version = PYTHON_VERSION
if installed_version != expected_version:
raise TestingEnvironmentError(
f"Test environment has Python {sys.version.split()[0]}, expected "
f"{PYTHON_VERSION}"
)
@mark.jupyter
def test_notebook_using_kernel_python():
if not sys.executable.endswith('envs/kernel-env/bin/python'):
raise TestingEnvironmentError(
"Notebook does not appear to be using the correct python "
"executable. Expected a path ending in "
f"'envs/kernel-env/bin/python', found {sys.executable}"
)
@mark.skipif(IPYTHON_VERSION == 'latest', reason="runs when IPYTHON_VERSION != 'latest'")
def test_expected_ipython_version():
try:
pkg_resources.get_distribution(f"IPython=={IPYTHON_VERSION}")
except pkg_resources.VersionConflict as e:
raise TestingEnvironmentError(
f"Test environment has IPython=={IPython.__version__}, expected "
f"{IPYTHON_VERSION}") from e
@mark.skipif(IPYTHON_VERSION != 'latest', reason="runs when IPYTHON_VERSION == 'latest'")
def test_latest_ipython_version():
pip_exe = davos.config.pip_executable
outdated_pkgs = subprocess.check_output(
[pip_exe, 'list', '--outdated', '--format', 'json'], encoding='utf-8'
)
outdated_pkgs_json = json.loads(outdated_pkgs)
for pkg in outdated_pkgs_json:
if pkg['name'] == 'ipython':
raise TestingEnvironmentError(
f"Test environment has IPython=={pkg['version']}, expected "
f"latest version (IPython=={pkg['latest_version']})"
)
def test_scipy_installed():
"""used as an example package for some tests"""
assert is_installed('scipy')
def test_fastdtw_installed():
"""used as an example package for some tests"""
assert is_installed('fastdtw==0.3.4')
def test_tqdm_installed():
"""used as an example package for some tests"""
assert is_installed('tqdm')
import tqdm
assert tqdm.__version__ != '4.45.0'
def test_smuggle_in_namespace():
assert 'smuggle' in globals()
assert 'smuggle' in IPYTHON_SHELL.user_ns
assert globals()['smuggle'] is IPYTHON_SHELL.user_ns['smuggle']
def test_activated_on_import():
assert davos.is_active()
def test_deactivate_reactivate_toplevel():
assert davos.is_active()
davos.deactivate()
assert not davos.is_active()
with raises(NameError, match="name 'smuggle' is not defined"):
smuggle ast
davos.activate()
assert davos.is_active()
def test_all_configurable_fields_settable_via_configure():
all_properties = []
for name, val in davos.core.config.DavosConfig.__dict__.items():
if isinstance(val, property):
all_properties.append(name)
read_only_fields = {
'conda_avail',
'conda_envs_dirs',
'environment',
'ipython_shell',
'smuggled'
}
configurable_fields = set(all_properties) - read_only_fields
configure_func_kwargs = set(inspect.signature(davos.configure).parameters)
assert not configurable_fields.symmetric_difference(
configure_func_kwargs
), (
f"configurable fields: {configurable_fields}\ndavos.configure kwargs: "
f"{configure_func_kwargs}"
)
def test_configure_resets_fields_on_fail():
active_before = davos.config.active
confirm_install_before = davos.config.confirm_install
with raises(davos.core.exceptions.DavosConfigError):
davos.configure(
active=False,
confirm_install=True,
suppress_stdout='BAD VALUE'
)
assert davos.config.active is active_before
assert davos.config.confirm_install is confirm_install_before
run_tests()
```
# Evaluate your machine-learning model
In the previous notebook, we checked how decision trees and ensembles of trees work. However, we did not explicitly present how to evaluate such a model once it is created. In this notebook, we aim at presenting how you should evaluate your model. We will first present the benefits of using cross-validation, then have a quick look at the different strategies as well as the metrics that should be used in supervised learning.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
```
## Benefit of cross-validation
### Load our dataset
To illustrate our discussion, we will use the California housing dataset. This dataset is a regression problem in which we want to estimate the median housing value given housing information.
```
from sklearn.datasets import fetch_california_housing
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
y *= 100
X.head()
y.head()
```
### Empirical error vs. generalization error
Let's start by creating a regressor which will be a decision tree.
```
from sklearn.tree import DecisionTreeRegressor
regressor = DecisionTreeRegressor()
```
Let's first train our regressor on the full dataset.
```
regressor.fit(X, y)
```
Now that our regressor is trained, we can check its performance. For this purpose, we will use the mean absolute error, which gives us an error in the native unit of the target, i.e. k$.
```
from sklearn.metrics import mean_absolute_error
y_pred = regressor.predict(X)
score = mean_absolute_error(y_pred, y)
print(f"On average, our regressor makes an error of {score:.2f} k$")
```
<div class="alert alert-success">
<b>QUESTION</b>:
<ul>
<li>Are you surprised to get such performance with this regressor?</li>
<li>Do you expect this regressor to perform this way in production?</li>
</ul>
</div>
Surprisingly or not, our regressor is perfect. However, we are currently unable to confirm whether our model would work on future unseen data. We can simulate this stage by splitting our dataset into two sets and keeping one out of the learning process.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, random_state=0
)
regressor.fit(X_train, y_train)
y_pred = regressor.predict(X_train)
score = mean_absolute_error(y_pred, y_train)
print(f"On average, our regressor makes an error of {score:.2f} k$")
```
When evaluating our regressor on the training data, we still get a perfect model. This error is called the **empirical error**. Let's see if we get as lucky on the left-out dataset.
```
y_pred = regressor.predict(X_test)
score = mean_absolute_error(y_pred, y_test)
print(f"On average, our regressor makes an error of {score:.2f} k$")
```
... and we do not. By evaluating our model on a left-out set, we check that it is able to work on unseen data, and thus that it is able to generalize. The error computed this way is called the **generalization error**. This is the error that any data scientist hopes to decrease when creating a model.
<div class="alert alert-success">
<b>QUESTION</b>:
<ul>
<li>Are we sure that our algorithm is robust?</li>
</ul>
</div>
<div class="alert alert-info">
<h3>Generalization error:</h3>
The aim of model training is to select the model $f$ out of a class of models $\mathcal F$ that minimizes a measure of the risk. The risk is measured with a loss $l$ between the true value $y$ associated to $x$ and the prediction $f(x)$, and thus we want to find:
$$
f^\star = \arg\min_{f \in \mathcal F}\mathbb E_{(x, y) \sim \pi}[l(f(x), y)]
$$
The issue is that we cannot compute the expectation $\mathbb E_{(x, y) \sim \pi}$ because we don't know the input distribution $\pi$. Therefore, we approximate it with a set of examples $\{(x_1, y_1), \dots, (x_N, y_N)\}$ drawn <i>i.i.d.</i> from $\pi$ and use <b>Empirical Risk Minimization</b> (ERM):
$$
\widehat{f} = \arg\min_{f \in \mathcal F}\frac1N\sum_{i=1}^N l(f(x_i), y_i)
$$
If the samples are drawn independently, we know that the error has a variance of $\mathcal O\left(\frac{1}{\sqrt{N}}\right)$. Thus there is a gap between the minimizer of the risk and the minimizer of the empirical risk. If we optimize too much for the ERM, the gap might be large and the selected model will perform badly on unseen data. This is what is called <b>over-fitting</b>. To control this, one needs a measure of the risk independent from the one used to select the model: the empirical risk on the test set!
</div>
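The $\mathcal O\left(\frac{1}{\sqrt{N}}\right)$ behaviour mentioned in the box above is easy to verify numerically. This is an illustrative NumPy sketch (the exponential loss distribution is an arbitrary choice) showing that the spread of the empirical risk roughly halves when the sample size quadruples:

```python
import numpy as np

rng = np.random.default_rng(0)

def std_of_empirical_risk(n_samples, n_repeats=2000):
    # draw i.i.d. losses and measure the spread of their empirical mean
    losses = rng.exponential(scale=1.0, size=(n_repeats, n_samples))
    return losses.mean(axis=1).std()

for n in (100, 400, 1600):
    print(n, std_of_empirical_risk(n))  # std roughly halves each time n quadruples
```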
### A single error is not enough, what about the variance?
While we were able to estimate the generalization error, we still know nothing about the variance of our model, and thus whether it is robust or not. This is where the framework of cross-validation comes in. Indeed, we can repeat our experiment, compute the generalization error several times, and get an intuition about the stability of our model.
The simplest approach we can think of is to shuffle our data, split it into two sets as we previously did, and repeat the experiment several times. In scikit-learn, using the function `cross_validate` with the cross-validation strategy `ShuffleSplit` allows us to make such an evaluation.
```
%%time
from sklearn.model_selection import cross_validate
from sklearn.model_selection import ShuffleSplit
cv = ShuffleSplit(n_splits=30, test_size=0.2)
result_cv = cross_validate(
regressor, X, y, cv=cv, scoring="neg_mean_absolute_error",
n_jobs=2
)
result_cv
```
Once our cross-validation is done, we see that we get a Python dictionary with information about the cross-validation. Let's use a pandas dataframe to easily explore the content of the results.
```
result_cv = pd.DataFrame(result_cv)
result_cv
```
We got information about the training and testing time, which we can discard for the moment. However, we also got the `test_score`, which contains our generalization scores. We have 30 numbers because we repeated the same experiment 30 times by shuffling the data. Now, we can plot the distribution of these results.
```
sns.displot(-result_cv["test_score"], kde=True, bins=20)
_ = plt.xlabel("Mean absolute error (k$)")
```
We can observe that the generalization error is centered around 45.5 k\\$ and ranges from 44 k\\$ to 47 k\\$.
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Can we conclude anything about the performance of our regressor?</li>
<li>Can we conclude anything about our cross-validation analysis?</li>
</ul>
<b>Hint: </b>Plot the distribution of the target.
</div>
To know whether we can rely on our cross-validation results, we should put them into perspective with the problem that we are trying to solve. To do so, we can check the distribution of the original target (which we should have done beforehand).
```
# %load solutions/solution_1.py
```
We see that the median house value ranges from 50 k\\$ up to 500 k\\$. Thus, an error spread of 3 k\\$ means that our cross-validation results can be trusted and do not suffer from excessive variance. Regarding the performance of the model itself, making an error of 45 k\\$ would be problematic, especially if it happens for housing with low value. However, we also see a limitation of the metric that we are using: making an error of 45 k\\$ for a target at 50 k\\$ and for a target at 500 k\\$ should not have the same impact. We should instead use the mean absolute percentage error, which gives a relative error.
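The point can be sketched with two hypothetical house values that suffer the same absolute error:

```
import numpy as np

# Hypothetical targets and predictions, both off by 45 k$
y_true = np.array([50.0, 500.0])   # k$
y_pred = y_true + 45.0

mae = np.mean(np.abs(y_pred - y_true))
mape = np.mean(np.abs(y_pred - y_true) / np.abs(y_true))

print(f"MAE:  {mae:.1f} k$")   # 45.0 for both houses
print(f"MAPE: {mape:.2%}")     # averages a 90% and a 9% relative error
```

Recent scikit-learn versions also provide `sklearn.metrics.mean_absolute_percentage_error` and the scoring string `"neg_mean_absolute_percentage_error"`.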
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Check the distribution of the training errors similarly to what we did for the generalization errors.</li>
<li>What can you conclude?</li>
</ul>
</div>
What about the empirical error?
```
# %load solutions/solution_2.py
# %load solutions/solution_3.py
```
### Effect of the sample size on the variance analysis
We are quite lucky: our dataset contains many samples.
```
y.size
```
Let's run an experiment: reduce the number of samples and repeat the previous analysis.
```
def make_cv_analysis(regressor, X, y):
cv = ShuffleSplit(n_splits=10, test_size=0.2)
result_cv = pd.DataFrame(
cross_validate(
regressor, X, y, cv=cv,
scoring="neg_mean_absolute_error",
n_jobs=-1
)
)
return y.size, (result_cv["test_score"] * -1).values
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Make an experiment by subsampling the dataset and plotting the distribution of the generalization errors.</li>
<li>What can you conclude?</li>
</ul>
</div>
```
# %load solutions/solution_4.py
sample_sizes = [100, 500, 1000, 5000, 10000, 15000, y.size]
# TODO
```
We see that with a low number of samples, the variance is much larger. Indeed, with few samples we cannot even trust our cross-validation, and therefore cannot conclude anything about our regressor. It is thus really important to experiment with a large enough sample size to be confident about the conclusions drawn.
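The underlying statistics can be sketched without the dataset: the spread of an estimated mean error shrinks roughly like $1/\sqrt{n}$ (the exponential error distribution below is purely hypothetical):

```
import numpy as np

rng = np.random.default_rng(0)

# Per-sample absolute errors drawn from a fixed distribution; the empirical
# mean error over n samples fluctuates with std ~ sigma / sqrt(n)
def spread_of_mean_error(n, n_repeats=1000):
    errors = rng.exponential(scale=45.0, size=(n_repeats, n))  # hypothetical k$ errors
    return errors.mean(axis=1).std()

for n in (100, 1000, 10000):
    print(f"n={n:6d}  std of estimated mean error: {spread_of_mean_error(n):.2f} k$")
```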
### Comparing results with baseline and chance level
Previously, we put the generalization error into perspective with the target distribution. A good practice is also to compare the generalization error with a dummy baseline and with the chance level. In regression, we can use the `DummyRegressor`, which predicts the mean target without using the data. The chance level can be determined by permuting the labels and checking the difference in results.
```
from sklearn.dummy import DummyRegressor
dummy = DummyRegressor()
result_dummy = cross_validate(
dummy, X, y, cv=cv, scoring="neg_mean_absolute_error",
n_jobs=-1
)
from sklearn.model_selection import permutation_test_score
score, permutation_score, pvalue = permutation_test_score(
regressor, X, y, cv=cv, scoring="neg_mean_absolute_error",
n_jobs=-1, n_permutations=10,
)
```
We plot the generalization errors for each of the experiments. We see that even though our regressor does not perform well, it is far above chance level and above a regressor that would always predict the mean target.
```
final_result = pd.concat(
[
result_cv["test_score"] * -1,
pd.Series(result_dummy["test_score"]) * -1,
pd.Series(permutation_score) * -1,
], axis=1
).rename(columns={
"test_score": "Cross-validation score",
0: "Dummy score",
1: "Permuted score",
})
sns.displot(final_result, kind="kde")
_ = plt.xlabel("Mean absolute error (k$)")
```
## Choice of cross-validation
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Is `ShuffleSplit` cross-validation always the best cross-validation to use?</li>
</ul>
</div>
Let's take the example of some financial quotes. These are the values of company stocks over time.
```
import pandas as pd
import os
from urllib.request import urlretrieve
symbols = {'TOT': 'Total', 'XOM': 'Exxon', 'CVX': 'Chevron',
'COP': 'ConocoPhillips', 'VLO': 'Valero Energy'}
quotes = pd.DataFrame()
for symbol, name in symbols.items():
url = ('https://raw.githubusercontent.com/scikit-learn/examples-data/'
'master/financial-data/{}.csv')
filename = "data/{}.csv".format(symbol)
if not os.path.exists(filename):
urlretrieve(url.format(symbol), filename)
this_quote = pd.read_csv(filename)
quotes[name] = this_quote['open']
quotes.index = pd.to_datetime(this_quote['date'])
_, ax = plt.subplots(figsize=(10, 6))
_ = quotes.plot(ax=ax)
from sklearn.ensemble import GradientBoostingRegressor
X, y = quotes.drop(columns=["Chevron"]), quotes["Chevron"]
regressor = GradientBoostingRegressor()
cv = ShuffleSplit(n_splits=30)
result_cv = pd.DataFrame(
cross_validate(
regressor, X, y, cv=cv,
)
)
print(f'Mean R2: {result_cv["test_score"].mean():.2f}')
```
<div class="alert alert-success">
<b>QUESTION</b>:
It seems that we have the perfect regressor. Is this normal?
</div>
```
# %load solutions/solution_5.py
```
Let's check the different types of cross-validation that are available in scikit-learn:
https://scikit-learn.org/stable/auto_examples/model_selection/plot_cv_indices.html#sphx-glr-auto-examples-model-selection-plot-cv-indices-py
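With time-ordered data such as these quotes, `ShuffleSplit` leaks future samples into the training folds. A minimal sketch of a time-aware alternative, `TimeSeriesSplit`, which always trains on the past and tests on the future:

```
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(20).reshape(-1, 1)  # 20 ordered "time" samples

# Each split trains on an initial segment and tests on the segment after it
tscv = TimeSeriesSplit(n_splits=4)
for train_idx, test_idx in tscv.split(X):
    print(f"train: {train_idx.min()}..{train_idx.max()}  "
          f"test: {test_idx.min()}..{test_idx.max()}")
```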
---
```
%load_ext autoreload
%autoreload 2
import os
os.chdir("..")
from deepsvg.svglib.svg import SVG
from deepsvg import utils
from deepsvg.difflib.tensor import SVGTensor
from deepsvg.svglib.utils import to_gif
from deepsvg.svglib.geom import Bbox
from deepsvg.svgtensor_dataset import SVGTensorDataset, load_dataset
from deepsvg.utils.utils import batchify, linear
import torch
import numpy as np
```
# DeepSVG latent space operations
```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
```
Load the pretrained model and dataset
```
pretrained_path = "./pretrained/hierarchical_ordered.pth.tar"
from configs.deepsvg.hierarchical_ordered import Config
cfg = Config()
model = cfg.make_model().to(device)
utils.load_model(pretrained_path, model)
model.eval();
dataset = load_dataset(cfg)
def load_svg(filename):
svg = SVG.load_svg(filename)
svg = dataset.simplify(svg)
svg = dataset.preprocess(svg)
return svg
def easein_easeout(t):
return t*t / (2. * (t*t - t) + 1.);
def interpolate(z1, z2, n=25, filename=None, ease=True, do_display=True):
alphas = torch.linspace(0., 1., n)
if ease:
alphas = easein_easeout(alphas)
z_list = [(1-a) * z1 + a * z2 for a in alphas]
img_list = [decode(z, do_display=False, return_png=True) for z in z_list]
to_gif(img_list + img_list[::-1], file_path=filename, frame_duration=1/12)
def encode(data):
model_args = batchify((data[key] for key in cfg.model_args), device)
with torch.no_grad():
z = model(*model_args, encode_mode=True)
return z
def encode_icon(idx):
data = dataset.get(id=idx, random_aug=False)
return encode(data)
def encode_svg(svg):
data = dataset.get(svg=svg)
return encode(data)
def decode(z, do_display=True, return_svg=False, return_png=False):
commands_y, args_y = model.greedy_sample(z=z)
tensor_pred = SVGTensor.from_cmd_args(commands_y[0].cpu(), args_y[0].cpu())
svg_path_sample = SVG.from_tensor(tensor_pred.data, viewbox=Bbox(256), allow_empty=True).normalize().split_paths().set_color("random")
if return_svg:
return svg_path_sample
return svg_path_sample.draw(do_display=do_display, return_png=return_png)
def interpolate_icons(idx1=None, idx2=None, n=25, *args, **kwargs):
z1, z2 = encode_icon(idx1), encode_icon(idx2)
interpolate(z1, z2, n=n, *args, **kwargs)
```
# "Addition" operation
```
z_list = []
for i in range(500):
tensors, fillings = dataset._load_tensor(dataset.random_id())
t_sep = tensors[0]
t_sep_rm, fillings_rm = t_sep[:-1], fillings[:-1]
if len(t_sep) >= 2:
z1 = encode(dataset.get_data(t_sep, fillings))
z2 = encode(dataset.get_data(t_sep_rm, fillings_rm))
z_list.append(z2 - z1)
z_rmv = torch.cat(z_list).mean(dim=0, keepdims=True)
```
`z_rmv` now represents the latent direction that removes the last path of an SVG icon.
```
z_baloon = encode_icon("548")
```
Now, what happens if one subtracts `z_rmv` from the representation of an SVG icon? 🤔
```
interpolate(z_baloon - 2 * z_rmv, z_baloon + 2 * z_rmv)
z_bubbles = encode_icon("76279")
interpolate(z_bubbles - 3 * z_rmv, z_bubbles + 3 * z_rmv)
```
It automagically adds new paths!!
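The `interpolate` helper above reduces to simple vector arithmetic in latent space. Here is a self-contained sketch of the same blending and easing on plain NumPy vectors (the 4-dimensional codes are hypothetical stand-ins for encoded icons):

```
import numpy as np

def easein_easeout_np(t):
    # Same easing curve as above, applied to a numpy array of alphas
    return t * t / (2.0 * (t * t - t) + 1.0)

def interpolate_latents(z1, z2, n=5, ease=True):
    alphas = np.linspace(0.0, 1.0, n)
    if ease:
        alphas = easein_easeout_np(alphas)
    return [(1 - a) * z1 + a * z2 for a in alphas]

# Two hypothetical latent codes standing in for encoded icons
z1 = np.array([0.0, 1.0, 0.0, -1.0])
z2 = np.array([1.0, 0.0, 1.0, 1.0])
path = interpolate_latents(z1, z2)
print(path[0], path[-1])  # endpoints are exactly z1 and z2
```

The easing maps 0 to 0 and 1 to 1, so the animation starts and ends on the two encoded icons while slowing down near both endpoints.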
# "Squarify" operation
```
svg1 = load_svg("docs/frames/circles.svg")
z1 = encode_svg(svg1)
svg2 = load_svg("docs/frames/squares.svg")
z2 = encode_svg(svg2)
z_squarify = z2 - z1
```
`z_squarify` is the latent direction that transforms round shapes to square shapes.
```
z_drill = encode_icon("29775") # Drill
interpolate(z_drill - z_squarify/2, z_drill + z_squarify/2, n=25)
```
Quite surprisingly, adding or removing this vector to SVG icons makes them look more square/round!
---
# Time-Series Forecasting FBProphet
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import missingno as msno
%matplotlib inline
```
# STEP #1: IMPORTING DATA
```
# !pip install fbprophet
# from fbprophet import Prophet
%%time
chicago_df_1 = pd.read_csv('Chicago_Crimes_2005_to_2007.csv', error_bad_lines=False)
chicago_df_2 = pd.read_csv('Chicago_Crimes_2008_to_2011.csv', error_bad_lines=False)
chicago_df_3 = pd.read_csv('Chicago_Crimes_2012_to_2017.csv', error_bad_lines=False)
chicago_df_1.shape
chicago_df_2.shape
chicago_df_3.shape
df = pd.concat(objs=[chicago_df_1, chicago_df_2, chicago_df_3], axis=0)
df.shape
1872343 + 2688710 + 1456714
```
# STEP #2: EXPLORING THE DATASET
```
df.head()
%%time
# plt.figure(figsize=(10,10))
# sns.heatmap(data=df.isna(), cbar = False);
%%time
# ID Case Number Date Block IUCR Primary Type Description Location Description Arrest Domestic Beat District Ward Community Area FBI Code X Coordinate Y Coordinate Year Updated On Latitude Longitude Location
df = df.drop(labels=['Unnamed: 0', 'Case Number', 'Case Number', 'IUCR', 'X Coordinate', 'Y Coordinate','Updated On','Year', 'FBI Code', 'Beat','Ward','Community Area', 'Location', 'District', 'Latitude' , 'Longitude'], axis=1)
df["Date"] = pd.to_datetime(df["Date"], format='%m/%d/%Y %I:%M:%S %p')
df["Primary Type"].value_counts().iloc[:15]
df["Primary Type"].value_counts().iloc[:15].index
plt.figure(figsize=(10, 10))
sns.countplot(data=df, y="Primary Type");
plt.figure(figsize=(10, 20))
sns.countplot(data=df, y="Location Description", color="lightblue", edgecolor="black");
# df.index = pd.DatetimeIndex(data=df["Date"])
df.index = df["Date"]
df.index
# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.resample.html
# Resample is a Convenience method for frequency conversion and resampling of time series.
plt.plot(df.resample(rule='Y').size())
plt.title('Crimes Count Per Year')
plt.xlabel('Years')
plt.ylabel('Number of Crimes');
plt.plot(df.resample('M').size())
plt.title('Crimes Count Per Month')
plt.xlabel('Months')
plt.ylabel('Number of Crimes')
```
# STEP #3: PREPARING THE DATA
```
df_prophet = df.resample('M').size().reset_index()
df_prophet.columns = ['Date', 'Crime Count']
df_prophet.head()
# Rename the columns names
df_prophet.rename(mapper={"Date": "ds", "Crime Count": "y"}, axis=1)
#
df_prophet.rename(columns={"Date": "ds", "Crime Count": "y"})
#
df_prophet.columns = ["ds", "y"]
df_prophet
for row in df.to_dict(orient="records"):
print(row)
break
```
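The `resample(...).size()` idiom used above, counting rows per calendar period, can be sketched on synthetic timestamps (the dates below are made up):

```
import numpy as np
import pandas as pd

# Hypothetical event timestamps spread over three months
rng = np.random.default_rng(0)
dates = pd.to_datetime("2017-01-01") + pd.to_timedelta(
    rng.integers(0, 90, size=200), unit="D")
events = pd.DataFrame(index=pd.DatetimeIndex(dates).sort_values())

# Count rows per calendar month, exactly as done above with 'M'
monthly_counts = events.resample("M").size()
print(monthly_counts)
```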
# STEP #4: MAKE PREDICTIONS
```
from fbprophet import Prophet
model = Prophet()
model.fit(df_prophet)
# Forecasting into the future
future = model.make_future_dataframe(periods=730)
forecast = model.predict(future)
figure = model.plot(forecast, xlabel='Date', ylabel='Crime Rate')
figure = model.plot_components(forecast)
```
---
```
%load_ext autoreload
%autoreload 2
import matplotlib.pyplot as plt
import numpy as np
from ctapipe_io_lst import LSTEventSource
from ctapipe.image.extractor import LocalPeakWindowSum
from ctapipe.instrument import CameraGeometry
from ctapipe.visualization import CameraDisplay
from traitlets.config.loader import Config
from lstchain.calib.camera.r0 import LSTR0Corrections
from lstchain.calib.camera.time_correction_calculate import TimeCorrectionCalculate
from lstchain.calib.camera.pulse_time_correction import PulseTimeCorrection
plt.rcParams['font.size'] = 25
```
# Create coefficients to time correction and save to h5py file using camera calibration data
```
%%time
reader = LSTEventSource(
input_url="/media/pawel1/ADATA HD330/20191124/LST-1.1.Run01625.0000.fits.fz",
max_events=20000)
charge_config = Config({
"LocalPeakWindowSum":{
"window_shift":4,
"window_width":11
}
})
# Here you have to give path, where you want save calibration file
timeCorr = TimeCorrectionCalculate(calib_file_path='time_calib_run1625_new.h5',
config=charge_config,)
config = Config({
"LSTR0Corrections": {
"pedestal_path": None, # if baseline correction was done in EVB
"tel_id": 1,
"r1_sample_start":2,
"r1_sample_end":38
}
})
lst_r0 = LSTR0Corrections(config=config)
for i, ev in enumerate(reader):
if ev.r0.event_id%5000 == 0:
print(ev.r0.event_id)
lst_r0.calibrate(ev) # Cut in signal to avoid cosmic events
if ev.r0.tel[1].trigger_type == 1 and np.mean(ev.r1.tel[1].waveform[:, :, :]) > 100:
timeCorr.calibrate_pulse_time(ev)
timeCorr.finalize()
```
# Apply time correction to camera calibration data
```
reader = LSTEventSource(
input_url="/media/pawel1/ADATA HD330/20191124/LST-1.1.Run01625.0001.fits.fz",
max_events=15)
pulse_corr = PulseTimeCorrection(calib_file_path='time_calib_run1625_new.h5')
config = Config({
"LSTR0Corrections": {
"pedestal_path": None, # if baseline correction was done in EVB
"tel_id": 1,
"r1_sample_start":2,
"r1_sample_end":38
}
})
lst_r0 = LSTR0Corrections(config=config)
extractor = LocalPeakWindowSum(window_width=11, window_shift=4)
gain = 0
for i, ev in enumerate(reader):
if ev.r0.event_id%50 == 0:
print(ev.r0.event_id)
lst_r0.calibrate(ev) # Cut in signal to avoid cosmic events
if ev.r0.tel[1].trigger_type == 1 and np.mean(ev.r1.tel[1].waveform[:, :, :]) > 100:
charge, pulse = extractor(ev.r1.tel[1].waveform[:, :, :])
pulse_corr_array = pulse_corr.get_corr_pulse(ev, pulse)
plt.figure(figsize=(10, 5))
plt.hist(pulse[gain,:], bins=70, range=(15, 35),
histtype='step', lw=2.5)
plt.hist(pulse_corr_array[gain,:], bins=70, range=(-8, 12),
histtype='step', lw=2.5, label="after corr")
plt.xlabel("Pulse time")
plt.ylabel("Number of pixels")
plt.legend()
plt.grid(True)
plt.show()
print("std before correction {:.2f}".format(np.std(pulse)))
print("std after correction {:.2f}".format(np.std(pulse_corr_array)))
```
# Apply time correction to cosmic events
```
input_file_1 = "/media/pawel1/ADATA HD330/20191124/LST-1.1.Run01627.0000.fits.fz"
reader = LSTEventSource(input_url=input_file_1, max_events=15)
config_corr = Config({
"LSTR0Corrections": {
"pedestal_path": None, # For run with baseline correction online
"offset": 400,
"tel_id": 1,
"r1_sample_start":2,
"r1_sample_end":38
}
})
tel_id = 1
lst_r0 = LSTR0Corrections(config=config_corr)
extractor = LocalPeakWindowSum(window_width=11, window_shift=4)
gain = 0
pulse_corr = PulseTimeCorrection(calib_file_path='time_calib_run1625.h5')
for i, ev in enumerate(reader):
if ev.r0.event_id%10 == 0:
print(ev.r0.event_id)
lst_r0.calibrate(ev) # Cut to see some signal from cosmic events
if ev.r0.tel[tel_id].trigger_type == 1 and np.sum(ev.r1.tel[tel_id].waveform[gain, :, 2:38]>600) > 10:
#Add offset to avoid negative value related to Issue #269
ev.r1.tel[tel_id].waveform = ev.r1.tel[tel_id].waveform + 100
charge, pulse_time = extractor(ev.r1.tel[tel_id].waveform[:, :, :])
pulse_corr_array = pulse_corr.get_corr_pulse(ev, pulse_time)
fig, ax = plt.subplots(1, 2, figsize=(14, 7))
geom = ev.inst.subarray.tel[tel_id].camera
disp1 = CameraDisplay(geom, ax=ax[0])
disp1.image = pulse_time[gain, :]
disp1.add_colorbar(ax=ax[0], label="time [ns]")
disp1.cmap = 'gnuplot2'
ax[0].set_title("Pulse time")
disp2 = CameraDisplay(geom, ax=ax[1])
disp2.image = pulse_corr_array[0, :]
disp2.add_colorbar(ax=ax[1], label="time [ns]")
disp2.cmap = 'gnuplot2'
ax[1].set_title("Pulse time correction")
plt.tight_layout()
plt.show()
```
---
# Pharmacokinetic model class
The PKModel abstract base class represents pharmacokinetic models, i.e. models to predict tissue CA concentration. Subclasses represent specific models, e.g. Patlak.
The main function of the class is to provide a conc method, which returns the CA concentration for the capillary plasma and EES spaces, and the whole tissue.
```
import sys
import matplotlib.pyplot as plt
import numpy as np
sys.path.append('../src')
import aifs, pk_models
%load_ext autoreload
%autoreload 2
```
#### Example
Create a PKModel object and use it to predict CA concentration.
To do this we need to specify:
- The time points at which we need to calculate concentration
- An AIF object:
```
dt = 2
t = np.arange(0,100)*dt + dt/2 # t = 1, 3, ..., 199 s
aif = aifs.Parker(hct=0.42, t_start=15)
pkm_tcxm = pk_models.TCXM(t, aif)
```
The names and order of the parameters can be obtained from the object:
```
pkm_tcxm.parameter_names
```
Calculate concentration using the conc method (parameters can be passed as positional or keyword arguments):
```
pkp = {'vp': 0.01, 'ps': 5e-3, 've': 0.2, 'fp': 20}
C_t, C_cp, C_e = pkm_tcxm.conc(**pkp)
plt.plot(t, C_t, '-', label='tissue conc.')
plt.plot(t, C_cp, '-', label='blood plasma contribution')
plt.plot(t, C_e, '-', label='EES contribution')
plt.legend()
plt.xlabel('time (s)')
plt.ylabel('concentration (mM)');
```
Parameters can be converted between dict (for readability) and array (for optimisation algorithms) formats:
```
pkp_array = pkm_tcxm.pkp_array(pkp)
pkp_dict = pkm_tcxm.pkp_dict(pkp_array)
print(f"Parameters as array: {pkp_array}")
print(f"Parameters as dict: {pkp_dict}")
```
Required parameters have typical values (for scaling and as default initial estimates) and bounds (for fitting):
```
print(f"Parameter names for this model are: {pkm_tcxm.parameter_names}")
print(f"Typical values for these parameters are: {pkm_tcxm.typical_vals}")
print(f"Bounds for these parameters are: {pkm_tcxm.bounds}")
```
We can also specify a fixed artery-capillary delay for the model:
```
C_t_delayed, _, _ = pk_models.TCXM(t, aif, fixed_delay=100).conc(**pkp)
plt.plot(t, C_t_delayed, '-', label='tissue conc.')
plt.legend()
plt.xlabel('time (s)')
plt.ylabel('concentration (mM)');
```
The default delay is zero. If set to None, the delay is assumed to be a fitted parameter and must be specified with the other parameters when calling the conc method:
```
pkm_tcxm_var_delay = pk_models.TCXM(t, aif, fixed_delay=None)
pkm_tcxm_var_delay.parameter_names
```
The irf method returns the impulse response functions for the plasma and EES compartments:
```
irf_cp, irf_ees = pkm_tcxm.irf(**pkp)
irf_tissue = irf_cp + irf_ees
plt.plot(pkm_tcxm.tau_upsample, irf_tissue, '-', label='tissue IRF')
plt.plot(pkm_tcxm.tau_upsample, irf_cp, '-', label='blood plasma IRF')
plt.plot(pkm_tcxm.tau_upsample, irf_ees, '-', label='EES IRF')
plt.legend()
plt.xlabel('time (s)')
plt.ylabel('IRF (/s)');
```
#### Interpolation
Concentration is calculated by discrete convolution of the AIF and IRF at the required time points. If either function is temporally undersampled then the calculated concentration may not be accurate. In our example, if we set a high Fp then the IRF will not be properly sampled:
```
C_t_default, _, _ = pk_models.TCXM(t, aif).conc(vp=0.01, ps=5e-3, ve=0.2, fp=200)
```
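The issue can be reproduced outside the class: discretely convolving an input with a fast-decaying exponential IRF on a coarse time grid badly overestimates the response, while a slowly decaying IRF is barely affected. A self-contained sketch (the boxcar input and exponential IRF are illustrative choices, not the Parker AIF or the 2CXM IRF):

```
import numpy as np

def conv_response(dt, t_max=100.0, rate=1.0):
    # Discrete convolution of a boxcar "AIF" with an exponential IRF rate*exp(-rate*t)
    t = np.arange(0.0, t_max, dt)
    aif = ((t >= 10) & (t < 30)).astype(float)
    irf = rate * np.exp(-rate * t)
    return t, np.convolve(aif, irf)[: t.size] * dt

# Slowly decaying IRF: coarse and fine grids agree reasonably well
_, c_coarse = conv_response(dt=2.0, rate=0.05)
_, c_fine = conv_response(dt=0.1, rate=0.05)
print("slow IRF, peak coarse vs fine:", c_coarse.max(), c_fine.max())

# Fast decay (analogous to a high Fp): the coarse grid undersamples the IRF
_, c_coarse_fast = conv_response(dt=2.0, rate=2.0)
_, c_fine_fast = conv_response(dt=0.1, rate=2.0)
print("fast IRF, peak coarse vs fine:", c_coarse_fast.max(), c_fine_fast.max())
```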
We can correct this by upsampling the AIF and IRF for the convolution step (concentration is still calculated at the requested time points). To do this, use the upsample_factor parameter when creating the PKModel object:
```
C_t_upsampled, _, _ = pk_models.TCXM(t, aif, upsample_factor=10).conc(vp=0.01, ps=5e-3, ve=0.2, fp=200)
plt.plot(t, C_t_default, '-', label='default (undersampled IRF)')
plt.plot(t, C_t_upsampled, '-', label='upsampled x10')
plt.legend()
plt.xlabel('time (s)')
plt.ylabel('concentration (mM)');
```
Of course, if the AIF is based on patient data that is temporally undersampled, then upsampling won't correct for this. In our example, the AIF is a continuous (Parker) function, so upsampling the AIF should further increase the accuracy.
#### Compare models
```
pkm_tcxm = pk_models.TCXM(t, aif, upsample_factor=10)
pkm_etofts = pk_models.ExtendedTofts(t, aif, upsample_factor=10)
pkm_tcum = pk_models.TCUM(t, aif, upsample_factor=10)
pkm_patlak = pk_models.Patlak(t, aif, upsample_factor=10)
pkm_tofts = pk_models.Tofts(t, aif, upsample_factor=10)
pkm_steady_state = pk_models.SteadyStateVp(t, aif, upsample_factor=10)
pk_pars = {'vp': 0.01, 'ps': 5e-2, 've': 0.2, 'fp': 10, 'ktrans': 5e-2}
# N.B. unnecessary parameters are ignored.
C_t_tcxm, _, _ = pkm_tcxm.conc(**pk_pars)
C_t_etofts, _, _ = pkm_etofts.conc(**pk_pars)
C_t_tcum, _, _ = pkm_tcum.conc(**pk_pars)
C_t_patlak, _, _ = pkm_patlak.conc(**pk_pars)
C_t_tofts, _, _ = pkm_tofts.conc(**pk_pars)
C_t_steady_state, _, _ = pkm_steady_state.conc(**pk_pars)
plt.figure(0, figsize=(12,8))
plt.plot(t, C_t_tcxm, '-', label='2CXM')
plt.plot(t, C_t_etofts, '-.', label='extended Tofts')
plt.plot(t, C_t_tcum, '-.', label='2CUM')
plt.plot(t, C_t_patlak, '--', label='Patlak')
plt.plot(t, C_t_tofts, '--', label='Tofts')
plt.plot(t, C_t_steady_state, ':', label='Steady-state (vascular)')
plt.legend()
plt.xlabel('time (s)')
plt.ylabel('tissue concentration (mM)');
```
---
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title"><b>A Sudoku Solver</b></span> by <a xmlns:cc="http://creativecommons.org/ns#" href="http://mate.unipv.it/gualandi" property="cc:attributionName" rel="cc:attributionURL">Stefano Gualandi</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.<br />Based on a work at <a xmlns:dct="http://purl.org/dc/terms/" href="https://github.com/mathcoding/opt4ds" rel="dct:source">https://github.com/mathcoding/opt4ds</a>.
**NOTE:** Run the following cell when running this notebook on Google Colab.
```
import shutil
import sys
import os.path
if not shutil.which("pyomo"):
!pip install -q pyomo
assert(shutil.which("pyomo"))
if not (shutil.which("glpk") or os.path.isfile("glpk")):
if "google.colab" in sys.modules:
!apt-get install -y -qq glpk-utils
else:
try:
!conda install -c conda-forge glpk
except:
pass
```
# Sudoku
The **Sudoku** is a logic-based combinatorial number-placement puzzle (source: [wikipedia](https://en.wikipedia.org/wiki/Sudoku)). The objective is to fill a $9 \times 9$ grid with digits so that each column, each row, and each of the nine $3 \times 3$ subgrids that compose the grid contain all of the digits from 1 to 9.
The puzzle setter provides a partially completed grid, which for a well-posed puzzle has a single solution.
Completed games are always an example of a *Latin square* which include an additional constraint on the contents of individual regions.
### Example: Game of the day (22-03-2020)
An example of an instance of the [game of the day](http://www.dailysudoku.com/sudoku/today.shtml) is as follows:
```
. . . | . 9 4 | 8 . .
. 2 . | . 1 7 | 5 . .
. . 6 | . . . | . 1 .
---------------------
. 6 2 | . . 8 | . . 7
. . . | 3 . 2 | . . .
3 . . | 9 . . | 4 2 .
---------------------
. 9 . | . . . | 6 . .
. . 1 | 7 8 . | . 9 .
. . 3 | 4 5 . | . . .
```
We show next how to solve this puzzle (and any other instance of the game) by using **Integer Linear Programming (ILP)**.
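Before building the model, an independent checker that verifies whether a completed grid satisfies the rules is handy. A small helper, separate from the ILP code below:

```
def is_valid_sudoku(grid):
    """Check that a completed 9x9 grid satisfies all Sudoku rules."""
    digits = set(range(1, 10))
    rows_ok = all(set(row) == digits for row in grid)
    cols_ok = all({grid[i][j] for i in range(9)} == digits for j in range(9))
    blocks_ok = all(
        {grid[i0 + i][j0 + j] for i in range(3) for j in range(3)} == digits
        for i0 in (0, 3, 6) for j0 in (0, 3, 6)
    )
    return rows_ok and cols_ok and blocks_ok
```

It can be run on the digits extracted from the solver output as an extra sanity check.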
## Integer Linear Programming model
A solution strategy for the Sudoku game can be represented by the following **ILP model**.
**Decision Variables:** The variable $x_{ijk} \in \{0,1\}$ is equal to 1 if in position $(i,j)$ in the grid we set the digit $k$, and it is equal to 0 otherwise. For ease of exposition, we use the sets $I,J,K:=\{1,\dots,9\}$.
**Objective function:** Since the problem is a feasibility problem, we can set the objective function equal to a constant value. Otherwise, we can add the sum of every variable, and we will expect an optimal solution of value equal to 81 (this way we also avoid a warning from the solver).
**Constraints:** We introduce the following linear constraints, which encode the puzzle rules:
1. In every position, we can place a single digit:
$$
\sum_{k \in K} x_{ijk} = 1, \;\; \forall i \in I, \; \forall j \in J
$$
2. Each digit appears once per row:
$$
\sum_{j \in J} x_{ijk} = 1, \;\; \forall i \in I, \; \forall k \in K
$$
3. Each digit appears once per column:
$$
\sum_{i \in I} x_{ijk} = 1, \;\; \forall j \in J, \; \forall k \in K
$$
4. Each digit appears once per $3 \times 3$ block:
$$
\sum_{i=0}^{2} \sum_{j=0}^{2} x_{(i_0+i)(j_0+j)k} = 1, \;\; \forall i_0, j_0 \in \{1,4,7\}, \;\forall k \in K
$$
5. The digits given in the input data must be fixed:
$$
x_{ijk} = 1, \;\; \forall (i,j) \in I \times J \; \mbox{ such that } \; Data[i][j] = k
$$
We show next how to implement this model in Pyomo.
## Pyomo implementation
As a first step we import the Pyomo libraries.
```
from pyomo.environ import ConcreteModel, Var, Objective, Constraint, SolverFactory
from pyomo.environ import Binary, RangeSet, ConstraintList
```
We define the input of the problem as a list of lists, where the digit 0 is used to denote an unknown cell value.
```
Data= [[0, 0, 0, 0, 9, 4, 8, 0, 0],
[0, 2, 0, 0, 1, 7, 5, 0, 0],
[0, 0, 6, 0, 0, 0, 0, 1, 0],
[0, 6, 2, 0, 0, 8, 0, 0, 7],
[0, 0, 0, 3, 0, 2, 0, 0, 0],
[3, 0, 0, 9, 0, 0, 4, 2, 0],
[0, 9, 0, 0, 0, 0, 6, 0, 0],
[0, 0, 1, 7, 8, 0, 0, 9, 0],
[0, 0, 3, 4, 5, 0, 0, 0, 0]]
```
Then, we create an instance of the class *ConcreteModel*, and we start to add the *RangeSet* and *Var* corresponding to the index sets and the variables of our model.
```
# Create concrete model
model = ConcreteModel()
# Sudoku of size 9x9, with subsquare 3x3
n = 9
model.I = RangeSet(1, n)
model.J = RangeSet(1, n)
model.K = RangeSet(1, n)
# Variables
model.x = Var(model.I, model.J, model.K, within=Binary)
```
At this point, we set the *dummy* objective function.
```
# Objective Function
model.obj = Objective(
expr = sum(model.x[i,j,k] for i in model.I for j in model.J for k in model.K))
```
Regarding the constraints, we start with the simpler constraints (1)--(3), which set a single digit per cell, per row and per column.
```
# 1. A single digit for each position
model.unique = ConstraintList()
for i in model.I:
for j in model.J:
expr = 0
for k in model.K:
expr += model.x[i,j,k]
model.unique.add( expr == 1 )
# 2. Row constraints
model.rows = ConstraintList()
for i in model.I:
for k in model.K:
expr = 0
for j in model.J:
expr += model.x[i,j,k]
model.rows.add( expr == 1 )
# 3. Column constraints
model.columns = ConstraintList()
for j in model.J:
for k in model.K:
expr = 0
for i in model.I:
expr += model.x[i,j,k]
model.columns.add( expr == 1 )
```
Finally, we declare the constraints for the nine $3 \times 3$ submatrices.
```
# 4. Submatrix constraints
model.blocks = ConstraintList()
S = [1, 4, 7]
for i0 in S:
for j0 in S:
for k in model.K:
expr = 0
for i in range(3):
for j in range(3):
expr += model.x[i0+i, j0+j,k]
model.blocks.add( expr == 1 )
# 5. Fix input data
for i in range(n):
for j in range(n):
if Data[i][j] > 0:
model.x[i+1,j+1,Data[i][j]].fix(1)
```
At this point, we only need to solve the problem and print the solution in a readable format.
```
# Solve the model
sol = SolverFactory('glpk').solve(model, tee=True)
print(sol)
# Print objective value of the solution
print("objective value:", model.obj())
# Print a readable format of the solution
for i in model.I:
for j in model.J:
for k in model.K:
if model.x[i,j,k]() > 0:
print(k, end=" ")
print()
```
### Prettify Solution
As a general recommendation, try to produce *pretty* output of your solution that helps **humans** quickly check it visually for likely bugs.
For the Sudoku puzzle, we can use the **matplotlib** as follows.
```
def PlotSudoku(x, size=6):
import matplotlib.pyplot as plt
import numpy as np
boardgame = np.zeros((9, 9))
plt.figure(figsize=(size, size))
plt.imshow(boardgame, cmap='binary')
for i, j, k in x:
if x[i,j,k]() > 0:
if Data[i-1][j-1] == k:
plt.text(i-1, j-1, k, fontsize=4*size, color='red',
ha='center', va='center')
else:
plt.text(i-1, j-1, k, fontsize=4*size, color='darkblue',
ha='center', va='center')
# Prettify output
for i in range(9):
plt.axhline(y=i+0.5, color='grey', linestyle='--', alpha=0.5)
plt.axvline(x=i+0.5, color='grey', linestyle='--', alpha=0.5)
for i in range(3):
plt.axhline(y=i*3+2.5, color='grey', linestyle='-', lw=2)
plt.axvline(x=i*3+2.5, color='grey', linestyle='-', lw=2)
plt.xticks([])
plt.yticks([])
plt.show()
PlotSudoku(model.x)
```
### Optional Exercise
Can you use this ILP model of Sudoku to develop an instance generator?
> **Enjoy!!**
---
# String alignment using dynamic programming
```
import numpy as np
import random
```
Alphabet of the strings we consider for alignment; for this example we use the nucleotides that encode DNA.
```
alphabet = ['A', 'C', 'G', 'T']
```
Scoring matrix: substituting an `A` by a `C` or `T` yields a penalty of -1, substituting an `A` by a `G` yields a reward of 1, and matching an `A` yields 2.
```
scoring_matrix = np.array(
[[ 2, -1, 1, -1],
[-1, 2, -1, 1],
[ 1, -1, 2, -1],
[-1, 1, -1, 2]]
)
```
Alignments can introduce gaps, and they come with an associated score as well.
```
gap_score = -2
```
The following function returns a function that generates random strings over the given alphabet.
```
def random_string_generator(alphabet):
def random_string(n):
return ''.join(random.choices(alphabet, k=n))
return random_string
```
Create a function to generate DNA fragments of a given length.
```
random_dna = random_string_generator(alphabet)
```
Seed the random number generator for reproducible results.
```
random.seed(1234)
```
Test the function for a few strings:
```
for i in range(3, 21):
print(random_dna(i//3))
```
Implement the alignment algorithm as a class so that the alphabet, the scoring matrix and the gap score can be initialized once for alignment of many string pairs.
```
class Aligner:
def __init__(self, alphabet, scoring_matrix, gap_score):
self._idx = {char: index for index, char in enumerate(alphabet)}
self._scoring_matrix = scoring_matrix
self._gap_score = gap_score
def _init(self, str1, str2):
self._str1 = 'X' + str1
self._str2 = 'X' + str2
self._dist = np.empty((1 + len(str1), 1 + len(str2)), int)
self._dist[0, 0] = 0
for i in range(1, self._dist.shape[0]):
self._dist[i, 0] = self._dist[i - 1, 0] + self._gap_score
for j in range(1, self._dist.shape[1]):
self._dist[0, j] = self._dist[0, j - 1] + self._gap_score
def _compute_edit_distance(self):
for i in range(1, self._dist.shape[0]):
for j in range(1, self._dist.shape[1]):
self._dist[i, j] = self._distance(i, j)
def _distance(self, i, j):
idx1 = self._idx[self._str1[i]]
idx2 = self._idx[self._str2[j]]
match = self._dist[i - 1, j - 1] + self._scoring_matrix[idx1, idx2]
gap1 = self._dist[i, j - 1] + self._gap_score
gap2 = self._dist[i - 1, j] + self._gap_score
return max((match, gap1, gap2,))
def align(self, str1, str2):
self._init(str1, str2)
self._compute_edit_distance()
a1, a2 = '', ''
i, j = self._dist.shape[0] - 1, self._dist.shape[1] - 1
while i > 0 and j > 0:
idx1 = self._idx[self._str1[i]]
idx2 = self._idx[self._str2[j]]
match = self._dist[i - 1, j - 1] + self._scoring_matrix[idx1, idx2]
if self._dist[i, j] == match:
a1 = self._str1[i] + a1
a2 = self._str2[j] + a2
i, j = i - 1, j - 1
elif self._dist[i, j] == self._dist[i - 1, j] + self._gap_score:
a1 = self._str1[i] + a1
a2 = '_' + a2
i -= 1
elif self._dist[i, j] == self._dist[i, j - 1] + self._gap_score:
a1 = '_' + a1
a2 = self._str2[j] + a2
j -= 1
if i > 0:
while i > 0:
a1 = self._str1[i] + a1
a2 = '_' + a2
i -= 1
elif j > 0:
while j > 0:
a1 = '_' + a1
a2 = self._str2[j] + a2
j -= 1
return a1, a2
@property
def index(self):
return self._idx
@property
def string1(self):
return self._str1[1:]
@property
def string2(self):
return self._str2[1:]
@property
def distance_matrix(self):
return np.copy(self._dist)
```
Test the implementation on the example from http://www.biorecipes.com/DynProgBasic/code.html
```
alignment = Aligner(alphabet=alphabet, scoring_matrix=scoring_matrix, gap_score=gap_score)
a1, a2 = alignment.align('CCTAAG', 'ACGGTAG',)
print(a1)
print(a2)
```
The output is compared against the expected alignment:
```
assert(a1 == 'CCTA_AG')
assert(a2 == 'ACGGTAG')
alignment.distance_matrix
```
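The core recurrence implemented in `_distance` can be sketched standalone. Here is a minimal version using simple hypothetical match/mismatch scores instead of the scoring matrix above:

```
import numpy as np

# Hypothetical scores, not the notebook's matrix: match +2, mismatch -1, gap -2
def nw_score(s1, s2, match=2, mismatch=-1, gap=-2):
    d = np.zeros((len(s1) + 1, len(s2) + 1), dtype=int)
    d[:, 0] = gap * np.arange(len(s1) + 1)  # first column: all gaps
    d[0, :] = gap * np.arange(len(s2) + 1)  # first row: all gaps
    for i in range(1, len(s1) + 1):
        for j in range(1, len(s2) + 1):
            sub = match if s1[i - 1] == s2[j - 1] else mismatch
            d[i, j] = max(d[i - 1, j - 1] + sub,   # match/mismatch
                          d[i - 1, j] + gap,       # gap in s2
                          d[i, j - 1] + gap)       # gap in s1
    return d[-1, -1]

print(nw_score('GAT', 'GCAT'))
```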
To test implementations of new algorithms or in other programming languages, a test set is quite useful. For each example, two DNA strings are generated. The first has a length between `min_length` and `max_length`, and the second string's length deviates from that by a number between `min_dev` and `max_dev` (`min_dev` can be negative; if so, the values should be chosen such that `min_length` + `min_dev` > 0).
```
random.seed(1234)
file_name = 'alignment_data.txt'
min_length, max_length = 6, 12
min_dev, max_dev = -5, 5
nr_samples = 10
aligner = Aligner(alphabet, scoring_matrix, gap_score)
data = []
for sample_nr in range(nr_samples):
data_str = ''
n1 = random.randint(min_length, max_length)
n2 = n1 + random.randint(min_dev, max_dev)
str1, str2 = random_dna(n1), random_dna(n2)
data_str += f'{str1} {str2}'
aligned1, aligned2 = aligner.align(str1, str2)
data_str += f' {aligned1} {aligned2}'
data.append(data_str)
data_str = '\n'.join(data)
print(data_str)
with open(file_name, 'w') as file:
print(data_str, file=file)
```
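Each line of the generated file can be parsed back with a simple `split`. A sketch, using a hypothetical record:

```
# Parse one record of the test file: four space-separated fields
line = 'CCTAAG ACGGTAG CCTA_AG ACGGTAG'  # hypothetical example record
str1, str2, aligned1, aligned2 = line.split()
# Removing the gap marker from an aligned string must recover the original
assert aligned1.replace('_', '') == str1
assert aligned2.replace('_', '') == str2
print(str1, aligned1)
```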
| github_jupyter |
```
import numpy as np
from scipy.interpolate import UnivariateSpline
__all__ = ['ferro_spline', 'ferri_spline']
# digitized using WebPlotDigitizer
# from figure 2a, of
# `X‐ray Spectroscopic Study of Solvent Effects on the Ferrous and
# Ferric Hexacyanide Anions`
ferro_cyanide_reference = np.array(
[
[7.095, 0.0], #added by me
[7.100, 0.0], #added by me
[7.105, 0.0], #added by me
[7.106, 0.0], #added by me
[7.107, 0.0], #added by me
[7.108, 0.0], #added by me
[7.109, 0.0], #added by me
[7.109769204980843, -0.0005164529015979635],
[7.110969300766284, -0.0005164529015979635],
[7.112169396551724, 0.0007081963452861828],
[7.113314942528736, 0.013553406223717479],
[7.114460488505747, 0.02819476833091228],
[7.115406018518518, 0.030167814339781218],
[7.115769683908046, 0.06384566862910002],
[7.117419815613027, 0.10294939944280901],
[7.118497174329502, 0.09895228037311732],
[7.119697270114942, 0.1132398549201008],
[7.120897365900383, 0.1363721184723603],
[7.122097461685824, 0.14276750898386736],
[7.123297557471264, 0.1639947625965288],
[7.12417035440613, 0.19915580319596282],
[7.1247158524904215, 0.23477948906644253],
[7.125108611111111, 0.27459419680403685],
[7.125414090038314, 0.31171467619848614],
[7.125588649425287, 0.34624071348472096],
[7.125850488505747, 0.3835607653490327],
[7.12602504789272, 0.42008252733389395],
[7.126243247126436, 0.45750236543313716],
[7.126515996168582, 0.5058986893748247],
[7.126734195402299, 0.5560412724278105],
[7.126897844827586, 0.5996977502102607],
[7.127072404214559, 0.6410092514718251],
[7.127170593869732, 0.6812729972666106],
[7.127334243295019, 0.7179444386038688],
[7.127443342911877, 0.7591062605130362],
[7.127661542145594, 0.8178554063288478],
[7.127552442528735, 0.7867969407064761],
[7.1278251915708815, 0.8706173780487805],
[7.1279342911877395, 0.9087856129100085],
[7.127988840996168, 0.9414656048500141],
[7.128097940613027, 0.9783865117746006],
[7.128207040229885, 1.0163052810485],
[7.128343414750958, 1.0547229814970565],
[7.128452514367816, 1.094388009882254],
[7.128561613984674, 1.134427236648444],
[7.128643438697318, 1.175963256938604],
[7.128752538314176, 1.2128841638631904],
[7.128861637931034, 1.2483082772638072],
[7.128992557471264, 1.2948086627417998],
[7.129210756704981, 1.3448015664423887],
[7.129298036398467, 1.388009006167648],
[7.1297126149425285, 1.48],
[7.130770881226053, 1.4042242693439866],
[7.1309345306513405, 1.3675528280067284],
[7.131109090038314, 1.327139402859546],
[7.131305469348659, 1.278044575273339],
[7.131452753831417, 1.2395769817073172],
[7.13158912835249, 1.203903402719372],
[7.131698227969348, 1.172470738716008],
[7.131807327586206, 1.132556244743482],
[7.132003706896551, 1.0882511564339783],
[7.132189176245211, 1.0365119936220915],
[7.132352825670498, 0.9851220826324643],
[7.132549204980843, 0.9457564129520606],
[7.132723764367816, 0.8987570962994114],
[7.132871048850575, 0.8616366169049622],
[7.133116522988505, 0.8238425804247267],
[7.133334722222222, 0.7838033536585366],
# [7.133771120689655, 0.2644160008410428],
[7.1335965613026815, 0.7460841568544996],
[7.13383658045977, 0.7092630361648445],
[7.134262068965517, 0.676408418313709],
# [7.134971216475096, 0.2644160008410428],
[7.134753017241379, 0.6377163057190917],
[7.135625814176245, 0.5975205959935774],
# [7.136116762452107, 0.2644160008410428],
[7.1368259099616855, 0.5840494542778498],
[7.137916906130268, 0.6058512346976919],
[7.138680603448275, 0.6404105340622372],
[7.139226101532567, 0.673938708999159],
[7.139717049808429, 0.7108346693650126],
[7.1401534482758615, 0.7471319123212784],
[7.1405898467432944, 0.7879195358494533],
[7.140971695402299, 0.8242167788057191],
[7.141298994252874, 0.8576451675077096],
[7.141626293103448, 0.8880799691617607],
[7.142008141762452, 0.9252503416736754],
[7.142444540229885, 0.9664121635828428],
[7.142880938697318, 1.0045803984440707],
[7.143317337164751, 1.037135657590412],
[7.143808285440613, 1.0720857863751052],
[7.144408333333333, 1.1043666333753857],
[7.145172030651341, 1.1372337245058874],
[7.146208477011494, 1.1683432171802126],
[7.147408572796935, 1.1966462219970946],
[7.1486086685823755, 1.215696321393073],
[7.149808764367816, 1.221139206934781],
[7.151008860153256, 1.2118863015138772],
[7.152208955938697, 1.1977347991054363],
[7.153324196466581, 1.177792671245678],
[7.154124260323542, 1.1573364930847585],
[7.1552637452107275, 1.141673078025843],
[7.156463840996168, 1.1158193717027296],
[7.157663936781609, 1.0881967275785611],
[7.15886403256705, 1.0643841033335883],
[7.160064128352491, 1.0363532427937916],
[7.1612642241379305, 1.011996329994648],
[7.162464319923371, 0.9845097580090221],
[7.163664415708812, 0.9593364123786222],
[7.164864511494253, 0.9385173751815888],
[7.166064607279694, 0.9227330071106354],
[7.1672647030651335, 0.9087175768407371],
[7.168464798850574, 0.9021861141906874],
[7.169555795019157, 0.9000543173535184],
[7.130055555555556, 1.4804878048780488],
[7.129611111111111, 1.453048780487805],
[7.1305555555555555, 1.4454268292682928],
[7.113722222222222, 0.026219512195121863],
[7.114111111111111, 0.036890243902439],
[7.115055555555555, 0.015548780487804947],
[7.115944444444445, 0.08719512195121948],
[7.116388888888888, 0.11158536585365852],
]
)
ferro_sort = np.argsort(ferro_cyanide_reference[:,0])
ferro_cyanide_reference = ferro_cyanide_reference[ferro_sort,:]
ferro_spline = UnivariateSpline(ferro_cyanide_reference[:,0], ferro_cyanide_reference[:,1])
ferro_spline.set_smoothing_factor(0.0001)
smooth_ferro_spline = UnivariateSpline(ferro_cyanide_reference[:,0], ferro_cyanide_reference[:,1])
smooth_ferro_spline.set_smoothing_factor(1e-2)
# digitized using WebPlotDigitizer
# from figure 4a, of
# `X‐ray Spectroscopic Study of Solvent Effects on the Ferrous and
# Ferric Hexacyanide Anions`
ferri_cyanide_reference = np.array([
[7.105, 0.0], #added by me
[7.106, 0.0], #added by me
[7.107, 0.0], #added by me
[7.108, 0.0], #added by me
[7.109, 0.0], #added by me
[7.109558914647689, -0.0003532465836775245],
[7.110776643591469, 0.010844049235102515],
[7.111994372535251, 0.006911542385623637],
[7.113212101479032, 0.004644701884943592],
[7.1144298304228135, 0.020561385691987732],
[7.115647559366595, 0.010521435563578763],
[7.116699234363496, 0.014090693470370397],
[7.1173634501510135, 0.05414572524165129],
[7.117861611991652, 0.09497045360947354],
[7.118747233041674, 0.11409219695839856],
[7.119964961985455, 0.1066895518822546],
[7.121182690929237, 0.10886448831170603],
[7.122400419873018, 0.1367184476184775],
[7.123618148816799, 0.15582765859405345],
[7.124669823813702, 0.17926345784850195],
[7.125334039601218, 0.21416533435318574],
[7.12577685012623, 0.25002985378576836],
[7.126230730914366, 0.2891767062349977],
[7.126496417229373, 0.32481888047707375],
[7.126717822491878, 0.3622186156449554],
[7.126939227754384, 0.4049623636819011],
[7.12716063301689, 0.45114154856324484],
[7.127404178805646, 0.5004499626334837],
[7.127650887526724, 0.5544042766992578],
[7.127824848804407, 0.6095282938637996],
[7.128068394593163, 0.677235380811816],
[7.127990902751287, 0.6335700855879128],
[7.128267659329419, 0.7293701583816735],
[7.128406037618484, 0.7724987547168894],
[7.128489064591924, 0.8132119101497538],
[7.128599767223177, 0.8483255315458198],
[7.128738145512243, 0.8854739230037498],
[7.1288488481434955, 0.9270767028836793],
[7.128931875116936, 0.9724976791773856],
[7.129042577748188, 1.0121918830326493],
[7.1291809560372545, 1.0495947512938681],
[7.129291658668508, 1.0938695376083294],
[7.12940236129976, 1.1369991783079914],
[7.1294853882732, 1.1816567241918314],
[7.129596090904452, 1.2198240672273624],
[7.129706793535705, 1.2559555958365833],
[7.129817496166958, 1.2931050316589592],
[7.129955874456024, 1.332798191149777],
[7.130066577087277, 1.370583818980375],
[7.130304587744471, 1.4234805648498745],
[7.13058, 1.4653810035163437],
[7.130979873795113, 1.4976605181964002],
[7.131671765240443, 1.4380868371156856],
[7.131810143529509, 1.4034727700461853],
[7.132031548792015, 1.3602033585715296],
[7.132186532475769, 1.3125594525549773],
[7.1323359810279605, 1.27316080383787],
[7.132474359317026, 1.2319303398828616],
[7.1326736240532815, 1.1888653453423936],
[7.132806467210785, 1.1403061583255583],
[7.13300573194704, 1.0951035586374647],
[7.133138575104543, 1.048300261563322],
[7.133337839840799, 1.00622772655568],
# [7.1338027908920605, 0.27009669948701487],
[7.133514964050803, 0.9570561242278386],
[7.133664412602994, 0.9184972489615841],
[7.1339134935233135, 0.874972316319194],
[7.134134898785819, 0.8360290105004473],
[7.134356304048325, 0.7955588438619681],
[7.134621990363332, 0.7557740936092554],
# [7.135020519835842, 0.2699119419223337],
[7.134854465888963, 0.7211055803399815],
[7.135297276413974, 0.6810087739908692],
[7.135795438254611, 0.6406045067489188],
# [7.13612754614837, 0.26987016734450253],
[7.136570356673381, 0.6042359770348019],
[7.13767738298591, 0.5851778449748468],
[7.138784409298438, 0.6022862646550218],
[7.139614679032834, 0.6382209885864598],
[7.1401681921890985, 0.67090037052015],
[7.1406110027141105, 0.7056197443379334],
[7.141053813239122, 0.7430111245902487],
[7.141496623764133, 0.7842196568918953],
[7.141884082973518, 0.820849695463236],
[7.142216190867276, 0.8539191475174256],
[7.142548298761035, 0.8880065067847701],
[7.142880406654793, 0.9220938660521146],
[7.143267865864178, 0.9608869574514097],
[7.1437106763891896, 1.0013320593431903],
[7.144153486914201, 1.0352880027511073],
[7.144651648754839, 1.0687074561432264],
[7.145315864542356, 1.1042800607937213],
[7.146256836908005, 1.138729794631096],
[7.14741921453616, 1.1694591171184374],
[7.148636943479941, 1.1913444386753453],
[7.149854672423722, 1.19990442944186],
[7.151072401367504, 1.195971922592381],
[7.152290130311285, 1.1828782508245066],
[7.153507859255066, 1.1567368593243719],
[7.154725588198847, 1.137952160864767],
[7.155943317142628, 1.1129212135971651],
[7.15716104608641, 1.087335044213297],
[7.158378775030191, 1.0610548471840957],
[7.159596503973972, 1.0357462888583608],
[7.160814232917754, 1.0101601194744925],
[7.162031961861534, 0.9880440883172893],
[7.163249690805316, 0.9653728350438197],
[7.164467419749097, 0.9464493310551482],
[7.165685148692878, 0.9294691044734089],
[7.16690287763666, 0.9167918492927342],
[7.168120606580441, 0.9095280097456568],
[7.169338335524222, 0.9071223637159103],
[7.17000255131174, 0.9083465487308109],
[7.111299589603283, 0.026328368995689466],
[7.110259917920657, -0.0008022094313808736],
[7.112558139534884, -0.0008889347753140431],
[7.113816689466485, 0.015667346359342194],
[7.115020519835841, 0.018640786722762703],
[7.116279069767442, 0.006517822574400478],
[7.117154582763338, 0.03365459566889495],
[7.117592339261286, 0.07439279353689687],
[7.1181395348837215, 0.11210799370208813],
[7.121751025991792, 0.12253774875461354],
[7.123009575923393, 0.1496600676251194],
])
ferri_sort = np.argsort(ferri_cyanide_reference[:,0])
ferri_cyanide_reference = ferri_cyanide_reference[ferri_sort,:]
ferri_spline = UnivariateSpline(ferri_cyanide_reference[:,0], ferri_cyanide_reference[:,1])
ferri_spline.set_smoothing_factor(0.0001)
```
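To see how the smoothing factor trades fidelity for smoothness, here is a self-contained sketch on synthetic data (standing in for the digitized spectra above):

```
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic noisy curve standing in for the digitized reference data
rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.05, size=x.size)

spline = UnivariateSpline(x, y)
spline.set_smoothing_factor(0.01)  # small factor: follow the data closely

# Evaluate on a fine grid, as one would for plotting
x_fine = np.linspace(0.0, 1.0, 200)
y_fine = spline(x_fine)
print(y_fine.shape)
```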
# TSG060 - Persistent Volume disk space for all BDC PVCs
## Description
Connect to each container and get the disk space used/available for each
Persistent Volume (PV) mapped to each Persistent Volume Claim (PVC) of a
Big Data Cluster (BDC).
## Steps
### Parameters
Set the space-used percentage threshold; if the disk space used crosses this
threshold, this notebook will raise an exception.
```
SPACED_USED_PERCENT_THRESHOLD = 80
```
### Common functions
Define helper functions used in this notebook.
```
# Define `run` function for transient fault handling, suggestions on error, and scrolling updates on Windows
import sys
import os
import re
import platform
import shlex
import shutil
import datetime
from subprocess import Popen, PIPE
from IPython.display import Markdown
retry_hints = {} # Output in stderr known to be transient, therefore automatically retry
error_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help
install_hint = {} # The SOP to help install the executable if it cannot be found
def run(cmd, return_output=False, no_output=False, retry_count=0, base64_decode=False, return_as_json=False):
"""Run shell command, stream stdout, print stderr and optionally return output
NOTES:
1. Commands that need this kind of ' quoting on Windows e.g.:
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}
Need to actually pass in as '"':
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='"'data-pool'"')].metadata.name}
The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:
`iter(p.stdout.readline, b'')`
The shlex.split call does the right thing for each platform, just use the '"' pattern for a '
"""
MAX_RETRIES = 5
output = ""
retry = False
# When running `azdata sql query` on Windows, replace any \n in """ strings, with " ", otherwise we see:
#
# ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')
#
if platform.system() == "Windows" and cmd.startswith("azdata sql query"):
cmd = cmd.replace("\n", " ")
# shlex.split is required on bash and for Windows paths with spaces
#
cmd_actual = shlex.split(cmd)
# Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries
#
user_provided_exe_name = cmd_actual[0].lower()
# When running python, use the python in the ADS sandbox ({sys.executable})
#
if cmd.startswith("python "):
cmd_actual[0] = cmd_actual[0].replace("python", sys.executable)
# On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail
# with:
#
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)
#
# Setting it to a default value of "en_US.UTF-8" enables pip install to complete
#
if platform.system() == "Darwin" and "LC_ALL" not in os.environ:
os.environ["LC_ALL"] = "en_US.UTF-8"
# When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`
#
if cmd.startswith("kubectl ") and "AZDATA_OPENSHIFT" in os.environ:
cmd_actual[0] = cmd_actual[0].replace("kubectl", "oc")
# To aid supportability, determine which binary file will actually be executed on the machine
#
which_binary = None
# Special case for CURL on Windows. The version of CURL in Windows System32 does not work to
# get JWT tokens, it returns "(56) Failure when receiving data from the peer". If another instance
# of CURL exists on the machine use that one. (Unfortunately the curl.exe in System32 is almost
# always the first curl.exe in the path, and it can't be uninstalled from System32, so here we
# look for the 2nd installation of CURL in the path)
if platform.system() == "Windows" and cmd.startswith("curl "):
path = os.getenv('PATH')
for p in path.split(os.path.pathsep):
p = os.path.join(p, "curl.exe")
if os.path.exists(p) and os.access(p, os.X_OK):
if p.lower().find("system32") == -1:
cmd_actual[0] = p
which_binary = p
break
# Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this
# seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound)
#
# NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.
#
if which_binary == None:
which_binary = shutil.which(cmd_actual[0])
# Display an install HINT, so the user can click on a SOP to install the missing binary
#
if which_binary == None:
print(f"The path used to search for '{cmd_actual[0]}' was:")
print(sys.path)
if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:
display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)")
else:
cmd_actual[0] = which_binary
start_time = datetime.datetime.now().replace(microsecond=0)
print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)")
print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})")
print(f" cwd: {os.getcwd()}")
# Command-line tools such as CURL and AZDATA HDFS commands output
# scrolling progress bars, which causes Jupyter to hang forever, to
# workaround this, use no_output=True
#
    # Work around an infinite hang when a notebook generates a non-zero return code, break out, and do not wait
#
wait = True
try:
if no_output:
p = Popen(cmd_actual)
else:
p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)
with p.stdout:
for line in iter(p.stdout.readline, b''):
line = line.decode()
if return_output:
output = output + line
else:
if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file
regex = re.compile(' "(.*)"\: "(.*)"')
match = regex.match(line)
if match:
if match.group(1).find("HTML") != -1:
display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"'))
else:
display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"'))
wait = False
break # otherwise infinite hang, have not worked out why yet.
else:
print(line, end='')
if wait:
p.wait()
except FileNotFoundError as e:
        if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:
            display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e
exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()
if not no_output:
for line in iter(p.stderr.readline, b''):
try:
line_decoded = line.decode()
except UnicodeDecodeError:
# NOTE: Sometimes we get characters back that cannot be decoded(), e.g.
#
# \xa0
#
# For example see this in the response from `az group create`:
#
# ERROR: Get Token request returned http error: 400 and server
# response: {"error":"invalid_grant",# "error_description":"AADSTS700082:
# The refresh token has expired due to inactivity.\xa0The token was
# issued on 2018-10-25T23:35:11.9832872Z
#
# which generates the exception:
#
# UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte
#
print("WARNING: Unable to decode stderr line, printing raw bytes:")
print(line)
line_decoded = ""
pass
else:
# azdata emits a single empty line to stderr when doing an hdfs cp, don't
# print this empty "ERR:" as it confuses.
#
if line_decoded == "":
continue
print(f"STDERR: {line_decoded}", end='')
if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"):
exit_code_workaround = 1
# inject HINTs to next TSG/SOP based on output in stderr
#
if user_provided_exe_name in error_hints:
for error_hint in error_hints[user_provided_exe_name]:
if line_decoded.find(error_hint[0]) != -1:
display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))
# Verify if a transient error, if so automatically retry (recursive)
#
if user_provided_exe_name in retry_hints:
for retry_hint in retry_hints[user_provided_exe_name]:
if line_decoded.find(retry_hint) != -1:
if retry_count < MAX_RETRIES:
print(f"RETRY: {retry_count} (due to: {retry_hint})")
retry_count = retry_count + 1
output = run(cmd, return_output=return_output, retry_count=retry_count)
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
elapsed = datetime.datetime.now().replace(microsecond=0) - start_time
# WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so
# don't wait here, if success known above
#
if wait:
if p.returncode != 0:
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n')
else:
if exit_code_workaround !=0 :
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n')
print(f'\nSUCCESS: {elapsed}s elapsed.\n')
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
# Hints for tool retry (on transient fault), known errors and install guide
#
retry_hints = {'azdata': ['Endpoint sql-server-master does not exist', 'Endpoint livy does not exist', 'Failed to get state for cluster', 'Endpoint webhdfs does not exist', 'Adaptive Server is unavailable or does not exist', 'Error: Address already in use', 'Login timeout expired (0) (SQLDriverConnect)', 'SSPI Provider: No Kerberos credentials available', ], 'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond', ], 'python': [ ], }
error_hints = {'azdata': [['Please run \'azdata login\' to first authenticate', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['The token is expired', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Reason: Unauthorized', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Max retries exceeded with url: /api/v1/bdc/endpoints', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Look at the controller logs for more details', 'TSG027 - Observe cluster deployment', '../diagnose/tsg027-observe-bdc-create.ipynb'], ['provided port is already allocated', 'TSG062 - Get tail of all previous container logs for pods in BDC namespace', '../log-files/tsg062-tail-bdc-previous-container-logs.ipynb'], ['Create cluster failed since the existing namespace', 'SOP061 - Delete a big data cluster', '../install/sop061-delete-bdc.ipynb'], ['Failed to complete kube config setup', 'TSG067 - Failed to complete kube config setup', '../repair/tsg067-failed-to-complete-kube-config-setup.ipynb'], ['Data source name not found and no default driver specified', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Can\'t open lib \'ODBC Driver 17 for SQL Server', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Control plane upgrade failed. Failed to upgrade controller.', 'TSG108 - View the controller upgrade config map', '../diagnose/tsg108-controller-failed-to-upgrade.ipynb'], ['NameError: name \'azdata_login_secret_name\' is not defined', 'SOP013 - Create secret for azdata login (inside cluster)', '../common/sop013-create-secret-for-azdata-login.ipynb'], ['ERROR: No credentials were supplied, or the credentials were unavailable or inaccessible.', 'TSG124 - \'No credentials were supplied\' error from azdata login', '../repair/tsg124-no-credentials-were-supplied.ipynb'], ['Please accept the license terms to use this product through', 'TSG126 - azdata fails with \'accept the license terms to use this product\'', '../repair/tsg126-accept-license-terms.ipynb'], ], 'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'], ['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb'], ], 'python': [['Library not loaded: /usr/local/opt/unixodbc', 'SOP012 - Install unixodbc for Mac', '../install/sop012-brew-install-odbc-for-sql-server.ipynb'], ['WARNING: You are using pip version', 'SOP040 - Upgrade pip in ADS Python sandbox', '../install/sop040-upgrade-pip.ipynb'], ], }
install_hint = {'azdata': [ 'SOP063 - Install azdata CLI (using package manager)', '../install/sop063-packman-install-azdata.ipynb' ], 'kubectl': [ 'SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb' ], }
print('Common functions defined successfully.')
```
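The heart of `run` is streaming a child process's stdout line by line. A minimal sketch of that pattern, without the retry and hint machinery:

```
# Minimal sketch of the Popen streaming pattern used by run() above
import sys
from subprocess import Popen, PIPE

# Launch a child Python process that prints two lines
p = Popen([sys.executable, '-c', 'print(1); print(2)'], stdout=PIPE, stderr=PIPE)
output = ''
with p.stdout:
    for line in iter(p.stdout.readline, b''):  # b'' sentinel: stop at EOF
        output += line.decode()
p.wait()
print(output, end='')
```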
### Instantiate Kubernetes client
```
# Instantiate the Python Kubernetes client into 'api' variable
import os
from IPython.display import Markdown
try:
from kubernetes import client, config
from kubernetes.stream import stream
except ImportError:
# Install the Kubernetes module
import sys
!{sys.executable} -m pip install kubernetes
try:
from kubernetes import client, config
from kubernetes.stream import stream
except ImportError:
display(Markdown(f'HINT: Use [SOP059 - Install Kubernetes Python module](../install/sop059-install-kubernetes-module.ipynb) to resolve this issue.'))
raise
if "KUBERNETES_SERVICE_PORT" in os.environ and "KUBERNETES_SERVICE_HOST" in os.environ:
config.load_incluster_config()
else:
try:
config.load_kube_config()
except:
display(Markdown(f'HINT: Use [TSG118 - Configure Kubernetes config](../repair/tsg118-configure-kube-config.ipynb) to resolve this issue.'))
raise
api = client.CoreV1Api()
print('Kubernetes client instantiated')
```
### Get the namespace for the big data cluster
Get the namespace of the Big Data Cluster from the Kubernetes API.
**NOTE:**
If there is more than one Big Data Cluster in the target Kubernetes
cluster, then either:
- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA_NAMESPACE, before starting Azure
Data Studio.
```
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = api.list_namespace(label_selector='MSSQL_CLUSTER').items[0].metadata.name
except IndexError:
from IPython.display import Markdown
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print('The kubernetes namespace for your big data cluster is: ' + namespace)
```
### Connect to each container that mounts a PVC and run the `df` linux command line tool
For each pod:
1. Get the `claim_name`s from the volumes which have a PVC
2. Join those to the containers whose `volume_mounts` reference that `claim_name`
3. Get the `mount_path` from the `volume_mount`
4. Exec into the container and run the `df` tool.
This technique seems to work across kubeadm and AKS, but it does require
`kubectl exec` into each container (which needs permission and some
time).
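The join logic of steps 1-3 can be sketched with plain dictionaries standing in for the Kubernetes API objects (the data below is hypothetical):

```
# Plain-dict stand-ins for the Kubernetes API objects (hypothetical data)
pod = {
    'volumes': [
        {'name': 'data-vol', 'claim_name': 'data-pvc'},
        {'name': 'tmp-vol', 'claim_name': None},  # no PVC -> skipped
    ],
    'containers': [
        {'name': 'mssql',
         'volume_mounts': [{'name': 'data-vol', 'mount_path': '/var/opt'}]},
    ],
}
matches = []
for volume in pod['volumes']:
    if volume['claim_name'] is not None:             # step 1: volumes with a PVC
        for container in pod['containers']:
            for mount in container['volume_mounts']:
                if mount['name'] == volume['name']:  # step 2: join on the name
                    # step 3: the mount_path that would be passed to `df`
                    matches.append((container['name'],
                                    volume['claim_name'],
                                    mount['mount_path']))
print(matches)
```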
```
pods = api.list_namespaced_pod(namespace)
low_diskspace = False
for pod in pods.items:
for volume in pod.spec.volumes:
if volume.persistent_volume_claim is not None:
for container in pod.spec.containers:
for volume_mount in container.volume_mounts:
if volume_mount.name == volume.name:
pvc = api.read_namespaced_persistent_volume_claim(name=volume.persistent_volume_claim.claim_name, namespace=namespace)
print (f"Disk Space for {pod.metadata.name}/{container.name} PVC: {volume.persistent_volume_claim.claim_name} bound to PV: {pvc.spec.volume_name} ({pvc.status.capacity}) Storage Class: {pvc.spec.storage_class_name}")
try:
output=stream(api.connect_get_namespaced_pod_exec, pod.metadata.name, namespace, container=container.name, command=['/bin/sh', '-c', f'df {volume_mount.mount_path} -h'], stderr=True, stdout=True)
except Exception as err:
print(err)
else:
print(output)
# Get the same output as a CSV, so we can check the space used
output=stream(api.connect_get_namespaced_pod_exec, pod.metadata.name, namespace, container=container.name, command=['/bin/sh', '-c', f"""df {volume_mount.mount_path} -h -P | awk '{{print $1","$2","$3","$4","$5","$6" "$7}}'"""], stderr=True, stdout=True)
s = output.split(",")
space_used = int(s[9][:-1])
if space_used > SPACED_USED_PERCENT_THRESHOLD:
low_diskspace = True
# NOTE: This string is used to match an `expert rule` (SOP013)
#
print(f"WARNING: LOW DISK SPACE! ({pod.metadata.name}/{container.name})")
print("^^^^^^^^^^^^^^^^^^^^^^^^^")
if low_diskspace:
    raise SystemExit(f"Disk space used on one or more Persistent Volumes is greater than {SPACED_USED_PERCENT_THRESHOLD}%")
print("Notebook execution is complete.")
```
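The CSV parsing above relies on the header row contributing six comma-separated fields, so index 9 of the flattened split lands on the `Use%` column of the first data row. A standalone sketch with hypothetical `df` output:

```
# Standalone sketch of the df-output parsing used above; the awk command turns
# `df -h -P` output into comma-separated fields (sample data is hypothetical)
sample = ("Filesystem,Size,Used,Avail,Use%,Mounted on\n"
          "/dev/sda1,50G,40G,10G,80%,/var/opt\n")
s = sample.split(",")
space_used = int(s[9][:-1])  # s[9] is '80%'; strip the trailing '%'
print(space_used)
```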
# Connecting to a port in the container
Another common use case is that we start some kind of long-running process in the container, and talk to it through a port. That process could be a Jupyter Notebook, for example.
## Example http.server
For demonstration purposes, we will use Python's built-in web server. To run it on the host, we could use
```
# NBVAL_SKIP
!python -m http.server
```
This program serves the contents of the current directory through a web-browser interface on port 8000 of this machine, typically reachable at http://127.0.0.1:8000, http://localhost:8000, or http://0.0.0.0:8000.
(If you have executed the above cell by pressing SHIFT+RETURN, you need to interrupt the http.server process to get control back in the notebook. This can be done by choosing "Kernel" -> "Interrupt" from the menu.)
We will now create a container and run this server inside the container. We would like to use a web browser on the host machine to inspect the files.
First, we create the Dockerfile:
```
%%file Dockerfile
FROM ubuntu:18.04
RUN apt-get -y update
RUN apt-get -y install python3
CMD python3 -m http.server
```
The last line starts the `http.server` when the container is run.
```
#NBVAL_IGNORE_OUTPUT
!docker build -t portdemo .
```
We now need to expose port 8000 of the container. We can do this using:
```
#NBVAL_SKIP
!docker run -p 8123:8000 portdemo
```
The numbers `8123:8000` mean that the internal port (8000) of the container should be connected to the port (8123) on the host system.
Once the above command is executing, we should be able to browse the file system in the container by going to the link http://localhost:8123 (or http://127.0.0.1:8123 or http://0.0.0.0:8123) on the host system.
We could have mapped port 8000 in the container to port 8000 on the host as well (`-p 8000:8000`).
As before, to stop the process, select `Kernel->Interrupt`.
## Jupyter Notebook
A common application of exposing ports is to install computational or data-analysis software inside the container, control it from a Jupyter notebook running inside the container, and use a web browser on the host system to interact with the notebook. In that case, the above example of exposing the port is in principle the right way to go, too. However, because this is such a common use case, a number of prepared Dockerfiles that install the notebook inside the container are available at https://github.com/jupyter/docker-stacks, so that one can start the Dockerfile with `FROM jupyter/...` (instead of `FROM ubuntu:...`) and in this way build on the Dockerfiles that the Jupyter team already provides.
The container image for https://github.com/jupyter/docker-stacks/tree/master/scipy-notebook might be a good starting point for work based on the Scientific Python stack.
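If you want to try this, a minimal Dockerfile along those lines might look like the following sketch (the extra package installed here is purely illustrative):
```
%%file Dockerfile
FROM jupyter/scipy-notebook
# add any extra analysis packages on top of the stack (illustrative)
RUN pip install --no-cache-dir plotly
```
As before, you would build the image with `docker build` and then publish the notebook port (8888 by default for the Jupyter images) with `docker run -p 8888:8888 ...`.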
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# This will need to be done in the future, so
# get accustomed to using it now
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
%matplotlib inline
mvc = pd.read_csv("nypd_mvc_2018.csv")
data = np.random.choice([1.0, np.nan],
size=(3, 3),
p=[.3, .7])
df = pd.DataFrame(data, columns=['A','B','C'])
print(df)
print(df.isnull())
print(df.isnull().sum())
null_counts = (mvc.isnull().sum())
print(null_counts)
null_counts_pct = null_counts / mvc.shape[0] * 100
null_df = pd.DataFrame({'null_counts': null_counts, 'null_pct': null_counts_pct})
# Rotate the dataframe so that rows become columns and vice-versa
null_df = null_df.T.astype(int)
print(null_df)
killed_cols = [col for col in mvc.columns if 'killed' in col]
print(null_df[killed_cols])
killed_cols = [col for col in mvc.columns if 'killed' in col]
killed = mvc[killed_cols].copy()
killed_manual_sum = killed.iloc[:,:3].sum(axis=1)
killed_mask = killed_manual_sum != killed['total_killed']
killed_non_eq = killed[killed_mask]
# fix the killed values
killed['total_killed'] = killed['total_killed'].mask(killed['total_killed'].isnull(), killed_manual_sum)
killed['total_killed'] = killed['total_killed'].mask(killed['total_killed'] != killed_manual_sum, np.nan)
# Create an injured dataframe and manually sum values
injured = mvc[[col for col in mvc.columns if 'injured' in col]].copy()
injured_manual_sum = injured.iloc[:,:3].sum(axis=1)
injured['total_injured'] = injured['total_injured'].mask(injured['total_injured'].isnull(), injured_manual_sum)
injured['total_injured'] = injured['total_injured'].mask(injured['total_injured'] != injured_manual_sum, np.nan)
summary = {
'injured': [
mvc['total_injured'].isnull().sum(),
injured['total_injured'].isnull().sum()
],
'killed': [
mvc['total_killed'].isnull().sum(),
killed['total_killed'].isnull().sum()
]
}
print(pd.DataFrame(summary, index=['before','after']))
mvc['total_injured'] = injured['total_injured']
mvc['total_killed'] = killed['total_killed']
def plot_null_correlations(df):
    # create a correlation matrix only for columns with at least
    # one missing value
    cols_with_missing_vals = df.columns[df.isnull().sum() > 0]
    missing_corr = df[cols_with_missing_vals].isnull().corr()
    # create a triangular mask to avoid repeated values and make
    # the plot easier to read
    missing_corr = missing_corr.iloc[1:, :-1]
    mask = np.triu(np.ones_like(missing_corr), k=1)
    # plot a heatmap of the values
    plt.figure(figsize=(20,14))
    ax = sns.heatmap(missing_corr, vmin=-1, vmax=1, cbar=False,
                     cmap='RdBu', mask=mask, annot=True)
    # format the text in the plot to make it easier to read
    for text in ax.texts:
        t = float(text.get_text())
        if -0.05 < t < 0.01:
            text.set_text('')
        else:
            text.set_text(round(t, 2))
        text.set_fontsize('x-large')
    plt.xticks(rotation=90, size='x-large')
    plt.yticks(rotation=0, size='x-large')
    plt.show()
cols_with_missing_vals = mvc.columns[mvc.isnull().sum() > 0]
missing_corr = mvc[cols_with_missing_vals].isnull().corr()
print(missing_corr)
veh_cols = [c for c in mvc.columns if 'vehicle' in c]
plot_null_correlations(mvc[veh_cols])
```
#### When a vehicle is in an accident, there is likely to be a cause, and vice-versa.
#### The pairs of column names with higher correlations are:
- vehicle_1 and cause_vehicle_1
- vehicle_2 and cause_vehicle_2
- vehicle_3 and cause_vehicle_3
- vehicle_4 and cause_vehicle_4
- vehicle_5 and cause_vehicle_5
```
col_labels = ['v_number', 'vehicle_missing', 'cause_missing']
vc_null_data = []
for v in range(1,6):
    v_col = 'vehicle_{}'.format(v)
    c_col = 'cause_vehicle_{}'.format(v)
    # Count the number of rows where vehicle_{} is null and
    # cause_vehicle_{} is not null
    v_null = (mvc[v_col].isnull() & mvc[c_col].notnull()).sum()
    # do the reverse for cause_vehicle_{} null, vehicle_{} not null
    c_null = (mvc[c_col].isnull() & mvc[v_col].notnull()).sum()
    # Append item to the vc_null_data list
    vc_null_data.append([v, v_null, c_null])
# create a dataframe using the vc_null_data list of lists
vc_null_df = pd.DataFrame(vc_null_data, columns=col_labels)
cause_cols = [c for c in mvc.columns if "cause_" in c]
cause = mvc[cause_cols]
print(cause.head())
cause_1d = cause.stack()
print(cause_1d.head())
cause_counts = cause_1d.value_counts()
top10_causes = cause_counts.head(10)
print(top10_causes)
v_cols = [c for c in mvc.columns if c.startswith("vehicle")]
vehicles = mvc[v_cols]
vehicles_1d = vehicles.stack()
vehicles_counts = vehicles_1d.value_counts()
top10_vehicles = vehicles_counts.head(10)
def summarize_missing():
    v_missing_data = []
    for v in range(1,6):
        v_col = 'vehicle_{}'.format(v)
        c_col = 'cause_vehicle_{}'.format(v)
        v_missing = (mvc[v_col].isnull() & mvc[c_col].notnull()).sum()
        c_missing = (mvc[c_col].isnull() & mvc[v_col].notnull()).sum()
        v_missing_data.append([v, v_missing, c_missing])
    col_labels = ["vehicle_number", "vehicle_missing", "cause_missing"]
    return pd.DataFrame(v_missing_data, columns=col_labels)
summary_before = summarize_missing()
for v in range(1,6):
    v_col = 'vehicle_{}'.format(v)
    c_col = 'cause_vehicle_{}'.format(v)
    # Count the number of rows where vehicle_{} is null and
    # cause_vehicle_{} is not null
    v_missing_mask = mvc[v_col].isnull() & mvc[c_col].notnull()
    # do the reverse for cause_vehicle_{} null, vehicle_{} not null
    c_missing_mask = mvc[c_col].isnull() & mvc[v_col].notnull()
    mvc[v_col] = mvc[v_col].mask(v_missing_mask, "Unspecified")
    mvc[c_col] = mvc[c_col].mask(c_missing_mask, "Unspecified")
summary_after = summarize_missing()
veh_cols = [c for c in mvc.columns if 'vehicle' in c]
plot_null_correlations(mvc[veh_cols])
loc_cols = ['borough', 'location', 'on_street', 'off_street', 'cross_street']
location_data = mvc[loc_cols]
print(location_data.isnull().sum())
plot_null_correlations(location_data)
sorted_location_data = location_data.sort_values(loc_cols)
# plot_null_matrix is a DataQuest helper that is not defined in this notebook,
# so plot the null matrix with a seaborn heatmap directly instead:
sns.heatmap(sorted_location_data.isnull(), cbar=False)
import missingno as msno
msno.matrix(sorted_location_data)
msno.heatmap(sorted_location_data)
sup_data = pd.read_csv('supplemental_data.csv')
location_cols = ['location', 'on_street', 'off_street', 'borough']
null_before = mvc[location_cols].isnull().sum()
for col in location_cols:
    mvc[col] = mvc[col].mask(mvc[col].isnull(), sup_data[col])
null_after = mvc[location_cols].isnull().sum()
```
If you'd like to continue working with this data, you can:
- Drop the rows that had suspect values for injured and killed totals.
- Clean the values in the vehicle_1 through vehicle_5 columns by analyzing the different values and merging duplicates and near-duplicates.
- Analyze whether collisions are more likely in certain locations, at certain times, or for certain vehicle types.
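For the first of these, one possible approach (sketched here on a tiny made-up frame, since the exact column names and values depend on your copy of the data) is to rebuild the manual sum and drop the rows where it disagrees with the recorded total:

```python
import pandas as pd

# Tiny illustrative frame standing in for the "killed" columns of mvc
df = pd.DataFrame({
    'pedestrians_killed': [0, 1, 0],
    'cyclist_killed':     [0, 0, 0],
    'motorist_killed':    [1, 0, 0],
    'total_killed':       [1, 0, 2],  # rows 2 and 3 disagree with the manual sum
})

# Recompute the total from the component columns
manual_sum = df.iloc[:, :3].sum(axis=1)

# Mark rows whose recorded total disagrees with the manual sum, then drop them
suspect = manual_sum != df['total_killed']
cleaned = df[~suspect]
print(cleaned)
```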
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_11_03_embedding.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# T81-558: Applications of Deep Neural Networks
**Module 11: Natural Language Processing and Speech Recognition**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 11 Material
* Part 11.1: Getting Started with Spacy in Python [[Video]](https://www.youtube.com/watch?v=A5BtU9vXzu8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_11_01_spacy.ipynb)
* Part 11.2: Word2Vec and Text Classification [[Video]](https://www.youtube.com/watch?v=nWxtRlpObIs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_11_02_word2vec.ipynb)
* **Part 11.3: What are Embedding Layers in Keras** [[Video]](https://www.youtube.com/watch?v=OuNH5kT-aD0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_11_03_embedding.ipynb)
* Part 11.4: Natural Language Processing with Spacy and Keras [[Video]](https://www.youtube.com/watch?v=BKgwjhao5DU&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_11_04_text_nlp.ipynb)
* Part 11.5: Learning English from Scratch with Keras and TensorFlow [[Video]](https://www.youtube.com/watch?v=Y1khuuSjZzc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN&index=58) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_11_05_english_scratch.ipynb)
# Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow.
```
try:
    %tensorflow_version 2.x
    COLAB = True
    print("Note: using Google CoLab")
except:
    print("Note: not using Google CoLab")
    COLAB = False
```
# Part 11.3: What are Embedding Layers in Keras
[Embedding Layers](https://keras.io/layers/embeddings/) are a handy feature of Keras that allows the program to automatically insert additional information into the data flow of your neural network. In the previous section, you saw that Word2Vec could expand words to a 300 dimension vector. An embedding layer would allow you to insert these 300-dimension vectors in the place of word-indexes automatically.
Programmers often use embedding layers with Natural Language Processing (NLP); however, they can be used in any instance where you wish to substitute a longer vector for an index value. In some ways, you can think of an embedding layer as dimension expansion. However, the hope is that these additional dimensions provide more information to the model and yield a better score.
### Simple Embedding Layer Example
* **input_dim** = How large is the vocabulary? How many categories are you encoding? This parameter is the number of items in your "lookup table."
* **output_dim** = How many numbers in the vector that you wish to return.
* **input_length** = How many items are in the input feature vector that you need to transform?
Now we create a neural network with a vocabulary size of 10, which will map the integer values 0-9 to vectors of 4 numbers. Each input feature vector will contain two such values. This neural network does nothing more than pass the embedding on to the output, but it lets us see what the embedding is doing.
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding
import numpy as np
model = Sequential()
embedding_layer = Embedding(input_dim=10, output_dim=4, input_length=2)
model.add(embedding_layer)
model.compile('adam', 'mse')
```
Let's take a look at the structure of this neural network so that we can see what is happening inside it.
```
model.summary()
```
For this neural network, which is just an embedding layer, the input is a vector of size 2. These two inputs are integer numbers from 0 to 9 (corresponding to the requested input_dim quantity of 10 values). Looking at the summary above, we see that the embedding layer has 40 parameters. This value comes from the embedded lookup table that contains four amounts (output_dim) for each of the 10 (input_dim) possible integer values for the two inputs. The output is 2 (input_length) length 4 (output_dim) vectors, resulting in a total output size of 8, which corresponds to the Output Shape given in the summary above.
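The parameter and output-size arithmetic in that paragraph is easy to verify by hand; this tiny sketch just restates it in code:

```python
input_dim, output_dim, input_length = 10, 4, 2

# one lookup row of length output_dim per vocabulary entry
n_params = input_dim * output_dim
# each of the input_length indexes is replaced by a length-output_dim vector
output_size = input_length * output_dim

print(n_params)     # 40 parameters in the embedding layer
print(output_size)  # flattened output size of 8
```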
Now, let us query the neural network with two rows. The input is two integer values, as was specified when we created the neural network.
```
input_data = np.array([
[1,2]
])
pred = model.predict(input_data)
print(input_data.shape)
print(pred)
```
Here we see two length-4 vectors that Keras looked up for each of the input integers. Recall that Python arrays are zero-based. Keras replaced the value of 1 with the second row of the 10 x 4 lookup matrix. Similarly, Keras replaced the value of 2 by the third row of the lookup matrix. The following code displays the lookup matrix in its entirety. The embedding layer performs no mathematical operations other than inserting the correct row from the lookup table.
```
embedding_layer.get_weights()
```
The values above are random parameters that Keras generated as starting points. Generally, we will either transfer an embedding or train these random values into something useful. The next section demonstrates how to embed a hand-coded embedding.
### Transferring An Embedding
Now, we see how to hard-code an embedding lookup that performs a simple one-hot encoding. One-hot encoding would transform the input integer values of 0, 1, and 2 to the vectors $[1,0,0]$, $[0,1,0]$, and $[0,0,1]$ respectively. The following code replaced the random lookup values in the embedding layer with this one-hot coding inspired lookup table.
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding
import numpy as np
embedding_lookup = np.array([
[1,0,0],
[0,1,0],
[0,0,1]
])
model = Sequential()
embedding_layer = Embedding(input_dim=3, output_dim=3, input_length=2)
model.add(embedding_layer)
model.compile('adam', 'mse')
embedding_layer.set_weights([embedding_lookup])
```
We have the following parameters to the Embedding layer:
* input_dim=3 - There are three different integer categorical values allowed.
* output_dim=3 - Per one-hot encoding, three columns represent a categorical value with three possible values.
* input_length=2 - The input vector has two of these categorical values.
Now we query the neural network with two categorical values to see the lookup performed.
```
input_data = np.array([
[0,1]
])
pred = model.predict(input_data)
print(input_data.shape)
print(pred)
```
The given output shows that we provided the program with two rows from the one-hot encoding table. This encoding is a correct one-hot encoding for the values 0 and 1, where there are up to 3 unique values possible.
The next section demonstrates how to train this embedding lookup table.
### Training an Embedding
First, we make use of the following imports.
```
from numpy import array
from tensorflow.keras.preprocessing.text import one_hot
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Embedding, Dense
```
We create a neural network that classifies restaurant reviews according to positive or negative. This neural network can accept strings as input, such as given here. This code also includes positive or negative labels for each review.
```
# Define 10 restaurant reviews.
reviews = [
'Never coming back!',
'Horrible service',
'Rude waitress',
'Cold food.',
'Horrible food!',
'Awesome',
'Awesome service!',
'Rocks!',
'poor work',
'Couldn\'t have done better']
# Define labels (1=negative, 0=positive)
labels = array([1,1,1,1,1,0,0,0,0,0])
```
Notice that the second to the last label is incorrect. Errors such as this are not too out of the ordinary, as most training data could have some noise.
We define a vocabulary size of 50 words. Though we do not have 50 words, it is okay to use a value larger than needed. If there were more than 50 distinct words, `one_hot` could map different words to the same index, since it assigns indexes by hashing rather than by frequency. For input, we one-hot encode the strings. Note that we use the TensorFlow one-hot encoding method here, rather than Scikit-Learn. Scikit-learn would expand these strings to the 0's and 1's as we would typically see for dummy variables. TensorFlow translates all of the words to index values and replaces each word with that index.
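Keras's `one_hot` helper assigns word indexes via a hashing trick; a minimal sketch of that idea (using Python's built-in `hash` rather than Keras's actual hashing function) might look like this:

```python
VOCAB_SIZE = 50

def hash_encode(text, vocab_size=VOCAB_SIZE):
    # map each word to an index in [1, vocab_size); distinct words
    # may collide, just as they can with Keras's one_hot
    return [hash(w.lower()) % (vocab_size - 1) + 1 for w in text.split()]

encoded = hash_encode('Never coming back!')
print(encoded)  # three indexes, one per word
```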
```
VOCAB_SIZE = 50
encoded_reviews = [one_hot(d, VOCAB_SIZE) for d in reviews]
print(f"Encoded reviews: {encoded_reviews}")
```
The program one-hot encodes these reviews to word indexes; however, their lengths are different. We pad these reviews to 4 words and truncate any words beyond the fourth word.
```
MAX_LENGTH = 4
padded_reviews = pad_sequences(encoded_reviews, maxlen=MAX_LENGTH, \
padding='post')
print(padded_reviews)
```
Each review is padded by appending zeros at the end, as specified by the `padding='post'` setting.
Next, we create a neural network to learn to classify these reviews.
```
model = Sequential()
embedding_layer = Embedding(VOCAB_SIZE, 8, input_length=MAX_LENGTH)
model.add(embedding_layer)
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
print(model.summary())
```
This network accepts four integer inputs that specify the indexes of a padded restaurant review. The first embedding layer converts these four indexes into four vectors of length 8. These vectors come from the lookup table that contains 50 (VOCAB_SIZE) rows of vectors of length 8. This encoding is evident by the 400 (8 times 50) parameters in the embedding layer. The size of the output from the embedding layer is 32 (4 words expressed as 8-number embedded vectors). A single output neuron is connected to the embedding layer by 33 weights (32 from the embedding layer and a single bias neuron). Because this is a binary classification network, we use the sigmoid activation function and binary_crossentropy loss.
The program now trains the neural network. Both the embedding lookup and dense 33 weights are updated to produce a better score.
```
# fit the model
model.fit(padded_reviews, labels, epochs=100, verbose=0)
```
We can see the learned embeddings. Think of each word's vector as a location in 8-dimensional space, where words associated with positive reviews end up close to other positive-review words and negative-review words cluster together. In addition to these embeddings, training also updates the 33 weights between the embedding layer and the output neuron, which learn to transform the embeddings into an actual prediction. You can see these embeddings here.
```
print(embedding_layer.get_weights()[0].shape)
print(embedding_layer.get_weights())
```
We can now evaluate this neural network's accuracy, including both the embeddings and the learned dense layer.
```
loss, accuracy = model.evaluate(padded_reviews, labels, verbose=0)
print(f'Accuracy: {accuracy}')
```
The accuracy is a perfect 1.0, indicating there is likely overfitting. For a more complex data set, it would be good to use early stopping to not overfit.
```
print(f'Log-loss: {loss}')
```
However, the loss is not perfect, meaning that even though the predicted probabilities indicated a correct prediction in every case, the program did not achieve absolute confidence in each correct answer. The lack of confidence was likely due to the small amount of noise (previously discussed) in the data set. Additionally, the fact that some words appeared in both positive and negative reviews contributed to this lack of absolute certainty.
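To see how confidence drives log-loss, here is a small self-contained sketch of the binary cross-entropy computation, using made-up predicted probabilities:

```python
import numpy as np

def binary_crossentropy(y_true, y_pred):
    # clip to avoid log(0)
    y_pred = np.clip(y_pred, 1e-7, 1 - 1e-7)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1.0, 1.0, 0.0, 0.0])

confident   = np.array([0.99, 0.99, 0.01, 0.01])  # correct and confident
unconfident = np.array([0.60, 0.60, 0.40, 0.40])  # correct but unsure

# both achieve perfect accuracy, but the unconfident predictions incur more loss
print(binary_crossentropy(y_true, confident))
print(binary_crossentropy(y_true, unconfident))
```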
# Problem Set I.2
## \# 1
$Ax=0$ and $Ay=0$
$B=\begin{bmatrix}
x & y
\end{bmatrix}$ and $C=\begin{bmatrix}
0 & 0
\end{bmatrix}$
$AB=C \implies A\begin{bmatrix}
x & y
\end{bmatrix}=\begin{bmatrix}
0 & 0
\end{bmatrix}$
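A quick numerical sanity check of this claim, with a hypothetical $A$ chosen so that its nullspace contains $x$ and $y$:

```python
import numpy as np

A = np.array([[1.0, -1.0, 0.0]])  # hypothetical A; its nullspace is 2-dimensional
x = np.array([1.0, 1.0, 0.0])     # A @ x = 0
y = np.array([0.0, 0.0, 1.0])     # A @ y = 0

B = np.column_stack([x, y])       # B = [x  y]
C = A @ B                         # should be the zero matrix [0  0]
print(C)
```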
## \# 2
$a=a_1,a_2,\dots,a_m$ and $b=b_1,b_2,\dots,b_p$
$$
ab^T=\begin{bmatrix}
a_1 \\
a_2 \\
\vdots \\
a_m
\end{bmatrix}
\cdot
\begin{bmatrix}
b_1 & b_2 & \dots & b_p
\end{bmatrix}
=
\begin{bmatrix}
a_1 \cdot b_1 & a_1 \cdot b_2 & \dots & a_1 \cdot b_p \\
a_2 \cdot b_1 & a_2 \cdot b_2 & \dots & a_2 \cdot b_p \\
\vdots & \vdots & \ddots & \vdots \\
a_m \cdot b_1 & a_m \cdot b_2 & \dots & a_m \cdot b_p \\
\end{bmatrix}_{m \times p}
$$
Row $i$, Col $j$ = $a_ib_j$
$$aa^T=
\begin{bmatrix}
a_1 \\
a_2 \\
\vdots \\
a_m
\end{bmatrix}
\cdot
\begin{bmatrix}
a_1 & a_2 & \dots & a_m
\end{bmatrix}
=$$
$$
\begin{bmatrix}
a_1^2 & a_1a_2 & \dots & a_1a_m \\
a_2a_1 & a_2^2 & \dots & a_2a_m \\
\vdots & \vdots & \ddots & \vdots \\
a_ma_1 & a_ma_2 & \dots & a_m^2
\end{bmatrix}=S
$$
Symmetric matrix $S^T=S$
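A small numpy check of the outer-product entries and of the symmetry of $aa^T$, with made-up vectors:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0])

# outer product: entry (i, j) is a_i * b_j
M = np.outer(a, b)
assert M[1, 0] == a[1] * b[0]

# a a^T is symmetric: S^T = S
S = np.outer(a, a)
print(np.array_equal(S, S.T))  # True
```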
## \# 3
a) Sum of rank1 matrices
$$AB=
\begin{bmatrix}
| & & | \\
a_1 & \dots & a_n \\
| & & |
\end{bmatrix}
\cdot
\begin{bmatrix}
- & b_1^* & - \\
& \vdots & \\
- & b_n^* & -
\end{bmatrix}
=
a_1b_1^*+ \dots + a_nb_n^*
=C
$$
b) $c_{ij}=\sum_{k=1}^{n} a_{ik}b_{kj}$
Example: $c_{45}= a_{41}b_{15} + a_{42}b_{25} + \dots + a_{4n}b_{n5} = \sum_{k=1}^{n} a_{4k}b_{k5}$
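Both views of the product can be checked numerically; the sketch below (with random matrices) forms $AB$ as a sum of rank-1 outer products, column $k$ of $A$ times row $k$ of $B$, and compares it with the built-in product:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 5))

# AB as a sum of rank-1 matrices: column k of A times row k of B
C = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))
print(np.allclose(C, A @ B))  # True
```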
## \# 4
$$AB=
\begin{bmatrix}
| & & | \\
a_1 & \dots & a_n \\
| & & |
\end{bmatrix}
\cdot
\begin{bmatrix}
b_1 \\
\vdots \\
b_n
\end{bmatrix}
=
a_1b_1+ \dots + a_nb_n
=C
$$
$c_{i}=\sum_{k=1}^{n} a_{ik}b_{k}$
## \# 5
$$A=\begin{bmatrix}
1 & a \\
0 & 1
\end{bmatrix};
B=\begin{bmatrix}
b_1 & b_2 \\
b_3 & b_4
\end{bmatrix};
C=\begin{bmatrix}
1 & 0 \\
c & 1
\end{bmatrix};
$$
$(AB)C=A(BC)$
$(AB)C$:
$$AB=
\begin{bmatrix}
1 & a \\
0 & 1
\end{bmatrix}
\cdot
\begin{bmatrix}
b_1 & b_2 \\
b_3 & b_4
\end{bmatrix}
=
$$
$$=
\begin{bmatrix}
b_1 + ab_3 & b_2 + ab_4 \\
b_3 & b_4
\end{bmatrix}
$$
$$
(AB)C=
\begin{bmatrix}
b_1 + ab_3 & b_2 + ab_4 \\
b_3 & b_4
\end{bmatrix}
\cdot
\begin{bmatrix}
1 & 0 \\
c & 1
\end{bmatrix}
=
$$
$$=
\begin{bmatrix}
b_1 + ab_3 + c(b_2 + ab_4) & b_2 + ab_4 \\
b_3 + cb_4 & b_4
\end{bmatrix}
$$
$A(BC)$:
$$BC=\begin{bmatrix}
b_1 + b_2c & b_2 \\
b_3 + b_4c & b_4
\end{bmatrix}
$$
$$A(BC)=\begin{bmatrix}
1 & a \\
0 & 1
\end{bmatrix}
\cdot
\begin{bmatrix}
b_1 + b_2c & b_2 \\
b_3 + b_4c & b_4
\end{bmatrix}
=
$$
$$=
\begin{bmatrix}
b_1 + b_2c + a(b_3 + b_4c) & b_2 + ab_4 \\
b_3 + b_4c & b_4
\end{bmatrix}
$$
$b_1 + ab_3 + c(b_2 + ab_4)=b_1 + b_2c + a(b_3 + b_4c)$
$b_1 + ab_3 + cb_2 + cab_4=b_1 + cb_2 + ab_3 + acb_4$
So, $(AB)C=A(BC)$
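The same associativity can be confirmed numerically, with made-up values of $a$, $c$, and $B$:

```python
import numpy as np

a, c = 2.0, 3.0
A = np.array([[1.0, a], [0.0, 1.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
C = np.array([[1.0, 0.0], [c, 1.0]])

# associativity of matrix multiplication: (AB)C == A(BC)
print(np.allclose((A @ B) @ C, A @ (B @ C)))  # True
```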
## \# 6
$$AB=AI=
\begin{bmatrix}
| & | & | \\
a_1 & a_2 & a_3 \\
| & | & |
\end{bmatrix}
\cdot
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}
=
\begin{bmatrix}
| \\
a_1 \\
|
\end{bmatrix}
\cdot
\begin{bmatrix}
1 & 0 & 0
\end{bmatrix}
+
\begin{bmatrix}
| \\
a_2 \\
|
\end{bmatrix}
\cdot
\begin{bmatrix}
0 & 1 & 0
\end{bmatrix}
+
\begin{bmatrix}
| \\
a_3 \\
|
\end{bmatrix}
\cdot
\begin{bmatrix}
0 & 0 & 1
\end{bmatrix}
=
$$
$$
=
\begin{bmatrix}
| & | & | \\
a_1 & 0 & 0 \\
| & | & |
\end{bmatrix}
+
\begin{bmatrix}
| & | & | \\
0 & a_2 & 0 \\
| & | & |
\end{bmatrix}
+
\begin{bmatrix}
| & | & | \\
0 & 0 & a_3 \\
| & | & |
\end{bmatrix}
=
\begin{bmatrix}
| & | & | \\
a_1 & a_2 & a_3 \\
| & | & |
\end{bmatrix}
=
A
$$
$AI=A$
## \# 7
$A=\begin{bmatrix}
2 & 1 \\
1 & 1
\end{bmatrix}$,
$rank(A)=2$;
$B=\begin{bmatrix}
0 & 1 \\
0 & 0
\end{bmatrix}$;
$AB=\begin{bmatrix}
0 & 2 \\
0 & 1
\end{bmatrix}$
$rank(AB)=1$, $C(AB) < C(A)$
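A quick numpy check of these ranks:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[0.0, 1.0], [0.0, 0.0]])

print(np.linalg.matrix_rank(A))      # 2
print(np.linalg.matrix_rank(A @ B))  # 1
```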
## \# 8
$c_{ij}=\sum_{k=1}^{n} a_{ik}b_{kj}$
$C=AB=(m \times n)(n \times p)$
Rows times cols
1) For $i=1$ to $m$
2) For $j=1$ to $p$
3) For $k=1$ to $n$
$C(i,j)=C(i,j)+A(i,k)*B(k,j)$
Cols times rows
1) For $k=1$ to $n$
2) For $i=1$ to $m$
3) For $j=1$ to $p$
$C(i,j)=C(i,j)+A(i,k)*B(k,j)$
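Both loop orders can be sketched in code and checked against numpy's built-in product (random matrices, purely illustrative):

```python
import numpy as np

def matmul_rows_times_cols(A, B):
    # inner products: C[i, j] accumulates (row i of A) dot (column j of B)
    m, n = A.shape
    _, p = B.shape
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            for k in range(n):
                C[i, j] += A[i, k] * B[k, j]
    return C

def matmul_cols_times_rows(A, B):
    # outer products: for each k, add (column k of A) times (row k of B)
    m, n = A.shape
    _, p = B.shape
    C = np.zeros((m, p))
    for k in range(n):
        C += np.outer(A[:, k], B[k, :])
    return C

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
print(np.allclose(matmul_rows_times_cols(A, B), A @ B))  # True
print(np.allclose(matmul_cols_times_rows(A, B), A @ B))  # True
```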
```
#cell-width control
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
#twdisc-embeddedinput
model_class = "model_class_5ws"
model_num = "model_1"
#packages
import numpy
import tensorflow as tf
from tensorflow.core.example import example_pb2
#utils
import os
import random
import pickle
import struct
import time
from noise import *
#keras
import keras
from keras.preprocessing import text, sequence
from keras.preprocessing.text import Tokenizer
from keras.models import Model
from keras.models import load_model
from keras.layers import Dense, Dropout, Activation, Concatenate, Dot, Embedding, LSTM, Conv1D, MaxPooling1D, Input, Lambda
```
# CPU usage
```
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = ""
```
# Utils
```
numpy.random.seed(47)
def get_data(filename, nc_dist, replace, corr_sample, separate, band_width, noise_candidates_path, dgp,
             GLOVE_DIR, val_share, maxlen_text, maxlen_summ, embedding_size, load_tok, tok_path):
    """
    Args:
        filename (string): path to data file holding clean datapoints, i.e. clean (text, summ) pairs
        nc_dist ((float, float)): (clean_ratio, noise_ratio) tuple describing the desired noise-clean
            distribution in the output dataset, sum(nc_dist) = 1
        replace (bool): whether or not to sample with replacement from clean
        corr_sample (bool): whether to compare some of the orig summs and the noise summs to check for
            correspondence
        separate (bool): whether to generate noise from a set of texts that is disjoint from the clean data
        band_width (int): number of outputs of G per text
        noise_candidates_path (string): path where the generated noise files are stored
        dgp (string): which DGP is to be used for noise, e.g. "generator" or "random"
        load_tok (bool): whether to load tokenizer or train from scratch
        tok_path (string): path to stored tokenizer object
        GLOVE_DIR (string): path to glove folder
        val_share (float): share (< 1) of data that is to be used for validation
        maxlen_text (int): max length of text after which to cut text
        maxlen_summ (int): max length of summ after which to cut summ
    Returns:
        data ((numpy array, numpy array)): (clean, noise) tuple of Nx2 arrays of (text, summ) datapoints
            with noise
    """
    if dgp == "generator":
        #get all the bad indices
        print('Get all the bad indices...')
        alarm_log = {}
        alarm_count = 0
        count = 4
        filenum_old = 'XXXXXX'
        name_old = 'XXXXXX'
        for name_new in sorted(os.listdir(noise_candidates_path)):
            filenum_new = name_new[0:6]
            if filenum_new == filenum_old:
                count += 1
            if filenum_new != filenum_old:
                if count != 4:
                    alarm_count += 1
                    alarm_log[name_old] = count #collect all the bad keys with count
                count = 1
            filenum_old = filenum_new
            name_old = name_new
        all_bad_indices = [int(filename[0:6]) for filename in alarm_log.keys()] #now have all the bad indices
        print('...done!')
        #read in clean data
        print('Reading clean data...')
        text_summ_pairs = []
        with open(filename, 'r') as data:
            text = data.readline()
            summ = data.readline()
            while summ:
                if len(text) > 2 and len(summ) > 2:
                    text_summ_pairs.append([text[0:-1], summ[0:-1]])
                text = data.readline()
                summ = data.readline()
        clean_2d = numpy.array(text_summ_pairs, dtype=object)
        print('...done!')
        #remove bad indices
        print('Remove bad indices from clean data...')
        mask = numpy.ones(clean_2d.shape[0], dtype='bool')
        mask[all_bad_indices] = False
        clean_2d = clean_2d[mask]
        N_clean = clean_2d.shape[0]
        print('...done!')
        #pick indices of noise
        print('Pick noise indices...')
        clean_ratio, noise_ratio = nc_dist
        if separate: #check whether to generate noise from texts disjoint from clean data
            N_noise = int((N_clean*noise_ratio)) #calculate N_noise
            #below make sure we have a separate set of indices for noise which we can delete later for the separate
            #guarantee
            noise_separate_indices = numpy.random.choice(N_clean, size=N_noise, replace=False)
            noise_index_pool = numpy.copy(noise_separate_indices)
            for i in range(1, band_width):
                noise_index_pool = numpy.concatenate((noise_index_pool, noise_separate_indices + i))
            assert noise_index_pool.shape[0] == N_noise*band_width, "noise index pool smaller than expected"
            assert abs(((N_clean - N_noise)/N_clean) - clean_ratio) < 0.0001 \
                and abs((N_noise/N_clean) - noise_ratio) < 0.0001 \
                , "Something is wrong with N_noise" #check that you calculated N_noise correctly
        else:
            N_noise = int((N_clean - N_clean*clean_ratio)/clean_ratio) #calculate N_noise
            noise_index_pool = numpy.arange(N_clean*band_width)
            assert abs(N_clean/(N_clean + N_noise) - clean_ratio) < 0.0001 \
                and abs(N_noise/(N_clean + N_noise) - noise_ratio) < 0.0001 \
                , "Something is wrong with N_noise" #check that you calculated N_noise correctly
        noise_summ_indices = numpy.random.choice(noise_index_pool, size=N_noise, replace=replace) #get indices \
        #in the range of N_clean*(band_width of generator run, i.e. number of outputs of G per text)
        assert N_noise == len(noise_summ_indices), "N_noise and len(selected_indices do not match)"
        print('...done!')
        #read in candidate noise points
        print('Read in candidate noise points...')
        candidate_noise = []
        for filename in sorted(os.listdir(noise_candidates_path)):
            if int(filename[0:6]) not in all_bad_indices:
                with open(noise_candidates_path+filename, 'r') as file:
                    candidate_noise.append(file.read().replace('\n', ' ')) #read file, trim \n and add to cand. list
        assert len(candidate_noise) == band_width*clean_2d.shape[0], "less candidates than expected"
        print('...done!')
        #preprocess clean data, i.e. remove <s> and </s>
        print('Preprocess clean data, i.e. remove <s> and </s>...')
        for i in range(N_clean):
            clean_2d[i,1] = clean_2d[i,1].replace('<s> ', '')
            clean_2d[i,1] = clean_2d[i,1].replace(' </s>', '')
        print('...done!')
        if corr_sample: #take some samples to sanity check that refs and generated summs correspond
            print('Sanity check some examples..')
            idx = numpy.random.choice(N_noise, size=10, replace=False)
            for i in idx:
                print('### '+str(i)+' ###')
                print('reference summary '+':\n'+clean_2d[(noise_summ_indices[i] // 4),1])
                print('generated summary'+':\n'+candidate_noise[i])
            print('...done!')
        #put data together
        print('Put data together...')
        noise_texts = clean_2d[(noise_summ_indices // 4),0]
        print('noise_texts.shape[0]', noise_texts.shape[0])
        noise_summs = numpy.array(candidate_noise)[noise_summ_indices]
        print('noise_summs.shape[0]', noise_summs.shape[0])
        noise_2d = numpy.stack((noise_texts,noise_summs), axis=-1)
        assert noise_2d.shape[0] == N_noise and noise_2d.shape[1] == 2, "the noise_2d shape does not check out"
        print('...done!')
        #remove noise source texts from clean data if separate was selected
        if separate: #delete inputs for noise from clean data
            print('prior', clean_2d.shape)
            clean_2d = numpy.delete(clean_2d, noise_separate_indices, axis=0)
            print('after', clean_2d.shape)
            #TODO: this might screw up the dist in the case where an index appears multiple times in sel_ind
        return clean_2d, noise_2d
    elif dgp == 'random':
        #read in clean data
        print('Reading clean data...')
        text_summ_pairs = []
        with open(filename, 'r') as data:
            text = data.readline()
            summ = data.readline()
            while summ:
                if len(text) > 2 and len(summ) > 2:
                    text_summ_pairs.append([text[0:-1], summ[0:-1]])
                text = data.readline()
                summ = data.readline()
        clean_2d = numpy.array(text_summ_pairs, dtype=object)
        print('...done!')
        #create and fit tokenizer
        print('Create and fit tokenizer...')
        if load_tok:
            with open(tok_path, 'rb') as handle:
                tokenizer = pickle.load(handle)
        else:
            tokenizer = Tokenizer(num_words=max_features,
                                  filters='#$%&()*+-/:;<=>@[\\]^_{|}~\t\n',
                                  lower=True,
                                  split=" ",
                                  char_level=False)
            tokenizer.fit_on_texts(numpy.append(clean_2d[:,0], clean_2d[:,1]))
        print('...done!')
        #read in candidate noise points
        #print('Read in candidate noise points...')
        #fractions = {"switch-pairs":0.25,"sentence-switch-entire-bank":0.25,\
        #"sentence-switch-same-text-bank":0.25,"word-switch-entire-bank":0.25}
        #clean_2d, noise_2d = noise_randomDGP(clean_2d, fractions, separate, nc_dist)
        #print('...done!')
        #preprocess clean data, i.e. remove <s> and </s>
        print('Preprocess clean data, i.e. remove <s> and </s>...')
        for i in range(clean_2d.shape[0]):
            clean_2d[i,1] = clean_2d[i,1].replace('<s> ', '')
            clean_2d[i,1] = clean_2d[i,1].replace(' </s>', '')
        print('...done!')
        #make embedding matrix
        print('Make embedding matrix...')
        embeddings_index = {}
        f = open(os.path.join(GLOVE_DIR, 'glove.6B.100d.txt'))
        for line in f:
            values = line.split()
            word = values[0]
            coefs = numpy.asarray(values[1:], dtype='float32')
            embeddings_index[word] = coefs
        f.close()
        word_index = tokenizer.word_index
        embedding_matrix = numpy.zeros((len(word_index) + 1, embedding_size))
        for word, i in word_index.items():
            embedding_vector = embeddings_index.get(word)
            if embedding_vector is not None:
                # words not found in embedding index will be all-zeros.
                embedding_matrix[i] = embedding_vector
        print('...done!')
        #split data into train & val
        print('split data into train & val...')
        #split texts and summs
        texts = clean_2d[:,0]
        summs = clean_2d[:,1]
        #get targets
        N_clean = clean_2d.shape[0]
        #N_noise = noise_2d.shape[0]
        targets = [1]*N_clean
        #permute targets and data in the same way
        #indices = numpy.random.choice(N_clean+N_noise, size=N_clean+N_noise, replace=False)
        #assert len(indices) == N_clean+N_noise, "indices are less N_clean + N_noise"
        #texts = texts[indices]
        #summs = summs[indices]
        #targets = targets[indices]
#split data into train and test
#split = int((N_clean+N_noise)*val_share)
#texts_train = texts[split:]
#summs_train = summs[split:]
#targets_train = targets[split:]
#texts_val = texts[:split]
#summs_val = summs[:split]
#targets_val = targets[:split]
#print('train dist: ', numpy.mean(targets_train)) #just checking what the dists are after permute
#print('val dist: ', numpy.mean(targets_val)) #just checking what the dists are after permute
#print('...done!')
#sequentialize data
#print('sequentialize data...')
#texts_train_seq = tokenizer.texts_to_sequences(texts_train)
#summs_train_seq = tokenizer.texts_to_sequences(summs_train)
#texts_val_seq = tokenizer.texts_to_sequences(texts_val)
#summs_val_seq = tokenizer.texts_to_sequences(summs_val)
#pad data
#texts_train_seq = sequence.pad_sequences(texts_train_seq, maxlen=maxlen_text)
#summs_train_seq = sequence.pad_sequences(summs_train_seq, maxlen=maxlen_summ)
#texts_val_seq = sequence.pad_sequences(texts_val_seq, maxlen=maxlen_text)
#summs_val_seq = sequence.pad_sequences(summs_val_seq, maxlen=maxlen_summ)
#print('...done!')
partition = {}
partition['train'] = []
partition['validation'] = []
labels = {}
id_counter = 1
#go through train data: write each example to file under an id, add the id to the train partition, and record id -> label in the label dict
print('write string data to file...')
for i in range(N_clean):
store_string = texts[i]+'\n'+summs[i]
id_name = 'id-'+str(id_counter)
with open('/media/oala/4TB/data-test/'+id_name, 'w') as file:
file.write(store_string)
partition['train'] += [id_name]
labels[id_name] = targets[i]
id_counter += 1
print('...done!')
#store embedding matrix
print('store embedding matrix...')
numpy.save('/media/oala/4TB/data-test/'+'embedding-matrix', embedding_matrix)
print('...done!')
#store label dict
print('store label dict...')
with open('/media/oala/4TB/data-test/'+'labels.pickle', 'wb') as handle:
pickle.dump(labels, handle, protocol=pickle.HIGHEST_PROTOCOL)
print('...done!')
#store tokenizer
print('store tokenizer...')
with open('/media/oala/4TB/data-test/'+'tokenizer.pickle', 'wb') as handle:
pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)
print('...done!')
#store partition dict
print('store partition dict...')
with open('/media/oala/4TB/data-test/'+'partition.pickle', 'wb') as handle:
pickle.dump(partition, handle, protocol=pickle.HIGHEST_PROTOCOL)
print('...done!')
#return clean_2d, noise_2d
return partition, labels, tokenizer, embedding_matrix
else:
raise ValueError('no valid DGP was selected')
def prep_data(clean_2d, noise_2d, max_features, val_share, maxlen_text, maxlen_summ,
load_tok=False, tok_path=None):
"""
Args:
clean_2d (numpy array): Nx2 array of text summ tuples with clean points
noise_2d (numpy array): Nx2 array of text summ tuples with noise points
max_features (int): max number of words for tokenizer
val_share (float): share (< 1) of data that is to be used for validation
load_tok (bool): whether to load tokenizer or train from scratch
tok_path (string): path to stored tokenizer object
maxlen_text (int): max length of text after which to cut text
maxlen_summ (int): max length of summ after which to cut summ
Returns:
texts_train_seq (array): (N_clean+N_noise)*(1-val_share)xmaxlen_text array of seq text
summs_train_seq (array): (N_clean+N_noise)*(1-val_share)xmaxlen_summ array of seq summ
text_val_seq (array): (N_clean+N_noise)*(val_share)xmaxlen_text array of seq text
summs_val_seq (array): (N_clean+N_noise)*(val_share)xmaxlen_summ array of seq summ
tokenizer (tokenizer object): tokenizer object
"""
#split texts and summs
texts = numpy.append(clean_2d[:,0], noise_2d[:,0])
summs = numpy.append(clean_2d[:,1], noise_2d[:,1])
#get targets
N_clean = clean_2d.shape[0]
N_noise = noise_2d.shape[0]
targets = numpy.append([0]*N_clean, [1]*N_noise)
#permute targets and data in the same way
indices = numpy.random.choice(N_clean+N_noise, size=N_clean+N_noise, replace=False)
assert len(indices) == N_clean+N_noise, "fewer indices than N_clean + N_noise"
texts = texts[indices]
summs = summs[indices]
targets = targets[indices]
#split data into train and test
split = int((N_clean+N_noise)*val_share)
texts_train = texts[split:]
summs_train = summs[split:]
targets_train = targets[split:]
texts_val = texts[:split]
summs_val = summs[:split]
targets_val = targets[:split]
print('train dist: ', numpy.mean(targets_train)) #just checking what the dists are after permute
print('val dist: ', numpy.mean(targets_val)) #just checking what the dists are after permute
#train tokenizer
if load_tok:
with open(tok_path, 'rb') as handle:
tokenizer = pickle.load(handle)
else:
tokenizer = Tokenizer(num_words=max_features,
filters='#$%&()*+-/:;<=>@[\\]^_{|}~\t\n',
lower=True,
split=" ",
char_level=False)
tokenizer.fit_on_texts(numpy.append(texts_train, summs_train))
#sequentialize data
texts_train_seq = tokenizer.texts_to_sequences(texts_train)
summs_train_seq = tokenizer.texts_to_sequences(summs_train)
texts_val_seq = tokenizer.texts_to_sequences(texts_val)
summs_val_seq = tokenizer.texts_to_sequences(summs_val)
#pad data
texts_train_seq = sequence.pad_sequences(texts_train_seq, maxlen=maxlen_text)
summs_train_seq = sequence.pad_sequences(summs_train_seq, maxlen=maxlen_summ)
texts_val_seq = sequence.pad_sequences(texts_val_seq, maxlen=maxlen_text)
summs_val_seq = sequence.pad_sequences(summs_val_seq, maxlen=maxlen_summ)
return texts_train_seq, summs_train_seq, targets_train, texts_val_seq, summs_val_seq, targets_val, tokenizer
class DataGenerator(keras.utils.Sequence):
'Generates data for Keras'
def __init__(self, list_IDs, labels, tokenizer, embedding_matrix, maxlen_text, maxlen_summ, batch_size=32, dim=(1,1), n_channels=1,
n_classes=2, shuffle=True):
'Initialization'
self.dim = dim
self.batch_size = batch_size
self.labels = labels
self.list_IDs = list_IDs
self.n_channels = n_channels
self.n_classes = n_classes
self.shuffle = shuffle
self.on_epoch_end()
self.tokenizer = tokenizer
self.embedding_matrix = embedding_matrix
self.maxlen_text = maxlen_text
self.maxlen_summ = maxlen_summ
def __len__(self):
'Denotes the number of batches per epoch'
return int(np.floor(len(self.list_IDs) / self.batch_size))
def __getitem__(self, index):
'Generate one batch of data'
# Generate indexes of the batch
indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
# Find list of IDs
list_IDs_temp = [self.list_IDs[k] for k in indexes]
# Generate data
X, y = self.__data_generation(list_IDs_temp)
return X, y
def on_epoch_end(self):
'Updates indexes after each epoch'
self.indexes = np.arange(len(self.list_IDs))
if self.shuffle:
np.random.shuffle(self.indexes)
def __data_generation(self, list_IDs_temp):
'Generates data containing batch_size samples' # X : (n_samples, *dim, n_channels)
# Initialization
X_one = np.empty((self.batch_size, *self.dim[0]))
X_two = np.empty((self.batch_size, *self.dim[1]))
y = np.empty((self.batch_size), dtype=int)
# Generate data
for i, ID in enumerate(list_IDs_temp):
# Store sample
with open('/media/oala/4TB/data-test/' + ID, 'r') as file:
data_point = file.read()
#text = data.readline()
#summ = data.readline()
#data_point = np.load('/media/oala/4TB/data-test/' + ID)
text, summ = data_point.split('\n')
#print('text',text)
#print('summ',summ)
text = self.tokenizer.texts_to_sequences(numpy.array([text], dtype=object))
summ = self.tokenizer.texts_to_sequences(numpy.array([summ], dtype=object))
text = sequence.pad_sequences(text, maxlen=self.maxlen_text, truncating = 'post', padding = 'pre')
summ = sequence.pad_sequences(summ, maxlen=self.maxlen_summ, truncating = 'post', padding = 'post')
#print('X_one', X_one.shape)
#print('X_one[i]',X_one[i].shape)
#print('embedding_matrix[text[numpy.newaxis,:,:]]',embedding_matrix[text[numpy.newaxis,:,:]].shape)
#print('text', text.shape)
#print('summ', summ.shape)
#print('embedding_matrix', self.embedding_matrix.shape)
X_one[i] = self.embedding_matrix[text[numpy.newaxis,:,:]]
X_two[i] = self.embedding_matrix[summ[numpy.newaxis,:,:]]
# Store class
y[i] = self.labels[ID]
return [X_one, X_two], y  # alternatively: keras.utils.to_categorical(y, num_classes=self.n_classes)
def get_embedded_input(embedding_matrix, word_index, embedding_size, maxlen_text, maxlen_summ,text_input, summ_input):
"""
Args:
embedding_matrix (numpy.array): array where 0-axis indices map to word-vectors
word_index (dict): index to word dict
embedding_size (int): dimension of embeddings
maxlen_text (int): maximum length of text
maxlen_summ (int): maximum length of summaries
text_input (numpy.array): Nxmaxlen_text index representation of texts
summ_input (numpy.array): Nxmaxlen_summ index representation of summs
Returns:
text_embed (numpy.array): N x maxlen_text x embed_dim rep of texts
summ_embed (numpy.array): N x maxlen_summ x embed_dim rep of summs
"""
#define custom embeddings layer
embedding_layer_text = Embedding(len(word_index) + 1,
embedding_size,
weights=[embedding_matrix],
input_length= maxlen_text,
trainable=False)
embedding_layer_summ = Embedding(len(word_index) + 1,
embedding_size,
weights=[embedding_matrix],
input_length= maxlen_summ,
trainable=False)
#2way input (distinct names so the numpy array arguments aren't shadowed)
text_in = Input(shape=(maxlen_text,), dtype='int32')
summ_in = Input(shape=(maxlen_summ,), dtype='int32')
#2way embeddings
text_route = embedding_layer_text(text_in)
summ_route = embedding_layer_summ(summ_in)
text_route = Activation('linear')(text_route)
summ_route = Activation('linear')(summ_route)
#put together model
model = Model(inputs=[text_in, summ_in], outputs=[text_route, summ_route])
#model.compile(loss='binary_crossentropy',
#optimizer='adam',
#metrics=['accuracy'])
#text_embed, summ_embed = model.predict([text_input, summ_input])
return model.predict([text_input, summ_input]) #text_embed, summ_embed
#read in clean data
print('Reading clean data...')
text_summ_pairs = []
with open(filename, 'r') as data:
text = data.readline()
summ = data.readline()
while summ:
if len(text) > 2 and len(summ) > 2:
text_summ_pairs.append([text[0:-1], summ[0:-1]])
text = data.readline()
summ = data.readline()
clean_2d = numpy.array(text_summ_pairs, dtype=object)
print('...done!')
#preprocess clean data, i.e. remove <s> and </s>
print('Preprocess clean data, i.e. remove <s> and </s>...')
for i in range(clean_2d.shape[0]):
clean_2d[i,1] = clean_2d[i,1].replace('<s> ', '')
clean_2d[i,1] = clean_2d[i,1].replace(' </s>', '')
print('...done!')
#create and fit tokenizer
print('Create and fit tokenizer...')
tokenizer = Tokenizer(num_words=max_features,
filters='#$%&()*+-/:;<=>@[\\]^_{|}~\t\n',
lower=True,
split=" ",
char_level=False)
tokenizer.fit_on_texts(numpy.append(clean_2d[:,0], clean_2d[:,1]))
print('...done!')
#read in candidate noise points
#print('Read in candidate noise points...')
#fractions = {"switch-pairs":0.25,"sentence-switch-entire-bank":0.25,\
#"sentence-switch-same-text-bank":0.25,"word-switch-entire-bank":0.25}
#clean_2d, noise_2d = noise_randomDGP(clean_2d, fractions, separate, nc_dist)
#print('...done!')
#make embedding matrix
print('Make embedding matrix...')
embeddings_index = {}
f = open(os.path.join(GLOVE_DIR, 'glove.6B.100d.txt'))
for line in f:
values = line.split()
word = values[0]
coefs = numpy.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
word_index = tokenizer.word_index
embedding_matrix = numpy.zeros((len(word_index) + 1, embedding_size))
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
print('...done!')
#split data into train & val
print('split data into train & val...')
#split texts and summs
texts = clean_2d[:,0]
summs = clean_2d[:,1]
#get targets
N_clean = clean_2d.shape[0]
#N_noise = noise_2d.shape[0]
targets = [1]*N_clean
#permute targets and data in the same way
#indices = numpy.random.choice(N_clean+N_noise, size=N_clean+N_noise, replace=False)
#assert len(indices) == N_clean+N_noise, "indices are less N_clean + N_noise"
#texts = texts[indices]
#summs = summs[indices]
#targets = targets[indices]
#split data into train and test
#split = int((N_clean+N_noise)*val_share)
#texts_train = texts[split:]
#summs_train = summs[split:]
#targets_train = targets[split:]
#texts_val = texts[:split]
#summs_val = summs[:split]
#targets_val = targets[:split]
#print('train dist: ', numpy.mean(targets_train)) #just checking what the dists are after permute
#print('val dist: ', numpy.mean(targets_val)) #just checking what the dists are after permute
#print('...done!')
#sequentialize data
print('sequentialize data...')
texts_train_seq = tokenizer.texts_to_sequences(texts)
summs_train_seq = tokenizer.texts_to_sequences(summs)
#texts_val_seq = tokenizer.texts_to_sequences(texts_val)
#summs_val_seq = tokenizer.texts_to_sequences(summs_val)
#pad data
texts_train_seq = sequence.pad_sequences(texts_train_seq, maxlen=maxlen_text)
summs_train_seq = sequence.pad_sequences(summs_train_seq, maxlen=maxlen_summ)
#texts_val_seq = sequence.pad_sequences(texts_val_seq, maxlen=maxlen_text)
#summs_val_seq = sequence.pad_sequences(summs_val_seq, maxlen=maxlen_summ)
#print('...done!')
all_seq = numpy.concatenate((texts_train_seq, summs_train_seq), axis=1)
N,D = all_seq.shape
all_seq_embedded = embedding_matrix[all_seq] #shape (N, D, embedding_size)
all_seq_embedded = numpy.reshape(all_seq_embedded,(N,D*embedding_size))
numpy.save('/mnt/disks/500gb/stats-and-meta-data/400000/embedding_matrix',embedding_matrix)
with open('/mnt/disks/500gb/stats-and-meta-data/400000/tokenizer.pickle', 'wb') as handle:
pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)
print(all_seq_embedded.shape)
#save all_seq_embedded in ten portions
for i in range(10):
numpy.save('/mnt/disks/500gb/stats-and-meta-data/400000/training-chunks/%d'%i,
all_seq_embedded[:,4800*i:4800*i + 4800])
```
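The block above turns the padded index sequences into dense vectors with a single NumPy fancy-indexing lookup (`embedding_matrix[all_seq]`) and then flattens each example into one row before saving. A minimal sketch of that lookup; all names and sizes here are toy values for illustration:

```python
import numpy as np

# toy embedding matrix: 5-word vocabulary, 3-dim vectors (row 0 = padding)
embedding_matrix = np.arange(15, dtype=float).reshape(5, 3)
# two padded index sequences of length 4
all_seq = np.array([[0, 2, 1, 4],
                    [3, 0, 0, 1]])
# fancy indexing maps every index to its embedding row
embedded = embedding_matrix[all_seq]   # shape (2, 4, 3)
flat = embedded.reshape(2, 4 * 3)      # one flat row per example, as saved above
print(embedded.shape, flat.shape)      # (2, 4, 3) (2, 12)
```

Indexing a 2-D matrix with a 2-D integer array yields one extra trailing axis holding the looked-up rows, which is exactly the (N, D, embedding_size) shape the cell relies on.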
### Text and summ stats
```
#load and collect statistics from portions
mean_list = []
std_list = []
mini_list = []
maxi_list = []
for i in range(10):
print(i)
a = numpy.load('/mnt/disks/500gb/stats-and-meta-data/400000/training-chunks/%d.npy'%i)
mean_list.append(numpy.mean(a,axis=0, keepdims=True))
std_list.append(numpy.var(a,axis=0, keepdims=True)**0.5)
mini_list.append(numpy.amin(a, axis=0, keepdims=True))
maxi_list.append(numpy.amax(a, axis=0, keepdims=True))
#concatenate portion results
mean = numpy.concatenate(mean_list,axis=1)
std = numpy.concatenate(std_list, axis=1)
mini = numpy.concatenate(mini_list, axis=1)
maxi = numpy.concatenate(maxi_list, axis=1)
#print to file
numpy.save('/mnt/disks/500gb/stats-and-meta-data/400000/training-stats-all/mean',mean[0])
numpy.save('/mnt/disks/500gb/stats-and-meta-data/400000/training-stats-all/std',std[0])
numpy.save('/mnt/disks/500gb/stats-and-meta-data/400000/training-stats-all/mini',mini[0])
numpy.save('/mnt/disks/500gb/stats-and-meta-data/400000/training-stats-all/maxi',maxi[0])
print(mean[0].shape)
print(std[0].shape)
print(mini[0].shape)
print(maxi[0].shape)
```
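Concatenating the per-chunk results works because every saved chunk holds a disjoint slice of columns, so column statistics taken along axis 0 per chunk are identical to the corresponding slice of the full-array statistics. A toy check of that property (shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(100, 20))
# split into 4 disjoint column chunks, as the saved portions above are
chunks = [a[:, 5 * i:5 * (i + 1)] for i in range(4)]
mean = np.concatenate([c.mean(axis=0) for c in chunks])
std = np.concatenate([c.var(axis=0) ** 0.5 for c in chunks])
print(np.allclose(mean, a.mean(axis=0)), np.allclose(std, a.std(axis=0)))  # True True
```

Note this only holds for column-wise chunking; row-wise chunks would require combining means and variances across chunks instead.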
### Only summ stats
```
#load and collect statistics from portions
a = numpy.load('/mnt/disks/500gb/stats-and-meta-data/400000/training-chunks/8.npy')
b = numpy.load('/mnt/disks/500gb/stats-and-meta-data/400000/training-chunks/9.npy')
c = numpy.concatenate((a,b), axis=1)
print(c.shape)
d = c[:,1600:]
print(d.shape)
mean = numpy.mean(d,axis=0)
std = numpy.var(d,axis=0)**0.5
mini = numpy.amin(d, axis=0)
maxi = numpy.amax(d, axis=0)
print(mean.shape)
print(std.shape)
print(mini.shape)
print(maxi.shape)
#print to file
numpy.save('/mnt/disks/500gb/stats-and-meta-data/400000/training-stats-summ/mean',mean)
numpy.save('/mnt/disks/500gb/stats-and-meta-data/400000/training-stats-summ/std',std)
numpy.save('/mnt/disks/500gb/stats-and-meta-data/400000/training-stats-summ/mini',mini)
numpy.save('/mnt/disks/500gb/stats-and-meta-data/400000/training-stats-summ/maxi',maxi)
```
# Global parameters
```
# Embedding
max_features = 400000
maxlen_text = 400
maxlen_summ = 80
embedding_size = 100 #128
# Convolution
kernel_size = 5
filters = 64
pool_size = 4
# LSTM
lstm_output_size = 70
# Training
batch_size = 30
epochs = 50
#Saving?
save = True
```
# Load data
```
#filename = "/home/oala/Documents/MT/data/datasets/finished_files/train.bin"
#noise_candidates_path = '/home/oala/Documents/MT/noising/4-beam-PGC-noise-on-train/pretrained_model_tf1.2.1/decode_train_400maxenc_4beam_35mindec_120maxdec_ckpt-238410/decoded/'
filename = "/home/donald/documents/MT/data/data-essentials-mini/finished_files/train.bin"
noise_candidate_path = ' '
nc_dist = (0.5,0.5)
replace = False
corr_sample = False
separate = False
band_width = 4
dgp = "random"
#GLOVE_DIR = "/home/oala/Documents/MT/data/datasets/glove.6B/"
GLOVE_DIR = "/home/donald/documents/MT/data/data-essentials-mini/glove.6B/"
val_share = 0.2
#partition, labels, tokenizer, embedding_matrix = get_data(filename, nc_dist, replace, corr_sample, separate, band_width, noise_candidate_path, dgp,GLOVE_DIR, val_share, maxlen_text, maxlen_summ,embedding_size,load_tok=False, tok_path=None)#
print(clean_2d.shape)
print(noise_2d.shape)
print(clean_2d[100])
print(noise_2d[100])
val_share = 0.2
texts_train, summs_train, targets_train, texts_val, summs_val, targets_val, tokenizer = \
prep_data(clean_2d, noise_2d, max_features, val_share, maxlen_text, maxlen_summ,
load_tok=False, tok_path=None)
```
# Load embeddings
```
GLOVE_DIR = "/home/oala/Documents/MT/data/datasets/glove.6B/"
#GLOVE_DIR = "/home/donald/documents/MT/data/data-essentials-mini/glove.6B/"
embeddings_index = {}
f = open(os.path.join(GLOVE_DIR, 'glove.6B.100d.txt'))
for line in f:
values = line.split()
word = values[0]
coefs = numpy.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
print('Found %s word vectors.' % len(embeddings_index))
word_index = tokenizer.word_index
embedding_matrix = numpy.zeros((len(word_index) + 1, embedding_size))
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
#define custom embeddings layer
embedding_layer_text = Embedding(len(word_index) + 1,
embedding_size,
weights=[embedding_matrix],
input_length= maxlen_text,
trainable=False)
embedding_layer_summ = Embedding(len(word_index) + 1,
embedding_size,
weights=[embedding_matrix],
input_length= maxlen_summ,
trainable=False)
```
# Make input data embedded
```
#texts_train
print(texts_train.shape)
print(summs_train.shape)
N,_ =texts_train.shape
texts_train_embedded = embedding_matrix[texts_train] #shape (N, maxlen_text, embedding_size)
summs_train_embedded = embedding_matrix[summs_train] #shape (N, maxlen_summ, embedding_size)
texts_val_embedded = embedding_matrix[texts_val]
summs_val_embedded = embedding_matrix[summs_val]
#print(texts_train_embedded.shape)
#print(texts_train[0,10])
#inv_map = {v: k for k, v in word_index.items()}
#print(inv_map[texts_train[0,10]])
#print(embeddings_index[inv_map[texts_train[0,10]]])
#print(texts_train_embedded[0,10])
print(texts_train_embedded.shape)
print(summs_train_embedded.shape)
print(texts_val_embedded.shape)
print(summs_val_embedded.shape)
```
# Build model
```
#2way input
text_input = Input(shape=(maxlen_text,embedding_size), dtype='float32')
summ_input = Input(shape=(maxlen_summ,embedding_size), dtype='float32')
#2way dropout
text_route = Dropout(0.25)(text_input)
summ_route = Dropout(0.25)(summ_input)
#2way conv
text_route = Conv1D(filters,
kernel_size,
padding='valid',
activation='relu',
strides=1)(text_route)
summ_route = Conv1D(filters,
kernel_size,
padding='valid',
activation='relu',
strides=1)(summ_route)
#2way max pool
text_route = MaxPooling1D(pool_size=pool_size)(text_route)
summ_route = MaxPooling1D(pool_size=pool_size)(summ_route)
#2way lstm
text_route = LSTM(lstm_output_size)(text_route)
summ_route = LSTM(lstm_output_size)(summ_route)
#merge both routes
#merged = keras.layers.concatenate((text_route, summ_route), axis=-1)
#merged = Concatenate(axis=-1)([text_route, summ_route])
merged = Dot(axes=1,normalize=True)([text_route, summ_route])
#output
output = Dense(1, activation='sigmoid')(merged)
#define model
model = Model(inputs=[text_input, summ_input], outputs=[output])
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
```
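The merge step uses `Dot(axes=1, normalize=True)`, which L2-normalizes the two LSTM outputs before taking their dot product, i.e. it scores each text/summary pair by cosine similarity. A small numpy sketch of that operation with toy vectors:

```python
import numpy as np

def cosine(u, v):
    # what Dot(axes=1, normalize=True) computes for one sample pair
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

u = np.array([1.0, 0.0, 1.0])
v = np.array([1.0, 1.0, 0.0])
print(cosine(u, v))  # 0.5 for these vectors
```

Because the similarity is a single scalar per pair, the final `Dense(1, activation='sigmoid')` just rescales it into a probability.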
# Train model
```
np = numpy
from keras.models import Sequential
# Parameters
params = {'dim': [(maxlen_text,embedding_size),(maxlen_summ,embedding_size)],
'batch_size': batch_size,
'n_classes': 2,
'n_channels': 1,
'shuffle': False}
# Generators
training_generator = DataGenerator(partition['train'], labels, tokenizer, embedding_matrix, maxlen_text, maxlen_summ, **params)
validation_generator = DataGenerator(partition['validation'], labels, tokenizer, embedding_matrix, maxlen_text, maxlen_summ, **params)
# Train model on dataset
model.fit_generator(generator=training_generator,
validation_data=validation_generator,
use_multiprocessing=True,
workers=4,
epochs=epochs)
#print('Train...')
#model.fit([texts_train_embedded, summs_train_embedded], targets_train,
#batch_size=batch_size,
#epochs=epochs,
#validation_data=([texts_val_embedded, summs_val_embedded], targets_val))
```
# Interact with model
```
text_string = "gary d. cohn , president trump’s top economic adviser , said on tuesday that he would resign , becoming the latest in a series of high-profile departures from the trump administration . white house officials insisted that there was no single factor behind the departure of mr. cohn , who heads the national economic council . but his decision to leave came as he seemed poised to lose an internal struggle over mr. trump’s plan to impose large tariffs on steel and aluminum imports. mr. cohn had warned last week that he might resign if mr. trump followed through with the tariffs, which mr. cohn had lobbied against internally . “ gary has been my chief economic adviser and did a superb job in driving our agenda , helping to deliver historic tax cuts and reforms and unleashing the american economy once again , ” mr. trump said in a statement to the new york times . “ he is a rare talent , and i thank him for his dedicated service to the american people . ” mr. cohn is expected to leave in the coming weeks. he will join a string of recent departures by senior white house officials, including mr. trump’s communications director and a powerful staff secretary. yet the departure of mr. cohn , a free-trade-oriented democrat who fended off a number of nationalist-minded policies during his year in the trump administration , could have a ripple effect on the president’s economic decisions and on the financial industry . it leaves mr. trump surrounded primarily by advisers with strong protectionist views who advocate the types of aggressive trade measures , like tariffs , that mr. trump campaigned on but that mr. cohn fought inside the white house . mr. cohn was viewed by republican lawmakers as the steady hand who could prevent mr. trump from engaging in activities that could trigger a trade war. even the mere threat , last august , that mr. cohn might leave sent the financial markets tumbling. on tuesday , mr. cohn’s announcement rattled markets , and trading in futures pointed to a decline in the united states stock market when it opened on wednesday . in a statement , mr. cohn said he had been pleased to work on “pro-growth economic policies to benefit the american people , in particular the passage of historic tax reform . ” white house officials said that mr. cohn was leaving on cordial terms with the president and that they planned to discuss policy even after his departure . mr. cohn’s departure comes as the white house has been buffeted by turnover , uncertainty and internal divisions and as the president lashes out at the special counsel investigation that seems to be bearing down on his team . a host of top aides have been streaming out the white house door or are considering a departure . rob porter , the white house staff secretary and a member of the inner circle , resigned after spousal abuse allegations . hope hicks , the president’s communications director and confidante , announced that she would leave soon . in recent days , the president has lost a speechwriter , an associate attorney general and the north korea negotiator . others are perpetually seen as on the way out . john f. kelly , the chief of staff , at one point broached resigning over the handling of mr. porter’s case . lt. gen. h. r. mcmaster , the national security adviser , has been reported to be preparing to leave . and many officials wonder if jared kushner , the president’s son-in-law and senior adviser , will stay now that he has lost his top-secret security clearance; the departure of mr. cohn further shrinks the number of allies mr. kushner and his wife , ivanka trump , have in the white house . more than one in three top white house officials left by the end of mr. trump’s first year and fewer than half of the 12 positions closest to the president are still occupied by the same people as when he came into office , according to a brookings institution study . mr. cohn’s departure will bring the turnover number to 43 percent , according to updated figures compiled by kathryn dunn tenpas of the brookings institution . for all the swings of the west wing revolving door over the last year , mr. cohn’s decision to leave struck a different chord for people . he is among the most senior officials to resign to date ."
summ_string = "gary d. cohn , president trump’s top economic adviser , said on tuesday that he would resign . more than one in three top white house officials left by the end of mr. trump’s first year ."
summ_string = "angela merkel visited president trump last wednesday . they talked about the iran deal and trade . president trump will travel on to europe next week ."
predict_text = sequence.pad_sequences(tokenizer.texts_to_sequences([text_string]), maxlen=maxlen_text)
predict_summ = sequence.pad_sequences(tokenizer.texts_to_sequences([summ_string]), maxlen=maxlen_summ)
print(model.predict([predict_text, predict_summ], batch_size=1))
numpy.mean(targets_val)
```
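`pad_sequences` above is used with its defaults, i.e. pre-padding with zeros and pre-truncation (keeping the last `maxlen` tokens). A pure-numpy sketch of that default behaviour; the helper name is made up for illustration:

```python
import numpy as np

def pad_pre(seqs, maxlen):
    # emulate keras pad_sequences defaults: padding='pre', truncating='pre'
    out = np.zeros((len(seqs), maxlen), dtype=int)
    for i, s in enumerate(seqs):
        trunc = s[-maxlen:]                    # keep the last maxlen tokens
        out[i, maxlen - len(trunc):] = trunc   # left-pad with zeros
    return out

print(pad_pre([[1, 2, 3], [4]], maxlen=2))  # [[2 3] [0 4]]
```

Note the summaries in the DataGenerator above instead use `truncating='post', padding='post'`, so the two ends of the pipeline pad differently.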
# Filter outputs
```
with open(tok_path, 'rb') as handle:
tokenizer = pickle.load(handle)
from os import scandir, listdir
#read in article texts, baseline summs, m1 summs, m2 summs and ref summs
article_dir = "/home/oala/Documents/MT/data/datasets/finished_files/test_output/articles/"
reference_dir = "/home/oala/Documents/MT/data/datasets/finished_files/test_output/reference/"
baseline_dir = "/home/oala/Documents/MT/data/datasets/finished_files/test_output/baseline/"
pointergen_dir = "/home/oala/Documents/MT/data/datasets/finished_files/test_output/pointer-gen/"
pointergencov_dir = "/home/oala/Documents/MT/data/datasets/finished_files/test_output/pointer-gen-cov/"
article_files = listdir(article_dir)
article_files.sort()
reference_files = listdir(reference_dir)
reference_files.sort()
baseline_files = listdir(baseline_dir)
baseline_files.sort()
pointergen_files = listdir(pointergen_dir)
pointergen_files.sort()
pointergencov_files = listdir(pointergencov_dir)
pointergencov_files.sort()
#read in texts
texts = []
for txt_file in article_files:
with open(article_dir+txt_file,'r',encoding='utf-8', errors='ignore') as txt:
text = txt.read()
text = text.replace('(', '-lrb-')
text = text.replace(')', '-rrb-')
text = text.replace('[', '-lsb-')
text = text.replace(']', '-rsb-')
text = text.replace('{', '-lcb-')
text = text.replace('}', '-rcb-')
texts.append(text)
texts = numpy.array(texts)
texts_text = numpy.copy(texts)
texts = tokenizer.texts_to_sequences(texts)
texts = sequence.pad_sequences(texts, maxlen=maxlen_text)
#helper functions for summs
def summ_dir2array(dir_name, file_list):
summs = []
for txt_file in file_list:
with open(dir_name+txt_file,'r',encoding='utf-8', errors='ignore') as txt:
summ = ""
line = txt.readline()
while line:
line = line.replace('\n', ' ')
line = line.replace('(', '-lrb-')
line = line.replace(')', '-rrb-')
line = line.replace('[', '-lsb-')
line = line.replace(']', '-rsb-')
line = line.replace('{', '-lcb-')
line = line.replace('}', '-rcb-')
summ += line
line = txt.readline()
summs.append(summ)
summs = numpy.array(summs)
summs_text = numpy.copy(summs)
summs = tokenizer.texts_to_sequences(summs)
summs = sequence.pad_sequences(summs, maxlen=maxlen_summ)
return summs, summs_text
#reference summs
reference_summs, reference_summs_text = summ_dir2array(reference_dir, reference_files)
#baseline summs
baseline_summs, baseline_summs_text = summ_dir2array(baseline_dir, baseline_files)
#pointergen summs
pointergen_summs, pointergen_summs_text = summ_dir2array(pointergen_dir, pointergen_files)
#pointergencov summs
pointergencov_summs, pointergencov_summs_text = summ_dir2array(pointergencov_dir, pointergencov_files)
N = len(texts_text)
#reference
reference_preds = model.predict([texts, reference_summs], batch_size=batch_size, verbose=1)
print(model.evaluate([texts, reference_summs],[0]*texts.shape[0], batch_size=batch_size))
reference_preds_flat = numpy.ndarray.flatten(reference_preds)
reference_preds_onehot = numpy.copy(reference_preds_flat)
reference_preds_onehot[reference_preds_onehot<0.5]=0
reference_preds_onehot[reference_preds_onehot>=0.5]=1
#reference_preds_onehot[reference_preds_onehot<0.02]=0
#reference_preds_onehot[reference_preds_onehot>=0.02]=1
print(sum(reference_preds_onehot)/reference_preds_onehot.shape[0])
#baseline
baseline_preds = model.predict([texts, baseline_summs], batch_size=batch_size, verbose=1)
baseline_preds_flat = numpy.ndarray.flatten(baseline_preds)
baseline_preds_onehot = numpy.copy(baseline_preds_flat)
baseline_preds_onehot[baseline_preds_onehot<0.5]=0
baseline_preds_onehot[baseline_preds_onehot>=0.5]=1
#baseline_preds_onehot[baseline_preds_onehot<0.02]=0
#baseline_preds_onehot[baseline_preds_onehot>=0.02]=1
print(sum(baseline_preds_onehot)/baseline_preds_onehot.shape[0])
#pointergen
pointergen_preds = model.predict([texts, pointergen_summs], batch_size=batch_size, verbose=1)
pointergen_preds_flat = numpy.ndarray.flatten(pointergen_preds)
pointergen_preds_onehot = numpy.copy(pointergen_preds_flat)
pointergen_preds_onehot[pointergen_preds_onehot<0.5]=0
pointergen_preds_onehot[pointergen_preds_onehot>=0.5]=1
#pointergen_preds_onehot[pointergen_preds_onehot<0.02]=0
#pointergen_preds_onehot[pointergen_preds_onehot>=0.02]=1
print(sum(pointergen_preds_onehot)/pointergen_preds_onehot.shape[0])
#pointergencov
pointergencov_preds = model.predict([texts, pointergencov_summs], batch_size=batch_size, verbose=1)
pointergencov_preds_flat = numpy.ndarray.flatten(pointergencov_preds)
pointergencov_preds_onehot = numpy.copy(pointergencov_preds_flat)
pointergencov_preds_onehot[pointergencov_preds_onehot<0.5]=0
pointergencov_preds_onehot[pointergencov_preds_onehot>=0.5]=1
#pointergencov_preds_onehot[pointergencov_preds_onehot<0.02]=0
#pointergencov_preds_onehot[pointergencov_preds_onehot>=0.02]=1
print(sum(pointergencov_preds_onehot)/pointergencov_preds_onehot.shape[0])
if save:
model.save('%s_%s.h5' % (model_class,model_num))
with open('%s_%s_TOKENIZER.pickle' % (model_class,model_num), 'wb') as handle:
pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)
print(reference_summs_text[500])
print(texts_text[500])
print(clean_2d[500,0])
print(clean_2d[500,1])
```
# Test model on actual testing data
```
#load model
model = load_model('/home/oala/Documents/MT/data/model-params/exciting-crazy/%s/%s/%s_%s.h5' % (model_class,model_num,model_class, model_num))
tok_path= '/home/oala/Documents/MT/data/model-params/exciting-crazy/%s/%s/%s_%s_TOKENIZER.pickle' % (model_class,model_num,model_class,model_num)
#100/0 (clean/noise)
filename = "/home/oala/Documents/MT/data/datasets/finished_files/test.bin"
noise_candidates_path = '/home/oala/Documents/MT/noising/4-beam-PGC-noise-on-train/pretrained_model_tf1.2.1/decode_train_400maxenc_4beam_35mindec_120maxdec_ckpt-238410/decoded/'
nc_dist = (0.5,0.5)
replace = False
corr_sample = False
separate = False
band_width = 4
dgp = "random"
clean_2d, noise_2d = get_data(filename, nc_dist, replace, corr_sample, separate, band_width,noise_candidates_path, dgp)
print(clean_2d.shape)
print(noise_2d.shape)
#clean_2d = clean_2d[0:2] #here you reduce clean to 2 datapoints to get only noise!
noise_2d = noise_2d[0:2]
val_share = 0.2
texts_train, summs_train, targets_train, texts_val, summs_val, targets_val, tokenizer = \
prep_data(clean_2d, noise_2d, max_features, val_share, maxlen_text, maxlen_summ,
load_tok=True, tok_path=tok_path)
#concat the splits to get all data
predict_text = numpy.concatenate((texts_train, texts_val))
predict_summ = numpy.concatenate((summs_train, summs_val))
predict_y = numpy.concatenate((targets_train, targets_val))
model.evaluate([predict_text, predict_summ], predict_y, batch_size=batch_size)
#50/50 (clean/noise)
filename = "/home/oala/Documents/MT/data/datasets/finished_files/test.bin"
noise_candidates_path = '/home/oala/Documents/MT/noising/4-beam-PGC-noise-on-train/pretrained_model_tf1.2.1/decode_train_400maxenc_4beam_35mindec_120maxdec_ckpt-238410/decoded/'
nc_dist = (0.5,0.5)
replace = False
corr_sample = False
separate = False
band_width = 4
dgp = "random"
clean_2d, noise_2d = get_data(filename, nc_dist, replace, corr_sample, separate, band_width,noise_candidates_path, dgp)
print(clean_2d.shape)
print(noise_2d.shape)
#noise_2d = noise_2d[0:2] #here you reduce clean to 2 datapoints to get only noise!
val_share = 0.2
texts_train, summs_train, targets_train, texts_val, summs_val, targets_val, tokenizer = \
prep_data(clean_2d, noise_2d, max_features, val_share, maxlen_text, maxlen_summ,
load_tok=True, tok_path=tok_path)
#concat the splits to get all data
predict_text = numpy.concatenate((texts_train, texts_val))
predict_summ = numpy.concatenate((summs_train, summs_val))
predict_y = numpy.concatenate((targets_train, targets_val))
model.evaluate([predict_text, predict_summ], predict_y, batch_size=batch_size)
#0/100 (clean/noise)
filename = "/home/oala/Documents/MT/data/datasets/finished_files/test.bin"
noise_candidates_path = '/home/oala/Documents/MT/noising/4-beam-PGC-noise-on-train/pretrained_model_tf1.2.1/decode_train_400maxenc_4beam_35mindec_120maxdec_ckpt-238410/decoded/'
nc_dist = (0.5,0.5)
replace = False
corr_sample = False
separate = False
band_width = 4
dgp = "random"
clean_2d, noise_2d = get_data(filename, nc_dist, replace, corr_sample, separate, band_width,noise_candidates_path, dgp)
print(clean_2d.shape)
print(noise_2d.shape)
clean_2d = clean_2d[0:2] #here you reduce clean to 2 datapoints to get only noise!
val_share = 0.2
texts_train, summs_train, targets_train, texts_val, summs_val, targets_val, tokenizer = \
prep_data(clean_2d, noise_2d, max_features, val_share, maxlen_text, maxlen_summ,
load_tok=True, tok_path=tok_path)
#concat the splits to get all data
predict_text = numpy.concatenate((texts_train, texts_val))
predict_summ = numpy.concatenate((summs_train, summs_val))
predict_y = numpy.concatenate((targets_train, targets_val))
model.evaluate([predict_text, predict_summ], predict_y, batch_size=batch_size)
```
# F1 comparisons
```
pure_clean_preds = numpy.ndarray((reference_preds.shape[0],1), dtype='int')
pure_clean_preds[reference_preds<0.5] = 0
pure_clean_preds[reference_preds>=0.5] = 1
pure_clean_preds_one_hot = numpy.zeros((pure_clean_preds.shape[0],2))
pure_clean_preds_one_hot[numpy.arange(reference_preds.shape[0]),pure_clean_preds[:,0]]=1
pure_clean_targets_one_hot = numpy.ones((pure_clean_preds.shape[0],2))
pure_clean_targets_one_hot[:,1]=0
one = model.predict([predict_text, predict_summ], batch_size=batch_size)
pure_noise_preds = numpy.ndarray((one.shape[0],1), dtype='int')
pure_noise_preds[one<0.5] = 0
pure_noise_preds[one>=0.5] = 1
pure_noise_preds_one_hot = numpy.zeros((pure_noise_preds.shape[0],2))
pure_noise_preds_one_hot[numpy.arange(one.shape[0]),pure_noise_preds[:,0]]=1
pure_noise_targets_one_hot = numpy.ones((pure_noise_preds.shape[0],2))
pure_noise_targets_one_hot[:,0] = 0
pure_clean_preds_one_hot.T @ pure_clean_targets_one_hot
pure_noise_preds_one_hot.T @ pure_noise_targets_one_hot
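# Illustration only (toy arrays, not the pipeline's data): the 2x2 product
# preds_one_hot.T @ targets_one_hot counts (predicted class i, true class j)
# pairs, i.e. a confusion matrix, from which precision, recall and F1 follow.
import numpy
_toy_preds = numpy.array([[1, 0], [0, 1], [0, 1]])  # predicted one-hot rows
_toy_targs = numpy.array([[1, 0], [1, 0], [0, 1]])  # target one-hot rows
_conf = _toy_preds.T @ _toy_targs                   # rows: predicted, cols: true
_tp, _fp, _fn = _conf[1, 1], _conf[1, 0], _conf[0, 1]
_precision = _tp / (_tp + _fp)
_recall = _tp / (_tp + _fn)
_f1 = 2 * _precision * _recall / (_precision + _recall)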
model.predict([predict_text, predict_summ], batch_size=batch_size)
```
```
import distutils.util
import numpy as np
import matplotlib.pyplot as plt
import sys
import os
import math
import time
# assuming running from raycasting-simulation/Automator
sys.path.append("../PycastWorld")
from math import acos, asin, atan, cos, sin, tan, pi
from math import floor
from math import radians
from pycaster import PycastWorld, Turn, Walk
from numpy.random import default_rng
rng = default_rng()
# NOISE CONTROL
# the standard deviation of the Gaussian that random angles are drawn from
rand_angle_scale = pi / 36 # 5 degree s.d.
# the minimum of the uniform distribution that random distances (to move) are drawn from
rand_step_scale = 0.4
rand_pertrusion_scale = 0.05
enws = {"Dir.EAST": 0, "Dir.NORTH": 90, "Dir.WEST": 180, "Dir.SOUTH": 270}
def in_targ_cell(base_dir, c_targ_x, c_targ_y, x, y):
if base_dir == 0 or base_dir == 180:
if abs(c_targ_x - x) < 0.4:
return True
else:
if abs(c_targ_y - y) < 0.4:
return True
return False
class Driver:
def __init__(
self, c_targ_x, c_targ_y, base_dir, targ_dir, world, img_dir=None, show_freq=0,
):
self.c_targ_x = c_targ_x
self.c_targ_y = c_targ_y
self.base_dir = base_dir
self.targ_dir = targ_dir
self.world = world
self.curr_x = self.world.getX()
self.curr_y = self.world.getY()
self.direction = 0
self.update_direction()
self.dist = math.inf
self.update_dist()
self.angle = 0
self.step = math.inf
self.pertrusion = 0
self.all_angles = np.array([])
self.img_dir = img_dir
if self.img_dir != None:
stack_conds = []
stack_conds.append(os.path.isdir(os.path.join(img_dir, "left")))
stack_conds.append(os.path.isdir(os.path.join(img_dir, "right")))
stack_conds.append(os.path.isdir(os.path.join(img_dir, "straight")))
# if subdirectories exist, then stacking method not used
if all(stack_conds):
self.img_num_l = len(os.listdir(os.path.join(img_dir, "left")))
self.img_num_r = len(os.listdir(os.path.join(img_dir, "right")))
self.img_num_s = len(os.listdir(os.path.join(img_dir, "straight")))
self.stack_dir = False
else:
self.img_num = len(os.listdir(img_dir))
self.stack_dir = True
self.show_freq = show_freq
def update_dist(self):
self.dist = math.sqrt(
(self.c_targ_x - self.world.getX()) ** 2
+ (self.c_targ_y - self.world.getY()) ** 2
)
def update_direction(self):
if not -1 <= self.world.get_dir_x() <= 1:
dir_x = round(self.world.get_dir_x())
else:
dir_x = self.world.get_dir_x()
if not -1 <= self.world.get_dir_y() <= 1:
dir_y = round(self.world.get_dir_y())
else:
dir_y = self.world.get_dir_y()
if dir_x > 0 and dir_y >= 0:
dir = acos(dir_x)
elif dir_x <= 0 and dir_y >= 0:
dir = acos(dir_x)
elif dir_x < 0 and dir_y < 0:
dir = pi - asin(dir_y)
elif dir_x >= 0 and dir_y < 0:
dir = asin(dir_y)
self.direction = dir % (2 * pi)
# adjust for smoother path
def modified_targ(self, delta):
if self.base_dir == 0 or self.base_dir == 180:
if self.targ_dir == 90:
return self.c_targ_x, self.c_targ_y + delta
elif self.targ_dir == 270:
return self.c_targ_x, self.c_targ_y - delta
elif self.base_dir == 90 or self.base_dir == 270:
if self.targ_dir == 0:
return self.c_targ_x + delta, self.c_targ_y
elif self.targ_dir == 180:
return self.c_targ_x - delta, self.c_targ_y
return self.c_targ_x, self.c_targ_y
def get_angle(self):
mod_x, mod_y = self.modified_targ(0.15)
if self.curr_x <= mod_x and self.curr_y <= mod_y:
if mod_x == self.curr_x:
theta = pi / 2
else:
theta = (atan((mod_y - self.curr_y) / (mod_x - self.curr_x))) % (2 * pi)
# case where target pos is up and to the left
elif self.curr_x > mod_x and self.curr_y <= mod_y:
if mod_y == self.curr_y:
theta = pi
else:
theta = (atan((self.curr_x - mod_x) / (mod_y - self.curr_y))) % (
2 * pi
) + pi / 2
# case where target pos is down and to the left
elif self.curr_x > mod_x and self.curr_y > mod_y:
if mod_x == self.curr_x:
theta = 3 * pi / 2
else:
theta = (atan((self.curr_y - mod_y) / (self.curr_x - mod_x))) % (
2 * pi
) + pi
# case where target pos is down and to the right
else:
if self.curr_y == mod_y:
theta = 0
else:
theta = (atan((mod_x - self.curr_x) / (self.curr_y - mod_y))) % (
2 * pi
) + 3 * pi / 2
return theta
def set_rand_angle(self):
theta = self.get_angle()
self.angle = rng.normal(loc=theta, scale=rand_angle_scale) % (2 * pi)
def set_rand_step(self):
self.step = rng.uniform(rand_step_scale, self.dist_to_wall())
def abs_angle_diff(self, angle):
abs_diff = abs(self.direction - angle)
return abs_diff % (2 * pi)
def turn_right(self, angle):
if self.direction > angle:
if self.direction - angle > pi:
return False
else:
return True
else:
if angle - self.direction > pi:
return True
else:
return False
def turn_to_angle(self):
self.world.walk(Walk.Stop)
i = 0
prev_turn = None
while self.abs_angle_diff(self.angle) > 0.1:
if self.turn_right(self.angle):
if prev_turn == "left":
print("no left to right allowed")
break
# save image right
self.all_angles = np.append(self.all_angles, self.angle)
if self.img_dir != None:
if self.stack_dir:
self.world.save_png(
os.path.join(self.img_dir, f"{self.img_num:05}_right.png")
)
self.img_num += 1
else:
self.world.save_png(
os.path.join(
self.img_dir, "right", f"{self.img_num_r:05}.png"
)
)
# image_data = np.array(self.world)
# plt.imshow(image_data)
# plt.show()
self.img_num_r += 1
self.world.turn(Turn.Right)
self.world.update()
prev_turn = "right"
else:
if prev_turn == "right":
print("no right to left allowed")
break
# save image left
self.all_angles = np.append(self.all_angles, self.angle)
if self.img_dir != None:
if self.stack_dir:
self.world.save_png(
os.path.join(self.img_dir, f"{self.img_num:05}_left.png")
)
self.img_num += 1
else:
self.world.save_png(
os.path.join(
self.img_dir, "left", f"{self.img_num_l:05}.png"
)
)
self.img_num_l += 1
self.world.turn(Turn.Left)
self.world.update()
prev_turn = "left"
if self.show_freq != 0:
if i % self.show_freq == 0:
image_data = np.array(self.world)
plt.imshow(image_data)
plt.show()
i += 1
self.update_direction()
self.world.turn(Turn.Stop)
    @staticmethod
    def solve_triangle(theta, a):
        # Right triangle with adjacent side a and interior angle theta:
        # opposite side b = a * tan(theta), hypotenuse c = a / cos(theta)
        b = a * tan(theta)
        c = a / cos(theta)
        return b, c
def dist_to_wall(self):
# print(self.targ_dir)
# Looking East
if self.targ_dir == 0:
if (3 * pi / 2) <= self.direction <= (2 * pi):
a = self.world.getY() - (self.c_targ_y - 0.5)
theta = self.direction - (3 * pi / 2)
else:
a = (self.c_targ_y + 0.5) - self.world.getY()
theta = self.direction
# Looking North
elif self.targ_dir == 90:
if 0 <= self.direction <= (pi / 2):
a = (self.c_targ_x + 0.5) - self.world.getX()
theta = self.direction
else:
a = self.world.getX() - (self.c_targ_x - 0.5)
theta = pi - self.direction
# Looking West
elif self.targ_dir == 180:
if (pi / 2) <= self.direction <= pi:
a = (self.c_targ_y + 0.5) - self.world.getY()
theta = self.direction - (pi / 2)
else:
a = self.world.getY() - (self.c_targ_y - 0.5)
theta = (3 * pi / 2) - self.direction
# Looking South
elif self.targ_dir == 270:
if pi <= self.direction <= 3 * pi / 2:
a = self.world.getX() - (self.c_targ_x - 0.5)
theta = self.direction - pi
else:
a = (self.c_targ_x + 0.5) - self.world.getX()
theta = (2 * pi) - self.direction
b, c = self.solve_triangle(theta, a)
if b < self.dist:
return c
else:
return b
def get_east_west_distance(self):
west_dist = 0
east_dist = 0
# get west
if (pi / 2) <= self.direction <= pi:
a = (self.c_targ_y + 0.5) - self.world.getY()
theta = self.direction - (pi / 2)
else:
a = self.world.getY() - (self.c_targ_y - 0.5)
theta = (3 * pi / 2) - self.direction
b, c = self.solve_triangle(theta, a)
if b < self.dist:
west_dist = c
else:
west_dist = b
# get east
if (3 * pi / 2) <= self.direction <= (2 * pi):
a = self.world.getY() - (self.c_targ_y - 0.5)
theta = self.direction - (3 * pi / 2)
else:
a = (self.c_targ_y + 0.5) - self.world.getY()
theta = self.direction
b, c = self.solve_triangle(theta, a)
if b < self.dist:
east_dist = c
else:
east_dist = b
return min(west_dist, east_dist)
def get_north_south_distance(self):
north_dist = 0
        south_dist = 0
# get north
if 0 <= self.direction <= (pi / 2):
a = (self.c_targ_x + 0.5) - self.world.getX()
theta = self.direction
else:
a = self.world.getX() - (self.c_targ_x - 0.5)
theta = pi - self.direction
b, c = self.solve_triangle(theta, a)
if b < self.dist:
north_dist = c
else:
north_dist = b
# get south
if pi <= self.direction <= 3 * pi / 2:
a = self.world.getX() - (self.c_targ_x - 0.5)
theta = self.direction - pi
else:
a = (self.c_targ_x + 0.5) - self.world.getX()
theta = (2 * pi) - self.direction
b, c = self.solve_triangle(theta, a)
if b < self.dist:
south_dist = c
else:
south_dist = b
return min(north_dist, south_dist)
def dist_to_perpendicular_wall(self):
# print(self.targ_dir)
# Looking East
if self.targ_dir == 0:
return self.get_north_south_distance()
# Looking North
elif self.targ_dir == 90:
# get distance to west wall:
return self.get_east_west_distance()
# Looking West
elif self.targ_dir == 180:
return self.get_north_south_distance()
# Looking South
elif self.targ_dir == 270:
return self.get_east_west_distance()
def move_to_step(self):
self.world.turn(Turn.Stop)
i = 0
while (
not in_targ_cell(
self.base_dir, self.c_targ_x, self.c_targ_y, self.curr_x, self.curr_y
)
and self.step > 0.1
):
self.all_angles = np.append(self.all_angles, self.angle)
if self.img_dir != None:
if self.stack_dir:
self.world.save_png(
os.path.join(self.img_dir, f"{self.img_num:05}_straight.png")
)
self.img_num += 1
else:
self.world.position(self.curr_x, self.curr_y - self.pertrusion, 0)
self.world.save_png(
os.path.join(
self.img_dir, "straight", f"{self.img_num_s:05}.png"
)
)
self.img_num_s += 1
# time.sleep(1)
self.world.position(self.curr_x, self.curr_y + self.pertrusion, 0)
self.world.save_png(
os.path.join(
self.img_dir, "straight", f"{self.img_num_s:05}.png"
)
)
self.img_num_s += 1
# time.sleep(1)
self.world.position(self.curr_x, self.curr_y, 0)
self.world.save_png(
os.path.join(
self.img_dir, "straight", f"{self.img_num_s:05}.png"
)
)
# time.sleep(1)
self.img_num_s += 1
self.world.walk(Walk.Forward)
self.world.update()
self.curr_x = self.world.getX()
self.curr_y = self.world.getY()
if self.show_freq != 0:
if i % self.show_freq == 0:
image_data = np.array(self.world)
plt.imshow(image_data)
plt.show()
i += 1
self.step -= self.world.walk_speed()
self.update_dist()
self.world.walk(Walk.Stop)
def set_rand_pertrusion(self):
perp_dist = self.dist_to_perpendicular_wall()
# print(f"perpendicular distances: {perp_dist}")
        self.pertrusion = rng.uniform(rand_pertrusion_scale, .2)  # draw a small random offset from the protrusion range
return self.pertrusion
def pertrude(self):
i = 0
self.world.turn(Turn.Stop)
self.pertrusion = self.set_rand_pertrusion()
class Navigator:
def __init__(self, maze, img_dir=None):
self.world = PycastWorld(320, 240, maze)
self.img_dir = img_dir
# getting directions
with open(maze, "r") as in_file:
png_count = int(in_file.readline())
for _ in range(png_count):
in_file.readline()
_, dim_y = in_file.readline().split()
for _ in range(int(dim_y)):
in_file.readline()
self.directions = in_file.readlines()
self.num_directions = len(self.directions)
self.angles = np.array([])
self.dirs = []
def navigate(self, index, show_dir=False, show_freq=0):
_, _, s_base_dir = self.directions[index].split()
targ_x, targ_y, s_targ_dir = self.directions[index + 1].split()
targ_x, targ_y = int(targ_x), int(targ_y)
# convert from string
base_dir = enws[s_base_dir]
targ_dir = enws[s_targ_dir]
if show_dir:
print(f"Directions: {targ_x}, {targ_y}, {s_targ_dir}")
# center of target cell
c_targ_x = targ_x + 0.5
c_targ_y = targ_y + 0.5
driver = Driver(
c_targ_x, c_targ_y, base_dir, targ_dir, self.world, self.img_dir, show_freq
)
while not in_targ_cell(
base_dir, c_targ_x, c_targ_y, driver.curr_x, driver.curr_y
):
# obs = np.array(driver.world)
# Actually Navigate:
driver.pertrude()
driver.set_rand_angle()
driver.turn_to_angle()
# driver.set_rand_step()
driver.move_to_step()
self.angles = np.append(self.angles, driver.all_angles)
# plt.imshow(obs)
def plot_angles(self):
plt.plot(self.angles)
plt.show()
def plot_directions(self):
plt.plot(self.dirs)
plt.show()
def plot_label_dir(self):
plt.plot(self.directions)
plt.show()
maze = "../Mazes/maze01.txt"
show_freq = 0 # frequency to show frames
img_dir = "/raid/Images/test" # directory to save images to
show_dir = False
navigator = Navigator(maze, img_dir)
j = 0
while j < navigator.num_directions - 1:
navigator.navigate(j, show_dir=show_dir, show_freq=show_freq)
j += 1
```
# The data block API
```
from fastai.gen_doc.nbdoc import *
from fastai.basics import *
np.random.seed(42)
```
The data block API lets you customize the creation of a [`DataBunch`](/basic_data.html#DataBunch) by isolating the underlying parts of that process in separate blocks, mainly:
1. Where are the inputs and how to create them?
1. How to split the data into a training and validation sets?
1. How to label the inputs?
1. What transforms to apply?
1. How to add a test set?
1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.html#DataBunch)?
Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices, or depending on the folder they are in. Your labels can be in the csv file or the dataframe, but they may also come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally, you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.html#DataBunch) (batch size, collate function...)
The data block API is called as such because you can mix and match each one of those blocks with the others, allowing for a total flexibility to create your customized [`DataBunch`](/basic_data.html#DataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.html#DataBunch) are great for beginners but you can't always make your data fit in the tracks they require.
<img src="imgs/mix_match.png" alt="Mix and match" width="200">
As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts.
## Examples of use
Let's begin with our traditional MNIST example.
```
from fastai.vision import *
path = untar_data(URLs.MNIST_TINY)
tfms = get_transforms(do_flip=False)
path.ls()
(path/'train').ls()
```
In [`vision.data`](/vision.data.html#vision.data), we create an easy [`DataBunch`](/basic_data.html#DataBunch) suitable for classification by simply typing:
```
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24)
```
This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.html#train) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this:
```
data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders
.split_by_folder() #How to split in train/valid? -> use the folders
.label_from_folder() #How to label? -> depending on the folder of the filenames
.add_test_folder() #Optionally add a test set (here default name is test)
.transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64
.databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch
data.show_batch(3, figsize=(6,6), hide_axis=False)
```
Let's look at another example from [`vision.data`](/vision.data.html#vision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is:
```
planet = untar_data(URLs.PLANET_TINY)
planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)
data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', label_delim = ' ', ds_tfms=planet_tfms)
```
With the data block API we can rewrite this like that:
```
data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg')
#Where to find the data? -> in planet 'train' folder
.random_split_by_pct()
#How to split in train/valid? -> randomly with the default 20% in valid
.label_from_df(label_delim=' ')
#How to label? -> use the csv file
.transform(planet_tfms, size=128)
#Data augmentation? -> use tfms with a size of 128
.databunch())
#Finally -> use the defaults for conversion to databunch
data.show_batch(rows=2, figsize=(9,7))
```
The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.html#ImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.html#DataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder.
```
camvid = untar_data(URLs.CAMVID_TINY)
path_lbl = camvid/'labels'
path_img = camvid/'images'
```
We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...)
```
codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes
```
And we define the following function that infers the mask filename from the image filename.
```
get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}'
```
Then we can easily define a [`DataBunch`](/basic_data.html#DataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image.
```
data = (SegmentationItemList.from_folder(path_img)
.random_split_by_pct()
.label_from_func(get_y_fn, classes=codes)
.transform(get_transforms(), tfm_y=True, size=128)
.databunch())
data.show_batch(rows=2, figsize=(7,5))
```
Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/#home) here. There is a helper function in the library that reads the annotation file and returns the list of image names with the list of labelled bboxes associated with each. We convert it to a dictionary that maps image names to their bboxes and then write the function that will give us the target for each image filename.
```
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
img2bbox = dict(zip(images, lbl_bbox))
get_y_func = lambda o:img2bbox[o.name]
```
The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes.
```
data = (ObjectItemList.from_folder(coco)
#Where are the images? -> in coco
.random_split_by_pct()
#How to split in train/valid? -> randomly with the default 20% in valid
.label_from_func(get_y_func)
#How to find the labels? -> use get_y_func
.transform(get_transforms(), tfm_y=True)
#Data augmentation? -> Standard transforms with tfm_y=True
.databunch(bs=16, collate_fn=bb_pad_collate))
#Finally we convert to a DataBunch and we use bb_pad_collate
data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6))
```
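The padding that a collate function like `bb_pad_collate` performs can be sketched in plain NumPy (a simplified illustration, not fastai's actual implementation, which also handles the paired labels and tensor conversion):

```python
import numpy as np

def pad_bbox_batch(samples, pad_value=0.0):
    # samples: list of (n_i, 4) bbox arrays; pad each to the largest n_i
    max_n = max(len(s) for s in samples)
    out = np.full((len(samples), max_n, 4), pad_value)
    for i, s in enumerate(samples):
        out[i, :len(s)] = s
    return out

batch = pad_bbox_batch([np.ones((1, 4)), np.ones((3, 4))])
```

Here `batch` has shape `(2, 3, 4)`: the one-box sample is padded out to three rows so the whole batch can be stacked into a single array.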
But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model.
```
from fastai.text import *
imdb = untar_data(URLs.IMDB_SAMPLE)
data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text')
#Where are the inputs? Column 'text' of this csv
.random_split_by_pct()
#How to split it? Randomly with the default 20%
.label_for_lm()
#Label it for a language model
.databunch())
data_lm.show_batch()
```
For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`.
```
data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text')
.split_from_df(col='is_valid')
.label_from_df(cols='label')
.databunch())
data_clas.show_batch()
```
Lastly, for tabular data, we just have to pass the name of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.html#PreProcessor)s that are going to be applied to our data once the splitting and labelling is done.
```
from fastai.tabular import *
adult = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(adult/'adult.csv')
dep_var = 'salary'
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country']
cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain']
procs = [FillMissing, Categorify, Normalize]
data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs)
.split_by_idx(valid_idx=range(800,1000))
.label_from_df(cols=dep_var)
.databunch())
data.show_batch()
```
## Step 1: Provide inputs
The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.html#ItemList)).
```
show_doc(ItemList, title_level=3)
```
This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling.
It has multiple subclasses depending on the type of data you're handling. Here is a quick list:
- [`CategoryList`](/data_block.html#CategoryList) for labels in classification
- [`MultiCategoryList`](/data_block.html#MultiCategoryList) for labels in a multi classification problem
- [`FloatList`](/data_block.html#FloatList) for float labels in a regression problem
- [`ImageItemList`](/vision.data.html#ImageItemList) for data that are images
- [`SegmentationItemList`](/vision.data.html#SegmentationItemList) like [`ImageItemList`](/vision.data.html#ImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.html#SegmentationLabelList)
- [`SegmentationLabelList`](/vision.data.html#SegmentationLabelList) for segmentation masks
- [`ObjectItemList`](/vision.data.html#ObjectItemList) like [`ImageItemList`](/vision.data.html#ImageItemList) but will default labels to `ObjectLabelList`
- `ObjectLabelList` for object detection
- [`PointsItemList`](/vision.data.html#PointsItemList) for points (of the type [`ImagePoints`](/vision.image.html#ImagePoints))
- [`ImageImageList`](/vision.data.html#ImageImageList) for image to image tasks
- [`TextList`](/text.data.html#TextList) for text data
- [`TextFilesList`](/text.data.html#TextFilesList) for text data stored in files
- [`TabularList`](/tabular.data.html#TabularList) for tabular data
- [`CollabList`](/collab.html#CollabList) for collaborative filtering
Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods
```
show_doc(ItemList.from_folder)
show_doc(ItemList.from_df)
show_doc(ItemList.from_csv)
```
### Optional step: filter your data
The factory method may have grabbed too many items. For instance, if you were searching subfolders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods.
```
show_doc(ItemList.filter_by_func)
show_doc(ItemList.filter_by_folder)
show_doc(ItemList.filter_by_rand)
show_doc(ItemList.to_text)
show_doc(ItemList.use_partial_data)
```
### Writing your own [`ItemList`](/data_block.html#ItemList)
First check if you can't easily customize one of the existing subclass by:
- subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images)
- applying a custom `processor` (see step 4)
- changing the default `label_cls` for the label creation
- adding a default [`PreProcessor`](/data_block.html#PreProcessor) with the `_processor` class variable
If this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed.
```
show_doc(ItemList.analyze_pred)
show_doc(ItemList.get)
show_doc(ItemList.new)
```
You'll normally never need to subclass this; just don't forget to add to `self.copy_new` the names of the arguments that need to be copied each time `new` is called in `__init__`.
```
show_doc(ItemList.reconstruct)
```
## Step 2: Split the data between the training and the validation set
This step is normally straightforward: you just have to pick one of the following functions depending on what you need.
```
show_doc(ItemList.no_split)
show_doc(ItemList.random_split_by_pct)
show_doc(ItemList.split_by_files)
show_doc(ItemList.split_by_fname_file)
show_doc(ItemList.split_by_folder)
jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.")
show_doc(ItemList.split_by_idx)
show_doc(ItemList.split_by_idxs)
show_doc(ItemList.split_by_list)
show_doc(ItemList.split_by_valid_func)
show_doc(ItemList.split_from_df)
jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.")
```
## Step 3: Label the inputs
To label your inputs, use one of the following functions. Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.html#ItemList), and if there is none, it will go to [`CategoryList`](/data_block.html#CategoryList), [`MultiCategoryList`](/data_block.html#MultiCategoryList) or [`FloatList`](/data_block.html#FloatList) depending on the type of the labels). This is implemented in the following function:
```
show_doc(ItemList.get_label_cls)
```
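The fallback order described above can be illustrated with a standalone sketch. This is plain Python, not the actual fastai implementation, and the returned names are stand-ins for the real list classes:

```python
def pick_label_cls(labels, label_cls=None):
    """Sketch of the fallback: an explicit `label_cls` always wins;
    otherwise the Python type of the first label decides the list class."""
    if label_cls is not None:
        return label_cls
    first = labels[0]
    if isinstance(first, float):
        return 'FloatList'                       # regression targets
    if isinstance(first, (list, set, tuple)):
        return 'MultiCategoryList'               # several tags per item
    return 'CategoryList'                        # plain single-class labels

print(pick_label_cls(['cat', 'dog']))        # CategoryList
print(pick_label_cls([0.5, 1.2]))            # FloatList
print(pick_label_cls([['a', 'b'], ['c']]))   # MultiCategoryList
```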
The first example in these docs created labels as follows:
```
path = untar_data(URLs.MNIST_TINY)
ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train
```
If you want to save the data necessary to recreate your [`LabelList`](/data_block.html#LabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`:
```python
ll.train.to_csv('tmp.csv')
```
Or just grab a `pd.DataFrame` directly:
```
ll.to_df().head()
show_doc(ItemList.label_empty)
show_doc(ItemList.label_from_list)
show_doc(ItemList.label_from_df)
jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.")
show_doc(ItemList.label_const)
show_doc(ItemList.label_from_folder)
jekyll_note("This method looks at the last subfolder in the path to determine the classes.")
show_doc(ItemList.label_from_func)
show_doc(ItemList.label_from_re)
show_doc(CategoryList, title_level=3)
```
[`ItemList`](/data_block.html#ItemList) suitable for storing labels in `items` belonging to `classes`. If `classes=None`, the classes will be determined from the unique labels. `processor` will default to [`CategoryProcessor`](/data_block.html#CategoryProcessor).
```
show_doc(MultiCategoryList, title_level=3)
```
It stores a list of labels in `items` belonging to `classes`. If `classes=None`, the classes will be determined from the unique labels. `sep` is used to split the content of `items` into a list of tags.
If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as they can't be inferred from the encoded labels).
```
show_doc(FloatList, title_level=3)
show_doc(EmptyLabelList, title_level=3)
```
## Invisible step: preprocessing
This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.html#ItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.html#PreProcessor) classes).
A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can be processing texts to tokenize then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.
Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.html#PreProcessor) and applied on the validation set.
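The "state computed on train, reused on valid" idea can be sketched independently of fastai. The class below is a toy stand-in (not the real [`PreProcessor`](/data_block.html#PreProcessor)) that fills missing values with the training-set median:

```python
import statistics

class FillMissingProcessor:
    """Toy processor: computes its state (the median) on the training set,
    then applies the same fill to any other split without recomputing."""
    def __init__(self):
        self.median = None

    def process(self, items, is_train):
        if is_train:  # state is computed on the training set only
            self.median = statistics.median(x for x in items if x is not None)
        return [self.process_one(x) for x in items]

    def process_one(self, item):
        return self.median if item is None else item

proc = FillMissingProcessor()
print(proc.process([1.0, None, 3.0, 5.0], is_train=True))   # median 3.0 fills the gap
print(proc.process([None, 10.0], is_train=False))           # reuses the train median
```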
This is the generic class for all processors.
```
show_doc(PreProcessor, title_level=3)
show_doc(PreProcessor.process_one)
```
Process one `item`. This method needs to be written in any subclass.
```
show_doc(PreProcessor.process)
```
Process a dataset. This defaults to applying `process_one` to every `item` of `ds`.
```
show_doc(CategoryProcessor, title_level=3)
show_doc(CategoryProcessor.generate_classes)
show_doc(MultiCategoryProcessor, title_level=3)
show_doc(MultiCategoryProcessor.generate_classes)
```
## Optional steps
### Add transforms
Transforms differ from processors in that they are applied on the fly each time we fetch an item. They may also change from one fetch to the next for the same item, in the case of random transforms.
```
show_doc(LabelLists.transform)
```
This is primarily for the vision application. The `kwargs` are the ones expected by the type of transforms you pass. `tfm_y` is among them; if set to `True`, the transforms will be applied to both input and target.
### Add a test set
To add a test set, you can use one of the two following methods.
```
show_doc(LabelLists.add_test)
jekyll_note("Here `items` can be an `ItemList` or a collection.")
show_doc(LabelLists.add_test_folder)
```
**Important!** No labels will be collected, even if they are available. Instead, either the passed `label` argument or the first label from `train_ds` will be used for all entries of this dataset.
In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted.
If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in:
```
data_test = (ImageItemList.from_folder(path)
.split_by_folder(train='train', valid='test')
.label_from_folder()
...)
```
Another approach: use a normal validation set during training, and then, once training is over, validate the labeled test set as if it were a validation set:
```
tfms = []
path = Path('data').resolve()
data = (ImageItemList.from_folder(path)
.split_by_pct()
.label_from_folder()
.transform(tfms)
.databunch()
.normalize() )
learn = create_cnn(data, models.resnet50, metrics=accuracy)
learn.fit_one_cycle(5,1e-2)
# now replace the validation dataset entry with the test dataset as a new validation dataset:
# everything is exactly the same, except replacing `split_by_pct` w/ `split_by_folder`
# (or perhaps you were already using the latter, so simply switch to valid='test')
data_test = (ImageItemList.from_folder(path)
.split_by_folder(train='train', valid='test')
.label_from_folder()
.transform(tfms)
.databunch()
.normalize()
)
learn.data = data_test
learn.validate()
```
Of course, your data block can be totally different; this is just an example.
## Step 4: convert to a [`DataBunch`](/basic_data.html#DataBunch)
This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.html#DataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.html#DataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you.
```
show_doc(LabelLists.databunch)
```
## Inner classes
```
show_doc(LabelList, title_level=3)
```
Optionally apply `tfms` to `y` if `tfm_y` is `True`.
```
show_doc(LabelList.export)
show_doc(LabelList.transform_y)
show_doc(LabelList.get_state)
show_doc(LabelList.load_empty)
show_doc(LabelList.load_state)
show_doc(LabelList.process)
show_doc(LabelList.set_item)
show_doc(LabelList.to_df)
show_doc(LabelList.to_csv)
show_doc(LabelList.transform)
show_doc(ItemLists, title_level=3)
show_doc(ItemLists.label_from_lists)
show_doc(ItemLists.transform)
show_doc(ItemLists.transform_y)
show_doc(LabelLists, title_level=3)
show_doc(LabelLists.get_processors)
show_doc(LabelLists.load_empty)
show_doc(LabelLists.load_state)
show_doc(LabelLists.process)
```
## Helper functions
```
show_doc(get_files)
```
## Undocumented Methods - Methods moved below this line will intentionally be hidden
```
show_doc(CategoryList.new)
show_doc(LabelList.new)
show_doc(CategoryList.get)
show_doc(LabelList.predict)
show_doc(ItemList.new)
show_doc(ItemList.process_one)
show_doc(ItemList.process)
show_doc(MultiCategoryProcessor.process_one)
show_doc(FloatList.get)
show_doc(CategoryProcessor.process_one)
show_doc(CategoryProcessor.create_classes)
show_doc(CategoryProcessor.process)
show_doc(MultiCategoryList.get)
show_doc(FloatList.new)
show_doc(FloatList.reconstruct)
show_doc(MultiCategoryList.analyze_pred)
show_doc(MultiCategoryList.reconstruct)
show_doc(CategoryList.reconstruct)
show_doc(CategoryList.analyze_pred)
show_doc(EmptyLabelList.reconstruct)
show_doc(EmptyLabelList.get)
show_doc(LabelList.databunch)
```
## New Methods - Please document or move to the undocumented section
```
show_doc(ItemList.add)
```
# 11 Pressure Measurement
Pressure can be measured in many ways. Here we will see:
- Mechanical
- Electronic
## Absolute vs Gage Pressure
\begin{align}
P_{gage} = P_{abs}-P_{atm}
\end{align}
<img src="img/GagePressure.png" width="200">
## Dynamic Response
<img src="img/PressureTubes_NASA.png" width="300">
https://www.grc.nasa.gov/www/k-12/airplane/tunpsm.html
<img src="img/ScanivalvePressureScanner.png" width="300">
<img src="img/PT_tube.png" width="300">
\begin{align}
\left| \frac{P}{P_0} \right| = \frac{1}{\sqrt{\left( 1 - \left( \frac{f}{f_n} \right)^2 \right)^2 + 4h^2 \left( \frac{f}{f_n} \right)^2 }}
\end{align}
with $f_n$ natural frequency, $h$ damping ratio
\begin{align}
f_n = \sqrt{\frac{3 \pi r^2 c^2}{4L V}} \quad h = \frac{2 \mu}{\rho c r^3} \sqrt{\frac{3LV}{\pi}}
\end{align}
where, for the fluid in the pressure-transmitting tube: $c$ is the speed of sound, $\mu$ the dynamic viscosity, and $\rho$ the density.
Phase angle:
\begin{align}
\phi = \tan^{-1} \left( \frac{-2 h (f/f_n)}{1-(f/f_n)^2} \right)
\end{align}
_example_
A pressure transducer is connected to the measurement test section through a thin tube. The tube has radius $r=0.25$ mm and length $L=75$ mm. It connects to a cavity of volume $V=3500$ mm$^3$. The gas is air at 1 atm and 293 K.
- Calculate natural frequency and damping ratio of this system
- Calculate the attenuation for a 100 Hz pressure wave.
```
import numpy
# values from the example above
r = 0.25E-3  # m, tube radius
c = 343.0    # m/s, speed of sound in air at 293 K
L = 75E-3    # m, tube length
V = 3500E-9  # m3, cavity volume
fn = numpy.sqrt((3*numpy.pi* r**2 * c**2)/(4*L *V))
print('f_n = ',fn,' Hz')
```
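The damping ratio and the 100 Hz attenuation for the stated tube (r = 0.25 mm, L = 75 mm, V = 3500 mm³) can be estimated from the formulas above. The air properties below (μ ≈ 1.81e-5 Pa·s, ρ ≈ 1.204 kg/m³, c ≈ 343 m/s at 293 K) are assumed values:

```python
import numpy

# assumed air properties at 1 atm, 293 K
mu, rho, c = 1.81E-5, 1.204, 343.0   # Pa.s, kg/m3, m/s
r, L, V = 0.25E-3, 75E-3, 3500E-9    # m, m, m3

fn = numpy.sqrt((3*numpy.pi*r**2*c**2)/(4*L*V))               # natural frequency, Hz
h = (2*mu/(rho*c*r**3))*numpy.sqrt(3*L*V/numpy.pi)            # damping ratio
f = 100.0                                                     # forcing frequency, Hz
ratio = 1/numpy.sqrt((1-(f/fn)**2)**2 + 4*h**2*(f/fn)**2)     # amplitude ratio |P/P0|
phi = numpy.degrees(numpy.arctan2(-2*h*(f/fn), 1-(f/fn)**2))  # phase angle, deg
print('f_n =', fn, 'Hz, h =', h)
print('|P/P0| at 100 Hz =', ratio, ', phase =', phi, 'deg')
```

With these assumed properties the system is heavily over-damped (h well above 1), so the 100 Hz wave is noticeably attenuated rather than resonantly amplified.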
## Mechanical pressure measurement
### Manometer
\begin{align}
P_0 = \rho \text{g} \Delta z - \rho_0 \text{g} h + P_{atm}
\end{align}
The sensitivity of a manometer is defined as $ K = \frac{\Delta z}{P_0 - P_{atm}}$. It can be increased by changing $\rho$, or by inclining the leg of the manometer on which the pressure measurement is read, which increases the readability of $\Delta z$.
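As a quick illustration with assumed numbers: inclining the reading leg stretches a given vertical deflection $\Delta z$ over a longer tube length $\Delta z / \sin\theta$ without changing the pressure being read, so the same pressure becomes easier to resolve by eye:

```python
import math

rho, g = 1000.0, 9.81      # assumed water manometer fluid (kg/m3), gravity (m/s2)
K = 1/(rho*g)              # sensitivity: Delta z per unit pressure, m/Pa
dP = 100.0                 # applied gage pressure, Pa
dz = K*dP                  # vertical deflection, about 1 cm
L30 = dz/math.sin(math.radians(30))  # travel read along a 30-degree inclined leg
print('dz =', dz, 'm, travel on inclined leg =', L30, 'm')
```

At 30 degrees the reading length doubles; shallower angles stretch it further, at the cost of a longer tube.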
### Mechanical pressure gages
## Electronic pressure transducers
### Strain gage
### Capacitance
### LVDT
### Optical
## Piezoelectric pressure transducers
## Examples
### Pitot-Static Probe
A Pitot-Static Probe is placed in an air jet to measure air speed at sea level, $15^\circ$C, and 1 atmosphere (101,325 Pa), $\rho_{air} = 1.225$ kg/m$^3$. The differential pressure is measured with a U-tube manometer that uses water ($\rho_{water}=1002$ kg/m$^3$). The height difference in the manometer is 1.00 m.
- Calculate the air speed measured by the Pitot-static tube at sea level.
- What would be the effect of altitude on velocity measurements with a Pitot-static tube? Would your reading under- or over-estimate the air speed?
- Remember from your aerodynamics class that the troposphere region of the earth atmosphere (0-11 km) follows a temperature gradient regime with $dT/dh = a = -6.5$ K/km. You can show (do it at home) that the air density varies as:
\begin{align}
\frac{\rho}{\rho_\circ} = \left( \frac{T}{T_\circ} \right)^{-\left( \frac{g}{a R_s} +1 \right)} \quad \textrm{where}\, R_s = 287 \frac{\mathrm{J}}{\mathrm{kg} \cdot \mathrm{K}} \textrm{air specific gas constant}
\end{align}
The height difference in the manometer is still 1 m. Calculate the air speed at a 3,000 m altitude. Neglect the variation of gravity with altitude, and assume the water in the manometer is still at $15^\circ$ C.
```
rho_h20 = 1002 # kg/m3
rho_infty = 1.225 #kg/m3
DeltaZ = 1.0 #m
g = 9.81 # m/s2
U_infty = numpy.sqrt(2*rho_h20/rho_infty*DeltaZ*g)
print('U_infty = ',U_infty,' m/s')
T_0 = 288 # K
T_3 = T_0-6.5*3
rho_3 = rho_infty*(T_3/T_0)**(-(g/(287*(-6.5E-3)) + 1))  # a = -6.5E-3 K/m
U_3 = numpy.sqrt(2*rho_h20/rho_3*DeltaZ*g)
print('T_3 = ', T_3, ' K',', ', 'rho_3 = ', rho_3,' kg/m3',', ', 'U_3 = ',U_3,' m/s')
```
### Strain Gage Pressure Measurement
One entrepreneurial MAE-3120 student decides to create a pressure transducer by mounting a strain gage to the side wall of a soda can. The soda can has 66.0 mm-diameter, 0.105 mm-wall thickness (side walls), and is made of aluminum with $E = 70.0$ GPa Young's modulus, and $\nu=0.35$ Poisson ratio. The strain gage has resistance $120\,\Omega$ and $S = 2.05$ strain-gage factor.
- What circuit should the student use to measure the can deformation?
- The can opening is connected to a pressure line that the student wants to monitor. What kind of pressure will be recorded, i.e. $P_{abs}$ or $P_{gage}$?
- Remember that the Hoop Stress, $\sigma_H$, on a thin wall cylinder that is pressurized is given by:
\begin{align}
\sigma_H &= \frac{P_g D}{2t}, \quad \textrm{where}\; P_g = P_{in}-P_{atm}, \quad D:\textrm{diameter},\; \quad t:\textrm{wall thickness}\\
\sigma_L &= \frac{P_g D}{4t}, \quad \textrm{where}\; \sigma_L : \textrm{Longitudinal Stress}
\end{align}
Remember also that stress and strain are related by:
\begin{align}
\epsilon_H &= \frac{1}{E} (\sigma_H - \nu \sigma_L)\\
\epsilon_L &= \frac{1}{E} (\sigma_L - \nu \sigma_H)
\end{align}
Assume the student used a quarter-bridge Wheatstone circuit with 5.00 V excitation and that the bridge is balanced at no load. The student measures a voltage of 3.95 mV; calculate the applied gage pressure in kPa.
\begin{align}
\epsilon_H & = \frac{P_g D}{2E t }\cdot \left( 1-\frac{\nu}{2} \right)\\
P_g &= \frac{\epsilon_H 2E t}{D(1-\nu /2)}
\end{align}
Remember for 1/4 Wheatstone bridge
\begin{align*}
\frac{V_o}{V_s} = \pm \frac{1}{4} \epsilon_H S
\end{align*}
```
V_o = 3.95E-3; V_s = 5; S = 2.05
eps_H = 4*V_o/V_s/S
print('eps_H = ',eps_H*1E6, ' micro-strain')
E = 70E9; t = 0.105E-3; D = 66E-3; nu = 0.35
P_g = 2* eps_H*E*t/(D*(1-nu/2))
print('P_g = ', P_g/1000, ' kPa', ' = ', P_g/101325 , ' atm')
```
# Python Intrinsic Data Type
## Overview
Python has these common built-in `Data Types`:
https://docs.python.org/2/library/stdtypes.html
* [Numeric](#Numeric)
* [String](#String)
* [Boolean](#Boolean)
* [List](#List)
* [Tuple](#Tuple)
* [Dictionary](#Dictionary)
* [Set](#Set)
Python sets the variable type once it is assigned. The type changes if the variable is reassigned to a value of another type. For example:
```
var = 321 # Create an integer assignment.
print type(var)
var = 'HKU' # The 'var' variable has now become a string.
print type(var)
```
## Numeric
Numeric types are basic data storage in nearly every programming language. In Python, they can be: `int`, `long`, `float` and `complex`.
| Type | Format | Description |
|:----:|:------:|:--------|
| int | ```a = 10```| Signed Integer |
| long | ```a = 345L``` | (L) Long integers |
| float | ```a = 45.67``` | (.) Floating point real values |
| complex | ```a = 3.14j``` | (j) Complex number, Python use `j` to denote the imaginary part. |
Python numbers variables are created by the standard assign method:
```
a = 68
print a,"is a",type(a)
b = 32385848583853453453255459395439L
print b,"is a",type(b)
c = 45.67
print c,"is a",type(c)
d = 3.14j
print d,"is a",type(d)
```
### Arithmetic
Basic arithmetic functions.
You can print several variables at once, separated by commas; the different arithmetic operations are shown below:
```
a = 10
b = 15
print a + b , a - b , a * b , a / b
```
## String
String types are created by enclosing characters in quotes. Single quotes `'` or double quotes `"` can be used to denote a one-line string. Triple quotes `"""` can define a multi-line string. Quotes must come in pairs.
```
Name = 'Stephen Ng'
Subject = "Lunar eclipse open house today"
Email = "ncy@astro.physics.hku.hk"
Content='''There will be a lunar eclipse observable in Hong Kong on Jan 31 starting at 19:48pm. The Department will arrange an open house for the observatory at the roof from 7pm.
'''
print "from:", Name, '<'+Email+'>'
print "subject:", Subject
print "-------------------------------------------"
print Content
print Content[0] # this will print the first character
print Content[1:5] # this will print the substring
```
### Boolean
Boolean
```
b1 = True
b2 = False
b3 = (b1 and b2) or (b1 or b2)
print b1,b2,b3
not True
```
### More on Integer
Because Python has arbitrarily large integers, integer overflow does not happen. An integer overflow occurs when an arithmetic operation attempts to create a numeric value that is outside of the range that can be represented with a given number of bits – either larger than the maximum or lower than the minimum representable value. Integers can also be defined using octal or hexadecimal notation; they are the same thing from the computer's point of view.
```
# Octal integers
a = 01
b = 027
c = 06645
print a,"is a",type(a)
# Hexadecimal integers
a = 0x1
b = 0x17
c = 0xDA5
print b,"is a",type(b)
```
# Calls
*Call expressions* invoke functions, which are named operations. The name of the function appears first, followed by expressions in parentheses.
```
abs(-12)
round(5 - 1.3)
max(2, 2 + 3, 4)
```
In this last example, the `max` function is *called* on three *arguments*: 2, 5, and 4. The value of each expression within parentheses is passed to the function, and the function *returns* the final value of the full call expression. The `max` function can take any number of arguments and returns the maximum.
A few functions are available by default, such as `abs` and `round`, but most functions that are built into the Python language are stored in a collection of functions called a *module*. An *import statement* is used to provide access to a module, such as `math` or `operator`.
```
import math
import operator
math.sqrt(operator.add(4, 5))
```
An equivalent expression could be expressed using the `+` and `**` operators instead.
```
(4 + 5) ** 0.5
```
Operators and call expressions can be used together in an expression. The *percent difference* between two values is used to compare values for which neither one is obviously `initial` or `changed`. For example, in 2014 Florida farms produced 2.72 billion eggs while Iowa farms produced 16.25 billion eggs (http://quickstats.nass.usda.gov/). The percent difference is 100 times the absolute value of the difference between the values, divided by their average. In this case, the difference is larger than the average, and so the percent difference is greater than 100.
```
florida = 2.72
iowa = 16.25
100*abs(florida-iowa)/((florida+iowa)/2)
```
Learning how different functions behave is an important part of learning a programming language. A Jupyter notebook can assist in remembering the names and effects of different functions. When editing a code cell, press the *tab* key after typing the beginning of a name to bring up a list of ways to complete that name. For example, press *tab* after `math.` to see all of the functions available in the `math` module. Typing will narrow down the list of options. To learn more about a function, place a `?` after its name. For example, typing `math.log?` will bring up a description of the `log` function in the `math` module.
```
math.log?
```
log(x[, base])
Return the logarithm of x to the given base.
If the base not specified, returns the natural logarithm (base e) of x.
The square brackets in the example call indicate that an argument is optional. That is, `log` can be called with either one or two arguments.
```
math.log(16, 2)
math.log(16)/math.log(2)
```
The list of [Python's built-in functions](https://docs.python.org/3/library/functions.html) is quite long and includes many functions that are never needed in data science applications. The list of [mathematical functions in the `math` module](https://docs.python.org/3/library/math.html) is similarly long. This text will introduce the most important functions in context, rather than expecting the reader to memorize or understand these lists.
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Visualization/styled_layer_descriptors.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Visualization/styled_layer_descriptors.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Visualization/styled_layer_descriptors.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('Installing geemap ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
import ee
import geemap
```
## Create an interactive map
The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
```
Map = geemap.Map(center=[40,-100], zoom=4)
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
cover = ee.Image('MODIS/051/MCD12Q1/2012_01_01').select('Land_Cover_Type_1')
# Define an SLD style of discrete intervals to apply to the image.
sld_intervals = \
'<RasterSymbolizer>' + \
' <ColorMap type="intervals" extended="false" >' + \
'<ColorMapEntry color="#aec3d4" quantity="0" label="Water"/>' + \
'<ColorMapEntry color="#152106" quantity="1" label="Evergreen Needleleaf Forest"/>' + \
'<ColorMapEntry color="#225129" quantity="2" label="Evergreen Broadleaf Forest"/>' + \
'<ColorMapEntry color="#369b47" quantity="3" label="Deciduous Needleleaf Forest"/>' + \
'<ColorMapEntry color="#30eb5b" quantity="4" label="Deciduous Broadleaf Forest"/>' + \
'<ColorMapEntry color="#387242" quantity="5" label="Mixed Deciduous Forest"/>' + \
'<ColorMapEntry color="#6a2325" quantity="6" label="Closed Shrubland"/>' + \
'<ColorMapEntry color="#c3aa69" quantity="7" label="Open Shrubland"/>' + \
'<ColorMapEntry color="#b76031" quantity="8" label="Woody Savanna"/>' + \
'<ColorMapEntry color="#d9903d" quantity="9" label="Savanna"/>' + \
'<ColorMapEntry color="#91af40" quantity="10" label="Grassland"/>' + \
'<ColorMapEntry color="#111149" quantity="11" label="Permanent Wetland"/>' + \
'<ColorMapEntry color="#cdb33b" quantity="12" label="Cropland"/>' + \
'<ColorMapEntry color="#cc0013" quantity="13" label="Urban"/>' + \
'<ColorMapEntry color="#33280d" quantity="14" label="Crop, Natural Veg. Mosaic"/>' + \
'<ColorMapEntry color="#d7cdcc" quantity="15" label="Permanent Snow, Ice"/>' + \
'<ColorMapEntry color="#f7e084" quantity="16" label="Barren, Desert"/>' + \
'<ColorMapEntry color="#6f6f6f" quantity="17" label="Tundra"/>' + \
'</ColorMap>' + \
'</RasterSymbolizer>'
Map.addLayer(cover.sldStyle(sld_intervals), {}, 'IGBP classification styled')
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
```
import itertools
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import supervisely_lib as sly
from tqdm import tqdm
from collections import defaultdict
%matplotlib inline
address = 'http://192.168.1.69:5555'
token = 'YGPDnuBkhFmcQ7VNzSEjhgavjg4eFR4Eq1C3jIY4HgV3SQq2JgkXCNtgZy1Fu2ftd4IKui8DsjrdtXjB853cMtBevpSJqFDYiaG1A5qphlH6fFiYYmcVZ5fMR8dDrt5l'
team_name = 'dima'
workspace_name = 'work'
src_project_name = 'lemons_annotated'
api = sly.Api(address, token)
team_id = api.team.get_info_by_name(team_name)['id']
workspace_id = api.workspace.get_info_by_name(workspace_name, team_id)['id']
src_project_id = api.project.get_info_by_name(src_project_name, workspace_id)['id']
src_meta_json = api.project.get_meta(src_project_id)
src_meta = sly.ProjectMeta.from_json(src_meta_json)
classes_mapping = {
'kiwi': 'kiwi',
'lemon': 'lemon'
}
iou_threshold = 0.5
def compute_iou(mask_1, mask_2):
intersection = (mask_1 * mask_2).sum()
union = mask_1.sum() + mask_2.sum() - intersection
if union == 0:
return 0.0
return intersection / union
def add_to_conf_matrix(confusion_matrix, gt_masks, gt_classes, pred_masks, pred_classes, iou_thresh):
matches = []
for i in range(len(gt_masks)):
for j in range(len(pred_masks)):
iou = compute_iou(gt_masks[i], pred_masks[j])
if iou > iou_thresh:
matches.append([i, j, iou])
matches = np.array(matches)
if matches.shape[0] > 0:
matches = matches[matches[:, 2].argsort()[::-1][:len(matches)]]
matches = matches[np.unique(matches[:, 1], return_index=True)[1]]
matches = matches[matches[:, 2].argsort()[::-1][:len(matches)]]
matches = matches[np.unique(matches[:, 0], return_index=True)[1]]
for i in range(len(gt_masks)):
if matches.shape[0] > 0 and matches[matches[:, 0] == i].shape[0] == 1:
confusion_matrix[gt_classes[i] - 1][pred_classes[int(matches[matches[:, 0] == i, 1][0])] - 1] += 1
else:
confusion_matrix[gt_classes[i] - 1][confusion_matrix.shape[1] - 1] += 1
for i in range(len(pred_masks)):
if matches.shape[0] > 0 and matches[matches[:, 1] == i].shape[0] == 0:
confusion_matrix[confusion_matrix.shape[0] - 1][pred_classes[i] - 1] += 1
def process(ann, confusion_matrix, gt_cls_map, pred_cls_map, iou_threshold):
img_size = ann.img_size
for cls_gt, cls_pred in classes_mapping.items():
masks_gt, masks_pred = [], []
classes_gt, classes_pred = [], []
for label in ann.labels:
if label.obj_class.name == cls_gt:
mask = np.zeros(img_size, np.uint8)
label.geometry.draw(mask, 1)
masks_gt.append(mask)
classes_gt.append(gt_cls_map[label.obj_class.name])
if label.obj_class.name == cls_pred:
mask = np.zeros(img_size, np.uint8)
label.geometry.draw(mask, 1)
masks_pred.append(mask)
classes_pred.append(pred_cls_map[label.obj_class.name])
add_to_conf_matrix(confusion_matrix, masks_gt, classes_gt, masks_pred, classes_pred, iou_threshold)
gt_class_mapping, pred_class_mapping = {}, {}
for i, (k, v) in enumerate(classes_mapping.items()):
gt_class_mapping[k] = i + 1
pred_class_mapping[v] = i + 1
confusion_matrix = np.zeros((len(classes_mapping) + 1, len(classes_mapping) + 1), dtype=np.int32)
for dataset_info in api.dataset.get_list(src_project_id):
src_dataset_id = dataset_info['id']
src_dataset_name = dataset_info['name']
print('Project/Dataset: {}/{}'.format(src_project_name, src_dataset_name))
for image_info in tqdm(api.image.get_list(src_dataset_id)):
src_image_ext = image_info['meta']['mime'].split('/')[1]
ann_json = api.annotation.download(src_dataset_id, image_info['id'])
ann = sly.Annotation.from_json(ann_json, src_meta)
process(ann, confusion_matrix, gt_class_mapping, pred_class_mapping, iou_threshold)
targets_names = list(gt_class_mapping.keys())
targets_names.append('False Positives')
pred_names = list(pred_class_mapping.keys())
pred_names.append('False Negatives')
df = pd.DataFrame(confusion_matrix, columns=pred_names, index=targets_names)
options = dict(selector="th", props=[('text-align', 'center')])
df.style.set_properties(**{'width':'10em', 'text-align':'center'}).set_table_styles([options])
def plot_confusion_matrix(cm, gt_classes, pred_classes, cmap=plt.cm.Blues):
_ = plt.figure(figsize=(10, 10))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title('Confusion matrix')
plt.colorbar()
tick_marks = np.arange(len(gt_classes))
plt.xticks(tick_marks, pred_classes, rotation=45)
plt.yticks(tick_marks, gt_classes)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], 'd'),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True classes')
plt.xlabel('Predicted classes')
plt.tight_layout()
plot_confusion_matrix(confusion_matrix, targets_names, pred_names)
```
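As a standalone sanity check of the IoU definition used above (the function is re-declared here so the snippet runs on its own):

```python
import numpy as np

def compute_iou(mask_1, mask_2):
    # same definition as above: |A intersect B| / |A union B| on binary masks
    intersection = (mask_1 * mask_2).sum()
    union = mask_1.sum() + mask_2.sum() - intersection
    if union == 0:
        return 0.0
    return intersection / union

a = np.zeros((4, 4), np.uint8); a[:2, :] = 1   # top two rows
b = np.zeros((4, 4), np.uint8); b[1:3, :] = 1  # middle two rows
print(compute_iou(a, b))  # overlap is one row: 4/(8+8-4) = 1/3
```

Identical masks give IoU 1.0 and two empty masks give 0.0, so the `iou_threshold` of 0.5 used above sits comfortably inside the valid range.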
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Eager Execution
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/guide/eager"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/eager.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/eager.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/guide/eager.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a></td>
</table>
TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive `python` interpreter.
Eager execution is a flexible machine learning platform for research and experimentation, providing:
- *An intuitive interface* - Structure your code naturally and use Python data structures. Quickly iterate on small models and small data.
- *Easier debugging* - Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.
- *Natural control flow* - Use Python control flow instead of graph control flow, simplifying the specification of dynamic models.
Eager execution supports most TensorFlow operations and GPU acceleration.
Note: Some models may experience increased overhead with eager execution enabled. Performance improvements are ongoing; if you find a problem, please [file a bug](https://github.com/tensorflow/tensorflow/issues) and share your benchmarks.
## Setup and basic usage
```
import os
import tensorflow as tf
import cProfile
```
In TensorFlow 2.0, eager execution is enabled by default.
```
tf.executing_eagerly()
```
Now you can run TensorFlow operations and the results will return immediately:
```
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
```
Enabling eager execution changes how TensorFlow operations behave: now they immediately evaluate and return their values to Python. `tf.Tensor` objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there isn't a computational graph to build and run later in a session, it's easy to inspect results using `print()` or a debugger. Evaluating, printing, and checking tensor values does not break the flow for computing gradients.
Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPy operations accept `tf.Tensor` arguments. The TensorFlow `tf.math` operations convert Python objects and NumPy arrays to `tf.Tensor` objects. The `tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
```
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
```
## Dynamic control flow
A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
```
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
```
This has conditionals that depend on tensor values and it prints these values at runtime.
## Eager training
### Computing gradients
[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During eager execution, use `tf.GradientTape` to trace operations for computing gradients later.
You can use `tf.GradientTape` to train and/or compute gradients in eager execution. It is especially useful for complicated training loops.
Since different operations can occur during each call, all forward-pass operations get recorded to a "tape". To compute the gradient, play the tape backwards and then discard it. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error.
```
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
    loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
```
### Train a model
The following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs used to build trainable graphs in an eager execution environment.
```
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.cast(mnist_images[..., tf.newaxis]/255, tf.float32),
     tf.cast(mnist_labels, tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, [3, 3], activation='relu',
                           input_shape=(None, None, 1)),
    tf.keras.layers.Conv2D(16, [3, 3], activation='relu'),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10)
])
```
Even without training, you can call the model and inspect the output in eager execution:
```
for images, labels in dataset.take(1):
    print("Logits: ", mnist_model(images[0:1]).numpy())
```
While Keras models have a built-in training loop (via the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
```
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
```
Note: Use the assert functions in `tf.debugging` to check whether a condition holds. This works in both eager and graph execution.
```
def train_step(images, labels):
    with tf.GradientTape() as tape:
        logits = mnist_model(images, training=True)
        # Add asserts to check the shape of the output.
        tf.debugging.assert_equal(logits.shape, (32, 10))
        loss_value = loss_object(labels, logits)
    loss_history.append(loss_value.numpy().mean())
    grads = tape.gradient(loss_value, mnist_model.trainable_variables)
    optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))

def train(epochs):
    for epoch in range(epochs):
        for (batch, (images, labels)) in enumerate(dataset):
            train_step(images, labels)
        print('Epoch {} finished'.format(epoch))

train(epochs=3)
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
```
### Variables and optimizers
`tf.Variable` objects store mutable, `tf.Tensor`-like values that are accessed during training, to make automatic differentiation easier.
The collections of variables, along with methods that operate on them, can be encapsulated into layers or models. See [Custom Keras layers and models](https://render.githubusercontent.com/view/keras/custom_layers_and_models.ipynb) for details. The main difference between layers and models is that models add methods such as `Model.fit`, `Model.evaluate`, and `Model.save`.
For example, the automatic differentiation example above can be rewritten as:
```
class Linear(tf.keras.Model):
    def __init__(self):
        super(Linear, self).__init__()
        self.W = tf.Variable(5., name='weight')
        self.B = tf.Variable(10., name='bias')
    def call(self, inputs):
        return inputs * self.W + self.B

# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise

# The loss function to be optimized
def loss(model, inputs, targets):
    error = model(inputs) - targets
    return tf.reduce_mean(tf.square(error))

def grad(model, inputs, targets):
    with tf.GradientTape() as tape:
        loss_value = loss(model, inputs, targets)
    return tape.gradient(loss_value, [model.W, model.B])
```
Next steps:
1. Create the model.
2. Compute the derivatives of the loss function with respect to the model parameters.
3. Update the variables based on those derivatives.
```
model = Linear()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
steps = 300
for i in range(steps):
    grads = grad(model, training_inputs, training_outputs)
    optimizer.apply_gradients(zip(grads, [model.W, model.B]))
    if i % 20 == 0:
        print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
```
Note: Variables persist until the last reference to the Python object is removed, at which point the variable is deleted.
### Object-based saving
A `tf.keras.Model` includes a convenient `save_weights` method that lets you easily create a checkpoint:
```
model.save_weights('weights')
status = model.load_weights('weights')
```
You can take full control of this process using `tf.train.Checkpoint`.
This section is an abbreviated version of the [guide to training checkpoints](https://render.githubusercontent.com/view/checkpoint.ipynb).
```
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
```
To save and load models, `tf.train.Checkpoint` stores the internal state of objects without requiring hidden variables. To record the state of a `model`, an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
```
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, [3, 3], activation='relu'),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
    os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
                           model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
```
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](https://render.githubusercontent.com/view/checkpoint.ipynb) for details.
### Object-oriented metrics
`tf.keras.metrics` are stored as objects. Update a metric by passing new data to the callable, and retrieve the result using the `tf.keras.metrics.result` method, for example:
```
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
```
### Summaries and TensorBoard
[TensorBoard](https://tensorflow.google.cn/tensorboard) is a visualization tool for understanding, debugging, and optimizing the model training process. It uses summary events that are written while executing the program.
You can use `tf.summary` to record summaries of variables in eager execution. For example, to record summaries of `loss` once every 100 training steps, run:
```
logdir = "./tb/"
writer = tf.summary.create_file_writer(logdir)
steps = 1000
with writer.as_default():  # or call writer.set_as_default() before the loop.
    for i in range(steps):
        step = i + 1
        # Calculate loss with your real train function.
        loss = 1 - 0.001 * step
        if step % 100 == 0:
            tf.summary.scalar('loss', loss, step=step)
!ls tb/
```
## Advanced automatic differentiation topics
### Dynamic models
`tf.GradientTape` can also be used in dynamic models. This example of a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except there are gradients and it is differentiable, despite the complex control flow:
```
def line_search_step(fn, init_x, rate=1.0):
    with tf.GradientTape() as tape:
        # Variables are automatically tracked.
        # But to calculate a gradient from a tensor, you must `watch` it.
        tape.watch(init_x)
        value = fn(init_x)
    grad = tape.gradient(value, init_x)
    grad_norm = tf.reduce_sum(grad * grad)
    init_value = value
    while value > init_value - rate * grad_norm:
        x = init_x - rate * grad
        value = fn(x)
        rate /= 2.0
    return x, value
```
### Custom gradients
Custom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to the inputs, outputs, or intermediate results. For example, here's an easy way to clip the norm of the gradients in the backward pass:
```
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
    y = tf.identity(x)
    def grad_fn(dresult):
        return [tf.clip_by_norm(dresult, norm), None]
    return y, grad_fn
```
Custom gradients are commonly used to provide a numerically stable gradient for a sequence of operations:
```
def log1pexp(x):
    return tf.math.log(1 + tf.exp(x))

def grad_log1pexp(x):
    with tf.GradientTape() as tape:
        tape.watch(x)
        value = log1pexp(x)
    return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
```
Here, the `log1pexp` function can be analytically simplified with a custom gradient. The implementation below reuses the value of `tf.exp(x)` that is computed during the forward pass, making it more efficient by eliminating redundant computation:
```
@tf.custom_gradient
def log1pexp(x):
    e = tf.exp(x)
    def grad(dy):
        return dy * (1 - 1 / (1 + e))
    return tf.math.log(1 + e), grad

def grad_log1pexp(x):
    with tf.GradientTape() as tape:
        tape.watch(x)
        value = log1pexp(x)
    return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
```
## Performance
Computation is automatically offloaded to GPUs during eager execution. If you want control over where a computation runs, you can enclose it in a `tf.device('/gpu:0')` block (or the CPU equivalent):
```
import time
def measure(x, steps):
    # TensorFlow initializes a GPU the first time it's used, exclude from timing.
    tf.matmul(x, x)
    start = time.time()
    for i in range(steps):
        x = tf.matmul(x, x)
    # tf.matmul can return before completing the matrix multiplication
    # (e.g., can return after enqueing the operation on a CUDA stream).
    # The x.numpy() call below will ensure that all enqueued operations
    # have completed (and will also copy the result to host memory,
    # so we're including a little more than just the matmul operation
    # time).
    _ = x.numpy()
    end = time.time()
    return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
    print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))

# Run on GPU, if available:
if tf.config.experimental.list_physical_devices("GPU"):
    with tf.device("/gpu:0"):
        print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
    print("GPU: not found")
```
A `tf.Tensor` object can be copied to a different device to execute its operations:
```
if tf.config.experimental.list_physical_devices("GPU"):
    x = tf.random.normal([10, 10])
    x_gpu0 = x.gpu()
    x_cpu = x.cpu()
    _ = tf.matmul(x_cpu, x_cpu)    # Runs on CPU
    _ = tf.matmul(x_gpu0, x_gpu0)  # Runs on GPU:0
```
### Benchmarks
For compute-heavy models, such as [ResNet50](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/resnet50) training on a GPU, eager execution performance is comparable to `tf.function` execution. But this gap grows larger for models with less computation, and there is work to be done to optimize hot code paths for models with lots of small operations.
## Work with functions
While eager execution makes development and debugging more interactive, TensorFlow 1.x-style graph execution has advantages for distributed training, performance optimization, and production deployment. To bridge this gap, TensorFlow 2.0 introduces `function`s via the `tf.function` API. For more information, see the [tf.function](https://render.githubusercontent.com/view/function.ipynb) guide.
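As a hedged sketch of the API (assuming TensorFlow 2.x; the function name and shapes here are illustrative, not from this guide), wrapping a Python function with `tf.function` turns it into a traced graph callable while the call site stays unchanged:

```python
import tensorflow as tf

# tf.function traces the Python function into a TensorFlow graph the first
# time it is called; later calls with compatible inputs reuse that graph.
@tf.function
def dense_relu(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.ones((1, 2))
w = tf.ones((2, 2))
b = tf.zeros((2,))
print(dense_relu(x, w, b).numpy())  # => [[2. 2.]]
```

Eager code and `tf.function`-wrapped code can be mixed freely; the decorator only changes how the function is executed, not how it is called.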
# Book Genre Clustering
In this notebook, we will walk through some **text clustering** with open source Python libraries. Let's think about the following scenario: Mr. Cricket, the owner of the best children bookshop in Walldorf, would like to put some order in his book inventory. He would like to classify the books into different categories based on topic similarity. This would allow him to improve customer experience both in his physical store and his web store, by grouping similar items into homogeneous shelves. Pretty nice idea, but how to do that?! 🤔
Mr. Cricket asks his trusted SAP partner for help. Their consultants, after taking a look at the bookshop's inventory, come up with a plan. Each book in the inventory comes with a very concise description field. They are going to run some text analysis on this field and group books with similar content using an unsupervised clustering strategy. Their project will then be based on the following steps:
* **1- Text Preprocessing**
* **2- Word Embedding**
* **3- Text Clustering**

Let's put this into practice!
First, we will make sure the required libraries are installed. We will use a set of very common Python libraries for dataframe handling and visualization (pandas, numpy, matplotlib, seaborn), regex and nltk for text cleaning and preprocessing, gensim for the word embedding, and sklearn for the clustering. hana_ml will be used in this notebook only to access the book inventory data, which are stored in Mr. Cricket's HANA Cloud database.
```
# basic Python
!pip install pandas
!pip install numpy
!pip install matplotlib
!pip install seaborn
# text preprocessing
!pip install regex
!pip install nltk
# word embedding
!pip install gensim
# clustering
!pip install scikit-learn
# connection to data source in Hana Cloud
!pip install hana_ml
```
Then, we use the import statement to import the packages and define aliases, like this:
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
```
Then, we want to load our book dataset. We use *hana_ml* to create a connection to the HANA Cloud database, and load the book inventory table in a hana dataframe:
```
import hana_ml.dataframe as dataframe
from notebook_hana_connector.notebook_hana_connector import NotebookConnectionContext
conn = NotebookConnectionContext(connectionId = 'DWC_D2VUXXXX' )
df_hana = (conn.table('Book_Author_Dimension_View', schema='D2VUXXXX'))
```
With the **collect** command, we can copy the data from HANA Cloud to the Jupyter client, in the form of a pandas dataframe. We are now ready to massage our data using all sorts of Python tricks!
```
books= df_hana.collect()
del df_hana
books
```
We will be focusing our analysis on the description field in particular.
```
books[['Book_ID','Book_Title','Book_Description']]
```
## 1 - Text Preprocessing
In order to use textual data for predictive modeling, the text must be parsed and split into a list of words (a process called “tokenization”). In this process, special characters and punctuation have to be removed. We should also get rid of the so-called 'stop words', that is to say, commonly used words without any specific connotation (such as “the”, “a”, “an”, “in”). These are not predictive, and you would not want them to be considered in the predictive model. We will use the Natural Language Toolkit (NLTK) and Regular Expressions (RegEx) to clean up and **tokenize** our text.
If you fancy digging deeper into these techniques, here are a few references:
* https://tutorialspoint.dev/language/python/removing-stop-words-nltk-python
* https://en.wikipedia.org/wiki/Regular_expression
Let's first drop missing values in the book description column:
```
books=books.dropna(axis=0,subset=['Book_Description'])
```
Prepare the book description as follows:
```
import nltk
from nltk.corpus import stopwords
nltk.download("stopwords")
stopwords=set(stopwords.words("english"))
import regex as re
#Transform to lower case
books['tokens']=books['Book_Description'].apply(lambda x: x.lower())
#Remove punctuation
books['tokens']=books['tokens'].map(lambda x: re.sub("[-,\.!?;\'\(\)]", ' ', x))
#Remove stopwords
books['tokens']=books['tokens'].apply(lambda x: ' '.join([ t for t in x.split() if not t in stopwords]))
# Remove short tokens
books['tokens']=books['tokens'].apply(lambda x:' '.join( [t for t in x.split() if len(t) > 1] ))
#Remove extra spaces
books['tokens']=books['tokens'].map(lambda x: re.sub(' +', ' ', x))
# Remove duplicate tokens
books['tokens']=books['tokens'].apply(lambda x: ' '.join(list(dict.fromkeys(x.split()))))
```
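To see what the pipeline above does to a single description, here is a small self-contained sketch. It uses a tiny hardcoded stopword set instead of the NLTK download (an assumption for portability), so the exact tokens may differ slightly from the notebook's output:

```python
import re

# A tiny stand-in for the NLTK English stopword list (illustrative only).
STOPWORDS = {"the", "a", "an", "in", "of", "and", "to", "is"}

def clean(text):
    text = text.lower()                                       # lower-case
    text = re.sub(r"[-,\.!?;'\(\)]", " ", text)               # strip punctuation
    tokens = [t for t in text.split() if t not in STOPWORDS]  # drop stopwords
    tokens = [t for t in tokens if len(t) > 1]                # drop short tokens
    return " ".join(dict.fromkeys(tokens))                    # dedupe, keep order

print(clean("The Cricket in Times Square, a classic tale of friendship."))
# -> 'cricket times square classic tale friendship'
```

Each step mirrors one `apply`/`map` call in the cell above, just applied to a single string instead of a whole column.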
Drop duplicates:
```
books=books.drop_duplicates('tokens')
books[['Book_ID','Book_Title','Book_Description']]
```
## 2 - Word Embedding
Global Vectors for Word Representation (**GloVe**) is a word embedding model in the same family as **word2vec**: an unsupervised learning algorithm for obtaining vector representations of words. It allows you to take a corpus of text and transform each word in that corpus into a position in a high-dimensional space.
We could choose to train a word2vec model on our own corpus of book descriptions, but to simplify the process while taking advantage of the open machine-learning community, we will download a pretrained model that was developed using a much broader corpus: Wikipedia.
The gensim Python library allows us to do that in two lines of code. When you run them, it will take a minute or two. Each line of the text file contains a word, followed by N numbers that describe the vector of the word's position. N may vary depending on which model you chose to download. For us, N is 100, since we are using glove.6B.100d.
To learn more about how to use GloVe see:
* https://faculty.ai/tech-blog/glove/
* https://medium.com/analytics-vidhya/basics-of-using-pre-trained-glove-vectors-in-python-d38905f356db
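To make the file format concrete, here is a sketch of parsing a single GloVe-style line into a vector (the example line and its numbers are made up for illustration, not real GloVe values, and use N=4 instead of 100 for brevity):

```python
import numpy as np

# One line of a GloVe-style text file: a word followed by its vector components.
line = "book 0.12 -0.48 0.33 0.90"

parts = line.split()
word = parts[0]
vector = np.asarray(parts[1:], dtype=np.float32)

print(word, vector.shape)  # -> book (4,)
```

gensim does exactly this parsing for us, for every line of the downloaded file, and exposes the result as a word-to-vector lookup.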
```
import gensim.downloader as gensim_api
model = gensim_api.load("glove-wiki-gigaword-100")
```
Now that we have downloaded our word2vec model, we can apply it to every book description, to **embed** our books in a multidimensional space:
```
features = []
for i, book in books.iterrows():
    tokens_features = []
    for word in book['tokens'].split():
        try:
            tokens_features.append(model[word])
        except KeyError:
            # Skip words the pretrained model does not know
            continue
    features.append(np.mean(np.array(tokens_features), axis=0))
for i in range(100):
    feature = 'f_' + str(i)
    books[feature] = [f[i] for f in features]
del features
embedding=['f_'+str(i) for i in range(100)]
```
You can see the results of our text embedding: we associated each book with a 100-dimensional numerical vector:
```
books[['Book_Title']+embedding]
```
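The embedding loop above averages the word vectors of each description. The idea can be sketched with a toy vocabulary (the two-dimensional vectors below are made up for illustration; the real model maps each known word to a 100-dimensional vector):

```python
import numpy as np

# Toy "model": a dict from word to a small vector, standing in for GloVe.
toy_model = {
    "dragon": np.array([1.0, 0.0]),
    "castle": np.array([0.0, 1.0]),
}

def embed(tokens, model):
    # Average the vectors of the words the model knows; skip the rest.
    vecs = [model[w] for w in tokens if w in model]
    return np.mean(vecs, axis=0)

print(embed(["dragon", "castle", "unknownword"], toy_model))  # -> [0.5 0.5]
```

Averaging is a simple pooling choice: descriptions sharing many similar words end up close together, which is exactly what the clustering step needs.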
## 3 - KMeans Clustering
Now that each book is represented by a point in a multidimensional space, we can use the *distance* between these points to find out which books are similar to each other. More specifically, we will perform a cluster analysis: our book sample will be divided into a number of groups such that books in the same group are more similar (closer) to each other than to books in other groups.
In this exercise, the number of groups, or clusters, is set to 10. We will use sklearn's KMeans clustering algorithm, in the form of MiniBatchKMeans:
```
from sklearn.cluster import MiniBatchKMeans
n_clus=10
km = MiniBatchKMeans(n_clusters = n_clus, batch_size=50, random_state=42, max_iter=1000)
y_kmeans = km.fit_predict(books[embedding])
books['kmeans_cluster']=y_kmeans
```
### Cluster Profiling
To understand if the clustering worked effectively, let's examine the most representative books for each cluster: the books closest to the cluster centroid. If you run the following cell and scroll down to see the result, you will notice a certain homogeneity between descriptions belonging to the same cluster.
```
for cluster in range(n_clus):
    print('*************')
    print('- CLUSTER ', str(cluster))
    print('*************')
    bks = books[books['kmeans_cluster'] == cluster]
    # Rank the books of this cluster by distance to the cluster centroid
    most_representative_docs = np.argsort(
        np.linalg.norm(bks[embedding] - km.cluster_centers_[cluster], axis=1)
    )
    # Print the ten descriptions closest to the centroid
    for d in most_representative_docs[:10]:
        print(bks.reset_index().Book_Description[d])
        print("--")
    del bks
```
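The "closest to the centroid" ranking above is just an argsort of Euclidean distances; a tiny standalone sketch with made-up 2-D points:

```python
import numpy as np

points = np.array([[0.0, 0.0], [1.0, 1.0], [0.2, 0.1]])
centroid = np.array([0.0, 0.0])

# Distance of every point to the centroid, then the index order from
# closest to farthest -- the first indices are the "most representative".
order = np.argsort(np.linalg.norm(points - centroid, axis=1))
print(order)  # -> [0 2 1]
```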
### Cluster Visualization
Most datasets have a large number of variables, or dimensions, along which the data is distributed, which makes visually exploring the data challenging. In our case, for instance, our book embedding has **100 dimensions**. To visualize high-dimensional datasets you can use techniques known as **dimensionality reduction**.
**t-Distributed Stochastic Neighbor Embedding (t-SNE)** is a technique for dimensionality reduction that allows you to map a high-dimensional distribution to a 2-dimensional plane. Since this is computationally quite heavy, another dimensionality reduction technique is often used in conjunction with it, e.g. **Principal Component Analysis (PCA)**. PCA is a technique for reducing the number of dimensions in a dataset while retaining most of the information. It analyzes the correlation between dimensions and attempts to provide a minimum number of variables that keeps the maximum amount of variation, or information, about the original data distribution.
In our example, we will first reduce our dimensions from 100 to 50 using PCA, and then use t-SNE to visualize our clusters in 2 dimensions. Here is the code.
#### Reduce variables with PCA
```
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
pca_50 = PCA(n_components=50)
pca_result_50 = pca_50.fit_transform(books[embedding])
print('Cumulative explained variation for 50 principal components: {}'.format(np.sum(pca_50.explained_variance_ratio_)))
```
#### Execute tsne model
```
tsne = TSNE(n_components=2, verbose=0, perplexity=30, n_iter=2000)
tsne_pca_results = tsne.fit_transform(pca_result_50)
books["tsne-1"] = tsne_pca_results[:,0]
books["tsne-2"] = tsne_pca_results[:,1]
```
#### Plot clusters in a 2-dim plane
```
plt.figure(figsize=(10,8))
sns.scatterplot(x="tsne-1", y="tsne-2", hue="kmeans_cluster", s=30, palette="Paired",
                data=books).set(title="Glove 100 projection")
sample_titles = ['Shadow', 'Seventeen coffins', 'Stronger than magic']
for t in sample_titles:
    x = books.loc[books['Book_Title'] == t, 'tsne-1'].tolist()[0]
    y = books.loc[books['Book_Title'] == t, 'tsne-2'].tolist()[0]
    plt.annotate(t, (x, y), xytext=(x-70, y), arrowprops={'arrowstyle': 'fancy'})
plt.show()
```
As you can see, we were able to display the clusters in a 2-dimensional plane. Samples of books dealing with different themes were assigned to different clusters and lie far from each other in the plot. Now that we are satisfied with our clustering analysis, we are only left with the task of saving the clustering model:
```
import pickle
pickle.dump(km, open("description.pickle.dat", "wb"))
```

# Feature Engineering Guide
```
import pandas as pd
import numpy as np
np.random.seed(0)
import kts
from kts import *
train = pd.read_csv('../input/train.csv', index_col='PassengerId')
test = pd.read_csv('../input/test.csv', index_col='PassengerId')
train.head()
```
Use `kts.save` to put objects or dataframes to user cache:
```
kts.save(train, 'train')
kts.save(test, 'test')
```
## Modular Feature Engineering in 30 seconds
Instead of sequentially adding new columns to one dataframe, you define functions called feature blocks, which take a raw dataframe as input and produce a new dataframe containing only new columns. These blocks are then collected into feature sets. Such encapsulation enables your features to be computed in parallel, cached, and automatically applied during the inference stage, making your experiments executable end-to-end out of the box.
<div style="margin-left: 10%; margin-right: 10%; margin-top: 50px;">
<img src="https://raw.githubusercontent.com/konodyuk/kts/master/docs/static/modularity_diagram.png" style="width: 600px;"/>
</div>
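Independent of the KTS API, the underlying pattern can be sketched in plain pandas (the function and column names below are illustrative only, not KTS code):

```python
import pandas as pd

def family_size(df):
    # A feature block: takes the raw frame, returns ONLY new columns,
    # keeping the input index intact.
    res = pd.DataFrame(index=df.index)
    res["family_size"] = df["SibSp"] + df["Parch"] + 1
    return res

raw = pd.DataFrame({"SibSp": [1, 0], "Parch": [0, 2]}, index=[10, 11])
features = pd.concat([raw, family_size(raw)], axis=1)  # "collecting" blocks
print(features["family_size"].tolist())  # -> [2, 3]
```

Because each block is a pure function of the raw frame, blocks can be cached or computed in parallel, which is the property KTS builds on.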
Feature block is defined as a function taking one dataframe as an argument and returning a dataframe, too. Indices of input and output should be identical:
```
def dummy_feature_a(df):
    res = pd.DataFrame(index=df.index)
    res['a'] = 'a'
    return res
dummy_feature_a(train[:2])
dummy_feature_a(train[2:5])
```
`@preview(frame, size_1, size_2, ...)` does almost the same thing as above: it runs your feature constructor on `frame.head(size_1), frame.head(size_2), ...`.
*In addition, you can test out parallel execution. By default all of your features will be parallel, but if you want to change this behavior, use `parallel=False`.*
```
@preview(train, 2, 5, parallel=True)
def dummy_feature_a(df):
    res = stl.empty_like(df)  # kts.stl is a standard library of feature constructors. For now you only need
    res['a'] = 'a'            # to know that stl.empty_like(df) is identical to pd.DataFrame(index=df.index)
    return res
```
Feature blocks usually consist of more than one feature:
```
@preview(train, 3, 6)
def dummy_feature_age_mean(df):
    res = stl.empty_like(df)
    res['Age'] = df['Age']
    res['mean'] = df['Age'].mean()
    return res
```
Functions are registered and converted into feature constructors using `@feature` decorator:
```
@feature
def dummy_feature_a(df):
    res = stl.empty_like(df)
    res['a'] = 'a'
    return res

@feature
def dummy_feature_bcd(df):
    res = stl.empty_like(df)
    res['b'] = 'b'
    res['c'] = 'c'
    res['d'] = 'd'
    return res

@feature
def dummy_feature_age_mean(df):
    res = stl.empty_like(df)
    res['mean'] = df['Age'].mean()
    return res
```
Then a feature set is defined by a list of feature constructors. Use slicing syntax to preview it:
```
dummy_fs = FeatureSet([dummy_feature_a, dummy_feature_bcd, dummy_feature_age_mean], train_frame=train)
dummy_fs[30:35]
```
Let's clean up our namespace a bit:
```
delete(dummy_feature_a, force=True)
delete(dummy_feature_bcd, force=True)
delete(dummy_feature_age_mean, force=True)
```
Now let's get to the real things.
## Decorators
Almost all of the functions that you'll use have rich docstrings with examples.
Although it is not necessary, I'll demonstrate them throughout this tutorial.
Let's first take a closer look at the decorators that you have already seen.
Don't be confused if you can't understand something, as it will be better explained in the [Feature Types](#Feature-Types) section.
### @preview
```
preview
```
### @feature
```
feature
```
### @generic
```
generic
```
### delete
```
delete
```
## Feature Types
### Regular Features
This type of FCs should already look quite familiar:
```
@preview(train, 5)
def simple_feature(df):
    res = stl.empty_like(df)
    res['is_male'] = (df.Sex == 'male') + 0
    return res

@feature
def simple_feature(df):
    res = stl.empty_like(df)
    res['is_male'] = (df.Sex == 'male') + 0
    return res
```
Feature constructors can print anything to stdout and it will be shown in your report in real time, even if your features are computed in separate processes:
```
@preview(train, 2)
def feature_with_stdout(df):
    res = stl.empty_like(df)
    res['a'] = 'a'
    print('some logs')
    return res
```
Use `kts.pbar` to track progress of long-running features:
```
import time
@preview(train, 2)
def feature_with_pbar(df):
    res = stl.empty_like(df)
    res['a'] = 'a'
    for i in pbar(['a', 'b', 'c']):
        time.sleep(1)
    return res
```
They can also be nested and titled:
```
@preview(train, 2)
def feature_with_nested_pbar(df):
    res = stl.empty_like(df)
    res['a'] = 'a'
    for i in pbar(['a', 'b', 'c']):
        for j in pbar(range(6), title=i):
            time.sleep(0.5)
    return res
```
### Features Using External Frames
Sometimes datasets consist of more than one dataframe. To get an external dataframe into your feature constructor's scope, you need to save it with `kts.save()` and then use the following syntax:
```
external = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
kts.save(external, 'external')
@preview(train, 7)
def feature_using_external(df, somename='external'):
    """
    To get an external dataframe, you should set its name in user cache as a default value.
    Inside it will look like a usual dataframe.
    """
    print(somename.__class__.__name__)
    time.sleep(1)  # a short delay to receive stdout
    res = stl.empty_like(df)
    res['Pclass'] = df['Pclass']
    res['somefeat'] = somename.set_index('a').loc[df['Pclass']]['b'].values
    return res
```
### Stateful Features
Some features may need their state to be saved between training and inference stages. In this case you can use `df.train` or `df._train` to identify which stage it is and `df.state` or `df._state` as a dictionary to write and read the state:
*Unfortunately, so far you can only preview the training stage using @preview. Later we'll add @preview_train_test to emulate both stages.*
```
@preview(train, 5)
def stateful_feature(df):
    """A simple standardizer"""
    res = stl.empty_like(df)
    if df.train:
        print('this is a training stage')
        df.state['mean'] = df['Age'].mean()
        df.state['std'] = df['Age'].std()
    mean = df.state['mean']
    std = df.state['std']
    res['Age'] = df['Age']
    res['age_std'] = (df['Age'] - mean) / std
    return res
```
### Generic Features
You can also create reusable functions with `@generic(arg1=default, arg2=default, ...)`. For preview, default arguments are used.
```
@preview(train, 5)
@generic(left="Pclass", right="SibSp")
def interactions(df):
    res = stl.empty_like(df)
    res[f"{left}_add_{right}"] = df[left] + df[right]
    res[f"{left}_sub_{right}"] = df[left] - df[right]
    res[f"{left}_mul_{right}"] = df[left] * df[right]
    return res
```
Let's register a couple of generic features:
```
@feature
@generic(left="Pclass", right="SibSp")
def interactions(df):
    res = stl.empty_like(df)
    res[f"{left}_add_{right}"] = df[left] + df[right]
    res[f"{left}_sub_{right}"] = df[left] - df[right]
    res[f"{left}_mul_{right}"] = df[left] * df[right]
    return res

@feature
@generic(col="Parch")
def num_aggs(df):
    """Descriptions are also supported."""
    res = pd.DataFrame(index=df.index)
    mean = df[col].mean()
    std = df[col].std()
    res[f"{col}_div_mean"] = df[col] / mean
    res[f"{col}_sub_div_mean"] = (df[col] - mean) / mean
    res[f"{col}_div_std"] = df[col] / std
    return res
```
A combination of a generic and a stateful feature. It also returns a numpy array instead of a dataframe; in this case, KTS will attach the input index to the resulting dataframe automatically.
```
from sklearn.feature_extraction.text import TfidfVectorizer
@preview(train, 10)
@generic(col='Name')
def tfidf(df):
    if df.train:
        enc = TfidfVectorizer(analyzer='char', ngram_range=(1, 3), max_features=5)
        res = enc.fit_transform(df[col])
        df.state['enc'] = enc
    else:
        enc = df.state['enc']
        res = enc.transform(df[col])
    return res.todense()
```
Don't forget to change `@preview` to `@feature` to register generics:
```
@feature
@generic(col='Name')
def tfidf(df):
    if df.train:
        enc = TfidfVectorizer(analyzer='char', ngram_range=(1, 3), max_features=5)
        res = enc.fit_transform(df[col])
        df.state['enc'] = enc
    else:
        enc = df.state['enc']
        res = enc.transform(df[col])
    return res.todense()
tfidf
```
Note that KTS added sklearn to dependencies. Right now it is not very useful, but later it may be used to dockerize experiments automatically.
## Standard Library
KTS provides the most essential feature constructors as a standard library, i.e. `kts.stl` submodule. All of the STL features have rich docstrings.
### stl.empty_like
```
stl.empty_like
@preview(train, 5)
def preview_stl(df):
    return stl.empty_like(df)
```
### stl.identity
```
stl.identity
@preview(train, 5)
def preview_stl(df):
    return stl.identity(df)
```
### stl.select
```
stl.select
@preview(train, 5)
def preview_stl(df):
    return stl.select(['Name', 'Sex'])(df)
```
### stl.drop
```
stl.drop
@preview(train, 5)
def preview_stl(df):
    return stl.drop(['Survived'])(df)
```
### stl.concat
```
stl.concat
@preview(train, 5)
def preview_stl(df):
    res = stl.concat([
        stl.select(['Sex', 'Name']),
        simple_feature,
        tfidf('Name')
    ])(df)
    return res
```
### stl.apply
```
stl.apply
@preview(train, 700, parallel=True)
def preview_stl(df):
    def func(row):
        """A regular row-wise function with any logic."""
        time.sleep(0.1)
        if row.Embarked == 'S':
            return row.SibSp
        return row.Age
    res = stl.empty_like(df)
    res['col'] = stl.apply(df, func, parts=7, verbose=True)
    return res
```
### stl.category_encode
```
stl.category_encode
from category_encoders import CatBoostEncoder, WOEEncoder, TargetEncoder
@preview(train, 100)
def preview_stl(df):
    encoder = CatBoostEncoder(sigma=3, random_state=0)
    return stl.category_encode(encoder, columns=['Cabin', 'Embarked'], targets='Survived')(df)

@preview(train, 100)
def preview_stl(df):
    return stl.concat([
        stl.select(['Cabin', 'Survived']),
        stl.category_encode(CatBoostEncoder(random_state=0), columns='Cabin', targets='Survived'),
        stl.category_encode(WOEEncoder(), columns='Cabin', targets='Survived'),
        stl.category_encode(TargetEncoder(), columns='Cabin', targets='Survived'),
    ])(df)
```
### stl.mean_encode
```
stl.mean_encode
@preview(train, 100)
def preview_stl(df):
    """An alias for stl.category_encode(TargetEncoder())"""
    return stl.mean_encode('Cabin', 'Survived', smoothing=3)(df)
```
### stl.one_hot_encode
```
stl.one_hot_encode
@preview(train, 100, parallel=False)  # One hot encoder produces a lot of columns, but is computationally cheap, that's why we don't compute it in parallel
def preview_stl(df):
    """An alias for stl.category_encode(OneHotEncoder())"""
    return stl.one_hot_encode('Embarked')(df)
```
## Feature Set
```
FeatureSet
fs = FeatureSet([simple_feature, interactions('Pclass', 'Age'), num_aggs('Fare'), tfidf('Name')],
                [stl.category_encode(TargetEncoder(), 'Embarked', 'Survived'),
                 stl.category_encode(WOEEncoder(), 'Embarked', 'Survived')],
                train_frame=train,
                targets='Survived')
```
Each feature set is given a unique identifier. It also contains source code of all the features right in its repr:
```
fs
```
Use slicing to preview your feature sets. Slicing calls are not cached and do not leak dataframes to IPython namespace, so you can run them as many times as you need. For stateful features, slicing calls always trigger a training stage.
```
fs[:10]
```
## This Notebook is a part of [EDA & Viz Project](./3_EDA_&_Viz_Project.ipynb)
* This Notebook is created to get a better understanding of the data to be used for the EDA Project (for observational purposes only).
* This Notebook can be used separately.
__By:__ <a href="https://www.blogger.com/profile/01288628031125822619" target="_blank">Neeraj Singh Rawat</a>
```
import pandas as pd
import numpy as np
##############################
# World Data #
##############################
URL1='https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_daily_reports/05-02-2020.csv'
URL2='https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv'
URL3='https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_recovered_global.csv'
URL4='https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv'
URL5='https://raw.githubusercontent.com/CSSEGISandData/COVID-19/web-data/data/cases_country.csv'
URL6='https://raw.githubusercontent.com/CSSEGISandData/COVID-19/web-data/data/cases_time.csv'
latest_data = pd.read_csv(URL1) # Last Updated Data
confirmed_df = pd.read_csv(URL2) #Confirmed Case
recovered_df = pd.read_csv(URL3) # Recovered Cases
deaths_df = pd.read_csv(URL4) # Number of Deaths
covid19_df = pd.read_csv(URL5)
table_df = pd.read_csv(URL6,parse_dates=['Last_Update'])
world_population = pd.read_csv('population_by_country_2020.csv')
##############################
# INDIA Data #
##############################
URL7 = 'https://www.mohfw.gov.in/'
df_india = pd.read_html(URL7)[-1]
#india_covid_19 = pd.read_csv('covid_19_india.csv')
#india_covid_19['Date'] = pd.to_datetime(india_covid_19['Date'],dayfirst = True)
hospital_beds = pd.read_csv('HospitalBedsIndia.csv')
ICMR_details = pd.read_csv('ICMRTestingDetails.csv')
ICMR_details['DateTime'] = pd.to_datetime(ICMR_details['DateTime'],dayfirst = True)
ICMR_details = ICMR_details.dropna(subset=['TotalSamplesTested', 'TotalPositiveCases'])
ICMR_labs = pd.read_csv('ICMRTestingLabs.csv')
state_testing = pd.read_csv('StatewiseTestingDetails.csv')
state_testing['Date'] = pd.to_datetime(state_testing['Date'])
population = pd.read_csv('population_india_census2011.csv')
age_details = pd.read_csv('AgeGroupDetails.csv')
individual_details = pd.read_csv('IndividualDetails.csv')
```
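The `dayfirst=True` flag passed to `pd.to_datetime` above matters because the Indian datasets use day/month/year strings, which pandas would otherwise parse as month/day/year. A minimal illustration with a toy date string (not from the real CSVs):

```python
import pandas as pd

# "02/05/2020" is 2 May 2020 in day-first notation, not 5 February.
s = pd.to_datetime(pd.Series(["02/05/2020"]), dayfirst=True)
parsed = s.iloc[0]
```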
## GLOBALLY_LATEST RECORDS
```
latest_data.head(3)
latest_data.tail(3)
```
## GLOBALLY_CONFIRMED
```
confirmed_df.head(3)
confirmed_df.tail(3)
```
## GLOBALLY_RECOVERED
```
recovered_df.head(3)
recovered_df.tail(3)
```
## GLOBALLY_DEATHS
```
deaths_df.head(3)
deaths_df.tail(3)
```
## GLOBALLY_COUNTRY CASES
```
#TEST
url = 'https://raw.githubusercontent.com/imdevskp/covid_19_jhu_data_web_scrap_and_cleaning/master/covid_19_clean_complete.csv'
world = pd.read_csv(url, parse_dates=['Date'])
world.head()
covid19_df.head(3)
covid19_df.tail(3)
```
## GLOBALLY_TABLE DATA
```
table_df.head(3)
table_df.tail(3)
```
## WORLD POPULATION
```
world_population.tail(3)
```
## INDIA COVID-19
```
df_india.head(3)
df_india.tail(3)
#india_covid_19.head(3)
#india_covid_19.tail(3)
```
## INDIAN_INDIVIDUAL DETAILS
```
individual_details.head(3)
individual_details.tail(3)
```
## INDIAN_AGE DETAILS
```
age_details.head(3)
age_details.tail(3)
```
## INDIAN_POPULATION
```
population.head(3)
population.tail(3)
```
## INDIAN_STATE TESTING
```
state_testing.head(3)
state_testing.tail(3)
```
## INDIAN_ICMR LABS
```
ICMR_labs.head(3)
ICMR_labs.tail(3)
```
## INDIAN_ICMR DETAILS
```
ICMR_details.head(3)
ICMR_details.tail(3)
```
## INDIA_No. of BEDS In HOSPITAL
```
hospital_beds.head(3)
hospital_beds.tail(3)
#float(world_population[world_population['Country (or dependency)'] == country]['Population (2020)'])
```
```
# TEST
world.head()
# TEST
world = pd.DataFrame(world.sum()).transpose()
world.index = ['Total']
world["Mortality Rate (per 100)"] = np.round(100*world["Deaths"]/world["Confirmed"],2)
world["Active"] = world["Confirmed"]-(world["Deaths"]+world["Recovered"])
print('\033[1m' + '\033[4m'+ '\n\t\t\t WORLD Total COVID-19 Cases ')
display(world.style.background_gradient(cmap='Wistia',axis=1))
```
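The world-total cell derives two columns from the confirmed/death/recovery counts. With toy numbers (hypothetical, not real COVID-19 figures) the arithmetic is:

```python
import pandas as pd

# Hypothetical counts for illustration only.
world = pd.DataFrame(
    {"Confirmed": [1000], "Deaths": [50], "Recovered": [600]}, index=["Total"]
)
# Case fatality per 100 confirmed cases, and cases neither deceased nor recovered.
world["Mortality Rate (per 100)"] = round(100 * world["Deaths"] / world["Confirmed"], 2)
world["Active"] = world["Confirmed"] - (world["Deaths"] + world["Recovered"])
```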
```
# Makes print and division act like Python 3
from __future__ import print_function, division
# Import the usual libraries
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# Enable inline plotting at lower left
%matplotlib inline
matplotlib.rcParams['image.origin'] = 'lower'
# seaborn package for making pretty plots, but not necessary
try:
import seaborn as sns
params = {'xtick.direction': 'in', 'ytick.direction': 'in', 'font.family': ['serif'],
'text.usetex': True, 'text.latex.preamble': [r'\usepackage{gensymb}']}
sns.set_style("ticks", params)
except ImportError:
print('Seaborn module is not installed.')
from IPython.display import display, Latex
try: # Desktop
import pynrc
from pynrc import nrc_utils
from pynrc.nrc_utils import (webbpsf, poppy, pix_noise)
except ImportError: # Laptop
import sys
sys.path.insert(0, '/Users/jarron/Dropbox/NIRCam/')
import pyNRC as pynrc
from pyNRC import nrc_utils
from pyNRC.nrc_utils import (webbpsf, poppy, pix_noise)
poppy.conf.use_multiprocessing=False
import astropy.io.fits as fits
import multiprocessing as mp
pynrc.setup_logging('WARNING', verbose=False)
data_path = webbpsf.utils.get_webbpsf_data_path() + '/'
opd_path = data_path + 'NIRCam/OPD/'
opd_file = 'OPD_RevV_nircam_132.fits'
opds, header = fits.getdata(opd_path + opd_file, header=True)
nopd = opds.shape[0]
nproc = int(np.min([nopd,mp.cpu_count()*0.75]))
from poppy import zernike
from poppy.optics import MultiHexagonAperture
from poppy.utils import pad_to_size
class OPD_extract(object):
def __init__(self, opd, header, verbose=False):
self.opd = opd
self.npix = opd.shape[0]
self.pix_scale = header['PUPLSCAL'] # pupil scale in meters/pixel
# Create a list of segment masks
self.nseg = 18
# Flat to flat is 1.32 meters with a 3 mm air gap between mirrors
# Supposedly there's a 15 mm effective gap due to polishing
f2f = 1.308
gap = 0.015
# Analytic pupil mask for all segments
# To work with the OPD Rev V masks, use npix=1016 and pad to 1024
self._mask_pupil = MultiHexagonAperture(rings=2, flattoflat=f2f, gap=gap)
# Individual segments in a list
self._mask_segs = []
for i in np.arange(self.nseg)+1:
self._mask_segs.append(MultiHexagonAperture(rings=2, flattoflat=f2f, gap=gap, segmentlist=[i]))
# Get the x/y positions
self.segment_positions = [self._get_seg_xy(i) for i in range(self.nseg)]
self.basis_zernike = zernike.zernike_basis_faster
self.basis_hexike = zernike.hexike_basis
if verbose: print('Fitting Zernike coefficients for entire pupil...')
self._coeff_pupil = self._get_coeff_pupil()
if verbose: print('Constructing OPD from entire pupil coefficients...')
self.opd_new_pupil = self._get_opd_pupil()
self.opd_diff_pupil = self.opd - self.opd_new_pupil
if verbose: print('Fitting Hexike coefficients for each segment...')
self._coeff_segs = self._get_coeff_segs()
if verbose: print('Constructing OPD for each segment...')
self.opd_new_segs = self._get_opd_new_segs()
print('Finished.')
@property
def mask_pupil(self):
outmask, _ = self._sample_mask(self._mask_pupil)
return outmask
@property
def mask_opd(self):
mask = np.ones(self.opd.shape)
mask[self.opd==0] = 0
return mask
@property
def coeff_pupil(self):
return self._coeff_pupil
@property
def coeff_segs(self):
"""Hexike coefficients for a given segment index"""
return self._coeff_segs
def mask_seg(self, i):
"""Return a sampled subsection of the analytic segment mask"""
# Mask off everything but the segment of interest
pmask = self._mask_segs[i]
outmask, _ = self._sample_mask(pmask)
# Multiply by larger OPD pupil mask
outmask *= self.mask_opd
return self.opd_seg(i,outmask)
def opd_seg(self, i, opd_pupil=None):
"""Return a subsection of some OPD image for the provided segment index"""
if opd_pupil is None: opd_pupil = self.opd_diff_pupil
# Subsection the image, then pad to equal xy size
x1,x2,y1,y2 = self.segment_positions[i]
imsub = opd_pupil[y1:y2,x1:x2]
(ny,nx) = imsub.shape
npix_max = np.max(imsub.shape)
# Make x and y the same size
imsub = pad_to_size(imsub, [npix_max,npix_max])
# If nx was larger than ny, then center y
if nx > ny: # shift in y
shift = (ny-nx)//2 if y1>0 else (nx-ny)//2
imsub = np.roll(imsub, shift, axis=0)
elif ny > nx: # shift in x
shift = (nx-ny)//2 if x1>0 else (ny-nx)//2
imsub = np.roll(imsub, shift, axis=1)
return imsub
def _get_coeff_pupil(self):
"""Calculate Zernike coefficients for full OPD"""
return self._get_coeff(self.opd, self.mask_opd, self.basis_zernike, iterations=2)
def _get_opd_pupil(self):
"""Generate OPD for entire pupil based on coefficients"""
opd = self._opd_from_coeff(self.coeff_pupil, self.basis_zernike, self.mask_pupil)
return opd * self.mask_opd
def _get_coeff_segs(self, opd_pupil=None):
"""Calculate Hexike coeffiencts each individual segment"""
coeff_list = []
for i in range(self.nseg): #, pmask in enumerate(self._mask_segs):
mask_sub = self.mask_seg(i)
opd_sub = self.opd_seg(i, opd_pupil)
coeff = self._get_coeff(opd_sub, mask_sub, self.basis_hexike, nterms=30)
coeff_list.append(coeff)
return coeff_list
def _get_opd_new_segs(self, coeff_segs=None):
"""Generate segment OPDs in a list"""
if coeff_segs is None: coeff_segs = self.coeff_segs
opd_list = []
for i, pmask in enumerate(self._mask_segs):
mask_sub = self.mask_seg(i)
opd = self._opd_from_coeff(coeff_segs[i], self.basis_hexike, mask_sub)
opd *= mask_sub
opd_list.append(opd)
return opd_list
def combine_opd_segs(self, opd_segs=None):
"""Combine list of OPD segments into a single image"""
if opd_segs is None: opd_segs = self.opd_new_segs
opd = np.zeros(self.opd.shape)
for i, opd_sub in enumerate(opd_segs):
pmask = self._mask_segs[i]
mask, _ = self._sample_mask(pmask)
mask *= self.mask_opd
opd[mask==1] = opd_sub[opd_sub!=0]
return opd
def _get_coeff(self, opd, mask, basis, **kwargs):
return zernike.opd_expand_nonorthonormal(opd, mask, basis=basis, **kwargs)
def _opd_from_coeff(self, coeff, basis, mask):
"""Generate OPD image from a set of coefficients, basis function, and mask"""
npix = mask.shape[0]
return zernike.opd_from_zernikes(coeff, basis=basis, npix=npix, outside=0)
def _sample_mask(self, mask):
"""Sample an analytic pupil mask at 1024x1024"""
npix = self.npix
outmask, pixelscale = mask.sample(npix=npix-8, return_scale=True)
outmask = pad_to_size(outmask, [npix,npix])
return outmask, pixelscale
def _get_seg_xy(self, i):
"""
Get the xy pixel indices (range) of a particular segment mask.
Returns (x1,x2,y1,y2)
"""
pmask = self._mask_segs[i]
outmask, pixelscale = self._sample_mask(pmask)
pix_rad = int(np.ceil(((pmask.side+pmask.gap) / pixelscale).value))  # np.int is removed in modern NumPy
pix_cen = pmask._hex_center(i+1)
xc = pix_cen[1] / pixelscale.value + outmask.shape[1] / 2
yc = pix_cen[0] / pixelscale.value + outmask.shape[0] / 2
# Grab the pixel ranges
x1 = int(xc - pix_rad)
x2 = x1 + 2*pix_rad
y1 = int(yc - pix_rad)
y2 = y1 + 2*pix_rad
# Limits on x/y positions
x1 = np.max([x1,0])
x2 = np.min([x2,self.npix])
y1 = np.max([y1,0])
y2 = np.min([y2,self.npix])
return (x1,x2,y1,y2)
# Multiprocessing for each OPD
def opd_extract_helper(args):
return OPD_extract(args[0], args[1], verbose=False)
pool = mp.Pool(nproc)
worker_arguments = [(opds[i,:,:],header) for i in range(nopd)]
%time opds_all = pool.map(opd_extract_helper, worker_arguments)
pool.close()
# For the pupil OPD and each segment OPD, find the stdev of each Zern/Hexike coefficient
pup_cf_std = np.array([opds_all[i].coeff_pupil for i in range(9)]).std(axis=0)
nseg = 18
seg_cf_std_all = []
for j in range(nseg):
std = np.array([opds_all[i].coeff_segs[j] for i in range(9)]).std(axis=0)
seg_cf_std_all.append(std)
seg_cf_std = np.median(seg_cf_std_all, axis=0)
# These values will be used to vary RMS WFE
# Set the piston values to 0
pup_cf_std[0] = 0.0
seg_cf_std[0] = 0.0
# Zern/Hexikes to vary:
# tip, tilt, defocus,
# oblique_astigmatism, vertical_astigmatism,
# vertical_coma, horizontal_coma
# These are Z = 2-8 (indices 1-7)
pup_cf_std[8:] = 0
seg_cf_std[8:] = 0
znum_pup = np.arange(len(pup_cf_std))+1
znum_seg = np.arange(len(seg_cf_std))+1
f, (ax1,ax2) = plt.subplots(1,2,figsize=(16,5))
ax1.plot(znum_pup, pup_cf_std*1000, color='steelblue', marker='o')
ax1.plot(znum_seg, seg_cf_std*1000, color='indianred', marker='o')
ax1.set_title('Pupil and Segment Coefficients')
for seg in seg_cf_std_all:
ax2.plot(znum_seg, seg*1000, color='indianred', alpha=0.15)
ax2.plot(znum_seg, seg_cf_std*1000, color='indianred', marker='o', lw=3)
for ax in (ax1,ax2):
ax.set_xlabel('Zern/Hexike #')
ax.set_ylabel('Coefficient Sigma (nm)')
# Each Zern/Hexike component will get some portion of the total wfe_drift
# Decide based on the variance
#pup_var = pup_cf_std**2
#seg_var = seg_cf_std**2
#all_var = np.array(pup_var.tolist() + seg_var.tolist())
#delta_var = ((wfe_drift/1000)**2) * all_var/all_var.sum()
#delta_coeff = np.sqrt(delta_var)
#print(seg_cf_std.sum())
#f, (ax1,ax2) = plt.subplots(1,2,figsize=(16,5))
#ax1.plot(pup_cf_std*1000)
#ax1.plot(seg_cf_std*1000)
#ax2.plot(delta_coeff[:pup_var.size]*1000)
#ax2.plot(delta_coeff[pup_var.size:]*1000)
#ax2.set_xlabel('Coefficient #')
#ax2.set_ylabel('Coefficient Drift (nm)')
#opd_object = opds_all[0] #OPD_extract(opds[0],header)
def opd_sci_gen(opd):
"""Function to go through an OPD class and generate a sci OPD image"""
mask_pupil = opd.mask_pupil * opd.mask_opd
opd_pupil = opd.opd_new_pupil * mask_pupil
opd_segs = opd.combine_opd_segs(opd.opd_new_segs) * mask_pupil
opd_resid = (opd.opd - (opd_pupil+opd_segs)) * mask_pupil
# OPD for science target observation
opd_sci = opd_pupil + opd_segs + opd_resid
# Return the residuals for use in the drifted reference OPD
return (opd_sci, opd_resid)
def opd_ref_gen(args, verbose=False, case=1):
"""Generate a drifted OPD image"""
opd, wfe_drift, pup_cf_std, seg_cf_std, opd_resid = args
np.random.seed()
# Pupil mask (incl. segment spaces and secondary struts)
mask_pupil = opd.mask_pupil * opd.mask_opd
# Case of Pupil Only
if case==1:
var = pup_cf_std**2
std = np.sqrt(var/var.sum()) * (wfe_drift/1000)
rand = np.random.randn(std.size) * std
coeff_pupil_new = opd.coeff_pupil + rand #delta_coeff[:pup_var.size]
ncf = pup_cf_std[pup_cf_std!=0].size
opd_pupil_new = opd._opd_from_coeff(coeff_pupil_new, opd.basis_zernike, mask_pupil) * mask_pupil
opd_segs_new = opd.combine_opd_segs(opd.opd_new_segs) * mask_pupil
# Total
opd_sci = opd.opd * mask_pupil
opd_ref = opd_pupil_new + opd_segs_new + opd_resid
# Take the difference and make sure the RMS WFE difference is correct
# Add this to the overall pupil coefficients
opd_diff = (opd_sci - opd_ref) * mask_pupil
rms_diff = opd_diff[opd_diff!=0].std()
delta_rms = wfe_drift/1000 - rms_diff
ind = var>0
coeff_pupil_new[ind] += 1.1 * delta_rms * (rand[ind]/np.abs(rand[ind])) / np.sqrt(var[ind].size)
opd_pupil_new = opd._opd_from_coeff(coeff_pupil_new, opd.basis_zernike, mask_pupil) * mask_pupil
opd_ref = opd_pupil_new + opd_segs_new + opd_resid
# Case of Segments Only
elif case==2:
opd_pupil_new = opd.opd_new_pupil # No drift to overall pupil
# Segments
# Random Gaussian noise distributed across each Zernike coeff
rand_all = []
coeff_segs_new = []
var = seg_cf_std**2
std = np.sqrt(var/var.sum()) * (wfe_drift/1000)
for cf in opd.coeff_segs:
rand = np.random.randn(std.size) * std
rand_all.append(rand)
coeff_segs_new.append(cf + rand)
opd_segs_new_list = opd._get_opd_new_segs(coeff_segs_new)
opd_segs_new = opd.combine_opd_segs(opd_segs_new_list) * mask_pupil
# Total
opd_sci = opd.opd * mask_pupil
opd_ref = opd_pupil_new + opd_segs_new + opd_resid
# Take the difference and make sure the RMS WFE difference is correct
opd_diff = (opd_sci - opd_ref) * mask_pupil
# Add this to the overall segment coefficients
rms_diff = opd_diff[opd_diff!=0].std()
delta_rms = wfe_drift/1000 - rms_diff
ind = var>0; nind = var[ind].size
for i,cf in enumerate(coeff_segs_new):
rand = rand_all[i]
cf[ind] += delta_rms * (rand[ind]/np.abs(rand[ind])) / np.sqrt(nind)
opd_segs_new_list = opd._get_opd_new_segs(coeff_segs_new)
opd_segs_new = opd.combine_opd_segs(opd_segs_new_list) * mask_pupil
opd_ref = opd_pupil_new + opd_segs_new + opd_resid
# Case of Pupil and Segments distributed evenly
elif case==3:
# Pupil
var = pup_cf_std**2
std = np.sqrt(var/var.sum()) * (wfe_drift/1000)
rand_pup = np.random.randn(std.size) * std / np.sqrt(2.0)
coeff_pupil_new = opd.coeff_pupil + rand_pup #delta_coeff[:pup_var.size]
opd_pupil_new = opd._opd_from_coeff(coeff_pupil_new, opd.basis_zernike, mask_pupil) * mask_pupil
# Segments
# Random Gaussian noise distributed across each Zernike coeff
coeff_segs_new = []
var = seg_cf_std**2
std = np.sqrt(var/var.sum()) * (wfe_drift/1000) / np.sqrt(2.0)
for cf in opd.coeff_segs:
rand = np.random.randn(std.size) * std
coeff_segs_new.append(cf + rand)
#coeff_segs_new = [cf + np.random.normal(0,wfe_drift/1000,cf.shape) for cf in opd.coeff_segs]
opd_segs_new_list = opd._get_opd_new_segs(coeff_segs_new)
opd_segs_new = opd.combine_opd_segs(opd_segs_new_list) * mask_pupil
# Total
opd_sci = opd.opd * mask_pupil
opd_ref = opd_pupil_new + opd_segs_new + opd_resid
# Take the difference and make sure the RMS WFE difference is correct
# Add this to the overall pupil coefficients
opd_diff = (opd_sci - opd_ref) * mask_pupil
rms_diff = opd_diff[opd_diff!=0].std()
delta_rms = wfe_drift/1000 - rms_diff
ind = (pup_cf_std**2)>0; nind = rand_pup[ind].size
coeff_pupil_new[ind] += 1.1 * delta_rms * (rand_pup[ind]/np.abs(rand_pup[ind])) / np.sqrt(nind)
#coeff_pupil_new += delta_rms
#coeff_pupil_new[pup_cf_std!=0] -= delta_rms
opd_pupil_new = opd._opd_from_coeff(coeff_pupil_new, opd.basis_zernike, mask_pupil) * mask_pupil
opd_ref = opd_pupil_new + opd_segs_new + opd_resid
# Jeremy's Method
elif case==4:
opd_sci = opd.opd * mask_pupil
opd_ref = opd_drift_nogood(opd_sci, wfe_drift) * mask_pupil
if verbose:
opd_sci = opd.opd * mask_pupil
opd_diff = (opd_sci - opd_ref) * mask_pupil
print('Sci RMS: {:.3f}, Ref RMS: {:.3f}, RMS diff: {:.4f}' \
.format(opd_sci[opd_sci!=0].std(), opd_ref[opd_ref!=0].std(), opd_diff[opd_diff!=0].std()))
return opd_ref
# Function to drift a list of OPDs
def ODP_drift_all(wfe_drift, opds_all, pup_cf_std, seg_cf_std, opd_resid_list):
"""
Drift a list of OPDs by some RMS WFE (multiprocess function)
Args:
wfe_drift - WFE drift in nm
opds_all - List of OPD objects
pup_cf_std - Zernike sigma for overall pupil
seg_cf_std - Hexike sigma for segments
opd_resid_list - List of residual images (from opd_sci_gen) for each OPD
"""
#opds_all, wfe_drift, pup_cf_std, seg_cf_std, opd_resid_list = args
nopd = len(opds_all)
nproc = int(np.min([nopd,mp.cpu_count()*0.75]))
pool = mp.Pool(nproc)
worker_arguments = [(opds_all[i], wfe_drift, pup_cf_std, seg_cf_std, opd_resid_list[i]) for i in range(nopd)]
out = pool.map(opd_ref_gen, worker_arguments)
pool.close()
pool.join()
print('Finished: {:.0f} nm'.format(wfe_drift))
return out
def get_psf_sci(opd_sci, filter='F410M', mask=None, pupil=None):
# Convert OPD to HDU list for use in WebbPSF
hdu1 = fits.PrimaryHDU(opd_sci)
hdu1.header = header.copy()
opd_sci_hdulist = fits.HDUList([hdu1])
# Planet PSF
nc0 = webbpsf.NIRCam()
nc0.pupilopd = opd_sci_hdulist
nc0.filter = filter
nc0.image_mask = None
nc0.pupil_mask = pupil
#if mask is None:
# nc0.options['jitter'] = 'gaussian'
# nc0.options['jitter_sigma'] = 0.01
psf0 = nc0.calc_psf(fov_arcsec=10)
if mask is None: return psf0, psf0
# Stellar PSF
nc1 = webbpsf.NIRCam()
nc1.filter = filter
nc1.image_mask = mask
nc1.pupil_mask = pupil
nc1.pupilopd = opd_sci_hdulist
psf1 = nc1.calc_psf(fov_arcsec=10)
return psf0, psf1
def get_psf_ref(opd_ref, filter='F410M', mask=None, pupil=None):
hdu2 = fits.PrimaryHDU(opd_ref)
hdu2.header = header.copy()
opd_ref_hdulist = fits.HDUList([hdu2])
# Reference PSF
nc2 = webbpsf.NIRCam()
nc2.filter = filter
nc2.image_mask = mask
nc2.pupil_mask = pupil
nc2.pupilopd = opd_ref_hdulist
#if mask is None:
# nc2.options['jitter'] = 'gaussian'
# nc2.options['jitter_sigma'] = 0.01
return nc2.calc_psf(fov_arcsec=10)
def get_contrast(psf0,psf1,psf2):
"""
For science and reference PSFs, return the contrast curve.
Assumes no noise other than residual speckle noise.
psf0 is the planet PSF
psf1 is the stellar PSF
psf2 is the reference PSF
"""
# PSF subtraction
from copy import deepcopy
psf_diff = deepcopy(psf1)
psf_diff[0].data = (psf1[0].data - psf2[0].data)
psf_diff[1].data = (psf1[1].data - psf2[1].data)
# Radial noise profiles of PSF difference
rr0, stds0 = webbpsf.radial_profile(psf_diff, ext=0, stddev=True)
#rr1, stds1 = webbpsf.radial_profile(psf_diff, ext=1, stddev=True)
## Total planet signal at a radius of 0.5"
#rr_psf0, mn_psf0, ee_psf0 = webbpsf.radial_profile(psf0, ext=0, EE=True)
#rad_asec = 0.5
#npix = np.pi * (rad_asec / psf0[0].header['PIXELSCL'])**2
## Get the encircled energy of planet at radius
#planet_signal = np.interp(rad_asec, rr_psf0, ee_psf0)
## nsigma of planet signal relative to noise
#contrast = np.sqrt(stds0**2 * npix) / planet_signal
contrast = stds0 / np.max(psf0[0].data)
return rr0, contrast
def contrast_drift(wfe_drift, psf_sci_all, filter, mask, pupil, opd_ref_list=None, *args):
"""
Perform WFE drifts on a series of OPDs
"""
if opd_ref_list is None:
print('Generating opd_ref_list')
opd_ref_list = ODP_drift_all(wfe_drift, *args)
psf_ref_all = [get_psf_ref(opd, filter, mask, pupil) for opd in opd_ref_list]
print('Finished: {:.0f} nm'.format(wfe_drift))
return [get_contrast(psf_sci_all[i][0],psf_sci_all[i][1],psf_ref_all[i]) for i in range(nopd)]
def opd_drift_nogood(opd, drift, nterms=8, defocus_frac=0.8):
"""
Add some WFE drift (in nm) to an OPD image.
Parameters
------------
opd : OPD images (can be an array of images).
header : Header file
drift : WFE drift in nm
Returns
--------
Returns an HDUList, which can be passed to webbpsf
"""
# Various header info for coordinate grid
diam = 6.55118110236
pix_m = 0.00639763779528
wfe_rm = 132.
# Create x/y Cartesian grid
sh = opd.shape
if len(sh) == 3:
nz,ny,nx = sh
opd_sum = opd.sum(axis=0)
mask0 = (opd_sum == 0)
mask1 = (opd_sum != 0)
else:
nz = 1
ny,nx = sh
# Masks for those values equal (and not) to 0
mask0 = (opd == 0)
mask1 = (opd != 0)
y,x = np.indices((ny,nx))
center = tuple((a - 1) / 2.0 for a in [nx,ny])
y = y - center[1]; x = x - center[0]
y *= pix_m; x *= pix_m
# Convert to polar coordinates
rho = np.sqrt(x**2 + y**2) / (diam / 2)
theta = np.arctan2(y,x)
# Generate Zernike maps
# Drop the piston
zall = (poppy.zernike.zernike_basis(nterms, rho=rho, theta=theta))[1:,:,:]
zall[:,mask0] = 0
# Sum Zernikes and normalize to total
# Exclude defocus term
zmost = np.concatenate((zall[0:2,:,:], zall[3:,:,:]))
ztot = zmost.sum(axis=0)
ztot /= ztot.sum()
# Normalize defocus
zfoc = zall[2,:,:]
zfoc /= zfoc.sum()
# Fraction of total that goes into defocus
zsum = (1.0-defocus_frac)*ztot + defocus_frac*zfoc
# Set masked pixels to 0 and normalize to unmasked sigma
zsum[mask0] = 0
zsum /= zsum[mask1].std()
# RMS factor measured versus ideal
# Accounts for unit differences as well (meters vs nm)
# header['WFE_RMS'] is in nm, as is drift
if len(sh) == 3:
rms_opd = np.array([(opd[i,:,:])[mask1].std() for i in range(nz)])
rms_fact = rms_opd / wfe_rm
drift_act = rms_fact * drift
zadd = zsum * drift_act.reshape([nz,1,1]) # Array broadcasting
else:
drift_act = drift * opd[mask1].std() / wfe_rm
zadd = zsum * drift_act
return opd + zadd
```
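The `OPD_extract` class above fits Zernike/Hexike coefficients to wavefront maps via `zernike.opd_expand_nonorthonormal`. The snippet below is a minimal sketch of the underlying idea, not pyNRC's implementation: treat each basis term as a column of a design matrix and least-squares fit the coefficients (a toy 1-D basis for brevity):

```python
import numpy as np

# Three toy basis terms (constant, linear, quadratic) on a 1-D grid.
x = np.linspace(-1, 1, 101)
basis = np.stack([np.ones_like(x), x, 2 * x**2 - 1])

# Synthetic "OPD" built from known coefficients.
true_coeff = np.array([0.5, -0.2, 0.1])
opd = true_coeff @ basis

# Least-squares fit recovers the coefficients from the sampled map.
coeff, *_ = np.linalg.lstsq(basis.T, opd, rcond=None)
```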
# Testing
```
# Generate list of science OPDs and residuals for use in reference drift.
pool = mp.Pool(nproc)
%time output = pool.map(opd_sci_gen, opds_all)
opd_sci_list, opd_resid_list = zip(*output)
pool.close()
pool.join()
#wfe_drift = 10.0
#%time opd_ref_list = [opd_drift_nogood(opd,wfe_drift) for opd in opd_sci_list]
diff = np.asarray(opd_sci_list) - np.asarray(opd_ref_list)
diff.tolist()
wfe_drift = 10.0
#i=0
#test = opd_ref_gen((opds_all[i], wfe_drift, pup_cf_std, seg_cf_std, opd_resid_list[i]))
args = (opds_all, pup_cf_std, seg_cf_std, opd_resid_list)
%time opd_ref_list = ODP_drift_all(wfe_drift, *args)
vlim = 50
fig, axes = plt.subplots(2,5,figsize=(15,6.2))
for i,ax in enumerate(axes.flat):
im = ax.imshow((opd_sci_list[i]-opd_ref_list[i])*1000, cmap='RdBu', vmin=-vlim, vmax=vlim)
ax.set_aspect('equal')
if i % 5 > 0: ax.set_yticklabels([])
if i < 5: ax.set_xticklabels([])
#fig.tight_layout()
fig.subplots_adjust(wspace=0.05, hspace=0.05, top=0.925, bottom=0.18)
cbar_ax = fig.add_axes([0.15, 0.05, 0.7, 0.025])
fig.colorbar(im, cax=cbar_ax, orientation = 'horizontal')
cbar_ax.set_xlabel('WFE Difference (nm)')
cbar_ax.xaxis.set_label_position('top');
outdir = '/Users/jwstnircam/Desktop/NRC_Coronagraph/WFE_models/'
fig.suptitle('WFE Drift Maps (Lebreton Method)')
#fig.savefig(outdir+'wfe_diff_10nm_lebreton.pdf')
```
## Science OPDs and PSFs
```
# Generate list of science OPDs and residuals for use in reference drift.
pool = mp.Pool(nproc)
%time output = pool.map(opd_sci_gen, opds_all)
opd_sci_list, opd_resid_list = zip(*output)
pool.close()
pool.join()
# Filters and masks
filt_coron, mask, pupil = ('F335M', 'MASK335R','CIRCLYOT') # Coronagraphic observations
filt_direct = 'F323N' # Direct Imaging
# Get all planet and stellar PSFs for coronagraphic observations
%time psf_sci_all = [get_psf_sci(opd, filt_coron, mask, pupil) for opd in opd_sci_list]
# Get all planet and stellar PSFs for direct imaging
%time psf_sci_direct = [get_psf_sci(opd, filt_direct, None, None) for opd in opd_sci_list]
```
## Drift OPDs and Reference PSFs
```
# For a series of WFE drift values:
# - Generate a new set of OPDs
# - Generate a new set of reference PSFs
# - Calculate the contrast
drift_list = [1.0,2.0,5.0,10.0]
args = (opds_all, pup_cf_std, seg_cf_std, opd_resid_list) # Arguments to pass
# OPDs for all four drift values (10x4)
%time opd_ref_list_all = [ODP_drift_all(wfe_drift, *args) for wfe_drift in drift_list]
# Coronagraphic contrast
%time contrast_spot = [contrast_drift(wfe_drift, psf_sci_all, filt_coron, mask, pupil, opd_ref_list_all[i], \
*args) for i,wfe_drift in enumerate(drift_list)]
# Direct imaging contrast
%time contrast_direct = [contrast_drift(wfe_drift, psf_sci_direct, filt_direct, None, None, opd_ref_list_all[i], \
*args) for i,wfe_drift in enumerate(drift_list)]
```
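The contrast metric computed by `get_contrast()` above subtracts the reference PSF from the stellar PSF, measures the standard deviation of the residual speckles as a function of radius, and normalizes by the unocculted planet-PSF peak. A hedged sketch of that metric on toy Gaussian images (not WebbPSF output):

```python
import numpy as np

rng = np.random.default_rng(1)
ny = nx = 64
y, x = np.indices((ny, nx))
r = np.hypot(x - nx / 2, y - ny / 2)

# Toy "stellar" PSF and a slightly drifted "reference" PSF.
psf_star = np.exp(-r**2 / 8)
psf_ref = psf_star + 1e-4 * rng.standard_normal((ny, nx))
diff = psf_star - psf_ref

# Residual-speckle std in radial annuli, normalized by the planet-PSF peak
# (here the unocculted star peak stands in for the planet peak).
peak = psf_star.max()
radii = np.arange(0, 30, 3)
contrast = np.array(
    [diff[(r >= r0) & (r < r0 + 3)].std() for r0 in radii]
) / peak
```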
## Plot Contrast Curves
```
#f, (ax1,ax2,ax3) = plt.subplots(1,3,figsize=(14,3))
import matplotlib.gridspec as gridspec
fig = plt.figure(figsize=(12, 8))
gs = gridspec.GridSpec(2, 2, height_ratios=[2,3])
ax1 = plt.subplot(gs[0, 0])
ax2 = plt.subplot(gs[0, 1])
ax3 = plt.subplot(gs[1, :])
current_palette = sns.color_palette()
pal1 = sns.color_palette("deep")
pal2 = sns.color_palette("muted")
for j,drift in enumerate(drift_list):
contrast = contrast_spot[j]
r,c = zip(*contrast_spot[j])
med = np.median(c, axis=0)
for rc in contrast:
ax1.semilogy(rc[0], rc[1], color=current_palette[j], alpha=0.2)
ax1.semilogy(r[0], med, color=pal1[j], label='Spot - {:.0f} nm'.format(drift))
ax3.semilogy(r[0], med, color=pal1[j], label='Spot - {:.0f} nm'.format(drift), lw=3)
for j,drift in enumerate(drift_list):
contrast2 = contrast_direct[j]
r2,c2 = zip(*contrast_direct[j])
med2 = np.median(c2, axis=0)
for rc in contrast2:
ax2.semilogy(rc[0], rc[1], color=current_palette[j], alpha=0.2)
ax2.semilogy(r2[0], med2, color=pal1[j], label='Direct - {:.0f} nm'.format(drift))
ax3.semilogy(r2[0], med2, color=pal1[j], label='Direct - {:.0f} nm'.format(drift), lw=3, ls='--')
for ax in (ax1,ax2,ax3):
ax.legend()
ax.set_xlim([0,5]);
ax.set_ylim([1e-8,1e-3]);
ax.set_ylabel('Contrast Ratio')
ax.minorticks_on()
#ax.set_axis_bgcolor('blue')
ax3.legend(ncol=2)
ax3.set_xlabel("Radius ('')")
ax1.set_title('Coronagraphic Filter ' + filt_coron)
ax2.set_title('Direct Imaging Filter ' + filt_direct)
ax3.set_title('Contrast Curves (Lebreton Method)')
fig.tight_layout()
outdir = '/Users/jwstnircam/Desktop/NRC_Coronagraph/WFE_models/'
#fig.savefig(outdir+filt_coron+'_contrast_lebreton.pdf', facecolor='none')
# WFE drift for reference star
# Pupil
#coeff_pupil_new = opd_object.coeff_pupil + delta_coeff[:pup_var.size]
#opd_pupil_new = opd_object._opd_from_coeff(coeff_pupil_new, opd_object.basis_zernike, mask_pupil) * mask_pupil
# Segments
#coeff_segs_new = [cf + delta_coeff[pup_var.size:] for cf in opd_object.coeff_segs]
#opd_segs_new_list = opd_object._get_opd_new_segs(coeff_segs_new)
#opd_segs_new = opd_object.combine_opd_segs(opd_segs_new_list) * mask_pupil
# Total
#opd_ref = opd_pupil_new + opd_segs_new + opd_resid
rad_all, con_all = zip(*res_all)
f, axes= plt.subplots(1,3, figsize=[16,3])
(ax1,ax2,ax3) = axes
# PSF profiles for star and planet PSFs (occulted and unocculted)
# Plot both resolutions (pixel-sampled and 4x oversampled)
ax1.semilogy(rr0, mn0*16, label='Stellar PSF')
#ax1.plot(rr1, mn1)
ax1.plot(rr_psf0, mn_psf0*16, label='Planet PSF')
#ax1.plot(rr_psf1, mn_psf1)
ax1.set_ylim([1e-8,1e-2])
ax1.set_title('Radial Profile')
# Encircled energy for star and planet PSFs
ax2.semilogy(rr0, ee0, label='Stellar PSF')
#ax2.plot(rr1, ee1)
ax2.plot(rr_psf0, ee_psf0, label='Planet PSF')
#ax2.plot(rr_psf1, ee_psf1)
ax2.set_title('Encircled Energy')
ax3.semilogy(rr1,contrast1, label='Method 1')
ax3.plot(rr1,contrast2, label='Method 2')
ax3.plot(rr0,contrast3, label='Method 3')
ax3.set_ylim([1e-8,1e-3])
ax3.set_title('Contrast')
for ax in axes:
ax.set_xlim([0,5])
ax.set_xlabel('Arcsec')
ax.legend(loc=0)
mask0 = (opd0==0) & (opd_pupil==0) & (opd_segs==0)
mask1 = ~mask0
print('OPD RMS Variations: \n\tOriginal: {:.3f} \n\tPupil: {:.3f} \n\tSegments: {:.3f} \n\tResiduals: {:.3f}'
.format(opd0[mask1].std(), opd_pupil[mask1].std(), opd_segs[mask1].std(), opd_resid[mask1].std()))
vlim = 0.5
f, (ax1,ax2,ax3) = plt.subplots(1,3, figsize=[16,5])
ax1.imshow(opd_pupil, vmin=-vlim, vmax=vlim, cmap='gist_heat') #seg*seg_mask+
ax2.imshow(opd_segs, vmin=-vlim, vmax=vlim, cmap='gist_heat') #seg*seg_mask+
ax3.imshow(opd_resid, vmin=-vlim, vmax=vlim, cmap='gist_heat') #seg*seg_mask+
jopd = '/Users/jwstnircam/Desktop/JWST_SIMULATOR/PSFs/' + \
'OPD_RevV_nircam_132_jitter_01_025_05_10_20_50_100_zernike.fits'
hdulist = fits.open(jopd)
noise = [0.1, 0.25, 0.5, 1.0, 2.0, 5.0, 10.]
i=6
opd_new = pynrc.nrc_utils.opd_drift(opds[0,:,:], header, noise[i])
diff = opd_new[0].data - hdulist[0].data[i,:,:]
nterms = 8
diam = header['PUPLDIAM']
pix_m = header['PUPLSCAL']
# Create x/y Cartesian grid
sh = opds.shape
if len(sh) == 3:
nz,ny,nx = sh
opd_sum = opds.sum(axis=0)
mask0 = (opd_sum == 0)
mask1 = (opd_sum != 0)
else:
nz = 1
ny,nx = sh
# Masks for those values equal (and not) to 0
mask0 = (opds == 0)
mask1 = (opds != 0)
y,x = np.indices((ny,nx))
center = tuple((a - 1) / 2.0 for a in [nx,ny])
y = y - center[1]; x = x - center[0]
y *= pix_m; x *= pix_m
# Convert to polar coordinates
rho = np.sqrt(x**2 + y**2) / diam
theta = np.arctan2(y,x)
# Generate Zernike maps
zall = (poppy.zernike.zernike_basis(nterms, rho=rho, theta=theta))[1:,:,:]
# Mask out defocus
zmasked = np.ma.array(zall, mask=False)
zmasked.mask[2,:,:] = True
ztot = zmasked.sum(axis=0)
#ztot = (1.0-defocus_frac)*ztot + defocus_frac*zall[2,:,:]
#ztot /= ztot.sum()
defocus_frac = 0
print(ztot.sum(),zall[2,:,:].sum())
ztot = (1.0-defocus_frac)*ztot + defocus_frac*(zall[2,:,:]/zall[2,:,:].sum())
ztot[mask0] = 0
ztot /= ztot[mask1].std()
ztot[mask1] -= ztot[mask1].mean()
print(ztot[mask1].mean(), ztot[mask1].sum(), ztot[mask1].min(), ztot[mask1].max())
ztot = poppy.zernike.opd_from_zernikes([0,0,0,1,0,0,0,0], rho=rho, theta=theta, basis=poppy.zernike.zernike_basis)
ztot[mask0]=0
#ztot /= ztot[mask1].std()
print(ztot[mask1].mean(), ztot[mask1].sum(), ztot[mask1].min(), ztot[mask1].max())
for i in range(7):
print('{:.3f} {:.3f} {:.3f} {:.3f}'.\
format(zall[i,:,:].mean(), zall[i,:,:].sum(), zall[i,:,:].min(), zall[i,:,:].max()))
ax = plt.imshow(ztot, cmap='gist_heat')
plt.colorbar(ax)
def pix_noise_contrast(ngroup=2, nf=1, nd2=0, tf=10.737, rn=15.0, ktc=29.0, p_excess=(0,0),
fsrc=0.0, idark=0.003, fzodi=0, fbg=0, **kwargs):
"""
Similar to pix_noise(), except
Parameters
===========
n (int) : Number of groups in integration ramp
m (int) : Number of frames in each group
s (int) : Number of dropped frames in each group
tf (float) : Frame time
rn (float) : Read Noise per pixel
ktc (float) : kTC noise only valid for single frame (n=1)
p_excess: An array or list of two elements holding the
parameters that describe the excess variance observed in
effective noise plots. By default these are both 0.
Recommended values are [1.0,5.0] for SW and [1.5,10.0] for LW.
fsrc (float) : Flux of source in e-/sec/pix
idark (float) : Dark current in e-/sec/pix
fzodi (float) : Zodiacal light emission in e-/sec/pix
fbg (float) : Any additional background (telescope emission or scattered light?)
Various parameters can either be single values or numpy arrays.
If multiple inputs are arrays, make sure their array sizes match.
Variables that need to have the same array sizes (or a single value):
- n, m, s, & tf
- rn, idark, ktc, fsrc, fzodi, & fbg
Array broadcasting also works:
For Example
n = np.arange(50)+1 # An array of groups to test out
# Create 2D Gaussian PSF with FWHM = 3 pix
npix = 20 # Number of pixels in x and y direction
x = np.arange(0, npix, 1, dtype=float)
y = x[:,np.newaxis]
x0 = y0 = npix // 2 # Center position
fwhm = 3.0
fsrc = np.exp(-4*np.log(2.) * ((x-x0)**2 + (y-y0)**2) / fwhm**2)
fsrc /= fsrc.max()
fsrc *= 10 # Total source counts/sec (arbitrarily scaled)
fsrc = fsrc.reshape(npix,npix,1) # Necessary for broadcasting
# Represents pixel array w/ different RN/pix
rn = np.ones([npix,npix,1])*15.
# Results is a (20x20)x50 showing the noise in e-/sec/pix at each group
noise = pix_noise(ngroup=n, rn=rn, fsrc=fsrc)
"""
n = np.array(ngroup)
m = np.array(nf)
s = np.array(nd2)
tf = np.array(tf)
max_size = np.max([n.size,m.size,s.size,tf.size])
if n.size != max_size: n = n.repeat(max_size)
if m.size != max_size: m = m.repeat(max_size)
if s.size != max_size: s = s.repeat(max_size)
if tf.size != max_size: tf = tf.repeat(max_size)
# Total flux (e-/sec/pix)
ftot = fsrc + idark + fzodi + fbg
# Special case if n=1
if (n==1).any():
# Variance after averaging m frames
var = ktc**2 + (rn**2 + ftot*tf) / m
noise = np.sqrt(var)
noise /= tf # In terms of e-/sec
if (n==1).all(): return noise
noise_n1 = noise
ind_n1 = (n==1)
temp = np.array(rn+ktc+ftot)
temp_bool = np.zeros(temp.shape, dtype=bool)
ind_n1_all = (temp_bool | ind_n1)
# Group time
tg = tf * (m + s)
# Read noise, group time, and frame time variances
var_rn = rn**2 * 12. * (n - 1.) / (m * n * (n + 1.))
var_gp = ftot * tg * 6. * (n**2. + 1.) * (n - 1.) / (5 * n * (n + 1.))
var_fm = ftot * tf * 2. * (m**2. - 1.) * (n - 1.) / (m * n * (n + 1.))
# Functional form for excess variance above theoretical
var_ex = 12. * (n - 1.)/(n + 1.) * p_excess[0]**2 - p_excess[1] / m**0.5
# Variance of total signal
var = var_rn + var_gp - var_fm + var_ex
sig = np.sqrt(var)
# Noise in e-/sec
noise = sig / (tg * (n - 1))
#print(ind_n1_all.shape,noise.shape,noise_n1.shape)
if (n==1).any():
noise[ind_n1_all] = noise_n1[ind_n1_all]
# Include flat field noise
# JWST-CALC-003894
noise_ff = 1E-4 # Uncertainty in the flat field
factor = 1 + noise_ff*np.sqrt(ftot)
noise *= factor
return noise
image = np.zeros([500,1000])
center = tuple((a - 1) / 2.0 for a in image.shape[::-1])
y,x = np.indices(image.shape)
y = y - center[1]
x = x - center[0]
r = np.sqrt(x**2 + y**2)
ang = np.arctan2(-x,y) * (180/np.pi)
print(ang.min(), ang.max())
plt.imshow(ang)
print(ang[0,999])
```
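The single-group (`ngroup=1`) branch of `pix_noise_contrast` above, and the array broadcasting its docstring describes, can be illustrated with a small self-contained stand-in. `pix_noise_n1` here is an illustrative helper written for this sketch, not part of the notebook; it reproduces only the `n=1` variance formula `ktc**2 + (rn**2 + ftot*tf)/m` from the function body.

```python
import numpy as np

def pix_noise_n1(nf=1, tf=10.737, rn=15.0, ktc=29.0, fsrc=0.0,
                 idark=0.003, fzodi=0.0, fbg=0.0):
    """Single-group pixel noise in e-/sec: kTC noise plus read and
    photon noise averaged over nf frames."""
    ftot = fsrc + idark + fzodi + fbg        # total flux (e-/sec/pix)
    var = ktc**2 + (rn**2 + ftot * tf) / nf  # variance after averaging nf frames
    return np.sqrt(var) / tf                 # express in e-/sec

# A per-pixel (20, 20, 1) read-noise map broadcasts against the scalar defaults:
rn_map = np.ones((20, 20, 1)) * 15.0
noise = pix_noise_n1(rn=rn_map)
print(noise.shape)  # (20, 20, 1)
```

Because every pixel in `rn_map` is identical, each element equals the scalar result, which makes broadcasting errors easy to spot.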
| github_jupyter |
```
# reload packages
%load_ext autoreload
%autoreload 2
```
### Choose GPU (this may not be needed on your computer)
```
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=''
```
### load packages
```
from tfumap.umap import tfUMAP
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tqdm.autonotebook import tqdm
import umap
import pandas as pd
```
### Load dataset
```
from tfumap.paths import ensure_dir, MODEL_DIR, DATA_DIR
syllable_df = pd.read_pickle(DATA_DIR/'cassins'/ 'cassins.pickle')
syllable_df[:3]
top_labels = (
    pd.DataFrame(
        {i: [np.sum(syllable_df.labels.values == i)] for i in syllable_df.labels.unique()}
    )
    .T.sort_values(by=0, ascending=False)[:20]
    .T
)
top_labels
syllable_df = syllable_df[syllable_df.labels.isin(top_labels.columns)]
syllable_df[:3]
syllable_df = syllable_df.reset_index()
specs = np.array(list(syllable_df.spectrogram.values))
specs.shape
syllable_df['subset'] = 'train'
syllable_df.loc[:1000, 'subset'] = 'valid'
syllable_df.loc[1000:1999, 'subset'] = 'test'
len(syllable_df)
Y_train = np.array(list(syllable_df.labels.values[syllable_df.subset == 'train']))
Y_valid = np.array(list(syllable_df.labels.values[syllable_df.subset == 'valid']))
Y_test = np.array(list(syllable_df.labels.values[syllable_df.subset == 'test']))
X_train = np.array(list(syllable_df.spectrogram.values[syllable_df.subset == 'train'])) #/ 255.
X_valid = np.array(list(syllable_df.spectrogram.values[syllable_df.subset == 'valid'])) #/ 255.
X_test = np.array(list(syllable_df.spectrogram.values[syllable_df.subset == 'test'])) #/ 255.
X_train_flat = X_train.reshape((len(X_train), np.prod(np.shape(X_train)[1:])))
from sklearn.preprocessing import OrdinalEncoder
enc = OrdinalEncoder()
Y_train = enc.fit_transform([[i] for i in Y_train]).astype('int').flatten()
plt.matshow(X_train[10])
```
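The `OrdinalEncoder` call above turns the string syllable labels into integer codes, assigning codes by sorted category order. A toy sketch (the labels here are made up, not the Cassin's vireo ones) shows the same fit-transform-flatten pattern:

```python
import numpy as np
from sklearn.preprocessing import OrdinalEncoder

labels = np.array(['b', 'a', 'c', 'a'])  # stand-in for syllable labels
enc = OrdinalEncoder()
# OrdinalEncoder expects a 2D array of shape (n_samples, n_features)
codes = enc.fit_transform(labels.reshape(-1, 1)).astype('int').flatten()
print(codes)  # categories sorted alphabetically, so a=0, b=1, c=2: [1 0 2 0]
```

Note that the notebook only encodes `Y_train`; if `Y_valid` and `Y_test` are needed as integers later, the same fitted encoder should be reused on them.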
### Train PCA model
```
from sklearn.decomposition import PCA
pca = PCA(n_components=64)
z = pca.fit_transform(X_train_flat)
```
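The PCA cell above projects each flattened spectrogram onto 64 components. A self-contained sketch, with random data standing in for `X_train_flat`, confirms the shapes to expect from `fit_transform`:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1024))  # stand-in for the flattened spectrograms

pca = PCA(n_components=64)
z = pca.fit_transform(X)          # one 64-dim embedding per sample
print(z.shape)                    # (200, 64)
print(pca.components_.shape)      # (64, 1024)
```

Only the first two of the 64 components are plotted in the scatter below.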
### plot output
```
fig, ax = plt.subplots(figsize=(8, 8))
sc = ax.scatter(
    z[:, 0],
    z[:, 1],
    c=Y_train.astype(int)[:len(z)],
    cmap="tab10",
    s=0.1,
    alpha=0.5,
    rasterized=True,
)
ax.axis('equal')
ax.set_title("PCA embedding", fontsize=20)
plt.colorbar(sc, ax=ax);
```
### Save model
```
import os
import pickle
from tfumap.paths import ensure_dir, MODEL_DIR
output_dir = MODEL_DIR/'projections'/ 'cassins_dtw'/ '64' / 'PCA'
ensure_dir(output_dir)
with open(os.path.join(output_dir, "model.pkl"), "wb") as output:
    pickle.dump(pca, output, pickle.HIGHEST_PROTOCOL)
np.save(output_dir / 'z.npy', z)
```
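One way to sanity-check a pickle like the one written above is a load-and-compare round trip. This sketch uses a temporary directory rather than the `MODEL_DIR` layout, and a small random matrix in place of the spectrogram data:

```python
import os
import pickle
import tempfile

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))
pca = PCA(n_components=4).fit(X)

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "model.pkl")
    with open(path, "wb") as output:
        pickle.dump(pca, output, pickle.HIGHEST_PROTOCOL)
    with open(path, "rb") as f:
        pca2 = pickle.load(f)

# The reloaded model reproduces the original transform exactly
same = np.allclose(pca.transform(X), pca2.transform(X))
print(same)  # True
```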
## tsne
```
from openTSNE import TSNE
tsne = TSNE(
    n_components=64,
    negative_gradient_method='bh',
)
embedding_train = tsne.fit(X_train_flat)
z = np.array(embedding_train)
fig, ax = plt.subplots(figsize=(8, 8))
sc = ax.scatter(
    z[:, 0],
    z[:, 1],
    c=Y_train.astype(int)[:len(z)],
    cmap="tab10",
    s=0.1,
    alpha=0.5,
    rasterized=True,
)
ax.axis('equal')
ax.set_title("t-SNE embedding", fontsize=20)
plt.colorbar(sc, ax=ax);
```
#### save model
```
import os
import pickle
from tfumap.paths import ensure_dir, MODEL_DIR
output_dir = MODEL_DIR/'projections'/ 'cassins_dtw'/ '64' / 'TSNE'
ensure_dir(output_dir)
with open(os.path.join(output_dir, "model.pkl"), "wb") as output:
    # Save the fitted t-SNE embedding, not the PCA model
    pickle.dump(embedding_train, output, pickle.HIGHEST_PROTOCOL)
np.save(output_dir / 'z.npy', z)
```
| github_jupyter |